LLAMA-3 🦙: EASIEST WAY TO FINE-TUNE ON YOUR DATA 🙌

47,886 views

Prompt Engineering

1 day ago

Learn how to fine-tune the latest Llama 3 on your own data with Unsloth.
🦾 Discord: / discord
☕ Buy me a Coffee: ko-fi.com/promptengineering
🔴 Patreon: / promptengineering
💼Consulting: calendly.com/engineerprompt/c...
📧 Business Contact: engineerprompt@gmail.com
Become a Member: tinyurl.com/y5h28s6h
💻 Pre-configured localGPT VM: bit.ly/localGPT (use Code: PromptEngineering for 50% off).
Sign up for Advanced RAG:
tally.so/r/3y9bb0
LINKS:
Announcement: llama.meta.com/llama3/
Meta Platform: meta.ai
unsloth.ai/
huggingface.co/unsloth
Notebook: tinyurl.com/4ez2rprt
Github Tutorial: github.com/PromtEngineer/Yout...
TIMESTAMPS:
[00:00] Fine-tuning Llama3
[00:30] Deep Dive into Fine-Tuning with Unsloth
[01:28] Training Parameters and Data Preparation
[05:36] Setting training parameters with Unsloth
[11:03] Saving and Utilizing Your Fine-Tuned Model
All Interesting Videos:
Everything LangChain: • LangChain
Everything LLM: • Large Language Models
Everything Midjourney: • MidJourney Tutorials
AI Image Generation: • AI Image Generation Tu...
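For quick reference, here is a minimal sketch of the workflow covered in the timestamps above, assuming the unsloth, trl, transformers, and datasets packages on a free Colab GPU. The model and dataset names follow the public Unsloth examples, and the SFTTrainer argument names match the trl versions those notebooks pin, so treat this as an outline rather than the exact notebook code:

```python
# Sketch of the Unsloth fine-tuning flow (assumptions: unsloth/trl/datasets installed,
# free Colab T4-class GPU; names follow the public Unsloth examples, not necessarily
# the exact notebook linked above).
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import SFTTrainer
from transformers import TrainingArguments

max_seq_length = 2048

# 1. Load Llama 3 8B in 4-bit so it fits in ~15 GB of VRAM.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",
    max_seq_length=max_seq_length,
    load_in_4bit=True,
)

# 2. Attach LoRA adapters so only a small fraction of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
)

# 3. Prepare the data: render each record into a single "text" column (Alpaca style).
alpaca_prompt = (
    "Below is an instruction that describes a task, paired with an input that provides "
    "further context. Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{}\n\n### Input:\n{}\n\n### Response:\n{}"
)

def to_text(batch):
    texts = [
        alpaca_prompt.format(ins, inp, out) + tokenizer.eos_token
        for ins, inp, out in zip(batch["instruction"], batch["input"], batch["output"])
    ]
    return {"text": texts}

dataset = load_dataset("yahma/alpaca-cleaned", split="train").map(to_text, batched=True)

# 4. Train, then save the adapters (or merge/export to GGUF as shown in the video).
trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=max_seq_length,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=60,            # demo value; prefer num_train_epochs=1-2 for real runs
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
model.save_pretrained("lora_model")
```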

Comments: 81
@engineerprompt · 8 days ago
If you are interested in learning more about how to build robust RAG applications, check out this course: prompt-s-site.thinkific.com/courses/rag
@spicer41282 · 1 month ago
Thank you! More fine-tuning case studies on Llama 3, please! Much appreciated 🙏 your presentation on this!
@engineerprompt · 1 month ago
Will be making a lot more on it. Stay tuned.
@Joe-tk8cx · 1 month ago
Thank you so much for sharing, this was wonderful. I have a question: I am a beginner in the LLM world; which playlist on your channel should I start from? Thank you
@lemonsqueeezey · 1 month ago
Thank you so much for this useful video!
@hadebeh2588 · 1 month ago
Thank you very much for your great video. I ran the notebook but did not manage to find the GGUF files on Hugging Face. I put in my HF token, but that did not work. Do I have to change the code?
@KleiAliaj · 1 month ago
Great video, mate. How can I add more than one dataset?
@KleiAliaj-us9ip · 1 month ago
Great video. But how do I add more than one dataset?
@agedbytes82 · 1 month ago
Amazing, thanks!
@engineerprompt · 1 month ago
Glad you like it!
@user-vt1qs1ge7m · 9 days ago
Can you make a video on how to pass a test CSV to the fine-tuned model and get a response column?
@loicbaconnier9150 · 1 month ago
Hello, it's impossible to generate GGUF, compilation problem… Did you try it?
@StephenRayner · 1 month ago
Excellent, thank you
@scottlewis2653 · 1 month ago
Mediatek's Dimensity chips + Meta's Llama 3 AI = The dream team for on-device intelligence.
@VerdonTrigance · 1 month ago
How do you actually train models? I mean unsupervised training, where I have a set of documents and want the model to learn from them and perhaps pick up the author's 'style' or tendencies?
@PYETech · 1 month ago
You need to create some process to transfer all the knowledge in these documents into the form of "prompt": "best output" pairs. Usually we use a team of agents to do it for us.
@shahzadiqbal7646 · 1 month ago
Can you make a video on how to use local Llama 3 to understand a large C++ or C# code base?
@iCode21 · 26 days ago
Search for Ollama.
@jannik3475 · 1 month ago
Is there a way to sort of "brand" Llama 3, so that the model responds to "Who are you?" with a custom answer? Thank you!
@engineerprompt · 1 month ago
Yes, you can just add that as part of the system message.
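A minimal sketch of what that can look like in the training data, assuming a chat-style dataset; the assistant name, company, and record layout below are illustrative, not from the notebook:

```python
# Hypothetical "branding" records mixed into the fine-tuning data. The same system
# message is then used at inference time so the custom identity is reinforced.
branding_system = "You are Atlas, a helpful assistant built by Example Corp."

branding_examples = [
    {"messages": [
        {"role": "system", "content": branding_system},
        {"role": "user", "content": "Who are you?"},
        {"role": "assistant", "content": "I am Atlas, an assistant built by Example Corp on top of Llama 3."},
    ]},
    {"messages": [
        {"role": "system", "content": branding_system},
        {"role": "user", "content": "What is your name?"},
        {"role": "assistant", "content": "My name is Atlas."},
    ]},
]
```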
@SeeFoodDie · 1 month ago
Thanks
@skeiriyalance7274 · 18 days ago
How can I use my CSV as a dataset? I'm new.
@RodCoelho · 1 month ago
How do you train a model by adding the knowledge in a book, which will likely only have one column of text?
@engineerprompt · 1 month ago
In that case, you will have to convert the book into question-answer pairs and format it in a similar fashion. You can use an LLM to convert the book to QA pairs.
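One way to automate that conversion, sketched under assumptions: the OpenAI client and the gpt-4o-mini model name are stand-ins for whatever capable instruct model (hosted or local) you have access to, and the chunking is deliberately naive:

```python
import json
from openai import OpenAI  # assumption: any chat-completions-style client works here

client = OpenAI()

def chunk_text(text, chunk_chars=3000):
    # Naive fixed-size chunks; chapter- or paragraph-aware splitting is usually better.
    return [text[i:i + chunk_chars] for i in range(0, len(text), chunk_chars)]

def qa_pairs_for_chunk(chunk, n_pairs=3):
    prompt = (
        f"Write {n_pairs} question-answer pairs that are fully answerable from the text below. "
        'Return only JSON: [{"question": "...", "answer": "..."}]\n\n' + chunk
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    # Sketch assumes the model returns valid JSON; add validation/retries in practice.
    return json.loads(resp.choices[0].message.content)

records = []
for chunk in chunk_text(open("book.txt").read()):
    for pair in qa_pairs_for_chunk(chunk):
        records.append({"instruction": pair["question"], "input": "", "output": pair["answer"]})

with open("book_qa.jsonl", "w") as f:
    for rec in records:
        f.write(json.dumps(rec) + "\n")
```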
@danielhanchen · 1 month ago
Fantastic work and always love your videos! :)
@engineerprompt · 1 month ago
Thank you
@metanulski · 1 month ago
Regarding the save options: do I have to delete the parts that I don't want, or how does this work?
@engineerprompt · 1 month ago
You can just comment out those parts. Put a # in front of the lines you don't need.
@kingofutopia · 1 month ago
Awesome, thanks
@engineerprompt · 1 month ago
🙏
@researchpaper7440 · 1 month ago
Great, it was quick.
@georgevideosessions2321 · 1 month ago
Have you ever thought about writing a no-code, on-premise fine-tuning app?
@engineerprompt · 1 month ago
There is AutoTrain for that.
@DemiGoodUA · 1 month ago
Hi, nice video. But how do I fine-tune the model on my codebase?
@engineerprompt · 1 month ago
You can use the same setup. Just replace the instruction and input with your code.
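Roughly, the records can look like this for code (the column names match the Alpaca-style fields used in the notebook; the instructions and snippets are placeholders for your own repository):

```python
from datasets import Dataset

# Placeholder examples pairing an instruction (plus optional context) with code output.
code_records = [
    {
        "instruction": "Write a function that parses our config file format.",
        "input": "",  # optional extra context, e.g. a related header or docstring
        "output": "def parse_config(path):\n    ...",
    },
    {
        "instruction": "Explain what this helper does.",
        "input": "def retry(fn, attempts=3):\n    ...",
        "output": "The retry helper calls fn up to three times before raising ...",
    },
]

dataset = Dataset.from_list(code_records)
# From here it plugs into the same Alpaca-style formatting and SFTTrainer setup.
```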
@DemiGoodUA · 1 month ago
@engineerprompt How do I divide the code into "question-answer" pairs? Or can I place the whole codebase into a single instruction?
@metanulski · 1 month ago
One more comment :-). This video is about fine-tuning a model, but there is no real explanation of why. We fine-tune with the standard Alpaca dataset, but there is no explanation of why. It would be great if you could do a follow-up and show us how to create datasets.
@dogsmartsmart · 1 month ago
Thank you! But can a Mac M3 Max use MLX to fine-tune?
@engineerprompt · 1 month ago
Yes
@CharlesOkwuagwu · 1 month ago
Hi, what if we have already downloaded a GGUF file? How do we apply that locally?
@engineerprompt · 1 month ago
I am not sure if you can do that. Will need to do further research on it.
@pubgkiller2903 · 1 month ago
I have already fine-tuned using Unsloth for testing purposes.
@engineerprompt · 1 month ago
Great, how are the results looking?
@pubgkiller2903 · 1 month ago
@engineerprompt Great results, and thanks for your support to the AI community.
@TheIITianExplorer · 1 month ago
Bro, can you tell me about Unsloth and how it is different from the basics of using QLoRA? Also, I used QLoRA for fine-tuning Llama 2; can I just paste in the Llama 3 model ID in place of that? I hope you understood my question, waiting for your reply 😊
@pubgkiller2903 · 1 month ago
@TheIITianExplorer The Unsloth library is very useful for fine-tuning with the LoRA technique. QLoRA is quantization plus LoRA, so if you use Unsloth you get the same result, since Unsloth already quantizes the LLMs.
@roopad8742 · 1 month ago
What datasets did you fine-tune it on? Have you run any benchmarks?
@modicool · 1 month ago
One thing I am unsure of is how to transform my data into a training set. I have the target format: the written body of work, but no "instruction" or "input" of course. I've seen some people try to generate it with ChatGPT, but this seems counter-intuitive. There must be an established method of actually manipulating data into a training set. Where is that piece?
@engineerprompt · 1 month ago
You will need {input, response} pairs in order to fine-tune an instruct model. Unfortunately, there is no way around it unless you are just pre-training the base model.
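For an instruct checkpoint, each pair becomes one prompt/response exchange rendered with the model's chat template; a small sketch (the pair is made up, and tokenizer is the Llama 3 tokenizer loaded earlier):

```python
from datasets import Dataset

# Made-up {input, response} pair standing in for your own written material.
pairs = [
    {"input": "Summarize chapter 3 in two sentences.",
     "response": "Chapter 3 argues that ... In short, ..."},
]

def to_chat_text(example):
    messages = [
        {"role": "user", "content": example["input"]},
        {"role": "assistant", "content": example["response"]},
    ]
    # apply_chat_template renders the Llama 3 chat format (special header/eot tokens).
    return {"text": tokenizer.apply_chat_template(messages, tokenize=False)}

train_dataset = Dataset.from_list(pairs).map(to_chat_text)
```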
@ashwinsveta · 1 month ago
We fine
@user-lz8wv7rp1o · 1 month ago
Great
@cucciolo182 · 1 month ago
Next week Gemini 2 with text to video 😂
@tamim8540 · 1 month ago
Hello, can I fine-tune it using the Colab free version?
@engineerprompt · 1 month ago
This is using the free version
@jackdorsey3504 · 1 month ago
Sir, we cannot open the Colab website...
@jackdorsey3504 · 1 month ago
Already solved...
@metanulski · 1 month ago
So 60 steps is too low. But what is a good number of steps?
@engineerprompt · 1 month ago
Usually you want to set epochs to 1 or 2.
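In the notebook's TrainingArguments that amounts to swapping the demo max_steps value for an epoch count, roughly (a sketch; the other arguments stay as in the notebook):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    per_device_train_batch_size=2,
    gradient_accumulation_steps=4,
    num_train_epochs=1,   # one or two full passes over the data instead of max_steps=60
    learning_rate=2e-4,
    output_dir="outputs",
)
```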
@metanulski · 1 month ago
@engineerprompt So 60 to 120 steps max, since one epoch is 60 steps?
@pfifo_fast · 1 month ago
This video lacks a lot of helpful info... Anyone can just open the examples and read them the same as you did. I would have liked to be given extra detail and tips about how to actually do fine-tuning... Some of the topics I am struggling with include: how to load custom data, how to use a different prompt template, how to define validation data, when to use validation data, what learning rates are good, and how I determine how many epochs to run... I'm sorry buddy, but I have to give this video a thumbs down, as it really, truly and honestly doesn't provide any useful info that isn't already in the notebook.
@weka5286 · 5 days ago
Hello, have you already found any other video or article about that? I am also struggling with the same issue.
@auhkba · 1 day ago
Can we train on pictures instead of text?
@engineerprompt · 1 day ago
Yes, you can fine-tune something like PaliGemma.
@asadurrehman3591 · 1 month ago
Can I fine-tune using the Colab free GPU?
@engineerprompt · 1 month ago
Yes, this uses the free Colab.
@asadurrehman3591 · 1 month ago
@engineerprompt Love you, broooo
@HoneIrimana · 1 month ago
They messed up releasing Llama 3, because it believes it is sentient.
@nikolavukcevic360 · 1 month ago
Why didn't you provide any examples of training? It would make this video 10 times better.
@engineerprompt · 1 month ago
That is coming...
@anantkabra6825 · 15 days ago
Has anybody tried pushing to Hugging Face? I need help with that part; please reply to this message in case you have.
@engineerprompt · 15 days ago
When you create an API key, make sure to enable the write permission on that key; otherwise, it won't upload the model.
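A minimal sketch of the upload step, assuming a token created with write scope; the repo name is a placeholder, and the notebook may use Unsloth's own push helpers instead:

```python
# Push the fine-tuned model (or LoRA adapters) and tokenizer to the Hugging Face Hub.
hf_token = "hf_..."  # create at huggingface.co/settings/tokens with WRITE permission

model.push_to_hub("your-username/llama3-finetuned", token=hf_token)
tokenizer.push_to_hub("your-username/llama3-finetuned", token=hf_token)
```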
@petergasparik924 · 2 days ago
Don't even try to run it on Windows directly; just install Python and all the packages in WSL.
@engineerprompt · 2 days ago
Agreed, Windows is not a good option for running LLM tasks.
@Matlockization · 24 days ago
It's a Zuckerberg free AI... that makes me wonder. And you have to agree to hand over contact info, and what else, I wonder?
@user-hn7cq5kk5y · 21 days ago
Don't share trash
@piffdaddy420 · 28 days ago
You really should just make videos in your own language, because who the fk can even understand what you are saying?