Full text tutorial (requires MLExpert Pro): www.mlexpert.io/prompt-engineering/fine-tuning-llama-2-on-custom-dataset
@kiranatmakuri3710 · 9 months ago
Can you send me your email please? I have a question I can't ask in public.
@williamfussell1956 · 8 months ago
I keep having problems with model.merge_and_unload()... It seems to be a bit different from the documentation on Hugging Face. Is there something I am missing here? The error says that 'LlamaForCausalLM' object has no attribute 'merge_and_unload'. Any ideas?
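A likely cause, for anyone hitting the same error: merge_and_unload() is a method of the PEFT wrapper (PeftModel), not of the bare LlamaForCausalLM, so it only works if you reload the saved adapter through PEFT first. A minimal sketch, assuming the adapter was saved to a local directory called "trained-model" (the path is hypothetical, not the notebook's):

```python
# Minimal sketch: reload the LoRA adapter through PEFT, then merge.
# merge_and_unload() exists on the PEFT wrapper, not on a plain LlamaForCausalLM.
import torch
from peft import AutoPeftModelForCausalLM

model = AutoPeftModelForCausalLM.from_pretrained(
    "trained-model",              # directory containing adapter_config.json
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
)
merged_model = model.merge_and_unload()  # base model with LoRA weights folded in
merged_model.save_pretrained("merged-model", safe_serialization=True)
```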
This is great. A version for question answering would be helpful too.
@echos01 · 10 months ago
Excellent work! You are the hero!
@stawils · 10 months ago
Good stuff coming, thank you in advance ❤
@parisapouya6716 · 10 months ago
Awesome work! Thanks a ton!
@HeywardLiu · 10 months ago
Awesome tutorial!
@vivekjyotibhowmik8008 · 10 months ago
Can you provide the Google Colab notebook?
@NitroBrewbell · 10 months ago
very helpful. Thanks for the videos.
@VaibhavPatil-rx7pc · 10 months ago
Super excited
@krishchatterjee2819 · 9 months ago
Excellent video! What changes to the input do we need to make to use 8-bit quantization instead of 4-bit? Thanks.
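A rough sketch of the change, assuming the notebook loads the base model with a bitsandbytes BitsAndBytesConfig: replace the 4-bit options with load_in_8bit=True. The model name and other arguments below are placeholders, not necessarily the notebook's exact values.

```python
# Illustrative sketch: load the base model in 8-bit instead of 4-bit.
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(load_in_8bit=True)  # replaces load_in_4bit=True + the bnb_4bit_* options

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",      # assumed base model
    quantization_config=bnb_config,
    device_map="auto",
)
```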
@AbdulBasit-ff6tq · 10 months ago
Do you have, or plan to make, a tutorial for something like the following: fine-tuning on plain text first, and then tuning that model to make it an instruction-tuned one?
@GregMatoga · 9 months ago
Thank you for this! Is fine-tuning a good approach for private/proprietary documentation Q&A?
@fabsync · 2 months ago
Fantastic video! It would be nice to see a full tutorial on how to do it with PDFs locally...
@lyovazi8533 · 10 months ago
Very good video.
@DawnWillTurn · 10 months ago
Any idea how we can deploy Llama 2 on the Hugging Face API? Just like the Falcon one, it has some issues with the handler.
@williamgomezsantana · 5 months ago
Incredible video!! Thank you very much. I have a question: isn't it mandatory to put tokens like EOS at the end of the summary, so the LLM knows when to finish the response?
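Appending the tokenizer's EOS token to each training example is indeed a common way to teach the model when to stop. A small illustrative sketch; the field names and prompt wording are assumptions, not necessarily what the notebook uses:

```python
# Illustrative only — the field names and prompt template are assumptions.
# The point: end each training text with the tokenizer's EOS token so the
# model learns to stop after producing the summary.
def format_example(example, tokenizer):
    prompt = (
        "### Instruction: Summarize the following conversation.\n\n"
        f"### Input:\n{example['conversation']}\n\n"
        f"### Summary:\n{example['summary']}"
    )
    return prompt + tokenizer.eos_token
```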
@jensonjoy83 · 10 months ago
Will you be able to add a tutorial for the Llama 2 chat model?
@experiment5762 · 9 months ago
Great!! Do some videos regarding RLHF.
@techtraversal219 · 8 months ago
Thanks for sharing, really helpful. Waiting for my Llama model access so I can follow it step by step. Can I use any other model in place of this one?
@srushtiharyan2033 · 5 months ago
Did you get access? And how long did it take?
@ikurious · 10 months ago
Great video! Is there any way to build my instruction dataset for instruction fine-tuning from classical textbooks?
@ikurious · 10 months ago
@@user-xt6tu3xt3t But then how do I convert it into question & answer format?
@mauriciososa9722 · 10 months ago
@@ikurious The best way is manually, by a human.
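If it helps, the target format is usually just instruction/input/output records in JSON Lines; the question-answer pairs themselves still have to be written by a human or generated by a larger LLM over each passage. A rough sketch of the format under that assumption (field names are illustrative):

```python
# Rough sketch of the target dataset format only — the Q&A content itself
# has to come from a human or from a larger LLM prompted over each passage.
import json

records = [
    {
        "instruction": "Explain the concept covered in this passage.",
        "input": "...passage text from the textbook...",
        "output": "...the answer you want the model to learn...",
    },
]

with open("instruct_dataset.jsonl", "w") as f:
    for rec in records:
        f.write(json.dumps(rec) + "\n")
```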
@sasukeuchiha-ck4hy · 10 months ago
Can you train the model on German data?
@tarunku9378 · 10 months ago
I still don't get it. I have my data locally; how should I start fine-tuning on it? Please explain.
@chukypedro818 · 10 months ago
Super🎉
@shopbc5553 · 7 months ago
Do you have an idea how GPT-4 is so good with its responses from its base model when I upload documents to it? Could it be the parameter size only, or do you think other technologies determine the quality difference?
@tillwill3232 · 1 month ago
Parameter size and training data, I guess? Also, I don't think we know their exact network architecture, since they didn't release it publicly; you can only access it via the product.
@GooBello-gr2ls · 9 months ago
Can I download the fine-tuned model after fine-tuning? Is it in .bin or .safetensors format, or something else? I'm currently trying to do fine-tuning in textgen, but having trouble with the dataset (format), I guess.
@lisab1360 · 9 months ago
Do you already know how you can download the fine-tuned model?
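In case it's useful: what lands on disk after training is normally the small LoRA adapter; to get a full standalone model you merge the adapter into the base weights and save that. Recent transformers versions write .safetensors by default, older ones write pytorch_model.bin. A minimal sketch, assuming `trainer` is the SFTTrainer and `tokenizer` the tokenizer from the tutorial:

```python
# Minimal sketch — `trainer` and `tokenizer` are assumed to come from the tutorial.
trainer.save_model("llama2-summarizer-adapter")        # saves the small LoRA adapter

merged = trainer.model.merge_and_unload()              # fold the adapter into the base weights
merged.save_pretrained("llama2-summarizer-merged", safe_serialization=True)
tokenizer.save_pretrained("llama2-summarizer-merged")
```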
@williamfussell1956 · 8 months ago
Hi there, I am just reading through the repo and I'm pretty sure this is the answer... I just wanted to make sure: the actual input to the model is only from the [text] field, is that correct? As the [text] field contains the prompt, the conversation and the summary...
@user-xy5re6qh3d · 7 months ago
Hello, for me the validation log shows "No log" with the Mistral Instruct model. Please help, anyone.
@danieladama8105 · 10 months ago
🔥
@vitocorleon6753 · 10 months ago
I need help please. I just want to be pointed in the right direction, since I'm new to this and I couldn't really find any proper guide summarizing the steps for what I want to accomplish. I want to integrate a Llama 2 70B chatbot into my website. I have no idea where to start. I looked into setting up the environment on one of my cloud servers (it has to be private). Now I'm looking into training/fine-tuning the chat model using our data from our DBs (it's not clear to me here, but I assume it involves two steps: first I have to have the data in CSV format, since that's easier for me; second I will need to format it in the Alpaca or OpenAssistant format). After that, the result should be a deployment-ready model? Just bullet points, I'd highly appreciate that.
@vitocorleon6753 · 10 months ago
@nty3929 Oh :/ I’m still lost about this but thank you for your effort nevertheless!
@GregMatoga · 9 months ago
@nty3929 Yeah, bots are ruthless here and YouTube is having none of it, even at that cost. Guess they expect to see more technical conversations elsewhere.
@elysiryuu · 5 months ago
Thanks for the insight. Is it possible to perform training locally with 8 GB of VRAM?
@stephenmartinez4883 · 3 months ago
No
@karimbaig8573 · 8 months ago
When you say you are tracking loss, what loss is that and how is that loss calculated for the task (summarization) at hand?
@anuranjankumar2904 · 6 months ago
I have the same question. @karimbaig8573 were you able to figure out the answer?
@karimbaig8573 · 6 months ago
Nope.
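As far as I can tell, the loss SFTTrainer logs is not a summarization-specific metric; it is the standard causal language-modeling loss, i.e. token-level cross-entropy for predicting the next token over the formatted text (prompt plus summary). A rough sketch of that computation, under that assumption:

```python
# Rough sketch of the standard causal-LM cross-entropy that SFTTrainer logs:
# predict token t+1 from tokens 0..t, averaged over the sequence.
import torch.nn.functional as F

def causal_lm_loss(logits, input_ids):
    shift_logits = logits[:, :-1, :]   # predictions for positions 1..N
    shift_labels = input_ids[:, 1:]    # tokens actually at positions 1..N
    return F.cross_entropy(
        shift_logits.reshape(-1, shift_logits.size(-1)),
        shift_labels.reshape(-1),
    )
```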
@tahahuraibb5833 · 10 months ago
default_factory=lambda: ["q_proj", "v_proj"] — why did you not add this? Is it because HF does it under the hood?
@venelin_valkov · 10 months ago
I totally forgot about the `target_modules`. I retrained and updated the notebook/tutorial with those. The results are better! Here's the list:

lora_target_modules = ["q_proj", "up_proj", "o_proj", "k_proj", "down_proj", "gate_proj", "v_proj"]

I composed it from here: github.com/huggingface/transformers/blob/f6301b9a13b8467d1f88a6f419d76aefa15bd9b8/src/transformers/models/llama/convert_llama_weights_to_hf.py#L144 Thank you!
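For reference, here is roughly where that list plugs in: the target_modules field of the LoraConfig. The other hyperparameters below are placeholders, not necessarily the values used in the notebook.

```python
# Placeholder hyperparameters — only target_modules reflects the list above.
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=[
        "q_proj", "up_proj", "o_proj", "k_proj",
        "down_proj", "gate_proj", "v_proj",
    ],
)
```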
@williamfussell1956 · 8 months ago
Is there a good resource for understanding 'target modules' for different models? @@venelin_valkov
@cancheers · 10 months ago
Should it be merged_model = trained_model.merge_and_unload()? I cannot run it; the process gets killed.
@rone3243 · 10 months ago
I have this problem as well😢
@kpratik41 · 10 months ago
Were you able to solve this?
@fl028 · 4 months ago
merged_model = trained_model.merge_and_unload()
@MecchaKakkoi · 5 months ago
This looks like a great notebook; however, I always get a "CUDA out of memory" error when it executes the SFTTrainer function. It's fine up until then according to nvidia-smi, but then memory just instantly maxes out. Does anyone know a way around this?
@rishabjain9275 · 5 months ago
Try reducing the sequence length.
@fl028 · 4 months ago
I reduced per_device_train_batch_size to 1.
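For anyone else hitting the OOM, the usual memory-saving knobs are a smaller per-device batch size with gradient accumulation, gradient checkpointing, and a shorter maximum sequence length. An illustrative sketch only: `model`, `tokenizer` and `dataset` are assumed to come from the tutorial, the values are examples, and newer TRL versions move max_seq_length and dataset_text_field into an SFTConfig rather than passing them to SFTTrainer directly.

```python
# Illustrative values only — the usual knobs for fitting SFTTrainer into less VRAM.
from transformers import TrainingArguments
from trl import SFTTrainer

args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=1,   # smallest possible batch
    gradient_accumulation_steps=8,   # keep the effective batch size up
    gradient_checkpointing=True,     # trade compute for memory
    fp16=True,
)

trainer = SFTTrainer(
    model=model,                     # the quantized, LoRA-wrapped model
    train_dataset=dataset,
    args=args,
    tokenizer=tokenizer,
    dataset_text_field="text",
    max_seq_length=512,              # shorter sequences cut activation memory a lot
)
```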
@okopyl · 9 months ago
Why do you use that kind of prompt for training, like `### Instruction`, when in fact Llama 2 prompts look like `[INST]`?
@g1rlss1mp · 9 months ago
I think `[INST]` is the LLaMA 2-chat prompt format; the base model was not fine-tuned with it.
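For reference, the two templates being discussed look roughly like this. The Alpaca-style prompt is just a convention for fine-tuning the base model; the [INST]/<<SYS>> format is what the Llama 2 chat checkpoints were trained on, so it mainly matters if you start from a -chat model. The exact wording below is illustrative, not the notebook's.

```python
# Illustrative wording — only the structure of the two templates matters.
alpaca_style = """### Instruction: Summarize the following conversation.

### Input:
{conversation}

### Summary:
{summary}"""

llama2_chat_style = """<s>[INST] <<SYS>>
You are a helpful assistant that summarizes conversations.
<</SYS>>

{conversation} [/INST] {summary} </s>"""
```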
@skahler · 9 months ago
omg @ 15:06 😂😂😂
@JeeneyAI · 5 months ago
ALL of these tutorials require more dependencies. Can't somebody post how to do this in PyCharm with your own GPU? I can't make any of the tutorials I've found work, and it's just an endless troubleshooting process as to why everything is different in all of them.