The EASIEST way to finetune LLAMA-v2 on your local machine!

167,328 views

Abhishek Thakur

11 months ago

In this video, I'll show you the easiest, simplest, and fastest way to fine-tune llama-v2 on your local machine on a custom dataset! You can also use the tutorial to train/finetune any other Large Language Model (LLM). In this tutorial, we will be using autotrain-advanced.
AutoTrain Advanced github repo: github.com/huggingface/autotr...
Steps:
Install autotrain-advanced using pip:
- pip install autotrain-advanced
Setup (optional, required on Google Colab):
- autotrain setup --update-torch
Train:
- autotrain llm --train --project_name my-llm --model meta-llama/Llama-2-7b-hf --data_path . --use_peft --use_int4 --learning_rate 2e-4 --train_batch_size 12 --num_train_epochs 3 --trainer sft
If you are on the free version of Colab, use this model instead: huggingface.co/abhishek/llama.... This is a smaller sharded version of llama-2-7b-hf by Meta.
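For reference, a sketch of what the training file can look like. This is an assumption based on autotrain's defaults rather than the exact file from the video: the LLM trainer reads a CSV from the --data_path directory (typically named train.csv, depending on the version) with a single "text" column, and the ### Instruction / ### Input / ### Response template below is just one common convention:

text
"### Instruction: Summarize the input. ### Input: Llamas are South American camelids kept as pack animals. ### Response: Llamas are domesticated pack animals from South America."

Each row holds one complete training example, with the prompt and the desired response in the same field.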
Please subscribe and like the video to help me keep motivated to make awesome videos like this one. :)
My book, Approaching (Almost) Any Machine Learning Problem, is available for free here: bit.ly/approachingml
Follow me on:
Twitter: twitter.com/abhi1thakur
LinkedIn: linkedin.com/in/abhi1thakur
Kaggle: kaggle.com/abhishek

Comments: 296
@linuxmanju 5 months ago
For anyone who comes across this in 2024 (Jan): with the new autotrain version, the command switches are autotrain llm --train --project-name josh-ops --model mistralai/Mistral-7B-Instruct-v0.2 --data-path . --use-peft --quantization int4 --lr 2e-4 --train-batch-size 12 --epochs 3 --trainer sft. Great video, thanks Abhishek
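(Side note for future readers: if the flags drift again in a later release, running autotrain llm --help should print the exact switches your installed version accepts.)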
@BrusnickiRoberto 5 months ago
After finetuning it, how do you run it?
@vinodb4339 15 days ago
@BrusnickiRoberto Hey, did you manage to run it?
@BrusnickiRoberto 15 days ago
@vinodb4339 No
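For anyone stuck at this step, a minimal inference sketch. It assumes training ran as in the video (--project_name my-llm with --use_peft), so the project folder holds a LoRA adapter, and that the tokenizer was saved there too (otherwise load it from the base model); the prompt template is illustrative:

import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# load the base model, then attach the fine-tuned LoRA adapter from ./my-llm
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "my-llm")
tokenizer = AutoTokenizer.from_pretrained("my-llm")

# prompt in the same format that was used for training
prompt = "### Instruction: What is a llama? ### Response:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))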
@tarungupta83 11 months ago
That's awesome, nothing better than this way of training a large language model. Super easy ❤
@tarungupta83 11 months ago
Appreciate it; please continue making such videos 🎉
@syedshahab8471 11 months ago
Thank you for the on-point tutorial.
@andyjax100 3 months ago
Keeping it this simple is something very few people are able to do. Very well explained; this can be understood even by a beginner. At least the execution, if not the intuition behind it. Kudos
@WeDuMedia 2 months ago
Incredibly helpful video, I appreciate that you took the time to create this! Great stuff
@charleskarpati1129 7 months ago
Thank you Abhishek! This is phenomenal.
@MasterBrain182 11 months ago
Astonishing content Man 🔥🔥🔥 🚀
@AICoffeeBreak 11 months ago
Amazing, tutorials at light speed! Llama 2 was just released! 😮
@abhishekkrthakur 11 months ago
🙏🏽
@AIOdysseyhub 11 months ago
😂😂Yeah exactly
@weebprogrammer2979 11 months ago
This man is a genius lol
@nirsarkar 11 months ago
Excellent, thank you so much. I will try.
@abhishekkrthakur 11 months ago
Please subscribe and like the video to help me keep motivated to make awesome videos like this one. :)
@arpitghatiya7214 10 months ago
Please make a video on Llama2 + RAG (instead of finetuning)
@bryanvann 11 months ago
Thanks for the tutorial! A couple of questions for you. Is there an approach you're using to test quality and verify that the training data has influenced the weights in the model sufficiently to learn the new task? And second, can you use the same approach for unstructured training data, such as using a large corpus of private data to do domain adaptation?
@JagadishSongapagounder 11 months ago
Great Job :)
@sohailhosseini2266 9 months ago
Thanks for sharing!
@dr.mikeybee 7 months ago
Nice job!
@stevenshaw124 11 months ago
what kind of GPUs do you have? how big was your dataset and how long did it take to train? what is the smallest fine-tuning data set size that would be reasonable?
@jdoejdoe6161 11 months ago
Hi Abhishek, your method is inspiring and commendable. How do we read the CSV or JSON training dataset we prepared instead of the Hugging Face dataset you used?
@deltagamma1442 11 months ago
How do you set the training data? I see different people using different formats. Does it matter, or is the only requirement that it has to be structured meaningfully?
@abramswee 11 months ago
thanks for sharing!
@ajaytaneja111 11 months ago
Hi Abhishek, is autotrain using LoRA or prompt tuning as the PEFT technique?
@user-nj7ry9dl3y 10 months ago
For fine-tuning of the large language models (llama-2-13b-chat), what should be the format (.txt/.json/.csv) and structure (e.g., an Excel or Docs file, prompt and response, or instruction and output) of the training dataset? And also, how do you prepare or organise a tabular dataset for training?
@ConsultingjoeOnline 4 months ago
How do you convert it to work with Ollama? I set up the model file and it doesn't seem to know anything from my training.
@xthefoetusx 11 months ago
Great video! Would be great if in some future vid you could go into depth on the training hyperparameters and perhaps also talk about what size your custom datasets should be.
@abhishekkrthakur 11 months ago
sometimes I do that. however, this model would have taken wayy too long to train. im training a model as i type here and if i get good results ill share both model and params 🙂
@emrahe468 11 months ago
@abhishekkrthakur Guess no good luck with the training :(
@boujlidamohamed 11 months ago
First, thank you for the great tutorial. I have one question: I am trying to finetune the model on Japanese, do you have any advice for that? I tried the same script as you did, but it didn't work; it produced some gibberish after the training finished. I am guessing it is a tokenizer problem, what do you think?
@aaronliruns 10 months ago
Great tutorial! Can you also put up a video teaching how to merge the fine-tuned weights into the base model and do inference? Would like to see an end-to-end course. Thank you!
@adamocheri3513 10 months ago
+1 on this question !!!!
@devyanshrastogi 8 months ago
Any updates, guys? I really want to know how to merge the fine-tuned model with the base model and do the inference. Do let me know if you have any resources or insights about the same.
@kopamed5024 5 months ago
@devyanshrastogi Also need this answered. Have you guys had any success?
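In case it helps anyone still waiting: peft ships a merge helper, so a sketch like the following (assuming the adapter sits in ./my-llm and you have enough memory to hold the base model) folds the LoRA weights into the base model so it can be served without peft:

import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM

# load the base model, attach the adapter, then merge the LoRA deltas in-place
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf", torch_dtype=torch.float16
)
merged = PeftModel.from_pretrained(base, "my-llm").merge_and_unload()
merged.save_pretrained("my-llm-merged")  # a plain checkpoint, loadable without peft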
@spookyrays2816 11 months ago
Thank you brother
@cloudsystem3740 11 months ago
thank you very much
@prachijadhav9098 11 months ago
Nice video Abhishek! I am curious about custom data for LLMs. What is the ideal (good-quality) data size (e.g., number of rows) to fine-tune these models for good performance? Not necessarily big data, of course. Thanks!
@manojreddy7618 11 months ago
Thank you for the video. I am new to this, so I am trying to set it up on my Windows PC. When I try to install the latest version of autotrain-advanced==0.6.2, I get an error saying triton==2.0.0.post1 cannot be found, which I believe is only available on Linux. So is it possible to use autotrain-advanced on Windows?
@_Zefyr_ 9 months ago
Hi, I have a question: is it possible to use autotrain without CUDA, with ROCm support for AMD GPUs?
@as-kw8dt 14 days ago
If there are multiple input values, how do they have to be inserted in the CSV data?
@tal7atal7a66 3 months ago
thanks bro ❤
@YuniYoshi 7 months ago
There is only one thing I want to see: you using the final result to prove it actually works. Thank you.
@oliversilverstein1221 10 months ago
Hello, thank you. I really need to know: does this pad appropriately? Also, how does it internally split the text into prompt and completion? Can I make up roles like ### System? Does it complete only the last message?
@0xeb- 11 months ago
How do you deal with responses in the dataset that have newline characters?
@kishalmandal5676 11 months ago
How can I load the model for inference if I stop training after 1 epoch out of 3?
@sd_1989 11 months ago
Thanks!
@deepakkrishna837 8 months ago
Hi, when we tried fine-tuning an MPT LLM using autotrain, we got the error ValueError: MPTForCausalLM does not support gradient checkpointing. Any help you can offer on this, please?
@mautkajuari 11 months ago
Informative video. Hopefully one day I will get a task that requires me to finetune an LLM.
@abhishekkrthakur 11 months ago
or you can just do it for fun 🤗
@anantkabra6825 8 months ago
Hello, I am getting this error, can someone please help me out with it: ValueError: Batch does not contain any data (`None`). At the end of all iterable data available before expected stop iteration.
@user-we6vc9co1b 11 months ago
Do you have to use [INST]...[/INST] to indicate the instructions? I think the original Llama 2 model was trained with these tags, so I am a bit puzzled whether you have to use the tags in the CSV or they are added internally.
@abhishekkrthakur 11 months ago
in this video, im finetuning the base model. you can finetune it anyway you want. you can even take the chat model and finetune it this way. if you are using a different format for finetuning, you must use the same format while inference in order to get the best results.
@jeremyarancio1683 11 months ago
Nice vid! Should we set input-token labels to -100 to focus the training on the prediction? I see no one doing it.
@r34ct4 11 months ago
Thanks for the comprehensive tutorial. Can this be done using chat logs to build a clone of your friend? I have done this with GPT3.5 finetuning using prompt->response. The prompts are questions generated by ChatGPT based on the chat log message. Can the same thing be done with Instruction->Input->Response? Thank you very much man.
@vasuchandra 11 months ago
Thanks for the tutorial. On a Linux 5.15.0-71-generic #78-Ubuntu SMP x86_64 x86_64 x86_64 GNU/Linux machine, I get the following error when training the LLM with the small dataset: File "/usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py", line 2819, in from_pretrained raise ValueError( ValueError: Some modules are dispatched on the CPU or the disk. Make sure you have enough GPU RAM to fit the quantized model. If you want to dispatch the model on the CPU or the disk while keeping these modules in 32-bit, you need to set `load_in_8bit_fp32_cpu_offload=True` and pass a custom `device_map` to `from_pretrained`. What could be the problem? Is it possible to share the data.csv that you have with a single row that I can take as a reference to test my own data?
@jaivalani4609 11 months ago
Thank you. What is the difference between instruction and input?
@am0x01 5 months ago
In my experiment, it does not create the config.json. What am I doing wrong?
@mariusirgens5555 10 months ago
Superb video! Does autotrain allow exporting the finetuned model as a GGML file? Or can it be used with a GGML file?
@aakritisrivastava4789 11 months ago
I am trying to use the model generated by autotrain with from_pretrained, but it's giving me an error: does not appear to have a file named config.json. Does anyone have the code for predicting, or can you help me with this issue?
@unclecode 11 months ago
Beautiful content. I have a side question: what tool are you using to get "copilot"-like suggestions in your terminal? Thx again for the video.
@jessem2176 11 months ago
I use Hugging Face's copilot. It works pretty well, is super easy to set up, and is free.
@ahmetekizx 8 months ago
@jessem2176 Thanks for the recommendation, but did you mean the HuggingFace Personal Copilot blog?
@eltoro2339 11 months ago
I added the push_to_hub command but it didn't push. How do I use it to test the output?
@srinivasanm48 2 months ago
When will I be able to see the model that I have trained? Once all the training is complete?
@sebastianandrescajasordone8501 11 months ago
I am running out of memory when testing it on the free version of Google Colab. Did you use the exact same tuning parameters as described in the video?
@abhishekkrthakur 11 months ago
yes. you can reduce batch size. note, you need to use different model path if you are on colab or it will run out of memory. see description for more details
@FlyXing16 10 months ago
Thanks, Kaggle grandmaster :) You've got a channel.
@Truizify 11 months ago
Thanks for the tutorial! How would you modify the code to train on a dataset containing a single column of text? i.e. trying to perform domain-specific additional pretraining? I would remove the peft portion to do full finetuning, anything else?
@sanjaykotabagi4407 11 months ago
Hey, can we connect? I need help on a similar topic. We can discuss more...
@user-bq2vt4zz2e 10 months ago
Hi, I'm looking into something similar. Did you find a good way to do this?
@elmuchoconrado 10 months ago
As always, very useful and short, without wasting anyone's time. Thank you. I'm just a bit confused about the prompt formatting you used here ("### Instruction: ### Input:... etc."), while the official Llama format is "[INST] {{ system_prompt }}{{ user_message }} [/INST]" and TheBloke's page says "SYSTEM: {system_prompt} USER: {prompt} ASSISTANT:".
@ahmetekizx 8 months ago
I think this isn't mandatory, it is a suggestion.
@safaelaqrichi9096 11 months ago
Thank you for this interesting video. How could we change the encoding to 'latin-1' in order to train on French-language text? Thank you.
@utoubp 5 months ago
Hi Abhishek, much appreciated. How would things change if we were to use simple fine-tuning? That is, just a single large code file to learn from, to tune code-llama, phi2, etc.
@EduardoRodriguez-fu4ry 11 months ago
Great tutorial! Thank you! Maybe I missed it, but at which point do you enter your HF token?
@abhishekkrthakur 11 months ago
You don't. You log in using the "huggingface-cli login" command. There's also a similar command for notebooks and Colab. :)
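Both variants, for reference (the notebook helper comes from the huggingface_hub package, which autotrain pulls in):

# in a terminal
huggingface-cli login

# in a notebook / Colab cell
from huggingface_hub import notebook_login
notebook_login()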
@returncode0000 11 months ago
I just bought an RTX 4090 Founders Edition. Could you give a particular example where I could run into limits with this card when training LLMs locally? I personally think that I'm safe for the next few years and will not run into any problems.
@sandeelg_lite 11 months ago
I trained a model using autotrain in the same way you suggested, and the model file is stored. Now I need to use this model for prediction. Can you shed some light on this as well?
@Sehyo 11 months ago
How can I turn this into a GPTQ version after finetuning?
@jessem2176 11 months ago
Great video. I love it and can't wait to try it. Now that Llama 2 is out, is it better to fine-tune a model or try to create your own model?
@nehabidkar7377 10 months ago
Thanks for this great explanation. Can you provide the link to your training data?
@yashvardhanjain1968 11 months ago
Thanks! Is there a way to push the trained model to the hub after it's trained, without using --push_to_hub during training? Also, when I try to use push to hub, I get a "you don't have rights to create a model under this namespace" error. I am using a read token to access the llama model. Do I need to change it to a write token? Is it possible to use two separate tokens? (Sorry, I'm super new to Huggingface.) Any help is much appreciated. Thanks!
@abhishekkrthakur 11 months ago
yes. you need to use a write token. you can remove push to hub and then push the model manually using git commands if you wish
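A sketch of the manual route (assuming a write token; the repo name my-llm and paths are illustrative):

huggingface-cli login                       # paste a WRITE token
huggingface-cli repo create my-llm          # once, under your namespace
git lfs install
git clone https://huggingface.co/<your-username>/my-llm hub-copy
cp my-llm/* hub-copy/                       # copy the autotrain output into the clone
cd hub-copy && git add . && git commit -m "add fine-tuned model" && git push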
@protectorate2823 10 months ago
Hello @abhishekkrthakur, can I train summarization models with autotrain-advanced?
@abdellaziztekaya8596 5 months ago
Where can I find the code you wrote and your dataset? I would like to use it as an example for testing.
@aurkom 11 months ago
How to change this for tasks like classification?
@0xeb- 11 months ago
How do you shard the model, as you mentioned towards the end?
@kunalpatil7705 10 months ago
Thanks for the video. I have a doubt: how can I make a package of it so others can also use it offline by just installing the application?
@ashishtater3363 2 months ago
I have the LLM downloaded; can I fine-tune it without downloading from Hugging Face?
@jas5945 11 months ago
Very good tutorial. On what machine are you running this? I am trying to run it on a MacBook Pro M1 but I keep getting "ValueError: No GPU found. Please install CUDA and try again." I have tried to do this directly on Hugging Face and got "error 400: bad request"... so I cloned autotrain and ran it locally... still getting error 400. Do you have any pointers?
@nirsarkar 9 months ago
Same error
@DevanshiSukhija 11 months ago
How is your IPython giving suggestions? I want the same setup. Please make a video on these types of setups that assist in coding and other processes.
@rajhammeersinghhada72 6 months ago
Why do we need both --mixed-precision and --quantization? Aren't they doing the same thing?
@mallorywestwood 11 months ago
Can we do this on a CPU? I am using a GGML model. Please share your thoughts.
@abdalgaderabubaker6078 11 months ago
Any idea how to fine-tune it on an Apple M1/M2 chip? I just have installation issues with autotrain-advanced 😢
@allentran3357 11 months ago
Would love to know how to do this as well!
@jas5945 11 months ago
Bumping because I'm running into so many issues with M1. Cannot believe how few resources are available for M1 right now, given that macOS is so widely used in data science.
@takuyayzu 11 months ago
Hi, I've tried running auto trainer with the sharded model on a dataset I created & uploaded on HF. However, when running the auto trainer I quickly get the following error: torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 1.50 GiB (GPU 0; 14.75 GiB total capacity; 10.15 GiB already allocated; 1.40 GiB free; 12.30 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF I'm using the free tier of Google Colab and I know you mentioned that it should be working in it. Do you have any idea what might cause this and what can be done to solve this issue?
@takuyayzu 11 months ago
It seems changing the value of "train_batch_size" helped solve the issue. I changed it from 12 to 4, as I've seen other examples/guides use that as well. I'll try with 4 first and then maybe higher values (6, 8, etc.).
@agostonhuszka8237 11 months ago
Thanks for the tutorial! How can I fine-tune the language model with a domain-specific unlabeled dataset to improve performance on that specific domain? Is it effective to leave the instruction and input empty and only use domain-specific text for the output?
@sanjaykotabagi4407 11 months ago
Hey, can we connect? I need help on a similar topic. We can discuss more...
@rohitdaddekar2900 11 months ago
Hey, could you guide us on how to train a custom dataset on Llama 2? How should we prepare our dataset for training?
@manishsharma2211 11 months ago
The way Abhishek side-eyes before stopping and resuming the video is so crazy 🤣🤣😅
@abhishekkrthakur 11 months ago
lol. big screen. button too far 🤣
@crimsonalchemist856 11 months ago
Hey Abhishek, Thanks for sharing this amazing tutorial. Can I do this on my RTX 3070Ti 8GB GPU? If yes, what batch size would be preferable?
@abhishekkrthakur 11 months ago
8GB sounds a bit low for this. maybe try bs=1 or 2? but tbh, im not sure if it will work. Might work fine for a smaller model!
@nirsarkar 10 months ago
Can this be done on Apple Silicon? I have an M2 with 24 GB memory.
@StEvUgnIn 5 months ago
I did the same with Llama-2, but --push_to_hub doesn't push at all.
@bhaveshbadjatya2914 10 months ago
When trying to use the Inference API for the finetuned model, I am getting 'error': "Could not load model XXXX/XXXX with any of the following classes: (,)". How do I resolve this?
@SorinBuda 11 months ago
Any idea why Autotrain says `llm` is not available, only app? AutoTrain advanced CLI: error: invalid choice: 'llm' (choose from 'app')
@abhishekkrthakur 11 months ago
please update to latest version
@eunoia7151 11 months ago
How do I use a dataset from the Hugging Face Hub?
@oxydol3456 2 months ago
Which machine is recommended for fine-tuning LLaMA? Windows?
@codeguero8933 11 months ago
If I understand correctly, the model is trained on the local machine and saved locally too? Or does the model stay on Hugging Face servers?
@abhishekkrthakur 11 months ago
its training locally. its also saved locally. if you use --push-to-hub arg, it will push model to huggingface servers
@abhisekpanigrahi1033 10 months ago
How can we create a sharded version of Llama 2?
@marioricoibanez144 11 months ago
Hey! Fantastic video, but I do not understand the division of the model into smaller chunks to make it work in the free version of Colab. Can you explain it? Thank you!
@abhishekkrthakur 11 months ago
chunks are loaded into ram first. since larger chunks didnt fit in ram with all the other stuff, i created a version with smaller shards :)
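For anyone asking above how to make such shards themselves: transformers lets you choose the shard size when saving. A sketch, assuming enough CPU RAM to load the model once; the output path is illustrative:

from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
# write checkpoint shards of at most ~2 GB instead of the default (around 10 GB)
model.save_pretrained("llama-2-7b-small-shards", max_shard_size="2GB")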
@shaileshtiwari8483 10 months ago
Is a GPU machine necessary for llama 7b to be trained?
@chichen8425 3 months ago
I know it could be too much, but could you also make a video on how to prepare the data? I have 'question' and 'answer' pairs, but I am struggling to turn them into a trainable dataset in that kind of CSV so I could use it!
@ravigarimella3166 11 months ago
I am getting a "No GPU found. Please install CUDA and try again." error. Even after installing CUDA I get this error. When I check with nvcc -V, I get the NVIDIA CUDA Compiler Driver message. Is there an issue with the path?
@satyamgupta2182 9 months ago
@ravigarimella3166 Did you come across a solution?
@BTC198 11 months ago
What GPUs were you running?
@AkK-iq3bz 1 month ago
How do you get access to the Llama model?
@jdoejdoe6161 11 months ago
Please show how you used the trained model for inference.
@BrusnickiRoberto 5 months ago
Yes. Please!
@dhruvilshah7770 3 months ago
Can you make a video on fine-tuning on Apple Silicon Macs?
@jaivalani4609 11 months ago
How do we evaluate the model? Is there any autotrain API for that?
@prathampundir5924 1 month ago
Can I also train Llama 3 with these steps?
@DavidJones-cw1ip 9 months ago
Any chance you have the Python scripts available somewhere? Thanks in advance.
@Sanguen666 10 months ago
Where is the load-dataset part in your code and in the Jupyter notebook? Did you even bother testing it before publishing?