LoRA explained (and a bit about precision and quantization)

52,547 views

DeepFindr

1 day ago

▬▬ Papers / Resources ▬▬▬
LoRA Paper: arxiv.org/abs/2106.09685
QLoRA Paper: arxiv.org/abs/2305.14314
Huggingface 8bit intro: huggingface.co/blog/hf-bitsan...
PEFT / LoRA Tutorial: www.philschmid.de/fine-tune-f...
Adapter Layers: arxiv.org/pdf/1902.00751.pdf
Prefix Tuning: arxiv.org/abs/2101.00190
▬▬ Support me if you like 🌟
►Link to this channel: bit.ly/3zEqL1W
►Support me on Patreon: bit.ly/2Wed242
►Buy me a coffee on Ko-Fi: bit.ly/3kJYEdl
►E-Mail: deepfindr@gmail.com
▬▬ Used Music ▬▬▬▬▬▬▬▬▬▬▬
Music from #Uppbeat (free for Creators!):
uppbeat.io/t/danger-lion-x/fl...
License code: M4FRIPCTVNOO4S8F
▬▬ Used Icons ▬▬▬▬▬▬▬▬▬▬
All Icons are from flaticon: www.flaticon.com/authors/freepik
▬▬ Timestamps ▬▬▬▬▬▬▬▬▬▬▬
00:00 Introduction
00:20 Model scaling vs. fine-tuning
00:58 Precision & Quantization
01:30 Representation of floating point numbers
02:15 Model size
02:57 16 bit networks
03:15 Quantization
04:20 FLOPS
05:23 Parameter-efficient fine tuning
07:18 LoRA
08:10 Intrinsic Dimension
09:20 Rank decomposition
11:24 LoRA forward pass
11:49 Scaling factor alpha
13:40 Optimal rank
14:16 Benefits of LoRA
15:20 Implementation
16:25 QLoRA
▬▬ My equipment 💻
- Microphone: amzn.to/3DVqB8H
- Microphone mount: amzn.to/3BWUcOJ
- Monitors: amzn.to/3G2Jjgr
- Monitor mount: amzn.to/3AWGIAY
- Height-adjustable table: amzn.to/3aUysXC
- Ergonomic chair: amzn.to/3phQg7r
- PC case: amzn.to/3jdlI2Y
- GPU: amzn.to/3AWyzwy
- Keyboard: amzn.to/2XskWHP
- Bluelight filter glasses: amzn.to/3pj0fK2

Comments: 45
@khangvutien2538 6 months ago
This is one of the easiest to follow explanations of LoRA that I’ve seen. Thanks a lot.
@DeepFindr 6 months ago
Glad you found it useful!
@vimukthisadithya6239 2 days ago
This is the best explanation of LoRA I have found so far!!!
@InturnetHaetMachine 11 months ago
Another great video. I appreciate that you don't skip on giving context and lay a good foundation. Makes understanding a lot easier. Thanks!
@teleprint-me 10 months ago
I've been scouring for a video like this. Yours is the best explanation so far!
@chrisschrumm6467 11 months ago
Nice job with summarizing transfer learning and LoRA!
@k_1_1_2_3_5 3 months ago
What an excellent video!! Congrats!!
@aurkom 11 months ago
Awesome! Waiting for a video on implementing LoRA from scratch in PyTorch.
@omgwenxx 4 months ago
Amazing video, feel like I finally understood every aspect of LoRA, thank you!
@DeepFindr 4 months ago
Glad it was helpful :)
@mohamedezzat5048 3 months ago
Thanks a lot! Amazing explanation, very clear and straightforward.
@marjanshahi979 5 months ago
Amazing explanation! Thanks a lot!
@aron2922 8 months ago
Another great video, keep it up!
@beyond_infinity16 2 months ago
Explained quite well!
@abhirampemmaraju6339 5 days ago
Very good explanation
@binfos7434 7 months ago
Really Helpful!
@user-in2dd6by9q 2 months ago
great video to explain lora! thanks
@henrywang4010 7 months ago
Great video! Liked and subscribed
@dennislinnert5476 11 months ago
Amazing!
@nomad_3d 11 months ago
Good summary! Next time it would be great if you added headings to the tables shown in the video. Sometimes it is hard to follow. For example, what is computational efficiency? Is it inference time, or the increase in inference time relative to the increase in performance (e.g. accuracy, recall, etc.)? Thanks.
@flecart 2 months ago
good job!
@moonly3781 8 months ago
I'm interested in fine-tuning a Large Language Model to specialize in specific knowledge, for example about fish species, such as which fish can be found in certain seas or which are prohibited from fishing. Could you guide me on how to prepare a dataset for this purpose? Should I structure it as simple input-output pairs (e.g., 'What fish are in the Mediterranean Sea?' -> 'XX fish can be found in the Mediterranean Sea'), or is it better to create a more complex dataset with multiple columns containing various details about each fish species? Any advice on dataset preparation for fine-tuning an LLM in this context would be greatly appreciated. Thanks in advance!
@sougatabhattacharya6703 4 months ago
Good explanation
@msfasha 3 months ago
Brilliant
@unclecode 9 months ago
Yes, it was indeed helpful! Do you have a video on quantization?
@SambitTripathy 1 month ago
After watching many LoRA videos, this one finally made me satisfied. I have a question: in the fine-tuning code, they talk about merging LoRA adapters. What is that? Is it h += x @ (W_A @ W_B) * alpha? Can you mix and match adapters to improve the evaluation score?
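A reply-style sketch of the question above: "merging" means folding the trained low-rank product into the frozen base weight once, so inference afterwards needs no extra matmul. A minimal plain-Python illustration, using tiny made-up matrices; the W_A/W_B naming follows the comment and the alpha/r scaling follows the LoRA paper:

```python
def matmul(X, Y):
    """Multiply matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)]
            for row in X]

def add(X, Y):
    return [[a + b for a, b in zip(rx, ry)] for rx, ry in zip(X, Y)]

def scale(X, s):
    return [[s * v for v in row] for row in X]

W = [[1.0, 0.0], [0.0, 1.0]]   # frozen pretrained weight (d x d)
W_A = [[1.0], [2.0]]           # trained, d x r with r = 1
W_B = [[0.5, 0.5]]             # trained, r x d
alpha, r = 2.0, 1
x = [[1.0, 1.0]]               # one input row vector

# Un-merged LoRA forward pass: h = x W + (alpha / r) * x (W_A W_B)
h = add(matmul(x, W), scale(matmul(x, matmul(W_A, W_B)), alpha / r))

# "Merging the adapter" folds the same update into the base weight once:
# W' = W + (alpha / r) * (W_A W_B); afterwards a plain forward pass suffices.
W_merged = add(W, scale(matmul(W_A, W_B), alpha / r))
assert matmul(x, W_merged) == h  # identical output, one matmul fewer
```

On the second question: because merging is just addition into W, adapters trained separately can in principle also be summed or averaged into the same base weight, but whether that improves an evaluation score is an empirical question.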
@ahmadalis1517 10 months ago
XAI techniques on LLMs are a really interesting topic! Would you consider covering it?
@ad_academy 6 months ago
Good video
@ibongamtrang7247 7 months ago
Thanks
@keremkosif1348 1 month ago
thanks
@Canbay12 3 months ago
Thank you very much for this amazing video. However, although this was probably only for demo purposes of a forward pass after LoRA fine-tuning, the modified forward pass method you've shown might be misleading, since the forward pass of the function is assumed to be entirely linear. So, does the addition of the LoRA fine-tuned weights to the base model weights happen directly within the model weights file (like .safetensors), or can it be done at a higher level in PyTorch or TensorFlow?
@ScoobyDoo-xu6oi 1 month ago
Do you know what ΔW is? How is it defined?
@xugefu 5 months ago
Thanks!
@DeepFindr 5 months ago
Thank you!
@darshandv10 6 months ago
What software do you use to make your videos?
@Menor55672 4 months ago
How do you make the illustrations?
@prashantlawhatre7007 11 months ago
Please make a video on QLoRA.
@ArunkumarMTamil 3 months ago
How does LoRA fine-tuning track changes through the two decomposition matrices? How is ΔW determined?
@ScoobyDoo-xu6oi 1 month ago
same question
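A reply-style sketch of the answer: ΔW is never stored or learned as a full matrix. It is *defined* as the product of the two small trained matrices (ΔW = A·B, with the alpha/r factor applied on top, in the video's convention); A and B are the only new parameters, updated by ordinary gradient descent while W stays frozen. A plain-Python illustration with made-up numbers:

```python
def matmul(X, Y):
    """Multiply matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)]
            for row in X]

# Trained low-rank factors for a 2x2 layer with rank r = 1:
A = [[1.0], [2.0]]       # d x r
B = [[3.0, 4.0]]         # r x d

# Delta W is simply their product, materialised only if you want to merge:
delta_W = matmul(A, B)
assert delta_W == [[3.0, 4.0], [6.0, 8.0]]
# The second row is 2x the first, i.e. rank(delta_W) = 1 by construction.

# The payoff at a more realistic size: a 512x512 layer with r = 8 trains
d, r = 512, 8
assert d * r + r * d == 8_192   # values in A and B, instead of...
assert d * d == 262_144         # ...values a full delta_W would need
```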
@poketopa1234 26 days ago
I think the LoRA update is scaled by the square root of the rank, not the rank.
@alkodjdjd 10 months ago
As clear as mud
@anudeepk7390 5 months ago
Is it a compliment or no? Cause mud is not clear.
@iloos7457 4 months ago
​@@anudeepk7390😂😂😂😂😂😂
@kutilkol 3 months ago
Idiot, read the paper. Lol
@susdoge3767 5 months ago
gold