RELATED LINKS
Paper Title: LoRA: Low-Rank Adaptation of Large Language Models
LoRA Paper: arxiv.org/abs/2106.09685
QLoRA Paper: arxiv.org/abs/2305.14314
LoRA official code: github.com/microsoft/LoRA
Adapters paper (Parameter-Efficient Transfer Learning for NLP): arxiv.org/abs/1902.00751
Parameter-Efficient Fine-Tuning (PEFT) library: github.com/huggingface/peft
HuggingFace LoRA training: huggingface.co/docs/diffusers...
HuggingFace LoRA notes: huggingface.co/docs/peft/conc...
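For reference, here is a minimal sketch of what LoRA fine-tuning looks like with the PEFT library linked above. The model name and hyperparameter values are illustrative placeholders, not taken from the video:

from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")  # any causal LM works

config = LoraConfig(
    r=8,                        # rank of the low-rank update matrices
    lora_alpha=16,              # scaling factor alpha
    target_modules=["c_attn"],  # GPT-2's fused attention projection
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, config)
model.print_trainable_parameters()  # only the LoRA matrices are trainable

With r=8 on GPT-2 this trains well under 1% of the parameters; the base weights stay frozen.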
⌚️ ⌚️ ⌚️ TIMESTAMPS ⌚️ ⌚️ ⌚️
0:00 - Intro
0:58 - Adapters
1:48 - Twitter (twitter.com/ai_bites)
2:13 - What is LoRA
3:17 - Rank Decomposition (see the code sketch after the timestamps)
4:28 - Motivating Paper
5:02 - LoRA Training
6:53 - LoRA Inference
8:24 - LoRA in Transformers
9:20 - Choosing the rank
9:50 - Implementations
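A toy sketch in plain PyTorch (not the official Microsoft code) of the rank decomposition and forward pass covered in the 3:17 and 5:02 chapters: the pretrained weight W stays frozen while a rank-r update B·A is trained, so h = Wx + (alpha/r)·BAx. Dimensions and init values below are illustrative:

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, d_in, d_out, r=8, alpha=16):
        super().__init__()
        # Frozen pretrained weight W (random here for illustration)
        self.weight = nn.Parameter(torch.randn(d_out, d_in), requires_grad=False)
        # Trainable low-rank factors: A is Gaussian-initialized,
        # B starts at zero so BA = 0 and training begins from the base model
        self.A = nn.Parameter(torch.randn(r, d_in) * 0.01)
        self.B = nn.Parameter(torch.zeros(d_out, r))
        self.scale = alpha / r

    def forward(self, x):
        # h = W x + (alpha / r) * B A x
        return x @ self.weight.T + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(768, 768)
h = layer(torch.randn(4, 768))  # output shape: (4, 768)

At inference time the update can be merged once into the frozen weight (W + (alpha/r)·BA), so LoRA adds no extra latency.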
MY KEY LINKS
YouTube: youtube.com/@aibites
Twitter: twitter.com/ai_bites
Patreon: patreon.com/ai_bites
GitHub: github.com/ai-bites