In this video, I go over how LoRA works and why it's crucial for affordable Transformer fine-tuning.
LoRA (Low-Rank Adaptation) slashes the cost of fine-tuning huge language models: it freezes the pretrained weights and trains only a low-rank decomposition added to each weight matrix, cutting trainable parameters and GPU memory dramatically while matching full fine-tuning quality.
🔗 LoRA Paper: arxiv.org/pdf/2106.09685.pdf
🔗 Intrinsic Dimensionality Paper: arxiv.org/abs/2012.13255
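The idea from the video can be sketched in a few lines of NumPy. This is an illustrative toy, not the paper's implementation: the shapes, the `alpha` scaling, and the zero-init of `B` follow the LoRA paper's setup, but all variable names here are my own.

```python
import numpy as np

# Toy LoRA-adapted linear layer. The frozen pretrained weight W0
# (d_out x d_in) is never updated; only the low-rank factors
# A (r x d_in) and B (d_out x r) would be trained, shrinking the
# trainable parameter count from d_out*d_in to r*(d_in + d_out).
rng = np.random.default_rng(0)
d_in, d_out, r = 512, 512, 8        # rank r is much smaller than d_in, d_out
alpha = 16                          # LoRA scaling hyperparameter

W0 = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01    # trainable, small random init
B = np.zeros((d_out, r))                     # trainable, zero init => delta-W = 0 at start

def lora_forward(x):
    # h = W0 x + (alpha / r) * B A x
    # The full d_out x d_in update B @ A is never materialized.
    return W0 @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
h = lora_forward(x)

full_params = d_out * d_in
lora_params = r * (d_in + d_out)
print(f"trainable params: {lora_params} (LoRA) vs {full_params} (full fine-tune)")
```

Because `B` starts at zero, the adapted layer initially reproduces the frozen model exactly, and the low-rank update only kicks in as `A` and `B` are trained. With these numbers LoRA trains 8,192 parameters instead of 262,144, about 3% of the full matrix.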
About me:
Follow me on LinkedIn: / csalexiuk
Check out what I'm working on: getox.ai/