The Attention Mechanism in Large Language Models

79,061 views

Serrano.Academy

10 months ago

Attention mechanisms are central to the recent boom in LLMs.
In this video you'll see a friendly pictorial explanation of how attention mechanisms work in Large Language Models.
This is the first of a series of three videos on Transformer models.
Video 1: The attention mechanism at a high level (this one)
Video 2: The attention mechanism with math: • The math behind Attent...
Video 3: Transformer models • What are Transformer M...
Learn more in LLM University! llm.university

Comments: 158
@arvindkumarsoundarrajan9479 4 months ago
I have been reading the "Attention Is All You Need" paper for like 2 years. Never understood it properly like this before 😮. I'm so happy now 🎉
@RG-ik5kw 10 months ago
Your videos in LLM University are incredible. They build up true understanding after tons of other material that left loose ends. Thank you!
@calum.macleod 10 months ago
I appreciate your videos, especially how you apply a good perspective to understand the high-level concepts before getting too deep into the maths.
@malikkissoum730 7 months ago
Best teacher on the internet, thank you for your amazing work and the time you took to put those videos together
@EricMutta 6 months ago
Truly amazing video! The published papers never bother to explain things with this level of clarity and simplicity, which is a shame because if more people outside the field understood what is going on, we may have gotten something like ChatGPT about 10 years sooner! Thanks for taking the time to make this - the visual presentation with the little animations makes a HUGE difference!
@gunjanmimo 10 months ago
This is one of the best videos on YouTube for understanding ATTENTION. Thank you for creating such outstanding content. I am waiting for the upcoming videos of this series. Thank you ❤
@TheMircus224 6 months ago
These videos where you explain transformers are excellent. I have gone through a lot of material; however, it is your videos that have allowed me to understand the intuition behind these models. Thank you very much!
@pruthvipatel8720 9 months ago
I always struggled with K, Q, V in the attention paper. Thanks a lot for this crystal-clear explanation! Eagerly looking forward to the next videos on this topic.
@aadeshingle7593 9 months ago
One of the best intuitions for understanding multi-head attention. Thanks a lot!❣
@JyuSub 2 months ago
Just THANK YOU. This is by far the best video on the attention mechanism for people that learn visually
@user-bw5np7zz5m 1 month ago
I love your clear, non-intimidating, and visual teaching style.
@SerranoAcademy 1 month ago
Thank you so much for your kind words and your kind contribution! It’s really appreciated!
@nealdavar939 1 month ago
The way you break down these concepts is insane. Thank you
@apah 10 months ago
So glad to see you're still active, Luis! You and StatQuest's Josh Starmer really are the backbone of more ML professionals than you can imagine.
@bobae1357 3 months ago
Best description ever! Easy to understand. I've been struggling to understand attention. Finally I can say I know it!
@saeed577 3 months ago
THE best explanation of this concept. That was genuinely amazing.
@mohandesai 10 months ago
One of the best explanations of attention I have seen without getting lost in the forest of computations. Looking forward to future videos!
@SerranoAcademy 10 months ago
Thank you so much!
@amoghjain 5 months ago
Thank you for making this video series for the sake of a learner and not to show off your own knowledge!! Great anecdotes and simple examples really helped me understand the key concepts!!
@mohameddjilani4109 7 months ago
I really enjoyed how you give a clear explanation of the operations and the representations used in attention
@anipacify1163 3 months ago
Omg this video is on a whole new level. This is prolly the best intuition behind transformers and attention. Best way to understand. I went thro' a couple of videos online and finally found the best one. Thanks a lot! Helped me understand the paper easily.
@ajnbin 5 months ago
Fantastic!!! The explanation itself is a piece of art. The step-by-step approach, the abstractions... Kudos!! Please, more of these.
@kevon217 9 months ago
Wow, clearest example yet. Thanks for making this!
@ccgarciab 3 months ago
This is such a good, clear and concise video. Great job!
@sayamkumar7276 10 months ago
This is one of the clearest, simplest, and most intuitive explanations of the attention mechanism. Thanks for making such a tedious and challenging concept relatively easy to understand 👏 Looking forward to the upcoming 2 videos of this series on attention.
@abu-yousuf 6 months ago
Amazing explanation, Luis. Can't thank you enough for your amazing work. You have a special gift for explaining things. Thanks.
@arulbalasubramanian9474 7 months ago
Great explanation. After watching a handful of videos this one really makes it real easy to understand.
@soumen_das 9 months ago
Hey Luis, you are AMAZING! Your explanations are incredible.
@JorgeMartinez-xb2ks 6 months ago
The best video I've seen on the subject. Thank you so much for this great work.
@karlbooklover 10 months ago
best explanation of embeddings I've seen, thank you!
@docodemo727 6 months ago
This video really teaches you the intuition, much better than the others I went through that just throw formulas at you. Thanks for the great job!
@dr.mikeybee 10 months ago
Nicely done! This gives a great explanation of the function and value of the projection matrices.
@RamiroMoyano 9 months ago
This is amazingly clear! Thanks for your work!
@hyyue7549 5 months ago
If I understand correctly, the transformer is basically an RNN model intercepted by a bunch of different attention layers. The attention layers redo the embeddings every time a new word comes in; the new embeddings are calculated from the current context and the new word, then sent to the feed-forward layer, which behaves like the classic RNN model.
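For readers who want to see that re-embedding step concretely, here is a minimal numpy sketch of single-head attention (illustrative names and dimensions, not the video's code; note the weighted mixing happens for all tokens in parallel rather than recurrently):

    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max(axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)

    # Toy sentence: 3 tokens, each with a 4-dimensional embedding.
    X = np.random.randn(3, 4)

    # Learned projection matrices (random here) map embeddings to Q, K, V.
    Wq, Wk, Wv = (np.random.randn(4, 4) for _ in range(3))
    Q, K, V = X @ Wq, X @ Wk, X @ Wv

    # Each token's new embedding is a similarity-weighted mix of all tokens.
    weights = softmax(Q @ K.T / np.sqrt(K.shape[-1]))  # (3, 3) attention weights
    context = weights @ V                              # (3, 4) context vectors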
@agbeliemmanuel6023 10 months ago
Wooow thanks so much. You are a treasure to the world. Amazing teacher of our time.
@drdr3496 3 months ago
This is a great video (as are the other 2), but one thing that needs to be clarified is that the embeddings themselves do not change (by attention @10:49). The gravity-pull analogy is appropriate, but the visuals give the impression that the embedding weights change. What changes is the context vector.
@pranayroy 3 months ago
Kudos for your efforts and the clear explanation!
@dragolov 10 months ago
Deep respect, Luis Serrano! Thank you so much!
@alijohnnaqvi6383 4 months ago
What a great video man!!! Thanks for making such videos.
@sari54754 6 months ago
The easiest-to-understand video on the subject I've seen.
@iliasp4275 13 days ago
Excellent video. Best explanation on the internet!
@davutumut1469 10 months ago
amazing, love your channel. It's certainly underrated.
@justthefactsplease 2 months ago
What a great explanation on this topic! Great job!
@hkwong74531 4 months ago
I subscribed to your channel immediately after watching this video. It's the first video I've watched from your channel, but also the first that made me understand why embeddings need to be multi-headed. 👍🏻👍🏻👍🏻👍🏻
@sathyanukala3409 3 months ago
Excellent explanation. Thank you very much.
@perpetuallearner8257 10 months ago
You're my fav teacher. Thank you Luis 😊
@user-dg2gt2yq3c 2 months ago
It's so great, I finally understand these QKVs; they bothered me for so long. Thank you so much!!!
@notprof 8 months ago
Thank you so much for making these videos!
@VenkataraoKunchangi-uy4tg 20 days ago
Thanks for sharing. Your videos are helping me in my job. Thank you.
@prashant5611 9 months ago
Amazing! Loved it! Thanks a lot Serrano!
@kafaayari 10 months ago
Well, the gravity example is how I finally understood this after a long time. You are a true legend.
@tvinay8758 10 months ago
This is a great explanation of the attention mechanism. I have enjoyed your math for machine learning course on Coursera. Thank you for creating such wonderful videos.
@orcunkoraliseri9214 3 months ago
Wooow. Such a good explanation of embeddings. Thanks 🎉
@aaalexlit 8 months ago
That's an awesome explanation! Thanks!
@eddydewaegeneer9514 1 month ago
Great video and a very intuitive explanation of the attention mechanism.
@erickdamasceno 10 months ago
Great explanation. Thank you very much for sharing this.
@cyberpunkdarren 3 months ago
Very impressed with this channel and presenter
@ignacioruiz3732 3 months ago
Outstanding video. Amazing for gaining intuition.
@debarttasharan 10 months ago
Incredible explanation. Thank you so much!!!
@caryjason4171 2 months ago
This video helps to explain the concept in a simple way.
@LuisOtte-pk4wd 4 months ago
Luis Serrano, you have a gift for explaining! Thank you for sharing!
@jeffpatrick787 5 months ago
This was great - really well done!
@bbarbny 13 days ago
Amazing video, thank you very much for sharing!
@satvikparamkusham7454 10 months ago
This is the most amazing video on "Attention Is All You Need".
@orcunkoraliseri9214 3 months ago
I watched a lot about attention. You are the best. Thank you, thank you. I am also learning how to explain a subject from you 😊
@DeepakSharma-xg5nu 3 months ago
I did not even realize this video is 21 minutes long. Great explanation.
@drintro 4 months ago
Excellent description.
@vishnusharma_7 10 months ago
You are great at teaching, Mr. Luis.
@maysammansor 3 months ago
You are a great teacher. Thank you.
@MikeTon 4 months ago
This clarifies EMBEDDED matrices:
- In particular, the point that a book isn't just a RANDOM array of words; matrices are NOT a RANDOM array of numbers
- The visualization of the transform and shearing really drives home the V, Q, K aspect of the attention matrix that I have been STRUGGLING to internalize
Big, big thanks for putting together this explanation!
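The "matrix as transformation" point is easy to check in code. A minimal sketch, assuming 2-D embeddings like the video's plots; the shear matrix is just one example of the kind of linear map a learned Q, K, or V matrix can apply:

    import numpy as np

    word = np.array([1.0, 2.0])     # a 2-D word embedding

    shear = np.array([[1.0, 0.5],   # an example linear transformation:
                      [0.0, 1.0]])  # a shear of the embedding plane

    print(word @ shear)             # the same word, moved by the map: [1.  2.5]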
@bankawat1 8 months ago
Thanks for the amazing videos! I am eagerly waiting for the third video. If possible, please do explain how the K, Q, V matrices are used on the decoder side. That would be a great help.
@SulkyRain 5 months ago
Amazing explanation 🎉
@jayanthkothapalli9.2 2 months ago
Wow wow wow! I enjoyed the video. Great teaching sir❤❤
@user-uq7kc2eb1i 5 months ago
This video is really clear!
@bengoshi4 10 months ago
Yeah!!!! Looking forward to the second one!! 👍🏻😎
@WhatsAI 10 months ago
Amazing explanation Luis! As always...
@SerranoAcademy 10 months ago
Merci Louis! :)
@traveldiaries347 7 months ago
Very well explained ❤
@thelookerful 9 months ago
This is wonderful !!
@surajprasad8741 6 months ago
Thanks a lot Sir, clearly understood.
@naimsassine 5 months ago
super good job guys!
@serkansunel 4 months ago
Excellent job
@khameelmustapha 10 months ago
Brilliant explanation.
@EigenA 5 months ago
Great video!
@bravulo 6 months ago
Thanks. I also saw your "Math behind" video, but the third in the series is still missing.
@SerranoAcademy 5 months ago
Thanks! The third video is out now! kzfaq.info/get/bejne/p8eHgLKKy5rWmWw.html
@divikchoudhary8873 25 days ago
This is just Gold!!!!!
@muhammadsaqlain3720 7 months ago
Thanks my friend.
@sukhpreetlotey1172 3 months ago
First of all, thank you for making these great walkthroughs of the architecture. I would really like to support your effort on this channel. Let me know how I can do that. Thanks!
@SerranoAcademy 2 months ago
Thank you so much, I really appreciate that! Soon I'll be implementing subscriptions, so you can subscribe to the channel and contribute (also get some perks). Please stay tuned, I'll publish it here and also on social media. :)
@ProgrammerRajaa 10 months ago
Your videos are so awesome, please upload more videos. Thanks a lot!
@epistemophilicmetalhead9454 17 days ago
Word embeddings: a vectorial representation of a word. The values in a word embedding describe various features of the word. Similar words' embeddings have a higher cosine similarity.
Attention: the same word may mean different things in different contexts. How similar the word is to the other words in the sentence gives you an idea of what it really means. You start with an initial set of embeddings, take the other words of the sentence into account, and come up with new embeddings (trainable parameters) that describe the word better in context. Similar/dissimilar words gravitate towards/away from each other, as their updated embeddings show.
Multi-head attention: take multiple possible transformations of the current embeddings and train a neural network to pick the best ones (contributions are scaled by how good the embeddings are).
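Those notes map almost line-for-line onto code. Below is a minimal multi-head sketch under the same reading (random stand-ins for learned matrices, illustrative dimensions): each head projects the embeddings with its own matrices and attends separately, and a final learned matrix combines and scales the heads' contributions.

    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max(axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)

    def attention_head(X, Wq, Wk, Wv):
        # One head: project, score tokens by similarity, mix the value vectors.
        Q, K, V = X @ Wq, X @ Wk, X @ Wv
        return softmax(Q @ K.T / np.sqrt(K.shape[-1])) @ V

    X = np.random.randn(5, 8)  # 5 tokens, 8-dimensional embeddings
    heads = [attention_head(X,
                            np.random.randn(8, 4),   # Wq for this head
                            np.random.randn(8, 4),   # Wk
                            np.random.randn(8, 4))   # Wv
             for _ in range(2)]                      # 2 heads, 4 dims each

    Wo = np.random.randn(8, 8)                  # learned output projection
    out = np.concatenate(heads, axis=-1) @ Wo   # (5, 8) combined new embeddings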
@waelmashal7594 22 days ago
Great video
@ernesttan8090 5 months ago
wonderful!
@shashankshekharsingh9336 1 month ago
Thank you sir 🙏, love from India 💌
@TemporaryForstudy 9 months ago
Oh my god, I never understood V, K, Q as matrix transformations before. Thanks Luis, love from India.
@deeplearningwithjay 3 months ago
You are amazing!
@preetijani9658 6 months ago
Amazing
@ramelgov7891 4 months ago
Amazing explanation! What software do you use to make the visuals (graphs, transformations, etc.)? Thanks!
@SerranoAcademy 4 months ago
Thank you so much! I use Keynote for the slides.
@samirelzein1095 10 months ago
The great Luis!
@benhargreaves5556 5 months ago
Unless I'm mistaken, I think the linear transformations in this video incorrectly show the 2D axes changing position along with the object; in fact, the axes would stay exactly the same, with the 2D object, for example, rotating around them.
@today-radio-in-the-zone 1 month ago
Thanks for your great effort to make people understand it. However, I would like to ask one thing: you explained that V is the scores. Scores of what? My opinion is that V is the key vector, so that V maps the QKᵀ matrix back to vector space. Please make this clear for better understanding. Thanks!
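For reference, the standard scaled dot-product formula from "Attention Is All You Need" is the clearest way to read this: softmax(QKᵀ) supplies the weights (the "scores" of each word against each other word), and V supplies the value vectors that those weights average, which is what maps the QKᵀ similarity table back to a vector space:

\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\left(\frac{Q K^{\top}}{\sqrt{d_k}}\right) V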
@liminal6823 10 months ago
Fantastic.
@tristanwheeler2300 8 months ago
Thanks so much for making this video. It's difficult to find people explaining these concepts at a higher level. One thing I missed was how we are able to have two different apples in the matrix. If something like this is possible, then I'm guessing we have several instances of every single word floating around: the ones with several different contextual potentials widely scattered in the matrix, while the ones without so much variation in meaning sit closer together. So is this process, where the positions of the words in the matrix are reevaluated based on the "gravitational pull" from the associations of the other words in the sentence, also a process that decides whether to keep using an existing instance of the word or to create an entirely new version of the word at a new position in the matrix?
@samore11 7 months ago
Was the example of attention using the apples self-attention or just attention?