Transformer Neural Networks Derived from Scratch

123,662 views

Algorithmic Simplicity

#transformers #chatgpt #SoME3 #deeplearning
Join me on a deep dive to understand the most successful neural network ever invented: the transformer. Transformers, originally invented for natural language translation, are now everywhere. They have quickly taken over the world of machine learning (and the world more generally) and are now used for almost every application, not least ChatGPT.
In this video I take a more constructive approach to explaining the transformer: starting from a simple convolutional neural network, I step through all of the changes that need to be made, along with the motivation for each change.
*By "from scratch" I mean "from a comprehensive mastery of the intricacies of convolutional neural network training dynamics". Here is a refresher on CNNs: • Why do Convolutional N...
Chapters:
00:00 Intro
01:13 CNNs for text
05:28 Pairwise Convolutions
07:54 Self-Attention
13:39 Optimizations

Comments: 216
@ullibowyer 28 days ago
I now realise that the key to understanding transformers is to ask why they work, not how. Thanks!
@algorithmicsimplicity 27 days ago
Thank you so much!
@algorithmicsimplicity 10 months ago
Video about Diffusion/Generative models coming next, stay tuned!
@mahmirr 9 months ago
Was coming to comment this, thanks
@arslanjutt4282 8 months ago
Please make the video
@micmac8171 4 months ago
Please!
@abdullahbaig7517 29 days ago
This gem is underrated. This is the only video after which I feel like I know how transformers work. Thanks!
@rah-66comanche94 10 months ago
Amazing video! I really appreciate that you explained the Transformer model *from scratch*, and didn't just give a simplistic overview of it 👍 I can definitely see that *a lot* of work was put into this video, keep it up!
@korigamik 3 months ago
Would you share the source code for the animations?
@StratosFair 4 months ago
I am currently doing my PhD in machine learning (well, on its theoretical aspects), and this video is the best explanation of transformers I've seen on YouTube. Congratulations and thank you for your work.
@IllIl 10 months ago
Dude, your explanations are truly next level. This really opened my eyes to understanding transformers like never before. Thank you so much for making these videos. Really amazing resource that you have created.
@tdv8686 9 months ago
Thanks for your explanation; this is probably the best video on YouTube about the core of the transformer architecture so far. Other videos are more about the actual implementation but lack the fundamental explanation. I 100% recommend it to everyone in the field.
@asier6734 9 months ago
I love the algorithmic way of explaining what the mathematics does. Not too deep, not too shallow, just the right level of abstraction and detail. Please, please explain RNNs and LSTMs; I'm unable to find a proper explanation. Thanks!
@RoboticusMusic 9 months ago
Thank you for not using slides filled with math equations. If someone understands the math, they're probably not watching these videos; if they're watching these videos, they're not understanding the math. It's incredible that so many YouTube teachers decide to add math and just point at it for an hour without explaining anything their audience can grasp, and then in the comments you can tell everybody golf-clapped and understood nothing, except for the people who already grasp the topic. Thank you again for thinking of a smart way to teach simple concepts.
@xt3708 9 months ago
Amen. The power of out-of-the-box teachers is infinite.
@Alpha_GameDev-wq5cc 23 days ago
I still remember when all the cool acronyms I had to deal with were just FNNs, CNNs, Adam, RNNs, LSTMs and the newest kid on the block, GANs.
@newbie8051 22 days ago
Damn, FNNs and CNNs are basic stuff we were taught in the 4th semester of our undergrad. Adam and RNNs were in the "additional resources" section of an introductory deep learning course I took in the same semester. I encountered LSTMs through personal projects lol. Still haven't used GANs and autoencoders, but they were the talk of the town back then, before the diffusion models.
@Alpha_GameDev-wq5cc 21 days ago
@@newbie8051 Yeah, I did an FNN from scratch in high school. I was really hopeful about getting into AI research, and then the transformers arrived in my college years…
@TropicalCoder 9 months ago
Very nicely done. Your graphics had a calming, almost hypnotic effect.
@TTTrouble 9 months ago
I've watched so many video explainers on transformers and this is the first one that really helped show the intuition in a unique and educational way. Thank you, I will need to rewatch this a few times, but I can tell it has unlocked another level of understanding with regard to the attention mechanism that has evaded me for quite some time (darned KQV vectors…). Thanks for your work!
@chrisvinciguerra4128 9 months ago
It seems like whenever I want to dive deeper into the workings of a subject, I only find videos that simply define the parts of how something works, as if from a textbook. You not only explained the ideas behind why the inner workings exist the way they do and how they work, but acknowledged that it was an intentional effort to take an improved approach to learning.
@ryhime3084 9 months ago
This was so helpful. I was reading through how other models like ELMo work, and it makes sense how they came up with the ideas for those, but the transformer just seemed like it popped out of nowhere with random logic. This video really helps to understand their thought process.
@Magnetic-Milk 6 months ago
Not so long ago I was searching for hours trying to understand transformers. In this 18-minute video I learned more than I did in 3 hours of research. This is the best computer science video I have ever watched in my entire life.
@briancase6180 9 months ago
This is a truly great introduction. I've watched other also-excellent introductions, but yours is superior in a few ways. Congrats and thanks! 🤙
@ChrisCowherd 8 months ago
This video is by far the clearest and best explained I've seen! I've watched so many videos on how transformers work and still came away lost. After watching this video (and the previous background videos) I feel like I finally get it. Thank you so much!
@xt3708 9 months ago
Absolutely love how you explain the process of discovery: in other words, figure out one part, which then causes a new problem, which then can be solved with this method, and so on. The insight into this process was, for me, even more valuable than understanding the architecture itself.
@CharlieZYG 9 months ago
Wonderful video. Easily the best video I've seen on explaining transformer networks. This "incremental problem-solving" approach to explaining concepts personally helps me understand and retain the information more efficiently.
@ItsRyanStudios 9 months ago
This is AMAZING. I've been working on coding a transformer network from scratch, and although the code is intuitive, the underlying reasoning can be mind-bending. Thank you for this fantastic content.
@diegobellani 9 months ago
Wow, just wow. This video makes you really understand the reasons behind the architecture, something you don't really get even from reading the original paper.
@benjamindilorenzo 3 months ago
This is the best video on transformers I have seen on all of YouTube.
@jcorey333 4 months ago
This is one of the genuinely best and most innovative explanations of transformers/attention I've ever seen! Thank you.
@igNights77 8 months ago
Explained thoroughly and clearly from basic principles and practical motivations. Basically the perfect explanation video.
@user-eu2li6vf3z 8 months ago
Can't wait for more content from your channel. Brilliantly explained.
@Muhammed.Abd. 9 months ago
That is possibly the best explanation of attention I have ever seen!
@declanbracken2577 13 days ago
There are many explanations of what a transformer is and how it works, but this one is the best I've seen. Really good work.
@jackkim5869 3 months ago
Truly this is the best explanation of transformers I have seen so far. The great logical flow especially makes difficult concepts easier to understand. Appreciate your hard work!
@RalphDratman 9 months ago
This is by far the best explanation of the transformer architecture. Well done, and thank you very much.
@corydkiser 9 months ago
This was top notch. Please do one for RetNets and Liquid Neural Nets.
@MichaelBrown-gt4qi 4 days ago
I've started binge watching all your videos. 😁
@giphe 9 months ago
Wow! I knew about attention mechanisms but this really brought my understanding to a new level. Thank you!!
@Muuip 8 months ago
Great concise visual presentation! Thank you, much appreciated! 👍👍
@MalTramp 26 days ago
This was an excellent video on the global design structure of the transformer. Love all your videos!
@ArtOfTheProblem 9 months ago
Really well done. I haven't seen your channel before and this is a breath of fresh air. I've been working on my GPT + transformer video for months and this is the only video online which tries to simplify things through an independent-realization approach. Before I watched this video my one-sentence summary of why transformers matter was: "They contain layers that have weights which adapt based on context" (vs. using deeper networks with static layers), and this video helped solidify that further; would you agree? I also wanted to boil down the attention heads as "mini networks" (or linear functions) connected to each token which are trained to do this adaptation. One network pulls out what's important in each word given the context around it, and the other network combines these values to decide how important those two words are in that context, and this is how the 'weights adapt'. I still wonder how important the distinction of linear layer vs. just a single layer is; I like how you pulled that into the optimization section. I know how hard this stuff is to make clear and you did well here.
@maxkho00 8 months ago
My one-sentence summary of why transformers matter would be "they are standard CNNs, except the words are re-ordered in a way that makes the CNN's job easier before being fed in". Also, a single NN layer IS a linear layer; I'm not sure what you mean by saying you don't know how important the distinction between the two is.
@ArtOfTheProblem 8 months ago
Thanks @@maxkho00
@ronakbhatt4880 5 months ago
What a simple but perfect explanation!! You deserve 100 times more subscribers.
@JunYamog 5 months ago
Your visualization and explanation are very good; they helped me understand a lot. I hope you can put out more videos. It must not be easy, otherwise you would have done it already. Keep it up.
@terjeoseberg990 9 months ago
I wasn’t aware that they were using a convolutional neural network in the transformer, so I was extremely confused about why the positional vectors were needed. Nobody else in any of the other videos describing transformers pointed this out. Thanks.
@Hexanitrobenzene 9 months ago
"they were using a convolutional neural network in the transformer" No no, transformers do not have any convolutional layers; the author of the video just chose a CNN as a starting point in the process of "start with a solution that doesn't work well, understand why it doesn't work well, and try to improve it, changing the solution completely along the way". The main architecture in natural language processing before transformers was the RNN, the recurrent neural network. Then in 2014 researchers improved it with the attention mechanism. However, RNNs do not scale well, because they are inherently sequential, and scale is very important for accuracy. So researchers tried to get rid of RNNs and succeeded in 2017. CNNs were also tried but, to my not-very-deep knowledge, were less successful. Interesting that the author of the video chose a CNN as a starting point.
@terjeoseberg990 9 months ago
@@Hexanitrobenzene, I suppose I’ll have to watch this video again. I’ll look for what you mentioned.
@Hexanitrobenzene 9 months ago
@@terjeoseberg990 A little off topic, but... not long ago I noticed that YouTube deletes comments with links. OK, automatic spam protection. (Still, the fact that it does this silently is very frustrating...) But does it also delete comments where links are separated into words with "dot" between them? I tried to give you a resource I learned this from, but my comment got dropped two times...
@Hexanitrobenzene 9 months ago
...Silly me, I figured I could just give you the title you can search for: "Dive into deep learning". It's an open textbook with code included.
@terjeoseberg990 9 months ago
@@Hexanitrobenzene The best thing to do when YouTube deletes comments is to provide a title or something so I can find it. A lot of words are banned too.
@TeamDman 9 months ago
I've had to watch this a few times, great explanation!
@anatolyr3589 2 months ago
Yeah! This "functional" approach to the explanation, rather than a "mechanical" one, is truly amazing 👍👍👍👏👏👏
@rogerzen8696 5 months ago
Good job! There was a lot of intuition in this explanation.
@halflearned2190 6 months ago
Hey man, I watched your video months ago, and found it excellent. Then I forgot the title, and could not find it again for a long time. It doesn't show up when I search for "transformers deep learning", "transformers neural network", etc. Consider changing the title to include that keyword? This is such a good video, it should have millions of views.
@algorithmicsimplicity 6 months ago
Thanks for the tip.
@SahinKupusoglu 9 months ago
This video was all I needed for LLMs/transformers!
@yonnn7523 8 months ago
Best explainer of transformers I've seen so far, thanks!
@TeamDman 2 months ago
I keep coming back to this because it's the best explanation!!
@pravinkool 6 months ago
Fantastic! Loved it! Exactly what I needed.
@AdhyyanSekhsaria 9 months ago
Great explanation. Haven't found this perspective before.
@hadadvitor 9 months ago
Fantastic video; congratulations on, and thank you for, making it.
@adityachoudhary151 4 months ago
Really made me appreciate NNs even more. Thanks for the video.
@clray123 9 months ago
Great video. Maybe you could cover the retentive network (from the RetNet paper) in the same fashion next, as it aims to be a replacement for the quadratic/linear attention in the transformer (I'm curious how much of the "blurry vector" problem their approach suffers from).
@quocanhad 3 months ago
You deserve my like, bro. Really awesome video.
@_MrKekovich 9 months ago
FINALLY I have something that gives me a basic understanding. Thank you so much!
@dmlqdk 3 months ago
Thank you for answering my questions!!
@algorithmicsimplicity 3 months ago
Thanks for the tip! I'm always happy to answer questions.
@antonkot6250 25 days ago
The best explanation I've found so far!
@lakshay510 3 months ago
Halfway through the video and I pressed the subscribe button. Very intuitive and easy to understand. Keep up the good work man :) One suggestion: change the title of the video and you'll get more traction.
@algorithmicsimplicity 3 months ago
Thanks, any title in particular you'd recommend?
@c1tywi 28 days ago
This video is gold! Subscribed.
@iustinraznic5811 9 months ago
Amazing explanations and video!
@minhsphuc12 8 months ago
Thank you so much for this video.
@marcfruchtman9473 9 months ago
Very interesting. Thank you for the video.
@christrifinopoulos8639 5 months ago
The visualisation was amazing.
@TaranovskiAlex 8 months ago
Thank you for the explanation!
@user-js7ym3pt6e 4 months ago
Amazing, continue like this.
@shantanuojha3578 1 month ago
Awesome video, bro. I always like an intuitive explanation.
@algorithmicsimplicity 1 month ago
Thanks so much!
@anilaxsus6376 9 months ago
Best explanation I have seen so far. Basically, the transformer is a CNN with a lot of extra upgrades. Good to know.
@nara260 5 months ago
Thanks a lot! This visual lecture cleared the dense fog over my mental picture of the transformer.
@mvlad7402 1 month ago
Excellent explanation! All kudos to the author!
@IzUrBoiKK 9 months ago
As both a math enthusiast and a programmer (who obviously also works on AI), I really liked this video. I can confirm that this is one of the best and most genuine explanations of transformers...
@ArtOfTheProblem 9 months ago
the first so far this year
@rishikakade6351 1 month ago
Insane that this website is free. Thanks!
@yash1152 9 months ago
2:36 Wow, just 50k words... that sounds pretty easy for computers. Amazing.
@AerialWaviator 9 months ago
Very fascinating topic with an excellent dive and insights into how neural networks derive results. One thing I was left wondering: why is there no scoring vector describing the probability that a word is a noun, verb, or adjective? Encoding a word's context (regardless of language) should provide a great deal of context and thus eliminate many convolutional pairings, reducing computational effort. Thanks for a new-found appreciation of transformers.
@ArtOfTheProblem 9 months ago
This is a good question, and it's also a GOFAI-type approach, where we make the mistake of thinking we can inject some human semantic idea to improve a network. But the reality is it will do this automatically without our help. For example, papers back in 1986 show tiny networks automatically grouping words into nouns or verbs; it's amazing. Let me know if you want more details.
@vedantkhade4395 4 months ago
This video is damn impressive, man.
@christianjohnson961 9 months ago
Can you do a video on tricks like layer normalization, residual connections, byte pair encoding, etc.?
@rafa_br34 29 days ago
I'd love to see you explain how KANs work.
@kul6420 26 days ago
I may be too late to the party, but I'm glad I found this channel.
@palyndrom2 10 months ago
Great video
@albertmashy8590 9 months ago
This was amazing
@arongil 9 months ago
Great, thank you!
@cem_kaya 9 months ago
Thank you so much
@iandanforth 9 months ago
I wish this had tied in specifically to the nomenclature of the transformer, such as where these operations appear in a block, whether they are part of both encoder and decoder paths, how they relate to "KQV", and whether there's any difference between these basic operations and "cross attention".
@ArtOfTheProblem 9 months ago
I'll be doing this, but in short: the little networks he showed connected to each pair are KQ (the word-pair representation), and V is the value network. All of this can be done in a decoder-only model as well, and cross attention is the same thing except you use two separate sequences looking at each other (such as two sentences in a translation network). It's nice to know that GPT, for example, is decoder-only, and so doesn't even need this.
@frederik7054 9 months ago
The video is of great quality! With which tool did you create this? Manim?
@algorithmicsimplicity 9 months ago
Yep all my videos so far have been done in Manim.
@TheSonBAYBURTLU 9 months ago
Thank you 🙂
@Tigerfour4 9 months ago
Great video, but it left me with a question. I tried to compare what you arrived at (16:25) to the original transformer equations, and if I understand it correctly, in the original we don't add the red W2X matrix, but we have a residual connection instead, so it is as if we would add X without passing it through an additional linear layer. Am I correct in this observation, and do you have an explanation for this difference?
@algorithmicsimplicity 9 months ago
Yes that's correct, the transformer just adds x without passing it through an additional linear layer. Including the additional linear layer doesn't actually change the model at all, because when the result of self attention is run through the MLP in the next layer, the first thing the MLP does is apply a linear transform to the input. Composition of 2 linear transforms is a linear transform, so we may as well save computation and just let the MLP's linear transform handle it.
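A quick numerical check of the point above about composing linear layers (toy sizes and random matrices, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=4)
W_extra = rng.normal(size=(4, 4))   # hypothetical extra linear layer after attention
W_mlp = rng.normal(size=(4, 4))     # first linear layer of the next block's MLP

# Applying two linear maps in sequence equals applying one merged map,
# so the extra layer adds no expressive power and can be dropped.
y_two_layers = W_mlp @ (W_extra @ x)
y_merged = (W_mlp @ W_extra) @ x
assert np.allclose(y_two_layers, y_merged)
```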
@komalsinghgurjar 7 months ago
Sir I like your videos very much. Love from India ♥️♥️.
@domasvaitmonas8814 2 months ago
Thanks. Amazing video. One question though - how do you train the network to output the "importance score"? I get the other part of the self-attention mechanism, but the score seems a bit out of the blue.
@algorithmicsimplicity 2 months ago
The entire model is trained end-to-end to solve the training task. What this means is you have some training dataset consisting of a bunch of input/label pairs. For each input, you run the model on that input, then you change the parameters in the model a bit, evaluate it again and check if the new output is closer to the training label, if it is you keep the changes. You do this process for every parameter in all layers and in all value and score networks, at the same time. By doing this process, the importance score generating networks will change over time so that they produce scores which cause the model's outputs to be closer to the training dataset labels. For standard training tasks, such as predicting the next word in a piece of text, it turns out that the best way for the score generating networks to influence the model's output is by generating 'correct' scores which roughly correspond to how related 2 words are, so this is what they end up learning to do.
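A minimal sketch of the "nudge one parameter, keep the change if the output improves" procedure described above. All names are made up for illustration, and real training computes the same kind of update with backpropagation and gradient descent, which is vastly cheaper; this brute-force version just shows the idea.

```python
import numpy as np

def loss(params, model, inputs, labels):
    return np.mean((model(params, inputs) - labels) ** 2)

def perturbation_step(params, model, inputs, labels, eps=0.01):
    """Try nudging each parameter; keep the nudge only if the loss goes down.
    Every parameter (value networks, score networks, ...) is treated the same,
    which is what training the model end-to-end means."""
    best = loss(params, model, inputs, labels)
    for i in range(len(params)):
        params[i] += eps
        new = loss(params, model, inputs, labels)
        if new < best:
            best = new           # improvement: keep the change
        else:
            params[i] -= eps     # no better: undo it
    return params

# toy usage: fit a 2-parameter linear model with repeated passes
model = lambda p, X: X @ p
X = np.array([[1.0, 2.0], [3.0, 4.0]])
y = np.array([5.0, 11.0])
p = np.zeros(2)
for _ in range(500):
    p = perturbation_step(p, model, X, y)    # p drifts towards [1, 2]
```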
@sairaj6875 9 months ago
Thank you!!
@Supreme_Lobster 9 months ago
Thanks. I had read the original Transformer paper and I barely understood the underlying ideas.
@laithalshiekh3792 9 months ago
Your video is amazing
@cloudysh 6 months ago
This is perfect
@Baigle1 7 months ago
I think they were actually used as far back as 2006, or earlier, publicly in compressor algorithm competitions.
@AN-ch3ly 3 months ago
Great video, but I was wondering how one aspect of the transformer is handled in the real world. How are importance scores assigned to pairs in order to determine their importance? Basically, on a massive scale, how can importance scores be automatically assigned so as to get the correct importance for a pair in a given sentence?
@algorithmicsimplicity 3 months ago
The entire model is trained end-to-end to solve the training task. What this means is you have some training dataset consisting of a bunch of input/label pairs. For each input, you run the model on that input, then you change the parameters in the model a bit, evaluate it again and check if the new output is closer to the training label, if it is you keep the changes. By doing this process, the score generating networks will change over time so that they produce scores which cause the model's outputs to be closer to the training dataset labels. It turns out that the best way for the score generating networks to influence the model's output is by generating 'correct' scores which roughly correspond to how related 2 words are, so this is what they end up learning.
@AurL_69 4 months ago
Holy pepperoni, you're great!
@introstatic 9 months ago
This is brilliant. Could you give a hint where to look for details of the idea of the pairwise convolution layer? I can't find anything with this exact wording.
@algorithmicsimplicity 9 months ago
Yeah it's a term I made up so you won't find it in any sources, sorry about that. Usually sources will just talk about self attention in terms of key, query and value lookups, so you can look at those to get a more detailed understanding of the transformer. The value transform is equivalent to the linear representation function I use in the pairwise convolution, the key and query attention scores are equivalent to the bi-linear form scoring function I use (with the bi-linear form weight matrix given by Q^TK). I chose to use this unusual terminology because, personally, I feel the key, query and value terminology comes out of nowhere, and I wanted to connect the transformer more directly to its predecessor (the CNN).
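For anyone trying to line the two vocabularies up, here is a tiny NumPy check that the key/query dot product is exactly a bilinear form with matrix Wq^T Wk; the dimensions and random weights are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d, d_k = 8, 4
Wq = rng.normal(size=(d_k, d))      # query projection
Wk = rng.normal(size=(d_k, d))      # key projection
xi = rng.normal(size=d)             # embedding of word i
xj = rng.normal(size=d)             # embedding of word j

score_key_query = (Wq @ xi) @ (Wk @ xj)   # "query of i dotted with key of j"
M = Wq.T @ Wk                             # bilinear-form matrix from the video's framing
score_bilinear = xi @ M @ xj

assert np.isclose(score_key_query, score_bilinear)
```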
@introstatic 9 months ago
@algorithmicsimplicity, this is a surprising connection. Thanks a lot for the explanation.
@dsagman 9 months ago
@@algorithmicsimplicity It would be great if you could make this connection between the terminologies in video form. Maybe next time?
@seraine22 8 months ago
Thanks!
@algorithmicsimplicity 8 months ago
Thank you for your support!
@GaryBernstein 9 months ago
Can you explain how the NN produces the important-word-pair information scores described after 12:15, for the sentence problem raised at 10:17? Well, it's just another trained set of values. I suppose it scores pairs' importance over the pairs' uses in ~billions of sentences.
@algorithmicsimplicity 9 months ago
The importance-scoring neural network is trained in exactly the same way that the representation neural network is. Roughly speaking, for every weight in the importance-scoring neural network you increase the value of that weight slightly and then re-evaluate the entire transformer on a training example. If the new output is closer to the training label, then that was a good change so the weight stays at its new value. If the new output is further away, then you reverse the change to that weight. Repeat this over and over again on billions of training examples and the importance-scoring neural network weights will end up set to values so that that the produced scores are useful.
@user-km3kq8gz5g 5 months ago
You are amazing
@Einken 9 months ago
Transformers, more than meets the eye.
@nightchicken3517 8 months ago
I really love SoME
@cezarydziemian6734 9 months ago
Wow, great video, but I have some trouble understanding one thing. I'm trying to understand it by watching all 3 videos, and what I have trouble understanding is how these pairs of words (vectors) from the first layer are matched together into new vectors. For example, for the "cat sat" pair, we have two vectors: [0001] and [0100]. How are they transformed into the vector [1.3, -0.9...]? If this is just the result of some internal neural net, where did the data (weights) for this net come from? Or if they started from random numbers, how were they trained?
@algorithmicsimplicity 8 months ago
The pair vectors are first concatenated together into one vector e.g. [00010100], and this vector is then run through the neural network which produces the output vector. The output is the result of the weights in the neural network. Initially, those weights are completely random (usually sampled from a normal distribution centred at 0), and then they are updated during training. The neural network is trained on a labelled training dataset of input and output pairs. For example, ChatGPT was trained to do next word prediction on billions of passages of text scraped from the internet. In this case, each training example is a random part of a text passage (e.g. "the cat sat on the") and the output is the next word that occurs in the text (e.g. "mat"). For every training example an update step is performed on the neural network to update all of the weights of all of the layers. The update step works as follows: 1) Evaluate the neural network on the input. 2) For every weight in every layer, increase the value of that weight by a small amount (e.g. 0.001) and then re-evaluate the entire neural network on the input. If the new output is closer to the target (e.g. the vector output is closer to the one-hot encoding of "mat") then it was good to change that weights value, so it keeps the new value. If the new output is further away from the target, then it was a bad change, so reverse it. And that's it. Just keep repeating that update step for billions of different inputs and all of the weights in all layers will eventually be set to values which allow the transformer as a whole to map inputs to outputs correctly. Also I should point out that in practice there is a faster way to do the update step which is called backprop. Backprop computes exactly the same result as the update process I described, it is just faster computationally (you only need to evaluate the model twice instead of once for every weight), but it is also more difficult to understand.
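A tiny sketch of the first point above (what happens to the two one-hot vectors before any training). The layer sizes and initial random weights here are made up, and the output numbers are meaningless until training has adjusted the weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# one-hot vectors for the pair ("cat", "sat") in a toy 4-word vocabulary
cat = np.array([0.0, 0.0, 0.0, 1.0])
sat = np.array([0.0, 1.0, 0.0, 0.0])
pair = np.concatenate([cat, sat])            # [0 0 0 1 0 1 0 0]

# a tiny 2-layer MLP with randomly initialised weights (i.e. before training)
W1, b1 = 0.1 * rng.normal(size=(16, 8)), np.zeros(16)
W2, b2 = 0.1 * rng.normal(size=(3, 16)), np.zeros(3)

hidden = np.maximum(0.0, W1 @ pair + b1)     # ReLU activation
output = W2 @ hidden + b2                    # some arbitrary 3-dimensional vector

# Training repeatedly nudges W1, b1, W2, b2 (as described above) so that outputs
# like this gradually become useful pair representations such as [1.3, -0.9, ...].
```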
@pi5549 9 months ago
I really hope you push forwards with this approach. I can't find anywhere a clear and complete from-the-ground-up exposition of transformers. Sorry to say I can't find it here either. You start with image ConvNets. I think you might break this down into (1) construct a representation that captures long-range information, and (2) a classifier, and observe that once we have the representation we could use it for tasks other than classification. What's jarring here is that I've only seen ConvNets in the context of classification, and classification-of-a-sentence is almost meaningless, unless we just want a sentiment analyzer or something trivial. I'd like to see a section that explains "first we get the representation, then we can USE that to construct a next-word predictor". If the initial problem/scenario isn't well framed, all the internals feel fuzzy, as they're not representing steps towards a clear goal. I really hope you consider running at this again.
@ArtOfTheProblem 9 months ago
I'm working on a video now which attempts to do this; I've been on it for months. One key thing I notice where you get fuzzy is thinking CNNs 'only do classification', and that this is different from next-word prediction. Next-word prediction is a type of classification (the output class is the next letter or word), and so you could have a plain old fully connected network do "next-word prediction" by training it that way. Please let me know what else you are thinking, as it might help me with the script of my video. I will open with RNNs applied to next-word prediction (starting in 1986), then explain where they break and why we need parallel approaches, why simple brute force doesn't work (too many parameters, and hard to train), and why transformers helped (they compress many layers into fewer adaptive layers).
@pi5549 8 months ago
@@ArtOfTheProblem uff, I'm not sure my comment made any sense at all. I'll reply more on your latest micro-vid, which considers an Attention block as a dynamic-routing layer.
@korigamik 3 months ago
Man, can you tell us what you used to create the animations and how you edit the videos?
@algorithmicsimplicity 3 months ago
The animations were made with the Manim Python library (www.manim.community/ ) and edited with KDenLive.
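For reference, a minimal Manim Community scene looks roughly like this; the scene and shape names are just an example, not taken from the channel's actual source.

```python
from manim import Scene, Square, Circle, Create, Transform

class ExampleScene(Scene):
    def construct(self):
        square = Square()
        circle = Circle()
        self.play(Create(square))             # draw the square
        self.play(Transform(square, circle))  # morph it into a circle
        self.wait()

# render from the command line with:
#   manim -pql example.py ExampleScene
```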