
Mistral / Mixtral Explained: Sliding Window Attention, Sparse Mixture of Experts, Rolling Buffer

25,082 views

Umar Jamil


In this video I will introduce all the innovations in the Mistral 7B and Mixtral 8x7B models: Sliding Window Attention, KV-Cache with Rolling Buffer, Pre-Fill and Chunking, and Sparse Mixture of Experts (SMoE). I will also guide you through the most difficult parts of the code: Model Sharding and the use of the xformers library to compute the attention for multiple prompts packed into a single sequence. In particular, I will show the attention computed using BlockDiagonalCausalMask, BlockDiagonalMask and BlockDiagonalCausalWithOffsetPaddedKeysMask.
I will also show you why Sliding Window Attention allows a token to "attend" to tokens outside the attention window, by linking it to the concept of the receptive field, typical of Convolutional Neural Networks (CNNs). Of course, I will prove it mathematically.
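As a quick preview of the first idea, here is a minimal PyTorch sketch of a sliding-window causal mask applied to attention scores (illustrative sizes only; this is not taken from the official Mistral code):

```python
import torch

def sliding_window_causal_mask(seq_len: int, window: int) -> torch.Tensor:
    # Token i may attend to tokens j with i - window < j <= i (causal + local).
    i = torch.arange(seq_len).unsqueeze(1)
    j = torch.arange(seq_len).unsqueeze(0)
    return (j <= i) & (j > i - window)

seq_len, window, d_head = 8, 3, 16
q = torch.randn(seq_len, d_head)
k = torch.randn(seq_len, d_head)

scores = (q @ k.T) / d_head ** 0.5
scores = scores.masked_fill(~sliding_window_causal_mask(seq_len, window), float("-inf"))
attn = scores.softmax(dim=-1)

# After L stacked layers, information can still travel back roughly L * (window - 1)
# positions, the same way stacked convolutions enlarge a receptive field.
```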
When introducing Model Sharding, I will also talk about Pipeline Parallelism, because the official Mistral repository refers to micro-batching.
I release a copy of the Mistral code commented and annotated by me (especially the most difficult parts): github.com/hkproj/mistral-src...
Slides PDF and Python Notebooks: github.com/hkproj/mistral-llm...
Prerequisite for watching this video: • Attention is all you n...
Other material for better understanding Mistral:
Grouped Query Attention, Rotary Positional Encodings, RMS Normalization: • LLaMA explained: KV-Ca...
Gradient Accumulation: • Distributed Training w...
Chapters
00:00:00 - Introduction
00:02:09 - Transformer vs Mistral
00:05:35 - Mistral 7B vs Mixtral 8x7B
00:08:25 - Sliding Window Attention
00:33:44 - KV-Cache with Rolling Buffer Cache
00:49:27 - Pre-Fill and Chunking
00:57:00 - Sparse Mixture of Experts (SMoE)
01:04:22 - Model Sharding
01:06:14 - Pipeline Parallelism
01:11:11 - xformers (block attention)
01:24:07 - Conclusion

Comments: 110
@ankush4617 7 months ago
👏 Keep up the great job, Umar!
@rahulsawant2093 7 months ago
I haven't seen a channel with such informative videos on Data Science. Please continue doing this... Many thanks to you and the team.
@pauldevillers797 15 days ago
Amazing explanations, the best channel around to dive deep into LLMs! Only note is that the Mixtral 8x7B paper clearly states that they did not observe any pattern in topic selection for a given expert, but they did exhibit some syntactic patterns.
@mamotivated 7 months ago
Absolutely well written, clearly explained and very valuable content as always, Umar. Keep perfecting your craft. 100k subs by Dec 2024, you are opening lots of doors in AI education.
@RayGuo-bo6nr 7 months ago
Thanks! Great job! Thank you!
@umarjamilai 7 months ago
Thank you for your support!
@varunsaagars 7 months ago
Requesting SSS4 and Mamba explanations. Great work😊
@HimanshuSharma-eg5li 7 months ago
What's SSS4?
@unclecode 7 months ago
Structured State-Space Sequence (S4) or Selective State Space Models, a sort of linear alternative to the attention mechanism. @@HimanshuSharma-eg5li
@umarjamilai 7 months ago
You're welcome: kzfaq.info/get/bejne/brePp9So1brUhok.html
@pratyushrao7979 5 months ago
@@umarjamilai Bruh is too OP
@andikunar7183 7 months ago
Amazing content, you are a great explainer/teacher, thanks a lot!!!
@andikunar7183 7 months ago
Thanks!
@umarjamilai 7 months ago
Thank you for your support!
@user-hd7xp1qg3j 7 months ago
Thanks for listening to the request made last time for MoE. You explain and elucidate the material in a very understandable way.
@wilsvenleong96 7 months ago
Your content is god-given! I live for your content! Thank you so very much!
@snowflareai 7 months ago
Thanks!
@umarjamilai 7 months ago
Thank you very much for your support! Let's connect on LinkedIn.
@goelnikhils 7 months ago
What an explanation of Sliding Window Attention, KV-Cache, Rolling Buffer Cache and Mistral. Amazing work, amazing content. I have been following Umar, and whatever content he creates is top notch.
@kozer1986 7 months ago
Amazing!!! Simply amazing! I haven't seen a channel with such explanations of these topics!!!
@jman5447 3 months ago
Thank you! Your clear explanation really makes my life easier!
@manishsharma2211 7 months ago
One heck of a video, Umar, thank you. PS: at 16:44 the kernel will move to the next 3x3 grid only when the stride is 1 [just an FYI for anyone who might have a doubt about this].
@AndreasAlexandrou-to5pw 5 months ago
Excellent as always. Thank you!
@jasonma3449 5 months ago
Exceptionally clear illustration of the SWA concept!
@aam1819 5 months ago
Fantastic explanation! Thank you!
@karanjakhar 7 months ago
Great content. Well explained. Loved it. Please keep up the great job. Thanks.
@justjeremiah4255 7 months ago
Great video as usual, Umar! Thank you, sir.
@rajgothi2633 4 months ago
Really good explanation... Please keep uploading such content. It inspires many researchers.
@Paluth 2 months ago
Thank you very much, your videos are excellent as always. Keep up the good work, if you have the time!
@angelinakoval8360 6 months ago
Thank you for the video, a lot of new information for me!
@ryan-reynolds-q3u 4 months ago
Thank you! I understood a lot from this.
@unclecode 7 months ago
👏 I support and subscribe to anyone who demystifies AI and helps democratize it. Keep up the fantastic job, Umar! Thanks!
@umarjamilai 7 months ago
Thank you very much for your support! I wish you, your family and loved ones a happy new year!
@unclecode 7 months ago
@@umarjamilai You're welcome, I wish the same for you and your loved ones. Could you please let me know whether you have any content focused on the transformer's last step, where a linear layer picks the next token based on the output of the decoder? Basically the head MLP. Thanks again.
@umarjamilai 7 months ago
@@unclecode If you watch my video on how to code a transformer from scratch, you will learn all about the transformer, including the normalization and the last layer. I believe the best way to learn a model is to code it from scratch and see it in action.
@unclecode 7 months ago
@@umarjamilai Roger that
@akashkumar-jg4oj 5 months ago
This is literal gold!!!
@rraviteja 5 months ago
Super content and explanation, thanks. Please upload videos regularly.
@harshitkumar5147 2 months ago
This is just awesome!
@hichamelkaissi7786 7 months ago
Quality content. Thank you immensely ❤
@prasannaprabhakar1323 16 days ago
Thank you!
@raahuldutta 7 months ago
Yet another great video 😊
@yukewang3164 6 months ago
Great explanation, very helpful, thanks!
@GrifinsBrother 6 months ago
Amazing job, keep going!
@michellem6685 3 months ago
Amazing explanation
@trungquang1581 4 months ago
Great job, thanks a lot man
@baothach9259 5 months ago
This video is so good!!!!
@gangs0846 5 months ago
Helped a lot, thank you
@utkarshjain3814 6 months ago
Bro is doing god's work. Keep it up!
@cfalguiere 7 months ago
Thanks for sharing
@Itay12353 3 months ago
You Are King!
@alessiocaffi5992 7 months ago
Watching your vids is worth the time even for people not too much into AI yet. I got here from trying to understand Karpathy's vids, great job. It would be nice if someone on YouTube made a video on how to create an attoGPT/attoLM, or call it bookGPT (bookLM), from any book, e.g. DanteGPT 🙂, so it can be trained on a consumer PC without advanced GPUs.
@islamtorky1762 7 months ago
Great work! Can you do a video on Flash Attention? Thanks!
@anshul.singhs 7 months ago
Thanks! I was waiting for this. Can you do Mamba and S4 next?
@jatinarora6680 6 months ago
Very detailed explanation! Thanks for the video. Could you also make a video on vision transformers like BEiT?
@haralc 3 months ago
Thanks
@ihitsuperhuman3227 3 months ago
Thanks
@zhenfutaofang2534 7 months ago
Amazing video!!! Keep it up!
@umarjamilai 7 months ago
Thank you! I have a WeChat group in China about AI and deep learning. If you'd like to join the discussion, message me on LinkedIn and I'll invite you.
@zhenfutaofang2534 7 months ago
OK @@umarjamilai
@amitshukla1495 7 months ago
Wohooo 🥳
@Yassjams 4 months ago
Amazing video! Can you do a Falcon architecture explanation? 🙏🙌
@lukeskywalker7029 5 months ago
Another great one! Any chance you'll take on "The Era of 1-bit LLMs" paper next? ;)
@aamir122a 6 months ago
Open-source multimodal models (MMLLM) are also becoming mainstream; please do an episode on them as well.
@user-xg6ez8mj7i 7 months ago
Great content as always. Can you do a video about ControlNet?
@subhamkundu5043 6 months ago
Amazing content. Are you going to post a video on coding an MoE model from scratch?
@random-ds 5 months ago
Thank you for this great video. I have a question though. When Mistral released Instruct-v2, did they follow the exact same architecture and change only the data and the way of training, or could they also have tweaked the classic Mistral architecture a little? Thanks in advance!
@AndreasAlexandrou-to5pw 5 months ago
A question on batching: as far as I understand, batching inputs together has minimal cost at inference, i.e. 100 forward passes through all the decoder layers take roughly the same amount of time irrespective of your batch size. The video mentions that compute is wasted while calculating attention for the padding tokens, and thus concludes that unrolling the batch is preferable? I don't see how this makes sense from a performance standpoint. Compute is very underutilised during attention, so the "wasted attentions" do not really cost anything. On the other hand, unrolling the batch increases the number of forward passes by your batch size. For example: a batch of 5 inputs with a length of 100 takes 100 forward passes in the first case, but 500 passes after unrolling. Am I missing something here? Doesn't unrolling completely nullify the performance boost from the wasted attentions?
Edit: Tested this:
- Seq length 1024, batch size 1: takes ~38 seconds.
- Seq length 1024, batch size 4: takes ~39 seconds.
- Seq length 4096, batch size 1: takes ~155 seconds.
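For context on what "unrolling" refers to here: in the video, several prompts are packed into one long sequence and a block-diagonal causal mask keeps tokens of different prompts from attending to each other (the role played by xformers' BlockDiagonalCausalMask). Below is a minimal hand-rolled PyTorch sketch of such a mask, with made-up prompt lengths; it only illustrates the masking idea and is not the xformers implementation:

```python
import torch

def block_diagonal_causal_mask(seq_lens):
    # Several prompts are concatenated into one sequence; each token may only
    # attend (causally) to tokens of its own prompt, never to another prompt's.
    total = sum(seq_lens)
    mask = torch.zeros(total, total, dtype=torch.bool)
    start = 0
    for n in seq_lens:
        mask[start:start + n, start:start + n] = torch.tril(torch.ones(n, n, dtype=torch.bool))
        start += n
    return mask

# Three prompts of lengths 4, 2 and 5 packed into a single 11-token sequence:
print(block_diagonal_causal_mask([4, 2, 5]).int())
```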
@waynelau3256 4 months ago
Hey Umar, great video! I have some questions: how does SWA work during training? I am trying to wrap my head around how the previous context is fed to the window. From my understanding, in the Mistral model one of the tokens is catered to the previous attentions. In this case, wouldn't this make it autoregressive and not parallelizable, because the previous attention needs to be computed?
@siqb 6 months ago
When we are training, or even at inference, and use "[SOS] Love that" as input, do we use the embedding of 'that' to pass to the softmax to predict 'can'?
@umarjamilai 6 months ago
Only during inference. During training you just compare the entire output with the target to calculate the loss.
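To make the distinction concrete, here is a tiny PyTorch sketch using a stand-in logits tensor; the sizes and token ids are hypothetical, not taken from the video:

```python
import torch
import torch.nn.functional as F

vocab_size, seq_len = 100, 3
logits = torch.randn(seq_len, vocab_size)   # stand-in for the decoder output for "[SOS] Love that"

# Inference: only the logits at the LAST position ("that") are used to pick the next token ("can").
next_token_id = logits[-1].argmax()

# Training: every position predicts the token that follows it, so the whole output is
# compared against the shifted target sequence in a single cross-entropy call.
targets = torch.tensor([7, 12, 3])          # hypothetical ids for "Love", "that", "can"
loss = F.cross_entropy(logits, targets)
```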
@XartakoNP 3 months ago
Around minute 14 you explain that sliding window attention results in fewer dot products. From your explanation I gather that the sliding window mask is applied after the Q@Kt operation, where we perform all the dot products between the Q and K tensors. Is that operation fused in some way, or is there a trick to achieve the reduction in the number of dot products?
@kenilshah-hb6fy 3 months ago
I have one point! At 5:46, a table is shown in which, in the 2nd row, 2nd column, you have written "No. of Encoder Layers". My question: if Mistral is decoder-only, then why are we considering 32 as the number of encoder layers?
@vinc6966 7 months ago
Great video, but I have two questions about sliding window attention: 1. How does applying a mask to tokens outside of the sliding window make it more efficient? We still have to perform calculations on an NxN matrix, just with some zeros. Are floating-point operations on zeros faster? 2. The receptive field increases as depth increases. Consequently, in Mistral only the last layer can attend to all tokens, so tokens have less time to communicate. If we have a task that requires N steps to be solved and ALL OF the information from the tokens, will the model be able to solve it? Thanks
@umarjamilai 7 months ago
Hi! 1. When you know that the two matrices you're multiplying will have many zeros in the output, you can use "sparse attention", which basically represents matrices in a way very similar to Python dictionaries, so we only store the values of the non-zero indices. There are many deep learning frameworks that support sparse matrix multiplication; if I remember correctly, DeepSpeed supports sparse attention calculation. 2. It is wrong to say that the last layer will attend to all tokens. One token only attends to W preceding tokens, where W is the size of the sliding window. But because of how the information gets "accumulated" in the embedding after each layer, we can claim that the information "flows" from one token to another even if they are outside the window. You're right in saying that the information that's carried this way is less "strong" (it's like hearing news from a friend instead of reading it yourself in the newspaper: every intermediate person will alter the real story). If a task requires the information of all the tokens, the model MAY (we can't be sure) still be able to perform it, but it all depends on how many layers you have and what the size of the sliding window is. Have a nice day!
@vinc6966 7 months ago
@@umarjamilai Okay, I think that answers my questions ;) Thanks a lot!
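As a small illustration of the sparse-representation point above, the sketch below stores only the in-window entries of a sliding-window score matrix in a COO sparse tensor; the values are dummies, and this is not the DeepSpeed kernel itself:

```python
import torch

# Outside the sliding window the scores are masked out anyway, so a sparse (COO)
# layout can store only the in-window entries instead of the full NxN matrix.
seq_len, window = 6, 3
rows, cols, vals = [], [], []
for i in range(seq_len):
    for j in range(max(0, i - window + 1), i + 1):
        rows.append(i)
        cols.append(j)
        vals.append(1.0)                      # dummy score value

sparse_scores = torch.sparse_coo_tensor(
    torch.tensor([rows, cols]), torch.tensor(vals), size=(seq_len, seq_len)
)
print(sparse_scores)                          # only O(seq_len * window) stored entries
print(sparse_scores.to_dense())               # dense view, zeros outside the window
```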
@sahilc7750 4 months ago
Is there a way to learn the different boilerplate code provided by xformers and how it operates? Their GitHub is not very intuitive.
@Anson-rr6ej 5 months ago
Great videos. Are the 8 experts and the gating function in each layer different? So in total there are 8 x 32 experts, is this correct?
@umarjamilai 5 months ago
Yes, each layer has different experts: 8 per layer, so in total 8x32.
@Anson-rr6ej 5 months ago
@@umarjamilai Thank you!
@madhusudhanreddy9157 7 months ago
Please create a video on a vector database with an LLM RAG implementation, sir.
@MrNathanShow 6 months ago
Is the xformers part primarily used for training, or more for the case where we have a service and want to support generating the outputs? Also, is each expert trained independently, or are they trained with the same dataset? From what I understand, the MoE layer is just a feed-forward linear layer of weights. I think I might be wrong though... Thank you!
@MrNathanShow 6 months ago
Ok, so each "expert" is technically just a feed-forward network whose output is gate-controlled by a linear layer of weights. The top two are selected to post-process the token at the end.
@MrNathanShow 6 months ago
The whole dataset is used to train each of these experts.
@pratyushrao7979 5 months ago
I had a query regarding the rolling buffer cache. Why did they not use a queue for storing the vectors instead of a rolling buffer cache? I know there's an issue with the implementation of a queue, but wouldn't that be way less complex time-wise? Instead of O(n) you can roll back in O(1).
@umarjamilai 5 months ago
You can implement it however you like, but you should always avoid shrinking and growing tensors, because it may move data around the GPU memory, which is slow.
@pratyushrao7979 5 months ago
@@umarjamilai Okay, thank you. Your explanation was great!
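Below is a minimal sketch of the rolling-buffer idea: a tensor of fixed size `window` is preallocated and old entries are overwritten in place via `position % window`, so nothing ever grows or shrinks. The class name and sizes are illustrative, not the official Mistral cache:

```python
import torch

class RollingKVCache:
    """Fixed-size buffer: old entries are overwritten in place, so the tensors
    never grow or shrink on the GPU (illustrative sketch only)."""

    def __init__(self, window: int, n_heads: int, head_dim: int):
        self.window = window
        self.k = torch.zeros(window, n_heads, head_dim)
        self.v = torch.zeros(window, n_heads, head_dim)

    def update(self, pos: int, k_new: torch.Tensor, v_new: torch.Tensor):
        slot = pos % self.window              # the oldest slot gets overwritten
        self.k[slot] = k_new
        self.v[slot] = v_new

    def get(self, pos: int):
        # Sliding-window attention only ever needs the last `window` positions.
        # Slot order is not chronological after wrapping; if positional encodings
        # are applied before caching, attention does not depend on key order.
        n = min(pos + 1, self.window)
        return self.k[:n], self.v[:n]

cache = RollingKVCache(window=4, n_heads=8, head_dim=16)
for t in range(10):                           # positions 6..9 overwrite slots 2, 3, 0, 1
    cache.update(t, torch.randn(8, 16), torch.randn(8, 16))
```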
@elieelezra2734 7 months ago
Hi Sir, great work as usual. However, I have a question regarding the gate in the 'Sparse Mixture of Experts' section. Is it a simple one-layer network that produces 8 logits? Thanks! Keep up the good work!
@umarjamilai 7 months ago
Yes, for every token in the sequence it produces 8 numbers. The two highest numbers indicate which FFNs the token should run through.
@elieelezra2734 7 months ago
Correct me if I'm wrong: it means that the behavior of this kind of block is not the same during training and during inference. During training, the token embedding goes through all 8 feed-forward networks and then the outputs of the two best are selected according to the output of the gate, whereas during inference the token embedding goes through only the two best feed-forward networks according to the gate. Again, thanks a lot for your time and your explanation, I really appreciate it @@umarjamilai
@tryit-wv8ui 7 months ago
Yep, same question here @@elieelezra2734 @umarjamilai
@tryit-wv8ui 7 months ago
Is the assertion from @elieelezra2734 above correct, @@umarjamilai?
@umarjamilai 7 months ago
@@tryit-wv8ui Hi! The behavior during training and inference IS EXACTLY THE SAME: what I have shown for inference is exactly what happens during training. That's how the gate function is trained to produce logits and select the best feed-forward networks for each token, and that's also the reason why some feed-forward networks will "specialize" in particular subsets of the tokens (for example, some may specialize on Japanese tokens, others on English tokens, etc.).
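A compact PyTorch sketch of the gating logic described in this thread: a single linear gate produces 8 logits per token, the top 2 are selected, a softmax is taken over the selected logits, and the token's output is the weighted sum of those two experts' outputs. Each decoder layer would have its own gate and its own 8 experts. Sizes are illustrative and the per-token loop is kept naive for clarity; this is not the official Mixtral implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoELayer(nn.Module):
    def __init__(self, dim=32, hidden=64, n_experts=8, top_k=2):
        super().__init__()
        self.gate = nn.Linear(dim, n_experts, bias=False)   # 8 logits per token
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, hidden), nn.SiLU(), nn.Linear(hidden, dim))
            for _ in range(n_experts)
        )
        self.top_k = top_k

    def forward(self, x):                                    # x: (n_tokens, dim)
        logits = self.gate(x)                                # (n_tokens, n_experts)
        weights, chosen = logits.topk(self.top_k, dim=-1)    # pick the 2 best experts per token
        weights = F.softmax(weights, dim=-1)                 # renormalise over the chosen two
        out = torch.zeros_like(x)
        for t in range(x.shape[0]):                          # naive per-token dispatch
            for k in range(self.top_k):
                expert = self.experts[int(chosen[t, k])]
                out[t] += weights[t, k] * expert(x[t])
        return out

y = SparseMoELayer()(torch.randn(5, 32))                     # 5 tokens through one MoE layer
```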
@user-tb4sg6lo8f 7 months ago
At 9:25, why are Q and K the same matrices in the case of self-attention? There are different linear layers for mapping the input sequence to queries and keys, aren't there?
@umarjamilai 7 months ago
I recommend you watch my previous video on the Transformer, where I explain the origin of the Q, K and V matrices.
@siqb 6 months ago
Yup. Q, K, V are 3 different projections of the input. If they were literally the same, QK^T would be a symmetric matrix.
@umarjamilai 6 months ago
@@siqb You're right, I should have mentioned that. Because I was talking about the "tokens" they "refer to" and not the single values they are made up of, it may have caused confusion. Thanks for pointing it out.
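A tiny sketch of the point above: Q, K and V are three different learned projections of the same input, so Q@K^T is not symmetric in general (illustrative sizes):

```python
import torch
import torch.nn as nn

d_model, seq_len = 16, 10
x = torch.randn(seq_len, d_model)           # the SAME input sequence feeds all three projections

w_q, w_k, w_v = (nn.Linear(d_model, d_model, bias=False) for _ in range(3))
q, k, v = w_q(x), w_k(x), w_v(x)            # three DIFFERENT learned projections of the same tokens

scores = q @ k.T
print(torch.allclose(scores, scores.T))     # False in general: Q and K are not the same matrix
```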
@GrifinsBrother 6 months ago
But your explanation about the specialisation of experts is wrong, because the paper states that the knowledge of each expert is distributed equally and there is no specialisation. Check the "Routing analysis" section of the paper.
@umarjamilai 6 months ago
The paper on the actual performance of the mixture of experts came AFTER I published my video. What I was talking about is not what actually happens (since I didn't have the data on the actual performance back then), but the intuition behind creating a mixture of experts: the idea is that each model - hopefully - specializes in a subset of the data. It may also happen that each model does NOT specialize, like in the case of Mamba. I believe the authors of Mamba also hoped for some kind of specialization, but in reality it didn't happen.
@user-ri9xz1dc6l 5 months ago
Amazing
@farzinhaddadpour7192 7 months ago
Thanks!
@umarjamilai 7 months ago
Thank you very very very much!
@avogadroarts4366 6 months ago
Thanks
@reginoldlu 7 months ago
Thanks!
@reginoldlu 7 months ago
Thank you! Requesting FlashAttention and FlashAttention-2! Keep working!! 😀
@reginoldlu 7 months ago
I just connected with you on LinkedIn
@ml.9106 4 months ago
Thanks!
@mihirrege206 5 months ago
Thanks!