MAMBA from Scratch: Neural Nets Better and Faster than Transformers

91,343 views

Algorithmic Simplicity

1 day ago

Mamba is a new neural network architecture that came out this year, and it performs better than transformers at language modelling! This is probably the most exciting development in AI since 2017. In this video I explain how to derive Mamba from the perspective of linear RNNs. And don't worry, there's no state space model theory needed!
Mamba paper: openreview.net/forum?id=AL1fq...
Linear RNN paper: openreview.net/forum?id=M3Yd3...
#mamba
#deeplearning
#largelanguagemodels
00:00 Intro
01:33 Recurrent Neural Networks
05:24 Linear Recurrent Neural Networks
06:57 Parallelizing Linear RNNs
15:33 Vanishing and Exploding Gradients
19:08 Stable initialization
21:53 State Space Models
24:33 Mamba
25:26 The High Performance Memory Trick
27:35 The Mamba Drama

Comments: 172
@jamescamacho3403
@jamescamacho3403 4 күн бұрын
As someone actively working on this stuff, this channel has the best explanations on the internet, and the 'tuber actually understands what is going on.
@jawadmansoor6064
@jawadmansoor6064 22 күн бұрын
Wow, you've made some difficult, I mean extremely difficult, algorithms look easy. Thank you.
@EkShunya
@EkShunya 22 күн бұрын
Please open your community tab, your content is incredible.
@jarib3858
@jarib3858 21 күн бұрын
One small note on RNNs: reservoir computing is a very high-dimensional random RNN with a linear regression readout, therefore there is no exploding nor vanishing gradient. Reservoir computing is currently the standard for non-linear dynamic time series prediction.
@rikkathemejo
@rikkathemejo 6 күн бұрын
Nice video! I just wanted to point out that the parallel scan algorithm can also be implemented in O(n) time (instead of the O(n log(n)) version presented in the video), and this is the version that Mamba uses.
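For anyone curious what the scan looks like in code, here is a rough NumPy sketch of the O(n log n) (Hillis-Steele style) version from the video, scanning the affine maps h -> w*h + x; the O(n)-work Blelloch-style scan that Mamba's kernel uses is a bit more involved. Function names and shapes here are illustrative only, not the actual Mamba code:

```python
import numpy as np

def combine(a1, b1, a2, b2):
    # Compose the affine maps h -> a1*h + b1 (earlier step) then h -> a2*h + b2 (later step).
    return a1 * a2, a2 * b1 + b2

def linear_rnn_scan(w, x):
    """All hidden states of h_t = w * h_{t-1} + x_t (with h_0 = 0) via a parallel-style scan.
    w: (d,) recurrent weights, x: (n, d) inputs. O(n log n) work, O(log n) sequential steps."""
    n = x.shape[0]
    a = np.broadcast_to(w, x.shape).copy()   # per-step multipliers
    b = x.copy()                             # per-step additive terms
    shift = 1
    while shift < n:
        # Each position combines with the position `shift` steps earlier (all positions in parallel).
        a_new, b_new = combine(a[:-shift], b[:-shift], a[shift:], b[shift:])
        a[shift:], b[shift:] = a_new, b_new
        shift *= 2
    return b                                 # b[t] is the hidden state after consuming x[0..t]
```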
@ithaca2076
@ithaca2076 22 күн бұрын
Absolutely love the quality and information of this video!!! Please keep up the good work, this is amazing.
@kamdynshaeffer9491
@kamdynshaeffer9491 21 күн бұрын
Absolutely amazing vid. Just subbed after getting recommended to this channel. Never stop making videos dude
@IllIl
@IllIl 18 күн бұрын
Thank you! Your channel is an invaluable resource on here. Hope you keep making these videos!
@peterdemore7239
@peterdemore7239 11 күн бұрын
Brutal. I'm going to have to watch this about 30 times. Love it.
@anrilombard1121
@anrilombard1121 22 күн бұрын
Currently testing it on molecular generation, so excited to see where these strengths hold and where they falter :)
@honglu679
@honglu679 21 күн бұрын
Wow, excellent explanation. It covers all the essence of the paper with just enough math/algo. Thank you so much! If you don't mind, please make a video on RWKV (v6 has some new modifications), which is another strong linear RNN model. I am curious how it compares to Mamba.
@tellu5493
@tellu5493 20 күн бұрын
This was very good, and I hope you make more videos like this!
@markdatton1348
@markdatton1348 21 күн бұрын
Awesome video. I love the speed and the depth of this, it's perfect
@anthonybernstein1626
@anthonybernstein1626 21 күн бұрын
Amazing explanation, thank you!
@BooleanDisorder
@BooleanDisorder 22 күн бұрын
You have such a pleasant voice 😊 Thanks for helping me understand better. Please keep making videos. ❤
@luke2642
@luke2642 16 күн бұрын
in para-lllelll :-D
@RexPilger
@RexPilger 10 күн бұрын
About peer review: As one comment noted, there could be many more candidate papers presented than could be accommodated at the venue. However, this video argues, the rejection justification for this paper is inadequate at best. Some comments ask whether the rejection is important; for academics, the answer is yes, because presentations and publications count for tenure, promotions, and raises plus continued funding of the research. Since several comments plus the video indicate that the algorithm had already received a lot of publicity, for the sake of the project it may not matter if it can continue to be funded, especially if commercial implementations are successful.

What is interesting in any case is that the paper exists; in effect it has been published; the authors may not get the desired credit for formal publication, but their work and the reviewer comments are out there now. A couple of decades ago that would not have been the case; most people in the field would be unaware of the algorithm.

In terms of peer review, in general (outside of AI), in my field, one of the natural sciences, a paper I submitted for publication encountered an editor plus two reviewers who were well qualified in the field; after asking for two revisions to the manuscript, the third version was rejected. Interestingly, all three scientists had published research which my paper undermined; they may well have lost funding for their research or even their position had that manuscript of mine been published (I speculate here). Peer review cuts both ways. While iterating with the editor and reviewers I continued to expand my research project and made some additional discoveries. Following the rejection I wrote a completely different paper which incorporated my initial work supplemented by the new discoveries; happily it was published a few months ago (in a different journal). I'm formally retired now, but continue to do research.

To young researchers -- never give up. Learn from rejection, refine your work, be humble, exercise integrity and honesty, and take pride in your accomplishments, even if only a few know about them. Peer review (by humans) is a necessity and will continue to be. There is no such thing as a perfect filter, but science and technology would be overwhelmed by irrelevancy, dishonesty, and duplication of effort without it. AI may become a useful filtering tool, but science is a human endeavor.
@koka3243
@koka3243 22 күн бұрын
Great video! Thanks!
@nialv7985
@nialv7985 18 күн бұрын
Thanks for this explanation! Phrasing Mamba in terms of a linear RNN makes it much easier to understand. You've done a lot already with this video, but I just want to ask for a little bit more. Since the original Mamba paper presented the model in terms of SSMs, many, many implementations of Mamba also use that language, and I have difficulty wrapping my head around trying to map their code back to the concepts in this video. I wish you could explain how the concepts in the Mamba paper (Δ, A, B, C, D, discretization, etc.) map back to the parameters of a linear RNN; that would help a lot.
@algorithmicsimplicity
@algorithmicsimplicity 18 күн бұрын
Sure. In the state-space terminology, A in ℂ^d is the learnable parameter that is used to make the recurrent weight vector; the equivalent in my video is a+bi, with a, b in ℝ^d as learnable parameters and i the imaginary unit. B, C in ℂ^{d×d} are the complex matrices applied before and after the recurrence respectively, equivalent to the P and Q matrices in my video, also learnable parameters.

SSMs perform discretization of the parameters, which creates A^bar = e^{ΔA} and B^bar = (ΔA)^{-1}(exp(ΔA) - I)ΔB. Note that A^bar and B^bar are what are actually used in the computation. This discretization is equivalent to the stable reparameterization outlined in my video. In the SSM formulation, they phrase the discretization as modifying B into B^bar, but note that B is the matrix which is applied to the input, so multiplying B with Δ is equivalent to multiplying the input x with Δ and leaving B unchanged, which is how it is described in my video.

One last thing to be aware of: in the state-space literature, the models are often described as having another "state dimension" N in addition to the model dimension d. This state dimension is equivalent to the factor by which the output vector's dimension is expanded, so for example Mamba uses N=16, i.e. it expands outputs by a factor of 16. Let me know if you still have any questions!
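To make the mapping concrete, here is a minimal NumPy sketch of a single recurrence step under that correspondence. The shapes, names, and the simplified handling of Δ (folding it into the input, as described above) are illustrative assumptions; the real Mamba code additionally uses the extra state dimension N=16 and fused GPU kernels:

```python
import numpy as np

def ssm_step(h, x, A, B, C, delta):
    """One step of a diagonal SSM recurrence written as a linear RNN step (sketch only).
    h: (d,) complex hidden state, x: (d,) input, A: (d,) complex (the a+bi of the video),
    B, C: (d, d) input/output matrices (the P and Q of the video), delta: (d,) step sizes."""
    A_bar = np.exp(delta * A)   # A^bar = exp(ΔA): the actual recurrent weight vector
    u = delta * (B @ x)         # scaling the input by Δ, equivalent to folding Δ into B
    h_new = A_bar * h + u       # elementwise recurrence, since A is diagonal
    y = C @ h_new               # output projection
    return h_new, y
```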
@nialv7985
@nialv7985 17 күн бұрын
@@algorithmicsimplicity Thank you so much!
@harshvardhanv3873
@harshvardhanv3873 5 күн бұрын
We need more videos from you, especially ones covering the basics.
@algorithmicsimplicity
@algorithmicsimplicity 4 күн бұрын
Any topics in particular you'd like to see?
@harshvardhanv3873
@harshvardhanv3873 4 күн бұрын
@@algorithmicsimplicity We need a video series on math for ML: linear algebra, calculus, probability and statistics, each from an ML perspective. After that we would like to learn more about basic concepts like regression, classification, clustering, etc. We would also like to learn more about the types of learning: unsupervised, semi-supervised and self-supervised, and some basic architectures like RNN variants (LSTM, GRU, hybrids), basic ANNs, MLPs, and even the recent KAN and NTK.
@algorithmicsimplicity
@algorithmicsimplicity 4 күн бұрын
@@harshvardhanv3873 Got it. I am definitely planning to do videos on calculus and probability for ML soon. After that I can do videos on the types of ML.
@harshvardhanv3873
@harshvardhanv3873 4 күн бұрын
@@algorithmicsimplicity sure waiting for your videos ✌
@sichengmao4038
@sichengmao4038 2 күн бұрын
Well, maybe 3b1b's videos already fulfill what you need for the ML prerequisites.
@tomfahey2823
@tomfahey2823 17 күн бұрын
8:50 Analogous to the FFT algorithm, which has the same O(N log N) complexity.
@TragicGFuel
@TragicGFuel 13 күн бұрын
Exactly what I thought!
@diabolo19x
@diabolo19x 18 күн бұрын
Incredible work. I mean REALLY incredible
@alexmomot6268
@alexmomot6268 22 күн бұрын
Thx a lot for the interesting video! 💛💙
@nikilragav
@nikilragav 19 күн бұрын
I really wish that when you're talking about things happening in parallel, your animations happened in parallel. Like 8:30. I think it would really improve the comprehensibility of your explanation
@phmfthacim
@phmfthacim 20 күн бұрын
This is amazing!
@pi5549
@pi5549 22 күн бұрын
Another beautiful exposition. Further points: (1) HiPPO itself comes from attempting to approximate a spiking net with a SSM (Voelker 2017/8), (2) we do have O(NlogN) transformer hacks now, (3) RWKV is a promising arch that deserves a place in this arena.
@algorithmicsimplicity
@algorithmicsimplicity 22 күн бұрын
I haven't heard of any O(NlogN) transformer hacks that preserve performance, got any links? And yeah RWKV is promising, I would've loved to talk about it as well but the video was getting long lol.
@luke2642
@luke2642 16 күн бұрын
great video. That trick around the 26 minute mark of doing 16x compute almost for free (in terms of time) because of memory bottlenecks is really neat. I wonder how many other architectures would benefit from that kind of design optimisation?
@algorithmicsimplicity
@algorithmicsimplicity 16 күн бұрын
It appears that it is only useful for linear recurrent layers, because the main computation is just performing elementwise multiplication between the previous output vector and the recurrent weight vector, which means you have O(d) parameters and you do O(d) compute, and transferring one parameter takes longer than doing one operation. For other kinds of layers, such as fully connected layers, you are doing at least a matrix-vector multiplication, which means you are doing O(d^2) compute, and that usually takes much longer than transferring O(d) parameters.
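A very rough back-of-the-envelope version of this argument (illustrative numbers only, not measurements from the paper):

```python
# Why the 16x state expansion is nearly free: the layer is limited by memory traffic, not FLOPs.
d, n, expand = 2048, 4096, 16        # model dim, sequence length, Mamba's expansion factor (assumed values)
hbm_bytes = 2 * 4 * n * d            # read inputs + write outputs once from main GPU memory (fp32)
recur_flops = 2 * n * d * expand     # one multiply-add per element of the expanded state, done on-chip
dense_flops = 2 * n * d * d          # a d x d matmul over the sequence, for comparison
print(recur_flops / hbm_bytes)       # ~4 FLOPs per byte moved: memory-bound, extra compute hides behind transfers
print(dense_flops / hbm_bytes)       # ~512 FLOPs per byte moved: compute-bound
```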
@2255.
@2255. 22 күн бұрын
underrated channel
@1LC4P1T4L1ST4
@1LC4P1T4L1ST4 22 күн бұрын
I believe that the transformer does have a quadratic cost in memory (specifically self-attention (SA)). The attention matrix in SA is n by n, thus n^2 (n being the number of tokens). Probably the reviewers are referring to that bit. Anyway, rejecting Mamba was hecking stupid. Great video!
@algorithmicsimplicity
@algorithmicsimplicity 22 күн бұрын
The matrix is indeed n^2, but you never need to materialize the full matrix at the same time. You can materialize one column at a time, which is exactly what FlashAttention does, resulting in O(n) memory (still O(n^2) compute though).
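A simplified NumPy sketch of that idea, processing one block of queries at a time so the full n x n score matrix never exists at once (this is not the real FlashAttention kernel, which also tiles over keys with an online softmax and keeps tiles in SRAM; no causal mask here, purely illustrative):

```python
import numpy as np

def attention_low_memory(Q, K, V, block=128):
    """Standard attention output, computed one query block at a time.
    Peak extra memory is O(block * n) instead of O(n^2)."""
    n, d = Q.shape
    out = np.empty_like(V)
    for start in range(0, n, block):
        q = Q[start:start + block]                    # (b, d) block of queries
        scores = q @ K.T / np.sqrt(d)                 # (b, n): only one block of score rows at a time
        scores -= scores.max(axis=-1, keepdims=True)  # subtract row max for numerical stability
        weights = np.exp(scores)
        weights /= weights.sum(axis=-1, keepdims=True)
        out[start:start + block] = weights @ V        # (b, d) block of outputs
    return out
```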
@1LC4P1T4L1ST4
@1LC4P1T4L1ST4 22 күн бұрын
I have no idea how FlashAttention manages to be faster and more memory-friendly. Are you sure that the attention matrix is never fully in memory (regardless of the type of memory)? However, the classical implementation didn't use FlashAttention, so I believe that the reviewer is referring to that.
@1LC4P1T4L1ST4
@1LC4P1T4L1ST4 22 күн бұрын
I have rechecked the paper and it appears that FlashAttention is indeed linear with respect to memory. The work of Tri Dao is magic to me.
@Mohammed-rx6ok
@Mohammed-rx6ok 16 күн бұрын
Good job 👏
@OscarTheStrategist
@OscarTheStrategist 22 күн бұрын
Amazing video, insta-sub!
@MrStevemur
@MrStevemur 14 күн бұрын
I appreciate the soothing piano music. Currently the words are only slightly better than Charlie Brown listening to adults talk, but I hope to dive in.
@Alulapower
@Alulapower 22 күн бұрын
Good video to explain Mamba: I understand something.
@harrysvensson2610
@harrysvensson2610 22 күн бұрын
You see, it's O(n log(n)) instead of O(n^2) without any penalties. Okay? 100% crystal clear, right? //end of joke
@BooleanDisorder
@BooleanDisorder 22 күн бұрын
@@harrysvensson2610 That means that, basically, transformers scale as x² in the compute needed for a prompt. It's also called square or quadratic, since x² is a square if you draw it as a geometric figure. So if you write a prompt of 5 words, that's 25 units of compute, since 5*5=25. You can see how this gets really crazy at high token counts. Mamba scales differently, so you need much less compute per prompt.
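The same arithmetic at larger context lengths (rough operation counts, ignoring constant factors):

$$
n = 1000:\; n^2 = 10^6 \;\text{ vs. }\; n\log_2 n \approx 10^4; \qquad
n = 100000:\; n^2 = 10^{10} \;\text{ vs. }\; n\log_2 n \approx 1.7\times 10^6.
$$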
@MarcosPedroLeal
@MarcosPedroLeal 22 күн бұрын
Loved your videos. Which software or library do you use to make these animations? Is it manim?
@algorithmicsimplicity
@algorithmicsimplicity 22 күн бұрын
It is a combination of Manim (for rendering latex) and my own renderer written in Pytorch (for 3d stuff).
@drdca8263
@drdca8263 22 күн бұрын
Here’s an idea that probably wouldn’t work: What if instead of algebraically guaranteeing that some operation is a monoid so that one can use the parallelizing thing that combines n inputs in O(log(n)) steps in n processors, what if you just had some operation, learned by a NN, which has “how much it deviates from being a monoid operation” as part of the loss? Like, suppose you randomly selected some pair of consecutive applications of the operation, and also computed it in the opposite order, and took the L^2 norm of the difference between the results, and multiplied that by some weighting, and made that a term in the loss? Like, within the family of continuous and piecewise-smooth monoidal operations, perhaps some of them would be better at selective remembering?
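For concreteness, the proposed penalty term might look something like this; a sketch of the commenter's idea only, untested, and `op`, `lam` and the way the triple (a, b, c) is sampled are all assumptions:

```python
def associativity_penalty(op, a, b, c):
    """How far a learned binary operation deviates from associativity on one sampled triple:
    || op(op(a, b), c) - op(a, op(b, c)) ||^2.
    `op` can be any learned module mapping two (batch, d) tensors to one (works with PyTorch or NumPy)."""
    left = op(op(a, b), c)    # combine left-to-right
    right = op(a, op(b, c))   # combine right-to-left
    return ((left - right) ** 2).mean()

# e.g. total_loss = task_loss + lam * associativity_penalty(op, a, b, c)
```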
@algorithmicsimplicity
@algorithmicsimplicity 22 күн бұрын
That sounds really interesting, you should try it out!
@drdca8263
@drdca8263 22 күн бұрын
@@algorithmicsimplicity Thanks! Unfortunately I am lazy... And there's already another "what if I did [X]?" machine learning project I barely started ("what if I tried to add a simple approximation of what copying heads do to an n-gram model"), which seems like it should be much easier, but I've barely written the n-gram model part of it (and ChatGPT honestly wrote most of that). Haven't even started on the "compute statistics about whether copying a word from earlier in the current text, or going by the corpus as a whole, is more accurate in this context" part...
@CyrusEstavillo
@CyrusEstavillo 16 күн бұрын
@@drdca8263 That's a lame response. Try it. Make something in this world.
@TheDoomerBlox
@TheDoomerBlox 3 күн бұрын
It's only yet another silly experiment to do the seemingly impossible in the hottest meme area; picking your nose seems like a more productive waste of time. But imagine if you found something really cool and nobody would listen. That would be funny, that would be cool.
@YA-yr8tq
@YA-yr8tq 7 сағат бұрын
The channel is great and the material is awesome! The only catch: the piano in the background makes it hard to focus.
@InfiniteQuest86
@InfiniteQuest86 20 күн бұрын
I like how we now call 1 billion parameters small.
@hackerborabora7212
@hackerborabora7212 22 күн бұрын
This algo is new and you already made a video about it, I love you. I will subscribe to your channel, keep going.
@tannergilliland6105
@tannergilliland6105 7 күн бұрын
If you ever get the time I would love to see another video on the Mamba implementation, but dumbed down even more, like to the level of StatQuest videos. They make you feel special while also showing the math step by step like it's 9th grade.
@algorithmicsimplicity
@algorithmicsimplicity 7 күн бұрын
Thanks for the suggestion, there will probably be improved versions of Mamba coming out soon, I will make a more basic explanation video for them when they do.
@yqisq6966
@yqisq6966 6 күн бұрын
Peer review is broken nowadays because people have little time to actually read through a manuscript with attention to details given the amount of pressure to publish their own papers. So when you have more papers out there than the time people can spend on reviewing, you get low quality peer review.
@Singularity606
@Singularity606 22 күн бұрын
There seems to be a growing zoo of related architectures that attempt to supersede the transformer. Besides Mamba, there's also RetNet, GLA, Based, and HGRN. And the secret upcoming xLSTM. Someone also mentioned RWKV. Are all these converging to something? And when will we see a frontier model based on this new paradigm?
@BooleanDisorder
@BooleanDisorder 22 күн бұрын
The main problem with transformers is the compute scaling with input length. Mamba tries to be equally good at high-dimensional representations as transformers without the extreme compute scaling. So effectively we want much more complex representations in the end, without needing a nuclear power plant and a supercomputer for inference. Transformers could continue to get better, but it takes an astronomical amount of compute atm.
@algorithmicsimplicity
@algorithmicsimplicity 22 күн бұрын
I believe we are converging to hybrids of transformers and dynamic linear RNNs, such as Griffin (arxiv.org/abs/2402.19427). There are already open-source Mamba language models with a few billion parameters; training and testing full-sized models takes about a year.
@MrObveous777
@MrObveous777 17 күн бұрын
@@algorithmicsimplicity "training and testing full size models takes about a year." why so long?
@BC-bn7xd
@BC-bn7xd 17 күн бұрын
I think just training it can take weeks if not months ​@@MrObveous777
@maximilianchrzon4545
@maximilianchrzon4545 18 күн бұрын
Your videos are so good man, keep it up, seriously. Although this is probably beneath you, could you maybe make a video on how neural networks are computed on machines in general, or maybe on GPUs? As someone who did not learn computer science in uni, this would be an interesting topic for me to learn and maybe fundamentally understand NNs better.
@algorithmicsimplicity
@algorithmicsimplicity 18 күн бұрын
That's an interesting topic, I was planning on making videos about how CPUs and GPUs work at the physical level (e.g. logical gates are built out of transistors, addition and multiplication are built out of logical gates). Neural nets are just implemented as a bunch of matrix multiplications (you put all the neuron weights in one matrix and multiply it with the input). Is that what you are asking about?
@maximilianchrzon4545
@maximilianchrzon4545 18 күн бұрын
@algorithmicsimplicity Yeah, that sounds about right, thank you. Maybe you could use matrix multiplication as a case example of those inner workings :) Anyways, thanks for making awesome videos.
@ArtOfTheProblem
@ArtOfTheProblem 8 күн бұрын
@@maximilianchrzon4545 3b1b has this covered pretty well already.
@ollybreh95
@ollybreh95 12 күн бұрын
Woah big claim! I’m excited
@tulgatbolderdene7493
@tulgatbolderdene7493 22 күн бұрын
This just shows how RNNs are way too natural of an architecture to ignore. Maybe solution to a gradient descent problem is to not use gradient descent at all. There has to be a different way to update parameters than this bizarre hack and slash let ||x_0|| = 1 for RNNs.
@BooleanDisorder
@BooleanDisorder 22 күн бұрын
Meta-learning could potentially be one way. Like a neural "module" in the model that looks at how changes in the first layers affect the representation space deeper in, and vice versa. It would have to have some goal and reward itself.
@tempname8263
@tempname8263 22 күн бұрын
But gradient descent is too natural of an algorithm to ignore >.
@ckpioo
@ckpioo 22 күн бұрын
@@tempname8263 It's actually not natural at all; gradient descent itself is the one big difference between a human brain and any neural network.
@egor.okhterov
@egor.okhterov 22 күн бұрын
​@@tempname8263no
@ultrasound1459
@ultrasound1459 21 күн бұрын
​@BooleanDisorder you have 10 missed calls from Juergen Schmidhuber 🧏‍♂️
@blacklistnr1
@blacklistnr1 22 күн бұрын
Nice video! What I didn't understand is what happens to the stable weights during training. In particular:
- How are they kept stable?
- How can the model learn while being so restricted?
What I'm guessing is that some form of the Delta is also used in training to keep the weights in those ranges, plus relying a lot more on numerical precision to carry the information. Is this correct? Does it imply that using double instead of float gives it a better ability to learn?
@algorithmicsimplicity
@algorithmicsimplicity 22 күн бұрын
Great question. The answer is it's really complicated and no-one knows for sure. There is nothing explicitly keeping the weights stable during training. They can (and probably do) become unstable. The thing is, there are actually thousands of different weights in the vector. At initialization, all of the weights are essentially one, so information from anywhere in the input can influence the gradient, but the model is incredibly restricted (cannot perform meaningful transformations in the recurrence). Then SOME of those weights change and enter the un-stable regime, so they can no longer carry information long distance but can do more interesting computations, while others remain stable. And in the fully-connected layers between recurrences, all weights can communicate information with each-other. So you have this complicated system where weights are changing at different rates, some remain stable, some become unstable, and that allows for interesting computation to be done and information to be propagated long distances.
@blacklistnr1
@blacklistnr1 22 күн бұрын
@@algorithmicsimplicity Thanks for the reply! That's quite interesting, different propagation lengths didn't even cross my mind. It'd be really funny if after all this work the model learned unstable weights and became forgetful :))
@nyyotam4057
@nyyotam4057 22 күн бұрын
So how close is the weight estimator to the MMSE (minimal mean square error) estimator? Can the MAMBA arch be improved even more, using a sparse covariance matrix and an application of a 'true' Kalman filter? Or is it already as close as it can get?
@agsystems8220
@agsystems8220 22 күн бұрын
RNNs are constrained by having to hold all their information in a single embedding space, so this space needs to be extremely large. It needs to hold every piece of information in the context that might come in useful at some point. Transformers can distribute information between many tokens, so can operate with a much smaller embedding space, at least in theory. The memory complexity of a RNN with a given structure is quadratic on the size of the embedding space, meaning we really pay big time for that increased embedding size. I wonder if that is what the reviewer was getting at. The results were impressive, but they haven't been followed up by success at larger model sizes which I would have expected to have already happened if it was going to. It is a cool mathematical trick to make it work, and demonstrates that language is surprisingly linear, but once you start to hit truly non linear questions I would expect it to stop improving. Overhyped IMO.
@howuhh8960
@howuhh8960 22 күн бұрын
If you stack multiple linear RNN layers they can handle non-linear dependencies across time, so "demonstrates that language is surprisingly linear, but once you start to hit truly non linear questions" is not true, as the Mamba model as a whole (multiple layers) is a nonlinear RNN.
@algorithmicsimplicity
@algorithmicsimplicity 22 күн бұрын
The really cool thing about linear RNNs is that increasing the size of the embedding space only has linear cost, not quadratic. The recurrence operator only performs elementwise multiplication with the embedding vector. This is why Mamba is able to increase the size of the embedding vector by a factor of 16 at essentially no cost. If you were willing to incur some additional cost, you could easily make the embedding vectors even larger. When you expand the embedding vector by a factor of a few thousand, now you're talking about as much memory as a transformer with a few thousand tokens of the original size. Works are currently in progress to train larger model sizes; it takes about a year from start to finish to train a full-sized model. Mamba already achieves state of the art performance for ~3b sized language modelling, which is HIGHLY HIGHLY non-linear. And finally, while there are some aspects in which transformers are still superior to dynamic linear RNNs, hybrid architectures such as Griffin (arxiv.org/abs/2402.19427) appear to give the best of both worlds, handily outperforming both.
@Nerdimo
@Nerdimo Күн бұрын
Would you mind explaining the associativity at 10:37? My assumption is that f is the linear recurrence function, but how is it equal to a pair made of the matmul between W2 and W1 and that second term? Wouldn't f output a vector, so how could it be equal to the right-hand-side pair of vectors?
@augmentos
@augmentos 17 күн бұрын
Great video, would prefer no music but that’s me
@jhonny1682
@jhonny1682 10 күн бұрын
Can you make an explanation video like this one on Liquid Time Constant Networks 🙏
@tantzer6113
@tantzer6113 22 күн бұрын
Enjoyed this. Given that its performance is comparable to or better than transformers as verified independently in several papers, is Mamba gaining a foothold among practitioners?
@mimotron
@mimotron 22 күн бұрын
It does : kzfaq.info/get/bejne/b9ldbMSE1MjPqWw.html
@algorithmicsimplicity
@algorithmicsimplicity 22 күн бұрын
Definitely, lots of open source language models are switching to Mamba. Mamba is also being used for other tasks as well, e.g. arxiv.org/abs/2401.09417 Also, recently google deepmind released this paper ( arxiv.org/abs/2402.19427 ) on hybrid dynamic linear RNN and transformers which achieves really good results. Dynamic linear RNNs are definitely going to become mainstream.
@karius85
@karius85 22 күн бұрын
Appreciate the breakdown. I think there are a few more things at play here for the reject that is somewhat overlooked in the discussion at the end. Specifically, there are issues with anonymity and using "hype" to push a paper through an academic conference. I speculate that this was the underlying reason for rejecting the paper.
@algorithmicsimplicity
@algorithmicsimplicity 22 күн бұрын
Cool, if that was the reason for the reject they should have said so in the rationale for the reject. Instead they made up a bunch of criticisms which are either 1) irrelevant or 2) blatantly untrue. That's a bad look for the conference, as it makes it seem like their reviewers are unqualified to judge academic works.
@karius85
@karius85 22 күн бұрын
@@algorithmicsimplicity Absolutely agree. In my experience, the quality of conference reviewers is extremely variable. Almost all researchers I know have horror stories about how incompetent and outright adversarial reviewers can be. Many great papers are rejected without sufficient basis, and mediocre papers are included for seemingly no good reason. Many experienced researchers don't want to review anymore. Just a comment on the reject: it might have been a conscious decision not to actually bring the anonymity issues up in the rebuttal to avoid further disputation. But I am just speculating here with little to no factual basis.
@algorithmicsimplicity
@algorithmicsimplicity 22 күн бұрын
It could very well have been a conscious decision, but I think it was the wrong decision. From an outside perspective, it looks like a fantastic paper was rejected because of clueless reviewers. That's far more damaging to the conference's integrity than whatever conflicts might arise from anonymity violation disputes.
@karius85
@karius85 22 күн бұрын
@@algorithmicsimplicity Independently of what one may think of the paper, I agree that the justification for the reject was weak. Unfortunately, I don't think it matters much for the integrity of the conference in the long run, as this has happened in all the other big conferences in the past. Authors generally adapt and move on. What makes this unique is the hype around Mamba. Previously, no single member of the general public would have been interested in the review decision of a single paper in AI / ML. Now, the community extends far beyond academics, for better or worse. All in all, I hope it serves to incentivise stronger review processes for the future.
@karius85
@karius85 22 күн бұрын
On a side note, I really enjoy your content, keep up the good work 👏
@blutwurst9000
@blutwurst9000 11 күн бұрын
Love the video, but I have a question: shouldn't the approximation at 17:00 be something like n*w^(n-1)*0.001*x, i.e. isn't there an n missing? Or how was the approximation done?
@algorithmicsimplicity
@algorithmicsimplicity 11 күн бұрын
Ahh yes you're right, there should be an n out the front, the gradient is proportional to nw^(n-1)x. The vanishing/exploding gradient arguments are still the same though, the linear scaling factor doesn't matter compared to the exponential scaling for large n.
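For anyone following the thread, the derivative in question, treating the recurrence weight as a scalar $w$ and the contribution of a token $n$ steps back as $y \approx w^n x$:

$$
\frac{\partial y}{\partial w} = n\,w^{\,n-1}x,
$$

so a perturbation $w \to w + 0.001$ changes $y$ by roughly $n\,w^{\,n-1}\cdot 0.001\cdot x$; for large $n$ the exponential factor $w^{\,n-1}$ dominates the linear factor $n$, which is the vanishing/exploding-gradient argument from the video.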
@mehnot8193
@mehnot8193 16 күн бұрын
Extremely noob question, but at 13:52 why aren't the input vectors x multiplied by P^-1 instead of P? Don't you need to convert them to the eigenbasis before applying the D transformation (or, equivalently, taking the Hadamard product with the diag(D) vector)?
@algorithmicsimplicity
@algorithmicsimplicity 16 күн бұрын
Yes, I should have applied P^-1 first to be consistent with my earlier notation W=PDP^-1. Of course, the naming is just a matter of preference, you can equivalently call the first matrix which is applied P or P^-1, so long as the two matrices are inverse of each other it doesn't matter which is called which.
@mehnot8193
@mehnot8193 16 күн бұрын
@@algorithmicsimplicity Oh ok, that makes sense now! Thanks a lot for your answer and this amazing video ^^
@oraz.
@oraz. 21 күн бұрын
One thing I don't understand is the HIPPO matrix, and what they mean by a structured matrix in the context of differential equations.
@erikxu3472
@erikxu3472 21 күн бұрын
Mamba Mentality
@timeflex
@timeflex 5 күн бұрын
GPT mafia 😞 Probably they just can't lose face and the title of "the best LLM tech" (and, perhaps, contracts as well).
@oleonardohn
@oleonardohn 21 күн бұрын
I haven't found any significant evidence suggesting that Mamba models outperform Transformers, except that their attention mechanism does not scale quadratically with the context length. Am I missing something?
@ilonachan
@ilonachan 17 күн бұрын
I mean, even if it just accomplished the tasks about as good as transformers qualitatively, the better compute scaling alone is pretty significant.
@oleonardohn
@oleonardohn 17 күн бұрын
@@ilonachan Sure, but as far as I'm concerned, there is not much evidence it can qualitatively perform the same tasks either. Some people reported that Mamba's state space doesn't perform as well as true attention for long contexts.
@SolathPrime
@SolathPrime 22 күн бұрын
[6:28]: While that sounds somewhat good, in practice it doesn't work like that. Alternating between linear recurrent and non-linear dense layers doesn't give that much of an advantage in context :( The gradients vanish or explode after a while and require some sort of sigmoid transformation + some value. Say for example an architecture like this:
```plaintext
Dense -> Sigmoid -> Recurrent -> Dense -> Sigmoid -> Recurrent -> Dense -> Softmax
```
By the time the gradients reach the first Recurrent, they have lost most of their value :(
@zyzhang1130
@zyzhang1130 2 күн бұрын
Can skip connections be used here to tackle vanishing/exploding gradient problems?
@algorithmicsimplicity
@algorithmicsimplicity 2 күн бұрын
Skip connections would be equivalent to adding 1 to each recurrent weight, which still doesn't fix the problem that you need the weights to be close to 1. You would still need to change the initialization so that the weights are initialized all very close to 0 (so when you add 1 with the skip connection they become close to 1).
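In the video's notation (with $\odot$ for elementwise multiplication), the equivalence stated here is just:

$$
h_t = h_{t-1} + \big(w \odot h_{t-1} + x_t\big) = (1 + w)\odot h_{t-1} + x_t,
$$

so a skip connection only shifts the effective recurrent weight to $1+w$; the requirement that the effective weight's magnitude stay close to 1 is unchanged, it just moves where the weights must be initialized (near 0 instead of near 1).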
@zyzhang1130
@zyzhang1130 2 күн бұрын
@@algorithmicsimplicity Thanks for the explanation. Just to add: I'm not aware that traditionally in the vision domain people initialise different layers specifically to tackle vanishing/exploding gradient issues (on top of skip connections). Did the mechanism you mentioned spontaneously emerge from training?
@algorithmicsimplicity
@algorithmicsimplicity 2 күн бұрын
@@zyzhang1130 Vanishing and exploding gradients only arise in recurrent neural networks because you use the same weights in each iteration. This is what causes the exponential growth/decay. In the vision domain you use feed-forward nets, with different weights in each layer, so this isn't an issue and you don't need specialized initializations.
@zyzhang1130
@zyzhang1130 2 күн бұрын
@@algorithmicsimplicity Hmm, pretty sure it does occur in vision as well (at least one of them) for deep models; that's why they added skip connections.
@algorithmicsimplicity
@algorithmicsimplicity 2 күн бұрын
@@zyzhang1130 Using ReLU activation functions is enough to completely solve vanishing and exploding gradients in feed-forward networks. Residual connections solve a related but different problem, shattered gradients: arxiv.org/abs/1702.08591
@goblinkoma
@goblinkoma 17 күн бұрын
Peer review be like: that's a nice method for building houses, it's a shame it doesn't also cook burgers. What?
@tempname8263
@tempname8263 22 күн бұрын
21:48 33%? Dude, it's a 3.4x improvement. Measuring improvement relative to accuracy instead of error rate is dumb, since that'd mean the difference between 100% accuracy and 99% is just 1%, which is not representative of anything.
@harrysvensson2610
@harrysvensson2610 22 күн бұрын
Everyone got issues when it comes to calculating with percentages. Here's an example: Imagine a game character with armor, the person got 98% damage reduction, and then puts on some more armor and reaches 99% damage reduction. How much less damage does the tank take compared to before putting on the extra armor? 100%? 50%? 1%? If you math it out it's obviously 50% less damage taken, since there's 2% between 98% and 100%. And one of those 2% is now removed, hence 1/2 -> 50% less damage taken compared to before. But you know what? Not everyone agrees that it is 50%. Understanding percentages is difficult.
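Worked out explicitly in terms of damage taken (the quantity being compared), with the numbers above:

$$
\frac{(1-0.98)-(1-0.99)}{1-0.98} = \frac{0.02-0.01}{0.02} = 50\% \text{ less damage taken,}
$$

even though the absolute change in damage reduction is only 1 percentage point; the same absolute-vs-relative distinction applies to measuring the model's improvement against error rate rather than accuracy.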
@BooleanDisorder
@BooleanDisorder 22 күн бұрын
@@harrysvensson2610 Yeah, the armor thing is a great example. The higher the damage and the more important a tank is, the more important that single percent becomes. It could literally mean the difference between surviving a blow from a boss or dying.
@ScorpioneOrzion
@ScorpioneOrzion 22 күн бұрын
@@harrysvensson2610 It depends; for the armor example, it's 1% absolute and 50% relative.
@harrysvensson2610
@harrysvensson2610 22 күн бұрын
@@ScorpioneOrzion Exactly.
@tempname8263
@tempname8263 21 күн бұрын
@@harrysvensson2610 It's not like it's difficult, it's just that most people make leaps in logic, where they don't even think about relative to *what* they are measuring the percentage.
@unkarsthug4429
@unkarsthug4429 22 күн бұрын
People keep making things that they say are "better than transformers", but none of them are actually getting used. At this point, hearing people say that has sort of become meaningless from the number of false alarms. Feels like every few months we have something "better than transformers", like RetNets were claimed to be. We'll have to wait and see which actually turn out to be better with time.
@algorithmicsimplicity
@algorithmicsimplicity 22 күн бұрын
Yep, but Mamba is different, it is already being used in open source language model projects.
@Supreme_Lobster
@Supreme_Lobster 14 күн бұрын
Investor money is generally spent conservatively. It will take at least a few months for them to see the upside in divesting from super large transformers and moving on to Mamba (or upcoming derivatives). Remember, the Transformer was first published in 2017, and it took until at least 2020 for any "large" (>3B) model to come out.
@yonnn7523
@yonnn7523 15 күн бұрын
Great video as always, but it would be even better without the distracting background music.
@ArtArtisian
@ArtArtisian 22 күн бұрын
Eh - re controversy, I don't think peer review broke here. The *conference* however, took a much deserved status hit for mismanaging review.
@justtoleavecomments3755
@justtoleavecomments3755 21 күн бұрын
"Small models up to a few billion params" I think people have forgotten what small means 😂
@AkarshanBiswas
@AkarshanBiswas 22 күн бұрын
Who cares about these academic peer reviews? 😅 But anyway, the only downsides I have seen in S6 are that it is not always stable during training (I have seen huge outliers) and that it performs worse than transformers at copying.
@algorithmicsimplicity
@algorithmicsimplicity 22 күн бұрын
Yep, apparently there are a few things where Mamba performs worse than transformers. Hopefully hybrid architectures such as Griffin (arxiv.org/abs/2402.19427) give the best of both worlds.
@chickenp7038
@chickenp7038 22 күн бұрын
The Jamba paper says they add RMSNorm to intermediate activations, but not which activations. Do you have any ideas?
@jks234
@jks234 21 күн бұрын
IMO peer review is key to being able to continue to move forward. As I heard George Hotz say, “Intelligence is compression.” We advance in all directions, reconvene and compress into useful theories and explanations, and using our newfound perspective, explore once again.

I personally am amazed at how much more clearly I understand the shortcomings of RNNs addressed by transformers after this video, and how this is related to the fundamental nature of backpropagation. Also, I found the analogy with convolution at the beginning quite insightful. It allowed me to understand that RNNs are essentially a dynamic programming algorithm (recursive and sequential), while transformers are parallel and thus no longer “time-bound”.

I have been doing a lot of reflection on that. Primarily on my own assertion that RNNs are probably not a good path to go down… because my own thinking is not fundamentally recursive like that. My own thinking is probabilistic and going in many directions at once. At least, that is my experience. And thus, it feels much more aligned with the matmul model of transformers: decoupled from the sequence and evaluated from a much higher level than sequential recurrence.

I have personally been reflecting on how perhaps the next step would be an “importance metric”. Just as humans naturally filter quite quickly for importance and feel that certain paths hold more promise, I feel that this might be a promising next step for transformers. In a phrase, “filtering heuristics”. MoE, but at the attention level.
@augmentos
@augmentos 17 күн бұрын
Griffin is the first, but why are none of the major labs putting out a huge Mamba model on par with the larger transformer models? Even if mixed. Any insights? Is there a summary anywhere of the ways it doesn't shine? Has anyone tried Mamba with BitNet?
@chickenp7038
@chickenp7038 17 күн бұрын
@@augmentos Because the big labs don't publish what they are doing. They'd better be doing it, because it works.
@EnricoGolfettoMasella
@EnricoGolfettoMasella 22 күн бұрын
It’s faster but not better, bro!
@poloceccati
@poloceccati 19 күн бұрын
Science paper peer reviewing could become a fight for fame and career amongst greedy scientists, more than a filter for truth, using small insignificant mistakes as excuses for paper rejection.
@dunar1005
@dunar1005 2 күн бұрын
I love those sleep inducing videos with someone just talking gibberish that no one understands 😇
@Matlockization
@Matlockization 14 күн бұрын
What are the parameters of the human brain, 86 billion???
@redswap
@redswap 2 сағат бұрын
No it's way more, you have to take into account the number of synapses (100 trillion). Then you also have to take into account the fact that the way human neurons work is much more complicated than an artificial neural network (requires at least 6 layers of artificial neurons to simulate a human pyramidal neuron cell with 98% accuracy, for example). So taking all of this into account, the human brain should have between 10 and 100 quadrillion parameters. And this doesn't even take into account the fact that there are at least 200 different types of neurons in the entire brain (most of them are in the brain stem because it had more time to evolve). If we take this into account, and divide this by the added complexity of the transformer architecture (about 4, don't ask how I found this number 😅), then the human brain has the equivalent intelligence potency of a transformer model with in between 500 quadrillion and 5 quintillion parameters.
@ze5os427
@ze5os427 20 күн бұрын
5:02 For people who don't get it: vanishing/exploding gradients are basically dementia for AI, in a way, except it's where the AI starts to become incapable of learning any further.
@clamhammer2463
@clamhammer2463 7 күн бұрын
I'm dizzy
@googleyoutubechannel8554
@googleyoutubechannel8554 7 күн бұрын
Best review of Mamba on YT, but it still fails to address: "better" at _what_ than transformers? Language modeling? What is that? Next-token prediction? (_what_ prediction...). There's still this big hole in AI theory; it doesn't feel like anyone has any idea what framework they're even working under... nobody is sure what they're doing. There are all these papers that show xyz score on xyz benchmark... a benchmark somebody's dog came up with over breakfast... and nobody has good evidence of what property xyz benchmark is even measuring, why we should care, never mind what it's supposed to 'mean'.
@leonmozambique533
@leonmozambique533 21 күн бұрын
lmao Mamba got rejected from ICLR
@BasitMustafa
@BasitMustafa 12 күн бұрын
Love everything about your videos, thank you so much, but please reconsider the background music (as in removing it). Those who desire it can easily add in their own. I find it distracting.
@jawadmansoor6064
@jawadmansoor6064 22 күн бұрын
Faster? Yes. Better? How?
@algorithmicsimplicity
@algorithmicsimplicity 22 күн бұрын
Better scores on language modelling perplexity and downstream reasoning tasks.
@stephaneduhamel7706
@stephaneduhamel7706 22 күн бұрын
@@algorithmicsimplicity At least at "small" scales
@BooleanDisorder
@BooleanDisorder 22 күн бұрын
@@stephaneduhamel7706 Even if it doesn't scale well by itself, it's extremely impressive and important work. It will definitely be relevant going forward.
@jawadmansoor6064
@jawadmansoor6064 22 күн бұрын
@@algorithmicsimplicity The largest Mamba (or any other state space model) I saw was less than 7B parameters. Also, to use Mamba they had to do some tricks, some difficult math, and make calculations within CPU memory (or that is how I understood it); since that memory is not very large, they can't build large models. And it is commonly believed that the larger the model, the better it is for generalization, i.e. understanding and doing "tasks".
@cutmasta-kun
@cutmasta-kun 22 күн бұрын
Dude, the music in the background is killing me -.- Silence would be much better!
@igorg4129
@igorg4129 20 күн бұрын
Nice, but please stop uptalking, you are several levels above this habit.
@shpensive
@shpensive 18 күн бұрын
Nonsense, speak up or down or sideways-inside-out
@kalisticmodiani2613
@kalisticmodiani2613 Күн бұрын
Uptalking is just an accent. Everybody has an accent.