Can We Build an Artificial Hippocampus?

192,038 views

Artem Kirsanov


1 day ago

To try everything Brilliant has to offer, free, for a full 30 days, visit brilliant.org/ArtemKirsanov/
The first 200 of you will get 20% off Brilliant’s annual premium subscription.
My name is Artem; I'm a computational neuroscience student and researcher. In this video we discuss the Tolman-Eichenbaum Machine, a computational model of the hippocampal formation that unifies memory and spatial navigation under a common framework.
Patreon: / artemkirsanov
Twitter: / artemkrsv
OUTLINE:
00:00 - Introduction
01:13 - Motivation: Agents, Rewards and Actions
03:17 - Prediction Problem
05:58 - Model architecture
06:46 - Position module
07:40 - Memory module
08:57 - Running TEM step-by-step
11:37 - Model performance
13:33 - Cellular representations
17:48 - TEM predicts remapping laws
19:37 - Recap and Acknowledgments
20:53 - TEM as a Transformer network
21:55 - Brilliant
23:19 - Outro
REFERENCES:
1. Whittington, J. C. R. et al. The Tolman-Eichenbaum Machine: Unifying Space and Relational Memory through Generalization in the Hippocampal Formation. Cell 183, 1249-1263.e23 (2020).
2. Whittington, J. C. R., Warren, J. & Behrens, T. E. J. Relating transformers to models and neural representations of the hippocampal formation. Preprint at arxiv.org/abs/2112.04035 (2022).
3. Whittington, J. C. R., McCaffary, D., Bakermans, J. J. W. & Behrens, T. E. J. How to build a cognitive map. Nat Neurosci 25, 1257-1272 (2022).
CREDITS:
Icons by biorender.com and freepik.com
Brain 3D models were created with Blender software using publicly available BrainGlobe atlases (brainglobe.info/atlas-api)
Animations were made using open-source Python packages Matplotlib and RatInABox ( github.com/TomGeorge1234/RatI... )
Rat free 3D model: skfb.ly/oEq7y
This video was sponsored by Brilliant

Comments: 313
@ArtemKirsanov • 1 year ago
To try everything Brilliant has to offer, free, for a full 30 days, visit brilliant.org/ArtemKirsanov/. The first 200 of you will get 20% off Brilliant’s annual premium subscription.
@KnowL-oo5po • 1 year ago
your videos are amazing you are the Einstein of today
@RegiJatekokMagazin • 1 year ago
@@KnowL-oo5po Business of today.
@josephvanname3377 • 1 year ago
Brilliant needs to have a course on reversible computation.
@ironman5034 • 1 year ago
I would be interested to see code for this, if it is available of course
@muneebdev • 1 year ago
I would love to see a more technical video explaining how a TEM transformer would work.
@waylonbarrett3456 • 1 year ago
I have many mostly "working" "TEM transformer" models, although I've never called them that. This idea is not new; just its current synthesis is. Basically, all of the pieces have been around for a while and I've been building models out of them. I don't ever have enough time or help to get them off the ground.
@jonahdunkelwilker2184 • 1 year ago
Yes same, I would love a more technical video on how this works too! Ur content is so awesome, currently studying CogSci and I wanna get into neuroscience and ai/agi development, thank u for all the amazing content:))
@mryan744 • 1 year ago
Yes please
@Arthurein • 1 year ago
+1, yes please!
@GuinessOriginal • 1 year ago
Predictive coding sounds a bit like what LLMs do.
@666shemhamforash93 • 1 year ago
A more technical video exploring the architecture of the TEM and how it relates to transformers would be amazing - please give us a part 3 to this incredible series!
@kyle5519 • 5 months ago
It's a path-integrating recurrent neural network feeding into a Hopfield network.
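That one-line summary can be sketched in a few lines of NumPy. This is a toy illustration only, not the actual TEM: the velocity-integrating "position module" and the binary Hopfield "memory module" below are minimal stand-ins with invented names.

```python
import numpy as np

# Toy sketch of the commenter's summary -- NOT the actual TEM:
# a "position module" that path-integrates velocity inputs, feeding a
# classic Hopfield network that stores and pattern-completes memories.

def path_integrate(velocities, start=(0.0, 0.0)):
    """Accumulate 2D velocity steps into a position estimate."""
    return np.cumsum(np.vstack([start, velocities]), axis=0)

class Hopfield:
    """Binary Hopfield network with one-shot Hebbian storage."""
    def __init__(self, n):
        self.W = np.zeros((n, n))

    def store(self, patterns):                  # patterns: rows of +/-1
        for p in patterns:
            self.W += np.outer(p, p)
        np.fill_diagonal(self.W, 0)             # no self-connections

    def recall(self, cue, steps=10):
        s = cue.copy()
        for _ in range(steps):                  # iterate toward a fixed point
            s = np.sign(self.W @ s)
            s[s == 0] = 1.0
        return s

rng = np.random.default_rng(0)
positions = path_integrate(rng.normal(size=(50, 2)))   # a random walk
patterns = rng.choice([-1.0, 1.0], size=(3, 64))       # 3 memories, 64 cells
net = Hopfield(64)
net.store(patterns)

noisy = patterns[0].copy()
noisy[:8] *= -1                    # corrupt 8 of 64 bits
recovered = net.recall(noisy)      # attractor dynamics clean up the cue
```

With only 3 stored patterns in 64 units, the corrupted cue typically settles back onto the stored memory; capacity limits of Hopfield networks kick in with far more patterns.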
@---capybara--- • 1 year ago
I just finished my final for behavioral neuroscience, lost like 30% of my grade to late work due to various factors this semester, but this is honestly inspiring and makes me wonder how the fields of biology and computer science will intersect in the coming years. Cheers, to the end of a semester!
@joesmith4546 • 1 year ago
Computer scientist here: they do! I'm absolutely no expert on neuroscience, but computer science (a subfield of mathematics) has many relevant topics. One very interesting result is that if you start from the perspective of automata (directed graphs with labeled transitions and defined start and "accept" states) and you try to characterize the languages that they recognize, you very quickly find, as you layer on more powerful models of memory, that language recognition and computation are essentially the exact same process, even though they seem distinct. If you want to learn more about this topic, I have a textbook recommendation: Michael Sipser's Introduction to the Theory of Computation, 3rd edition. Additionally, you may be interested in automated theorem proving as another perspective on machine learning that you may not be familiar with. Neither automata nor automated theorem proving directly describes the behavior of neural circuits, of course, but they may provide good theoretical foundations for understanding what is required for knowledge, memory, and signal processing in the brain, however obfuscated by evolution these processes may be.
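To make the automata starting point concrete, here is a minimal DFA in Python; `make_dfa` is an invented helper, and the language (binary strings with an even number of 1s) is a standard textbook example of what such a machine can recognize.

```python
# A minimal deterministic finite automaton (DFA), the simplest machine in
# the hierarchy described above: it recognizes the language of binary
# strings containing an even number of 1s.

def make_dfa(states, alphabet, delta, start, accept):
    def accepts(word):
        state = start
        for symbol in word:               # one transition per input symbol
            state = delta[(state, symbol)]
        return state in accept            # accept iff we end in an accept state
    return accepts

even_ones = make_dfa(
    states={"even", "odd"},
    alphabet={"0", "1"},
    delta={("even", "0"): "even", ("even", "1"): "odd",
           ("odd", "0"): "odd", ("odd", "1"): "even"},
    start="even",
    accept={"even"},
)

print(even_ones("1011"))  # → False (three 1s)
print(even_ones("1001"))  # → True  (two 1s)
```

Layering memory onto this machine (a stack, then an unbounded tape) climbs the hierarchy the comment alludes to, from regular languages up to full computation.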
@NeuraLevels • 1 year ago
"Perfection is the enemy of efficiency," they say, but in the long run quality wins when we aim for transcendent work instead of immediate rewards. BTW, the same happened to me. Mine was the best work in the class, the only one that also incorporated beauty, and the most efficient design, but the professor took 9/20 points because of a 3-day delay. His lessons I never learned. I am not an average genius. Nor are you! No one has achieved what I predicted on human brain internal synergy. Here's the result (1 min. video): kzfaq.info/get/bejne/aJuHpqal27WpaWw.html
@jeffbrownstain • 10 months ago
Look up Michael Levin and his TAME framework (Technological Approach to Mind Everywhere), cognitive light cones, and the computational boundary of the self. He's due for an award of some type for his work very soon.
@SuperNovaJinckUFO • 1 year ago
Watching this I had a feeling there were some similarities to transformer networks. Basically, what a transformer does is create a spatial representation of a word (with words of similar meaning being mapped closer together), and then the word is encoded in the context of its surroundings. So you basically have a position mapping and a memory mapping. It will be very interesting to see what a greater neuroscientific understanding will allow us to do with neural network architectures.
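As a rough sketch of those two ingredients, here is a single head of dot-product self-attention over tokens with sinusoidal position codes added; variable names and sizes are illustrative, not those of any particular model.

```python
import numpy as np

# Minimal single-head self-attention sketch of the two ingredients named
# above: a position mapping (sinusoidal codes added to each token) and a
# content mapping (each token re-encoded as a weighted mix of its context).

def positional_encoding(seq_len, d):
    pos = np.arange(seq_len)[:, None]
    i = np.arange(d)[None, :]
    angles = pos / np.power(10000.0, (2 * (i // 2)) / d)
    return np.where(i % 2 == 0, np.sin(angles), np.cos(angles))

def self_attention(X):
    d = X.shape[1]
    scores = X @ X.T / np.sqrt(d)                    # pairwise similarity
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)                # row-wise softmax
    return w @ X                                     # context mixtures

rng = np.random.default_rng(0)
tokens = rng.normal(size=(5, 16))                    # 5 "words", 16-dim each
contextual = self_attention(tokens + positional_encoding(5, 16))
print(contextual.shape)  # (5, 16)
```

Real transformers add learned query/key/value projections, multiple heads, and feed-forward layers on top, but the position-plus-context structure is already visible here.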
@cacogenicist • 11 months ago
That is rather reminiscent of the mental lexicon networks mapped out by psycholinguists -- using priming in lexical decision tasks, and such. But in human minds, there are phonological as well as semantic relationships.
@alexkonopatski429 • 1 year ago
A technical video about TEM transformers would be amazing!!
@silvomuller595 • 1 year ago
Please don't stop making these videos. Your channel is the best! Neuroscience is underrepresented. Golden times are ahead.
@memesofproduction27 • 1 year ago
A renaissance even... maybe
@MrHichammohsen1 • 1 year ago
This series should win an award or something!
@timothytyree5211 • 1 year ago
I would also love to see a more technical video explaining how a TEM transformer would work.
@anywallsocket • 1 year ago
Your visual aesthetic is SO smooth on my brain, I just LOVE it
@al3k • 1 year ago
Finally, someone talking about "real" artificial intelligence... I've been so bored of the ML models... just simple algos... What we are looking for is something far more intricate: goals... "feelings" about memories and current situations... curiosity... real learning and new assumptions... a need to grow and survive... and a solid basis for benevolence, and a fundamental understanding of sacrifice and erring.
@xenn4985 • 3 months ago
What the video is talking about is using simple algos to build an AI, you reductive git.
@marcellopepe2435 • 1 year ago
A more technical video sounds good!
@inar.timiryasov • 1 year ago
Amazing video! Both the content and the production. Definitely looking forward to a TEM-transformer video!
@astralLichen • 1 year ago
This is incredible! Thank you for explaining these concepts so well! A more detailed video would be great, especially if it went into the mathematics.
@jasonabc • 1 year ago
For sure would love to see a video on the transformer/hopfield networks and the relationship to the hippocampus. Great stuff keep up the good work.
@BHBalast • 1 year ago
I'm amazed by the animations, and the recap at the end was a great idea.
@dandogamer • 1 year ago
Absolutely loved this, as someone who's coming from the ML side of things it's very interesting to know how these models are trying to mimic the inner workings of the hippocampus
@Wlodzislaw • 1 year ago
Great job explaining TEM, congratulations!
@yassen6331 • 1 year ago
Yes please we would love to see more detailed videos. Thank you for this amazing content🙏
@ianmatejka3533 • 1 year ago
Yet another outstanding video. Like many of the other comments here, I would also love to see an in-depth technical video on the TEM transformer. Please make a part 3!
@tenseinobaka8287 • 1 year ago
I am just learning about this and it sounds so exciting! A more technical video would be really cool!
@benwilcox1192 • 1 year ago
Your videos have some of the most beautiful explanations, as well as graphics, that I have seen on YouTube.
@mags3872 • 1 year ago
Thank you so much for this! I think I'm doing my master's thesis on TEM, so this is such a wonderful resource. Subscribed!
@Alex.In_Wonderland • 1 year ago
Your videos floor me absolutely every time! You clearly put a LOT of work into these and I can't thank you enough. These are genuinely a lot of fun to watch! :)
@ArtemKirsanov • 1 year ago
Thank you!!
@tomaubier6670 • 1 year ago
Such a nice video! A deep dive in TEM / transformers would be awesome!!
@nicolaemihai8871 • 7 months ago
Yes, please keep working on this series; your content is really creative, concise, high-quality, and it addresses exotic, specific themes.
@sebastiangonzalezaseretto7885 • 5 months ago
Really nice video, very well explained!! Would love to see a more detailed version of TEM
@robertpfeiffer4686 • 1 year ago
I would *love* to see a deeper dive into the technology of transformer networks as compared with hippocampal research! These videos are outstanding!!
@lucyhalut4028 • 1 year ago
I would love to see a more technical video! Amazing work, Keep it up!😃
@justwest • 5 months ago
absolutely astonishing that I, like all of you, have access to such valuable, highly interesting and professional educational material. thanks a lot!
@alexharvey9721 • 11 months ago
Definitely keen to see a more technical video, though I know it would be a lot of work!
@kevon217 • 1 year ago
top notch visualizations! great video!
@johanjuarez6238 • 1 year ago
Mhhhhh that's so interesting! Quality is mad here, gg and thanks for providing us with these videos.
@cobyiv • 1 year ago
This feels like what we should all be obsessed with as opposed to just pure AI. Top notch content!
@aw2031zap • 1 year ago
LLMs are not "AI"; they're just freaking good parrots that give too many people the "mirage" of intelligence. A truly "intelligent" model doesn't make up BS to make you go away. A truly "intelligent" model can draw hands, FFS. This is what's BS.
@gorgolyt • 5 months ago
idk what you think "pure AI" means
@arasharfa • 1 year ago
How fascinating that you talk about sensory, structural, and constructed model/interpretation; those are the three base modalities of thinking I've been able to narrow all of our human experience down to in my artistic practice. I call them the "phenomenological, collective, and ideal" modalities of thinking.
@michaelgussert6158 • 1 year ago
Good stuff man! Your work is always excellent :D
@asemic • 1 year ago
this is a big reason i've been interested in neuroscience for a while. just the fact you are covering this gets my sub. this area needs more interest.
@Mad3011 • 1 year ago
This is all so fascinating. Feels like we are close to some truly groundbreaking discoveries.
@CharlesVanNoland • 1 year ago
Don't forget groundbreaking inventions too! ;)
@egor.okhterov • 1 year ago
The missing ingredient is how to make NN changes on the fly when we receive sensory input, without backpropagation. There's no backpropagation in our brain
@CharlesVanNoland • 1 year ago
@@egor.okhterov The best work I've seen so far in that regard is the OgmaNeo project, which explores using predictive hierarchies in lieu of backpropagation.
@egor.okhterov • 1 year ago
@Charles Van Noland The last commit on GitHub is from 5 years ago, and the website hasn't been updated in quite a while. What happened to them?
@yangsong4318 • 1 year ago
@@egor.okhterov There is an ICLR 2023 paper from Hinton: SCALING FORWARD GRADIENT WITH LOCAL LOSSES
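The core trick behind forward-gradient methods can be sketched as follows; this is a generic weight-perturbation version for illustration only, not that paper's exact recipe (which uses exact directional derivatives and per-layer local losses to make the idea scale).

```python
import numpy as np

# Forward-gradient sketch: sample a random direction v, measure the loss's
# directional derivative along v, and use (df/dv) * v as an unbiased
# estimate of the gradient -- no backward pass required.

def forward_gradient(f, w, rng, eps=1e-5):
    v = rng.normal(size=w.shape)
    dfdv = (f(w + eps * v) - f(w - eps * v)) / (2 * eps)  # directional derivative
    return dfdv * v

# Toy quadratic loss: the true gradient is 2*w.
f = lambda w: np.sum(w ** 2)
w = np.array([1.0, -2.0, 3.0])

estimates = np.mean([forward_gradient(f, w, np.random.default_rng(s))
                     for s in range(2000)], axis=0)
print(np.round(estimates, 1))  # averages toward the true gradient [2, -4, 6]
```

A single sample of this estimator is very noisy (variance grows with dimension), which is exactly the problem local losses are meant to tame.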
@lake5044 • 1 year ago
But, at least in humans, there are at least two crucial things that this model of intelligence is missing. First, the abstraction is not only applied to the sensory input; it's also applied to internal thoughts (and no, it's not just the same as running the abstraction on the prediction). For example, you could think of a letter (a symbol from the alphabet) and imagine what it would look like rotated or mirrored. And no recent sensory input has a direct relation to the letter you chose, the transformation you chose to imagine, or even to imagining all of this in the first place. (You can also think of this as the ability to execute algorithms in your mind: a sequence of transformations based on learned abstractions.) Second, there is definitely a list of remembered structures/abstractions that we can run through when we're looking for a good match for a specific problem or piece of data. Sure, maybe this happens for the "fast thinking" (the perception part of thinking: you see a "3", you perceive it without thinking it has two incomplete circles), but also for the slow, deliberate thinking. Take the following example: you're trying to solve some math problem by fitting it onto abstractions you already learned, but then suddenly (whether someone gave you a hint or the hint popped into your mind) you find a new abstraction that fits the problem better; the input data didn't change, but now you decided to see it as a different structure. So there has to be a mechanism for trying any piece of data against any piece of structure/abstraction.
@brendawilliams8062 • 1 year ago
It is a separate intelligence. It communicates with the other cookie cutters by a back propagation similar to telepathy. It is as a plate of sand making patterns on its plate by harmonics. It is not human. It is a machine.
@arnau2246 • 1 year ago
Please do a deeper dive into the relation between TEM and transformers
@aleph0540 • 1 year ago
FANTASTIC WORK!
@archeacnos • 2 months ago
I've somehow found your channel, AND WOW IT'S AMAZINGLY INTERESTING
@astha_yadav • 1 year ago
Please also share what software and utilities you use to make your videos ! I absolutely love their style and content 🌸
@ceritrus • 11 months ago
That might genuinely be the most fascinating video I've ever seen on this website
@ArtemKirsanov • 11 months ago
Wow, thank you!
@julianhecker944 • 1 year ago
I was just thinking about building an artificial hippocampus using something like a vector database this past weekend! What timing with this upload!
@TheSpyFishMan • 1 year ago
Would love to see the technical video describing the details of transformers and TEMs!
@adhemardesenneville1115 • 1 year ago
Amazing video! Amazing quality!
@Lolleka • 6 months ago
This is fantastic content. Subscribed in a nanosecond.
@GiRR007 • 1 year ago
This is what I feel like current machine learning models are: different primitive sections of a full brain. Once all the pieces are brought together, you get actual artificial general intelligence.
@josephlabs • 1 year ago
I totally agree like a 3D net
@aaronyu2660 • 11 months ago
Well, we're still miles off.
@jeffbrownstain • 10 months ago
@@aaronyu2660 Closer than you might think
@cosmictreason2242 • 9 months ago
@@jeffbrownstain No, you need to see the neuron videos. Computers are binary and neurons are not. Besides, each bit of storage can be used to store multiple different files.
@didack1419 • 8 months ago
@@cosmictreason2242 You can simulate the behavior of neurons in computers. There are still advantages to physical-biological neural networks, but those could be simulated with a sufficient number of transistors. If it's too difficult, they will end up using physical artificial neurons. What I understand you to mean by "each bit of storage is able to be used to store multiple different files" is that biological NNs are very effective at compressing data (ANNs also compress data in that basic sense), but there's no reason to think that carbon-based physical-biological NNs are unmatchable. I'm not going to say that I have a conviction it will happen sooner rather than later, and people here are also really vague regardless. What I can say is that I know of important technologists who think it will happen sooner (others say it will happen later).
@dinodinoulis923 • 1 year ago
I am very interested in the relationships between neuroscience and deep learning and would like to see more details on the TEM-transformer.
@austindibble15 • 1 year ago
Fascinating, I have enjoyed both of your videos in this series very much! And your visualizations are really great and high quality. I thought the comparison between the Tolman-Eichenbaum machine and a lookup table was very interesting. In reinforcement learning, I think there's a parallel here between Q-learning (learned lookup table) and policy-based methods which use deep neural network structures.
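The "learned lookup table" half of that parallel is easy to show; below is a toy tabular Q-learning loop on an invented 1-D corridor task, just to make the lookup-table update concrete.

```python
import numpy as np

# Tabular Q-learning: the value table IS the model, one entry per
# (state, action) pair. Toy corridor: reward only at the right end.

n_states, n_actions = 5, 2            # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.5, 0.9

def step(s, a):
    """Deterministic corridor dynamics with a reward at the right end."""
    s2 = min(max(s + (1 if a == 1 else -1), 0), n_states - 1)
    return s2, (1.0 if s2 == n_states - 1 else 0.0)

rng = np.random.default_rng(0)
for _ in range(500):                  # sample random (state, action) pairs
    s = int(rng.integers(n_states))
    a = int(rng.integers(n_actions))
    s2, r = step(s, a)
    # The lookup-table update: nudge Q[s, a] toward the bootstrapped target.
    Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])

print(Q.argmax(axis=1))               # greedy policy; should prefer moving right
```

Policy-gradient methods replace this explicit table with a parameterized function, which is roughly the table-vs-network contrast the comment draws.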
@jamessnook8449 • 1 year ago
This has already been done at The Neurosciences Institute back in 2005. We developed a model that not only led to place cell formation, but also to prospective and retrospective memory, the beginning of episodic memory. We used the model to control a mobile device that ran the gold standard of spatial navigation, the Morris water maze. In fact, Professor Morris was visiting the Institute for other reasons; he viewed our experiment and gave it his blessing.
@memesofproduction27 • 1 year ago
Incredible. Were you on the Build-A-Brain team? Could you please direct me to anything you would recommend I read on your work there, to familiarize myself and follow citations toward its influence on present-day research? Much respect, me
@AlecBrady • 1 year ago
Yes, please, I'd love to know how GPT and TEM can be related to each other.
@GabrielLima-gh2we • 1 year ago
What an amazing video! Knowing that we can now understand how the brain works through these artificial models is incredible; neuroscience research might explode with discoveries right now. We might be able to fully understand how this memory process works in the brain by the end of this decade.
@AkarshanBiswas • 1 year ago
I really liked your video, and I would like to see a technical video on the TEM transformer, especially the differences. Subscribed!
@user-zl4fp3ml4e • 1 year ago
Please also consider a video about the PFC and its interaction with the hippocampus.
@KonstantinosSamarasTsakiris • 1 year ago
The video that convinced me to become a patron! Super interested in a part 3 about TEM-transformers.
@ArtemKirsanov • 1 year ago
Thanks :3
@waylonbarrett3456 • 1 year ago
I've been building and revising this machine and machines very similar for about 10 years. I didn't know for a long time that they weren't already known.
@bluecup25 • 8 months ago
The Hippocampus knows where it is at all times. It knows this because it knows where it isn't. By subtracting where it is from where it isn't, or where it isn't from where it is (whichever is greater), it obtains a difference, or deviation. The guidance subsystem uses deviations to generate corrective commands to drive the organism from a position where it is to a position where it isn't, and arriving at a position where it wasn't, it now is. Consequently, the position where it is, is now the position that it wasn't, and it follows that the position that it was, is now the position that it isn't. In the event that the position that it is in is not the position that it wasn't, the system has acquired a variation, the variation being the difference between where the missile is, and where it wasn't. If variation is considered to be a significant factor, it too may be corrected by the GEA. However, the Hippocampus must also know where it was. The Hippocampus works as follows. Because a variation has modified some of the information the Hippocampus has obtained, it is not sure just where it is. However, it is sure where it isn't, within reason, and it knows where it was. It now subtracts where it should be from where it wasn't, or vice-versa, and by differentiating this from the algebraic sum of where it shouldn't be, and where it was, it is able to obtain the deviation and its variation, which is called error.
@Kynatosh • 1 year ago
How is this so high quality wow
@juanandrade2998 • 1 year ago
It is amazing how each field has its own term for these concepts. I come from an architectural background, and my brief interaction with the art majors taught me about the concept of "Deconstruction". In my spare time I like to code, so I always thought of this "Tolman-Eichenbaum machine" process of our cognition as the act of deconstructing a system into its most basic building blocks. I've also seen the term "generalization" used as conceptually equivalent in the process by which we arrive at a maximum/minimum "entropic" state of a system (depending on scope...).
@memesofproduction27 • 1 year ago
Ah, the eternal lexicon as gatekeeper, if only we had perfect information liquidity free of the infophysical friction of specific lexica, media, encoding, etc. Working on it:)
@juanandrade2998 • 1 year ago
@@memesofproduction27 This specifically is a topic in LLMs that I see seldom discussed. On the one hand, language is sometimes redundant or interchangeable (like "TEM" and "Deconstruction"), but in other cases the same word has different meanings, in which case "nuance" is required in order to infer meaning. "Nuance", IMO, is just a residual consequence of a lack of generalization: the data/syntax is not well categorized into mutually exclusive building blocks, and there is a lot of overlap, allowing for ambiguities in the message. But this is not something that can be solved with architecture; the issue is that the language itself is faulty and incomplete. For example, a lot of the time people talk about "love" as a single concept, when in reality it is the conjunction of several feelings, hence the misunderstanding. E.g.: "I don't know how she is so in love with that guy..." Whoever says that line has the term "love" misaligned with the actual activity taking place, simply because too many underlying concepts overlap in the term "love". Another example: the word "extrapolation" can be interpreted as the act of completing a pattern following previous data points. The issue is that people don't usually use the term to mean "to complete"; MMOs don't ask gamers to "Please extrapolate the next quest", or announce "LEVEL EXTRAPOLATED!"... I mean, THEY COULD... but nobody does this. Because of this, if you ask an LLM to make an extrapolation of something, depending on the context, it may or may not understand the prompt. This is because the AI is not actually intelligent; instead, it is subject to its corpus of pretrained data, and the link between "extrapolation" and "completion" is simply not strong enough, because the building blocks are not disjoint enough and there's still overlap.
@sledgehogsoftware • 11 months ago
Even at 2:25, I can see that the model you used for the office is in fact from another thing I saw: The Office TV show! Loved seeing that connection, and it helped get the point across so well for me!!
@porroapp • 1 year ago
I like how neurotransmitters and white matter formation in the brain are analogous to weights/biases and backprop in machine learning. Both are used to amplify the signal and reinforce activation based on rewards, be it neurons and synapses or convolution layers and the connections between nodes in each layer.
@y5mgisi • 1 year ago
This channel is so good.
@itay0na • 1 year ago
Wow, this is just great! I believe it somehow contradicts the message of the AI & Neuroscience video. In any case, I really enjoyed that one; keep up the good work.
@klaudialustig3259 • 1 year ago
I was surprised to hear at the end that this is almost identical to the transformer architecture
@0pacemaker0 • 1 year ago
Amazing video as always 🎉! Please do go over how Hopfield networks fit in the picture if possible. Thanks
@En1Gm4A • 1 year ago
Awesome video. This is game-changing.
@_sonu_ • 1 year ago
I lo❤ your videos more than any videos nowadays.
@BleachWizz • 1 year ago
Thanks, man, I might actually reference those papers! I just need to actually become a researcher now. I hope I can do it.
@binxuwang4960 • 1 year ago
Well explained!! The video is just sooooo beautiful... even more beautiful, visually, than the talk given by Whittington himself. How did you make these videos? Using Python or Unity? Just curious!
@FA18_Driver • 1 month ago
Hi, your narration and videos are nice. I put them on while falling asleep. Thanks.
@SeanDriver • 1 year ago
Great video... the moment you showed the function of the medial EC and lateral EC, I thought: hey, transformers! So it was really nice to see that come out at the end, albeit for a different reason. My intuition for transformers came from the finding of the ROME paper, which suggested structure is stored in the higher attention layers and sensory information in the mid-level dense layers.
@egor.okhterov • 1 year ago
Excellent video as always :) Do you have ideas on how to get rid of backpropagation to train a transformer and implement one-shot (online) lifelong learning?
@plutophy1242 • 1 year ago
Love your videos! I'd like a more detailed math description.
@brubrusuryoutube • 1 year ago
Got an exam on the neurobiology of learning and memory tomorrow; upload schedule on point.
@mkteku • 1 year ago
Awesome knowledge! What app are you using for graphics, graphs and editing? Cheers
@jamessnook8449 • 1 year ago
Yes, read Jeff Krichmar's work at UC Irvine; it is dramatically different from what people view as the traditional neural network approach.
@siggiprendergast7599 • 1 year ago
the goat is back!!
@donaldgriffin6383 • 1 year ago
A more technical video would be awesome! More BCI content in general would be great too.
1 year ago
This is cool! Thank you for sharing. The visualization is stunning; I'm curious to know if you do it yourself and which tools you use.
@ArtemKirsanov • 1 year ago
Thank you! Yeah, I do everything myself ;) Most of it is done in Adobe After Effects with the help of Blender (for rendering 3D scenes) and matplotlib (for animations of neural activity of TEM, random-walk etc)
@josephlabs • 1 year ago
I was trying to build something similar, but I thought of the memory module as an event store, where it would store events and the locations at which those events happened. Then we would be able to query things that happened by events, or locations, or things involved in events at certain locations. However, my idea was to take the memory storage away from the model and create a data structure (graph-like) uniquely for it. TEM transformers are really cool.
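A minimal sketch of that event-store idea (class and method names invented here): events indexed both ways, so they can be queried by event label or by location.

```python
from collections import defaultdict

# Toy event store along the lines described above (all names illustrative):
# each event is stored under both its label and its location, so either
# can be used as the query key.

class EventStore:
    def __init__(self):
        self.by_event = defaultdict(set)
        self.by_location = defaultdict(set)

    def store(self, event, location):
        self.by_event[event].add(location)
        self.by_location[location].add(event)

    def where(self, event):            # locations where this event happened
        return self.by_event[event]

    def what(self, location):          # events that happened at this location
        return self.by_location[location]

log = EventStore()
log.store("found_food", "kitchen")
log.store("heard_noise", "kitchen")
log.store("found_food", "garden")

print(sorted(log.where("found_food")))   # → ['garden', 'kitchen']
print(sorted(log.what("kitchen")))       # → ['found_food', 'heard_noise']
```

Replacing the sets with edges in a graph database would give the richer "things involved in events at certain locations" queries the comment mentions.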
@egor.okhterov • 10 months ago
How to store location? Some kind of hash function of sensory input?
@josephlabs • 10 months ago
@@egor.okhterov That was the plan, or some graph-like data structure to denote relationships.
@cloudcyclone • 1 year ago
Very good video, I'm going to share it.
@TheMrDRAKEX • 1 year ago
What an excellent video.
@KalebPeters99 • 1 year ago
This was breathtaking as always, Artem. ✨ Have you heard of Vervaeke's theory of "Recursive Relevance Realisation"? It fits really nicely with Friston's framework. I think it's super underrated.
@EmmanuelMessulam • 1 year ago
As an AI engineer, I would like to see more of the models that are used in neuroscience and just a light touch of the artificial models, as there are many others who explain how AI models work.
@dysphorra • 1 year ago
Actually, 10 years ago Berger built a prosthetic hippocampus with a much simpler architecture. It was tested in three different conditions. 1) Berger took input from a healthy rat's hippocampus and successfully predicted its output with his device. 2) He removed the hippocampus and replaced it with his prosthesis: electrodes collected inputs to the hippocampus, sent them to a computer, then back to the output neurons. And it worked. 3) He connected the input of the device to the brain of a trained rat and the output of the device to the brain of an untrained one, and he showed some sort of memory transfer (!!!). Notably, he used a very simple mathematical algorithm to convert input into output.
@SmirkInvestigator • 1 year ago
Yes, more technical video please.
@FoxTails69 • 7 months ago
You know where my man Artem comes from when he hits the "spherical model in the vacuum" line, hahaha. Great job!
@ptrckqnln • 1 year ago
Your explanations are simple, compact, and well-join'd. You are a deft educator.
@floridanews8786 • 1 year ago
It's cool that someone is attempting this.
@TheRimmot • 1 year ago
I would love to see a more technical video about how the TEM transformer works!
@markovarga2424 • 1 year ago
You did it!
@neurosync_research • 10 months ago
Yes! Make a video that expounds on the relation between transformers and TEMs!
@CopperKettle • 1 year ago
Thank you, quite interesting.
@ironman5034 • 1 year ago
Yes yes, technical video!
@AiraSunae • 11 months ago
Videos that teach me stuff like this are why I love YouTube.
@markwrede8878 • 1 year ago
It would need to host some sophisticated pattern recognition software. These would arise from values similar to phi, which, like phi itself, are described by dividing the square root of the first prime to host a specific sequential difference by that difference. For phi, square root of 5 by 2, then square root of 11 by 4, square root of 29 by 6, square root of 97 by 8, and so on. I have a box with the first 150 terms.
@user-li9rj6jc4s • 5 months ago
Thanks for the video!
@dontfollowthinkforyourself • 1 year ago
Hi Artem Kirsanov, your videos are awesome; I have not seen anything like this before. You make it so much easier to learn. Have they built this already? What about the rest of the brain, like the visual cortex, etc.? My understanding is that you record electrical activity from neurons in the brain and decode it into mathematical equations. Can these equations be mimicked and manipulated in a computer simulation?