The future of AI looks like THIS (& it can learn infinitely)

191,546 views

AI Search

1 day ago

Liquid neural networks, spiking neural networks, neuromorphic chips. The next generation of AI will be very different.
#ainews #ai #agi #singularity #neuralnetworks #machinelearning
Thanks to our sponsor, Bright Data:
Train your AI models with high-volume, high-quality web data through reliable pipelines, ready-to-use datasets, and scraping APIs.
Learn more at brdta.com/aisearch
Viewers who enjoyed this video also tend to like the following:
You Don't Understand AI Until You Watch THIS • You Don't Understand A...
These 5 AI Discoveries will Change the World Forever • These 5 AI Discoveries...
The Race for AI Humanoid Robots • The INSANE Race for AI...
These new AI's can create & edit life • These new AI's can cre...
Newsletter: aisearch.substack.com/
Find AI tools & jobs: ai-search.io/
Donate: ko-fi.com/aisearch
Here's my equipment, in case you're wondering:
GPU: RTX 4080 amzn.to/3OCOJ8e
Mouse/Keyboard: ALOGIC Echelon bit.ly/alogic-echelon
Mic: Shure SM7B amzn.to/3DErjt1
Audio interface: Scarlett Solo amzn.to/3qELMeu
CPU: i9 11900K amzn.to/3KmYs0b
0:00 How current AI works
4:40 Biggest problems with current AI
9:54 Neuroplasticity
11:05 Liquid neural networks
14:19 Benefits and use cases
15:08 Bright Data
16:22 Benefits and use cases continued
21:26 Limitations of LNNs
23:03 Spiking neural networks
26:29 Benefits and use cases
28:57 Limitations of SNNs
30:58 The future

Comments: 676
@theAIsearch · 1 month ago
Thanks to our sponsor, Bright Data: Train your AI models with high-volume, high-quality web data through reliable pipelines, ready-to-use datasets, and scraping APIs. Learn more at brdta.com/aisearch Viewers who enjoyed this video also tend to like the following: You Don't Understand AI Until You Watch THIS kzfaq.info/get/bejne/Z8d9ZK6K29KYdKs.html These 5 AI Discoveries will Change the World Forever kzfaq.info/get/bejne/nN-GncRemp2peac.html The Insane Race for AI Humanoid Robots kzfaq.info/get/bejne/b5aEgL1jy9edd6c.html These new AI's can create & edit life kzfaq.info/get/bejne/abGPf6R41NTXgIk.html
@Miparwo · 1 month ago
Clickbait. Your "sponsor" deleted the key part, if it ever existed.
@alexamand2312 · 1 month ago
OK, there are so many issues in this video. Neural networks are fixed? What does that even mean? We just stop training them; it's a version checkpoint. We could train them continuously, and they could even learn by themselves, without humans, via "reinforcement learning"... A classic neural network can emulate any partition or specialisation. This reflection comes from someone who does not really understand how it works. The liquid neural network, as you explained it, is something like a feature extractor, basically an encoder. Feels like a lot of bullsh*t. Spiking neurons? Wtf, you just discovered an activation function... an RNN with a simple ReLU has the same behaviour. Reaching a superior intelligence by mimicking the brain, holy fk, I was waiting for some quantum sh*t. You don't understand what you are talking about.
@jeremiahlethoba8254 · 27 days ago
@@alexamand2312 By "neural networks are fixed" he means the current weights are based on the last date of training, like the different ChatGPT versions... I haven't watched the whole video, but the only issue I have is why the narrator keeps using the verb "compute" when in context it should be "computation" 😅... is it a bot?
@JohnSmith-ut5th · 23 days ago
Wrong... but nice try. Liquid NNs are not the solution. It's actually much simpler.
@billkillernic · 20 days ago
AI sh*t is the next .com bubble. It has its use cases, but it's not nearly as cool as people think; it will be stupid forever because it is a flawed design that just seems to do some stuff relatively fast (which a monkey could do too, though slower). It's a glorified parrot or mechanical turk.
@enzobarbon4501 · 1 month ago
Seems to me like a lot of people compare the "learning" a human does during its lifetime to the training process of an LLM. I think it would make more sense to compare the training process of a neural network to the evolutionary process of the human being, and the "learning" a human does during its lifetime to in-context learning in an LLM.
@Rafael64_ · 1 month ago
More like evolution being the base model training, and lifetime learning being fine-tuning.
@Tayo39 · 1 month ago
The crazy thing is we can duplicate the best, latest version of a program or cyborg with the push of a button... and keep fine-tuning it while it fine-tunes and instantly updates itself and all connected devices with the new bit of info, which will never be lost, while my brain is about to explode lol. Things are about to get turned upside down, fwiw...
@CrimpingPebbles · 1 month ago
Yep, that's all I kept thinking: we took a long time to get where we are now, millions of generations going all the way back to the origin of life. That's a lot of energy to get to our current brain organization.
@Alpha_GameDev-wq5cc · 1 month ago
No… these are simply statistical models. Nothing compared to the brain. It's a sad thing that many "brains" aren't capable of understanding this. Funny how dampening stupidity can be.
@mikezooper · 1 month ago
@@CrimpingPebbles This! Also, the evolutionary aspect of AI doesn't hunt for efficiency, hence why we'll need lots of energy and data. The training should hunt for energy efficiency but also data efficiency (thinking/deducing more with less data/information).
@wmk4454 · 1 month ago
How does the human brain actually work? It feels like all the research out there is still incomplete.
@jamesleetrigg · 1 month ago
Our brains use spiking neural networks. There is some research into them at the moment, like IBM's TrueNorth processor, and Intel is also looking into them, but they don't map well onto current CPUs and GPUs because of their asynchronous qualities. Some parts of the human brain are hugely complex and we still don't understand them very well; however, big progress is being made in this area.
@chriscotton4207 · 1 month ago
Basically you have a bunch of neurons; a neuron can hold information, and they communicate back and forth. I don't know what level of detail you want me to get into, but we do know quite a bit about how the brain works. There are some mysteries, but there are a lot of knowns, and exactly how it all works is one of the mysteries.
@KillbackJob · 1 month ago
Nobody knows for sure.
@demej00 · 1 month ago
Microtubules.
@dmwalker24 · 1 month ago
The truth is we know a lot, but still likely not even close to everything. And most of the people trying to recreate this wonder of evolution don't even know most of what is known. They've used gigawatt-hours of power to achieve less than the brain of any one of my cats.
@kevinmaillet8017 · 1 month ago
I built a custom spiking neural network for solving a last-mile logistics efficiency problem. I agree with your assessment: very efficient, complex logic.
@theAIsearch · 1 month ago
That's cool! Thanks for sharing
@ALLIRIX · 1 month ago
Oh really? I'd love to hear more. I've been relying on stitching together Google Maps API requests, which are limited to 25 nodes per request, but I do 200+ locations. I've been wondering if there was an AI solution.
@regulardegular5 · 1 month ago
Hey, how does a noob like me go about building AI?
@JordanMetroidManiac · 1 month ago
@@regulardegular5 I learned through the YouTube channels 3blue1brown, sentdex, and deeplizard. I also already had a strong background in math; if you don't, you might skip the deeplizard videos. sentdex's videos are the most hands-on. 3blue1brown offers the most intuitive explanations of deep neural networks. deeplizard nicely explains the technical parts of training various neural network architectures.
@zenbauhaus1345 · 1 month ago
@@regulardegular5 ask ai
@minefacex · 1 month ago
The opening statement is so true. As a student of this field, I think this is not said enough, and anyone not well versed in machine learning just does not get how bad the current situation is.
@Alexander-or7vr · 1 month ago
Dude, I'm a complete rookie. What is so bad about the current situation, please? Would love to know.
@minefacex · 1 month ago
@@Alexander-or7vr Using statistical models without understanding or considering the validity of their output is borderline insanity.
@Alexander-or7vr · 1 month ago
@@minefacex Can you tell me why? What is the outcome you are worried about?
@pentachronic · 1 month ago
@@minefacex The output validity is known. This is what backpropagation does. Basically we have a generalised function finder.
@no-lagteardown3558 · 1 month ago
@@minefacex Bruh, what are you even talking about?
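To make the "generalised function finder" remark above concrete: backpropagation is gradient descent on a network's weights. A minimal sketch with a single weight fitting y = 2x (illustrative data and learning rate, not code from the video):

```python
# Fit y = w * x to data generated by y = 2x, using gradient descent.
# Backpropagation applies this same update rule layer by layer in a full network.
data = [(x, 2.0 * x) for x in range(1, 6)]

w = 0.0    # start from an uninformed weight
lr = 0.01  # learning rate
for _ in range(200):
    for x, y in data:
        pred = w * x
        grad = 2 * (pred - y) * x  # derivative of squared error w.r.t. w
        w -= lr * grad

print(round(w, 3))  # converges to ~2.0
```

The "output validity" in this picture is just the squared error being driven toward zero.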
@keirapendragon5486 · 1 month ago
Absolutely would love that video about the Neuromorphic Chips!!
@vladartiomav2473 · 1 month ago
I was just waiting for somebody to point out the tremendous energy problems of current AI. Thank you.
@Trixtoxia · 24 days ago
We just gotta iron out nuclear fusion, and then it won't matter anymore.
@olhoTron · 1 month ago
8:33 Not only is this "human brain" computer more efficient, but I heard the first stage of creating a new instance is pretty fun. Can't confirm, never done it, but they say it is.
@takkik282 · 1 month ago
It's the later stages that are more consuming. You sign up for life! Perhaps the brain is more efficient, but I think neural networks train faster. Consider the years a human needs to master language, and think about all the support it takes (adults, books, computers, etc.). Think about the millennia of human progress needed to develop our current intelligence. Think about all the time life needed to get where we are! If we ever get to AGI in the next decades, it's like creating new intelligent life in a fraction of the time nature needed.
@volkerengels5298 · 1 month ago
@@takkik282 "Consider the years a human needs to master *language*..." - *culture*, which is a far bigger challenge. With "language" you have to learn concepts and abstractions at every level: syntax, semantics, performance... The list of what a child learns in those years (0-3-6-10) is far, far longer.
@tsanguine · 23 days ago
I have, it's alright, but it's probably even better when you create the instance with someone who is more knowledgeable and a bit freakier.
@someone9927 · 10 days ago
@@takkik282 Current LLMs can't create new, non-fake information that wasn't already written somewhere. Humans also learn to walk, run, jump, eat, breathe, and to sense speed, position, heat and pain. What about smelling, or hearing sound and extracting words (and other sounds) from it? What about seeing, separating colors, and the internal editing of the image from your eyes so you don't see your nose and blood vessels? What about feeling every small detail of an object by touch, or controlling your fingers precisely enough not to miss a button on a small phone screen? Also, you can download a game with strange controls and, after some time, most likely be good at it (can't say the same about AI). Take GPT-4o, for example: it can't hear you (audio is translated to text by another AI), it can't feel anything, it doesn't have a physical body, it doesn't have to precisely control muscles to say something, and it can't teach. It can see images, but that's not the continuous video and audio stream our brain accepts and works with. Even with these limitations, current AI uses much more energy than our brain does in a whole lifetime.
@eSKAone- · 1 month ago
The human brain developed over a time span of millions of years. How much energy did that process use?
@dylanlodge4905 · 1 month ago
That, and the fact that each human has learnt to speak and regurgitate useful information over ~30 years. Assuming for simplicity that a human consumes 175 kWh per year from birth, and that the entirety of ChatGPT-3 was created using 1,287 MWh, ChatGPT is ~245x less efficient than a human: (1287 * 1000) / (175 * 30). Overall, that's only considering one human's energy consumption compared to ChatGPT, which can communicate with more than 200,000 people across the internet simultaneously and a response time of
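The back-of-the-envelope above can be checked directly (treating the commenter's figures, roughly 1,287 MWh for GPT-3 training and 175 kWh/year for a human over 30 years, as assumptions rather than vetted numbers):

```python
# Back-of-the-envelope energy comparison using the figures claimed above.
gpt3_training_kwh = 1_287 * 1_000      # ~1,287 MWh expressed in kWh
brain_kwh_per_year = 175               # claimed brain consumption per year
years = 30                             # years of human "training"

human_total_kwh = brain_kwh_per_year * years  # 5,250 kWh
ratio = gpt3_training_kwh / human_total_kwh
print(round(ratio))  # ~245
```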
@olhoTron · 1 month ago
To be fair, the energy spent on humans evolving also went into developing AI, since we are the ones developing it; without the energy spent on human evolution, there would be no AI. Let's say AIs become sentient... our history will also be part of their history.
@ickorling7328 · 1 month ago
OC tries to do science but has never heard of entropy in information theory, which rules out the DNA of a brain evolving from nothing under no guiding intelligence. That's a wild-ass guess, not even a theory. Where's the evidence?
@eSKAone- · 1 month ago
In the end, if it wasn't efficient we wouldn't use it (even if energy were free). It obviously produces something that you couldn't reproduce even with the same number of megawatt-hours of human brains brainstorming together.
@smicha15 · 1 month ago
Nice. You don't really hear that side of things.
@High-Tech-Geek · 1 month ago
1. It's funny that we are trying to create something (AGI) that replicates something else that we do not understand (the human brain). 2. Any neural network that truly emulates the human brain won't need to be trained in the sense you discuss. It would just be. It would learn and be trained by its design. It would start training immediately and continue to train throughout its existence. I don't see us creating something like this anytime soon (see statement #1).
@theAIsearch · 1 month ago
Great point (#2). If it can keep learning, we just need to create it and it would naturally improve over time, or even learn to reconfigure itself
@helloyes2288 · 1 month ago
Humans receive constant input and data. We front-load that data requirement. A system that improves over time will need constant input data.
@helloyes2288 · 1 month ago
@@theAIsearch He's acting like improvement can happen in a vacuum.
@alexamand2312 · 1 month ago
@@helloyes2288 Yeah, wtf happened in this video and this comment section? Is everyone a religious bitcoiner who doesn't understand anything?
@samuelbucher5189 · 29 days ago
Humans actually come somewhat pre-trained from the womb. We have instincts and reflexes.
@dmwalker24 · 1 month ago
First and foremost, I am a biologist, but I have quite an extensive background in computer science as well. I have some fundamental concerns with the efforts to develop AI and the methodologies being used. For these models to have anything like intelligence, they need to be adaptable, and they need memory: some temporal understanding of the world. These efforts with LNNs strike me as attempting to re-invent the wheel. Our brains are not just a little better at these tasks than the models; they are exponentially better. My cats come pre-assembled with far superior identification and decision-making systems. Nevertheless, that flexibility and adaptability require an almost innumerable set of 'alignment' layers to regulate behavior and control impulses. To make a system flexible and self-referential is to necessarily make it unpredictable. Sometimes the cat bites you. Sometimes you end up with a serial killer.
@camelCased · 1 month ago
Right, and human brain has constant learning feedback loop not only from outside world (through all senses) but also the internal (through self awareness, reflection, critique etc.). Current LLMs don't ever check their responses for validity because there is nowhere to get the feedback from, except the current user, but then the correction will work only in the current short context and not for retraining. So, LLMs essentially just spit out the first response that has the highest probability based on the massive amounts of the training data. And it's quite amazing how often LLMs get it right. Imagine a human not actually solving an equation but spitting out the first response that comes to mind - we would miserably fail all the tests that LLMs pass with ease. Self-correction based on the context awareness is mandatory for an AI.
@honkhonk8009 · 1 month ago
It's not reinventing the wheel if the wheel hasn't even been invented yet. What neuroscientists say differs largely from what you say. From what I've seen, it's hard to take the lessons we learnt from real neurons and put them into computer neurons. It takes 8 whole layers of machine neurons to simulate a human neuron. Human neurons aren't just a soma; the dendrites do a lot of computation as well. Current machine learning is inspired by biology but not based on it. If we knew how neurons actually worked, ML would've been solved already.
@CaiodeOliveira-pg4du · 1 month ago
One might argue that current neural network models have both adaptability (millions of parameters being updated at each step, throughout hundreds of thousands of training epochs) and memory (they remember the instances they were trained on through the weights between layers). There are also a lot of highly effective unsupervised learning algorithms that learn complex patterns from unlabeled data, which one might call self-assessment.
@TheMetalisImmortal · 1 month ago
Hello 😉
@someone9927 · 10 days ago
@@camelCased The thing is that you can't teach an LLM the normal way. If you explain to a person that something is false, like this: You: "Do turtles fly?" Person: "Yes." You: "Nah, they don't." Person: "Oh, I will remember this." - the person will remember that turtles don't fly. If you do the same thing with an AI, it will remember that when you ask "do turtles fly?" it should reply "yes", and that when you reply "nah, they don't", it should reply "oh, I will remember this". This is the problem with AI.
@Ding63 · 1 month ago
Definitely make a video on neuromorphic chips. And I think the other neural networks outside the scope of this video deserve their own separate videos as well.
@kairi4640 · 1 month ago
Spiking neural networks, and whatever neural networks come after them, sound like where AGI and ASI actually are.
@olhoTron · 1 month ago
Or maybe to reach AGI (do we really want that?) we need to ditch neural nets, actually discover what makes intelligence work at a high level, and reimplement that to work on computers... Seems to me like doing any type of neural network is like trying to emulate a game console by simulating each transistor in its circuit... sure, it can work, but it would take the most powerful Threadripper CPU to emulate an Atari 2600 at full speed that way. Maybe neural nets will help us understand what makes a brain tick on a high level, then we will make a "brain JIT recompiler"... and then... who knows what will happen next.
@eddiedoesstuff872 · 1 month ago
@@olhoTron Wow, never heard this perspective before. The problem, I think, is that while it's easy to simulate neurons, the real issue is arranging them correctly to create higher-level behaviours. Using your analogy, yeah, you can simulate for example a CPU using transistors and then implement it in a higher-level way, but to do that you first need a schematic of how each transistor connects to the next. So either we brute-force arrangements until we make human-like neuron arrangements, or brain-scanning technology needs to improve so we can view whole sections of the brain at the neuron level.
@nemesiswes426 · 1 month ago
That is what I believe. To me, AGI means the digital equivalent of a human: conscious, self-aware and all that. Since the only known example of hardware running AGI (ourselves) is our brain, we should probably aim to replicate it. Maybe not the cellular biophysics and so on, but the overall, more abstract ways it works. No other method has a proven path to AGI. That is how I am going about working on these things at least, using modified spiking neural networks to more closely resemble the brain. It truly is an amazing time to be alive. We are on the brink of a new species being created: potentially the first time in the entirety of the universe's existence that a given species has created another species smarter than itself.
@charlesmiller8107 · 28 days ago
@@olhoTron It's not the same. Using transistors to emulate transistors? What we need are actual neurons: artificial, of course, but electronic. Maybe an integrated circuit that has interconnected devices that function like neurons but are also somewhat transistor-like. A transistor with hundreds of inputs and outputs, but really small. 🤔 It would need to be dynamic, but that's way beyond our current capabilities. Maybe just using biology is the best option. Cyborgs.
@olhoTron · 28 days ago
@@charlesmiller8107 *If* (and it's a big if) intelligence is actually computable (and not some kind of quantum or spiritual thing), then it is just a computer program like any other; the only difference is that it's running on wetware. Simulating the basic blocks of the wetware is not the way to go, it's too inefficient; we need to actually understand the problem and reimplement it to run on current computer architectures. If it's not computable, then we will never reach AGI with classical computers, and no amount of nested dot products will make intelligence emerge.
@williamb.7134 · 1 month ago
Thanks!
@theAIsearch · 1 month ago
Wow, thanks for the super!
@benfrank6520 · 1 month ago
HOLD ON A SECOND!!! Did you think we wouldn't notice??? ☠☠ 3:52
@globurim · 1 month ago
It's been stuck there for 10 seconds. He was not being subtle about it or anything.
@theterminaldave · 1 month ago
lol, I was cooking and didn't see the screen. Ty.
@antonystringfellow5152 · 1 month ago
🤮
@inviktus1983 · 6 days ago
@@antonystringfellow5152 3:57
@alkeryn1700 · 1 month ago
I wrote a spiking neural network from scratch. It can learn, but it's not as efficient at learning as a typical NN, since you can't do gradient descent effectively; instead you need to adjust the neurons based on a reward. You can backtrace and reward the last neurons and synapses that led to the output you want, but that is limited; it works better when you don't just reward the last ones, but reward according to the desired output. Still, it's pretty cool to run, and it makes nice visualizations.
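For anyone curious what "spiking" means mechanically, a leaky integrate-and-fire neuron (the usual building block of an SNN) can be sketched in a few lines. The leak and threshold values here are arbitrary, and this is not the commenter's code:

```python
# Leaky integrate-and-fire (LIF) neuron: the membrane potential leaks each
# step, integrates incoming current, and emits a spike on crossing a threshold.
def lif_run(inputs, leak=0.9, threshold=1.0):
    v = 0.0
    spikes = []
    for current in inputs:
        v = leak * v + current   # leak, then integrate the input
        if v >= threshold:       # fire...
            spikes.append(1)
            v = 0.0              # ...and reset the potential
        else:
            spikes.append(0)
    return spikes

# A constant weak input produces a regular spike train:
print(lif_run([0.3] * 10))  # [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]
```

Information is carried by the timing of the 1s rather than by a continuous activation, which is why gradient descent doesn't apply directly.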
@WJ1043 · 1 month ago
We currently simulate neural networks programmatically, which is why they are so inefficient. The problem is, people are so impatient for AGI that they have concentrated all their efforts on achieving it rather than developing an actual neural network.
@beowulf2772 · 1 month ago
Yeah, it's like they're building a slave rather than a free person. Let the little AI have its own babyhood, childhood, etc. These machines only need to be turned on all the time and have something to interact with in the real world, and parents. Even Data didn't just download everything; he downloaded the crew's psych profiles just to connect with them.
@lpmlearning2964 · 1 month ago
How else do you want to simulate them other than with a computer, which understands machine code, aka programming? 🙃
@lpmlearning2964 · 1 month ago
You can't simulate more than a few milliseconds of a fly's brain, let alone a human brain. Check EPFL's research.
@WJ1043 · 1 month ago
@@lpmlearning2964 Not a simulation. Have actual neural nets instead. It can't be done on a chip; some sort of 3D construction is required.
@khanfauji7 · 1 month ago
Use AI to build AI 🤖
@alabamacajun7791 · 1 month ago
Glad to hear this. Back in 2010 I was looking for an alternative to the network-graph systems still in use today. Basically we have a scaled version of decades-old tech that we now have the horsepower to run. I will say current neural matrices are only a partial model of a brain. I studied a portion of grey-matter neural matrices; see the book Spikes by Fred Rieke, David Warland, Rob de Ruyter van Steveninck, and William Bialek. The human brain is exponentially more complex than any scaled multi-million GPU, TPU, CPU system. Good video.
@TheCategor · 1 month ago
8:40 "The human brain only uses 175 kWh in a year": since the human brain cannot work without the body, you have to treat [brain+body] as one entity (which is ~4 times more), unless it's a brain in a jar... but yeah, I guess it's still very efficient.
@GhostEmblem · 1 month ago
If you apply that logic to the AI then you'd need to factor in many other things too. You are fundamentally misunderstanding what is being compared here.
@lagaul5124 · 1 month ago
You've got to take into account the millions of years of evolution it took to even get to the human brain.
@Instant_Nerf · 1 month ago
@@lagaul5124 That's a bunch of BS.
@viperlineupuser · 1 month ago
@@Instant_Nerf He is not wrong, but development cost ≠ training cost.
@ProfessorNova42 · 1 month ago
Thanks for the video 😁. I really enjoyed it! I'm also very interested in those neuromorphic chips you talked about at the end.
@markldevine · 1 month ago
Really nice recap. I've subscribed. Keep it up.
@theAIsearch · 1 month ago
Thanks for the sub!
@saurabhbadole821 · 1 month ago
This is my first video from your channel, and I am already impressed!
@theAIsearch · 1 month ago
thanks!
@wellbishop · 1 month ago
Awesome content, as always. I would love to know more about neuromorphic chips. Thanks.
@lolo6795 · 1 month ago
So do I.
@sekkitsek · 1 month ago
Same here
@staticlee4287 · 1 month ago
Same
@gmuranyi · 1 month ago
Yes, please.
@johnlennon2009nyc · 27 days ago
Thank you for your very interesting talk. I have a question. Does this entire system run on the same cycle (clock)? Or does each node run on its own timing?
@clarencelam1765 · 8 days ago
I think it is important to highlight that current artificial neural networks are not based on how human brains work but inspired by biological neural networks. Human brains are really complex thanks to half a billion years of evolution of the brain. There is a pretty good book that serves as a primer on neuroscience called “A Brief History of Intelligence”. If you enjoyed Sapiens you will love this book.
@stevengill1736 · 1 month ago
Oh good, I was wondering whether spiking and liquid NNs were similar. Both trying to emulate our current understanding of human neurons... neat!
@lucavogels · 24 days ago
I don't get how the liquid NN should continue to learn if you only train the output layer once and the reservoir stays the same as well (according to you, the reservoir gets randomly initialized before training and from then on never changes; it just allows information to circle/ripple in it).
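The fixed-reservoir setup this question describes is essentially classical reservoir computing (an echo state network): a random recurrent network that is never trained, with a single linear readout fitted on top. A minimal sketch, assuming NumPy; the reservoir size and the sine toy task are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed random reservoir: these weights are never trained.
n_res = 50
W_in = rng.uniform(-0.5, 0.5, (n_res, 1))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # spectral radius < 1, so echoes fade

def run_reservoir(inputs):
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = np.tanh(W_in @ np.array([u]) + W @ x)  # input "ripples" through
        states.append(x.copy())
    return np.array(states)

# Toy task: predict a sine wave one step ahead.
u = np.sin(np.linspace(0, 8 * np.pi, 400))
states = run_reservoir(u[:-1])
target = u[1:]

# Only the linear readout is trained (here by least squares).
W_out, *_ = np.linalg.lstsq(states, target, rcond=None)
mse = float(np.mean((states @ W_out - target) ** 2))
print(mse)  # small prediction error
```

Whether the video's liquid NN goes beyond this fixed-reservoir picture is exactly what the comment is asking.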
@nemonomen3340 · 1 month ago
This comment is to let you know that you should, in fact, make a video on neuromorphic chips.
@theAIsearch · 1 month ago
noted!
@gaius_enceladus · 23 days ago
Yes please - I'd love to see you do a video on neuromorphic chips! Keep up the good work!
@lucidglobalwarning8707 · 12 days ago
In response to your question at approx. 28 minutes: yes, I would like to see more on spiking neural networks!
@tapizquent · 1 month ago
5:18 I agreed with everything until this point. Gemini did prove that models can learn post-training, as it did when learning a new language.
@Me__Myself__and__I · 1 month ago
Correct. I just posted a lengthy comment that contained that very detail.
@RolandoLopezNieto · 1 month ago
Great educational video, thanks.
@aiforculture · 1 month ago
Super useful video, thank you!
@theAIsearch · 1 month ago
You're welcome!
@kellymoses8566 · 1 month ago
Not being able to self-improve is the single greatest limitation of LLMs.
@SarkasticProjects · 1 month ago
And YES, I would love to learn from you about the neuromorphic chips :)
@theAIsearch · 1 month ago
Noted!
@howardb.728 · 20 days ago
A very competent compression of complex ideas - well done mate!
@fingerprint8479 · 1 month ago
Great video, thanks. How is the data resulting from training stored and accessed? I am familiar with SQL databases and would like to know what happens when, say, a picture of a dog is submitted to some AI for identification. Thanks.
@ninjoor_anirudh · 29 days ago
@theAIsearch Can you please share the sources where you get this information?
@CharlesBrown-xq5ug · 1 month ago
《Arrays of nanodiodes promise full conservation of energy》 A simple rectifier crystal can (just short of a replicable long-term demonstration of a powerful prototype) almost certainly filter the random thermal motion of electrons, or of the discrete positively charged voids called holes, so that the electric current flowing in one direction predominates. At low system voltage a filtrate of one polarity predominates only a little, but there is always usable electrical power derived from the source Johnson-Nyquist thermal electrical noise. This net electrical filtrate can be aggregated across a group of separate diodes in consistent parallel alignment, creating widely scalable electrical power. As the polarity-filtered electrical energy is exported, the amount of thermal energy in the group of diodes decreases. This group cooling will draw in heat from the surroundings at a rate depending on the filtering rate and the thermal resistance between the group and any ambient gas, liquid, or solid warmer than absolute zero. There is a lot of ambient heat on our planet: more on equatorial dry-desert summer days and less on polar-desert winter nights. Refrigeration, by the principle that energy is conserved, should produce electricity instead of consuming it.
Focusing on the electronic behavior of one composition of simple diode: a near-flawless crystal of silicon is modified by implanting a small amount of phosphorus on one side, from an ohmic contact to a junction where the additive is suddenly and completely changed to boron with minimal disturbance of the crystal pattern. The crystal then continues to another ohmic contact. A region of high electrical resistance forms at the junction in this type of diode when the phosphorus near the junction donates electrons that are free to move elsewhere, leaving phosphorus ions held in the crystal, while the boron donates holes which are similarly free to move. The two types of mobile charges mutually clear each other away near the junction, leaving little electrical conductivity. An equilibrium width of this region settles among the phosphorus, boron, electrons, and holes. Thermal noise goes beyond steady-state equilibrium: in thermal transients where mobile electrons move from the phosphorus-doped side to the boron-doped side, they ride transient extra conductivity and are filtered into the external circuit. Electrons are units of electric current; they lose their thermal energy of motion and gain electromotive force (another name for voltage) as they transition between the junction and the array's electrical tap. Aloha
@SangramMukherjee · 1 month ago
A neural network is just a probability function that tells you how likely an occurrence is: for a given input, the probability of getting a particular output. The output with the highest probability is the most likely answer to your input. The network just helps calculate that probability through nodes, biases, backpropagation, residual networks, matrices, calculus, etc. It's math, computer science and physics coming together in one place.
@drdca8263
@drdca8263 Ай бұрын
The output layer doesn’t have to be probabilities. It can be other things as well, such as “how much to drive each motor”, or “how much does each pixel change”
@gpt-jcommentbot4759
@gpt-jcommentbot4759 Ай бұрын
It's not probability; it's just how high the activation of a neuron is.
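Both points in this thread can be seen in a few lines: the same output-layer activations become probabilities only if you pass them through a softmax; left raw (or squashed with tanh), they can mean whatever the task needs, such as motor drive levels. A minimal sketch (the numbers are made up for illustration):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()

logits = np.array([2.0, 0.5, -1.0])  # raw output-layer activations

probs = softmax(logits)              # classification: probabilities
motor_commands = np.tanh(logits)     # control: bounded drive signals

print(probs, probs.sum())   # probabilities sum to 1
print(motor_commands)       # values in (-1, 1), not probabilities
```

So whether the output "is" a probability is a choice made at the final layer, not a property of the network itself.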
@JoshKings-tr2vc
@JoshKings-tr2vc 28 күн бұрын
Good simple explanation of the current state of neural nets and where they’re going.
@culture-jamming-rhizome
@culture-jamming-rhizome Ай бұрын
Neuromorphic chips are a topic I would like to see a video on. Seems like there is a lot of potential here.
@oswindsouza9068
@oswindsouza9068 Ай бұрын
If spiking neural networks get implemented, what will be the future of existing neural networks, and of the infrastructure and heavy-duty hardware required for traditional neural network AI apps? What is your opinion on this topic?
@ekaterinakorneeva4792
@ekaterinakorneeva4792 Ай бұрын
Great video, thank you for your work!
@theAIsearch
@theAIsearch Ай бұрын
My pleasure!
@Jevin-gn1vv
@Jevin-gn1vv Ай бұрын
But what would happen if you combine a normal neural network with a liquid neural network, so that the two networks can cooperate in certain situations? Like one being used for new information or similar patterns, and the other for specific rules or how to react in specific situations.
@Mightimus
@Mightimus Ай бұрын
So the readout layer of a liquid NN is basically a perceptron? Because in your image it's just one layer of data, and there's no explanation of how the readout layer is trained.
@RICARDO_GALDINO_GABBANA_LIMA
@RICARDO_GALDINO_GABBANA_LIMA Ай бұрын
Fantastic chanell! Super nice!👏👏👏🗣💯💯🔥‼️‼️‼️‼️‼️‼️❤
@theAIsearch
@theAIsearch Ай бұрын
Thanks!
@Mega-wt9do
@Mega-wt9do Ай бұрын
bot 👏👏👏🗣💯💯🔥‼‼‼‼‼‼❤
@RICARDO_GALDINO_GABBANA_LIMA
@RICARDO_GALDINO_GABBANA_LIMA Ай бұрын
@@Mega-wt9do 🤜🤛‼️🗣🔥🔥💯💯💯
@Joseph-nw3gw
@Joseph-nw3gw Ай бұрын
You earned a subscriber from Kenya... kudos
@theAIsearch
@theAIsearch Ай бұрын
Thanks!
@nikitos_xyz
@nikitos_xyz Ай бұрын
yes, it is the plasticity and learning ability that neural networks lack, thank you for these ideas.
@Rawi888
@Rawi888 Ай бұрын
Thank you for your hard work.
@theAIsearch
@theAIsearch Ай бұрын
My pleasure!
@desertvoyeur
@desertvoyeur Ай бұрын
Hinton has suggested a way to implement feed forward training, as the brain appears to. Can you update this for us, please?
@tyngjim
@tyngjim Ай бұрын
Yes please. Let’s learn about neuromorphic chips!
@pmcate2
@pmcate2 Ай бұрын
What’s the difference between LNNs and traditional NNs with online learning?
@Jianju69
@Jianju69 19 күн бұрын
Very interesting to hear about these emerging architectures.
@RobertsMrtn
@RobertsMrtn Ай бұрын
Firstly, thank you for producing such an informative video. One thing I would like to add is that current neural networks require a lot more training data than, say, a three-year-old child in order to perform a simple classification task such as distinguishing cats from dogs. Our current models require tens of thousands of examples in order to be properly trained, whereas a three-year-old child would require perhaps five or six examples of each. I would propose an architecture which I am calling Predictive Neural Networks, where neurons are arranged in layers and predict which other neurons will fire depending on the input data. For example, high-level neurons may be trained to detect an eye, but should also 'know' where to find the next eye or where to find the nose. Because a cat's nose looks different from a dog's nose and is one of the main distinguishing features, it should be possible to train these networks with much fewer examples.
@anatalelectronics4096
@anatalelectronics4096 Ай бұрын
jepa?
@JasonCummer
@JasonCummer Ай бұрын
These liquid neural networks sound like the liquid state machines I used in my master's. What are the differences between LNNs and LSMs?
@UltraStyle-AI
@UltraStyle-AI Ай бұрын
Very informative and well put together video, thanks!
@theAIsearch
@theAIsearch Ай бұрын
Very welcome!
@smellthel
@smellthel Ай бұрын
Awesome video! I would love that video on neuromorphic chips!
@theAIsearch
@theAIsearch Ай бұрын
Thanks!
@TheBann90
@TheBann90 Ай бұрын
Like Intel alluded to regarding their neuromorphic chip, we can use AI to fix the library issue. So neuromorphic is close. Maybe just 3 years out. Liquid neural networks might need a bit more help from AI to solve the kinks, but I think AI is the key here as well in order to have the technology ready to be launched.
@michaelaraki3769
@michaelaraki3769 28 күн бұрын
Silly question: why not use some (or all) of them in combination, with a superordinate network (perhaps the liquid one) that either learns, or is told by training, which method to deploy for which type of data? The idea, once again, is to mimic the brain, with modular information processing at lower levels but executive function at a higher level.
@rockochamp
@rockochamp 11 күн бұрын
Very well explained
@kimcosmos
@kimcosmos Ай бұрын
when will we get neural network chips in our phones? I want a layer of capacitors on my transistors
@bitdynamo365
@bitdynamo365 Ай бұрын
great informative video! Thanks a lot Please make us a deep dive into neuromorphic hardware.
@theAIsearch
@theAIsearch Ай бұрын
Noted!
@JosephLuppens
@JosephLuppens Ай бұрын
Amazing presentation, thank you! I would love for your to do a follow-up on the potential of neuro-morphic architectures.
@theAIsearch
@theAIsearch Ай бұрын
Thanks! Will do
@annieorben
@annieorben Ай бұрын
This is very interesting! The reservoir layer seems like the digital analog to the subconscious mind! I really love your explanation of this new type of neural network.
@theAIsearch
@theAIsearch Ай бұрын
Thanks!
@alexamand2312
@alexamand2312 Ай бұрын
wtf
@QuantumVirus
@QuantumVirus 7 күн бұрын
​@@alexamand2312?
@kuroallen6419
@kuroallen6419 Ай бұрын
Super nice and educational video 👏🏻👏🏻👏🏻👏🏻👏🏻
@theterminaldave
@theterminaldave Ай бұрын
How are these variants of neural networks being viewed by companies like OpenAI?
@AaronNicholsonAI
@AaronNicholsonAI 27 күн бұрын
So awesome. Thanks! Neuromorphic, please :)
@jasonkocher3513
@jasonkocher3513 Ай бұрын
Would the liquid reservoir perhaps be a ferrofluid in a well of some kind with transducers around it? Trying to picture the engineering implementation of it. Or am I taking this too literally? It seems there would be a minimum spatial wavelength of any physical liquid, thus limiting the physical miniaturization. Very cool video, thank you for your efforts!
@XDgamer1
@XDgamer1 Ай бұрын
Will liquid neural networks be able to work on a CPU? 😢😅
@Andy-zu3tv
@Andy-zu3tv Ай бұрын
Does anyone know of the company Brainchip - they say their models can learn?
@rekasil
@rekasil 13 күн бұрын
Hi, I would like to know more about the spiking neural networks, their types, limitations, challenges, performance, online learning capabilities, etc. Thanks!
@tobiaspucher9597
@tobiaspucher9597 Ай бұрын
Yes, neuromorphic chips video please
@dhammikaweerasingha9894
@dhammikaweerasingha9894 20 күн бұрын
Nice explanation. Thanks.
@SS801.
@SS801. Ай бұрын
Make video on chips yes
@Garfield_Minecraft
@Garfield_Minecraft 22 күн бұрын
"we just need more nodes" that's what they think
@user-zc7vr6ct2y
@user-zc7vr6ct2y Ай бұрын
From dots and lines of curves, from CPU to Brain links and related objects with movements in space with timed calculated of predictions of possibilities in interest or hobbies
@twirlyspitzer
@twirlyspitzer Ай бұрын
I had no idea before this video that these other regenerative AI approaches are coming along, and that they already supersede traditional backpropagation considerably in many use cases. It gives me hope that an AGI breakout is much closer than they say.
@digitalconstructs6207
@digitalconstructs6207 Ай бұрын
The age of analogue computing is coming. Great video for sure.
@Citrusautomaton
@Citrusautomaton Ай бұрын
I believe that analog/digital hybrid computers will change AI massively in the realm of energy efficiency!
@paatagigolashvili9551
@paatagigolashvili9551 Ай бұрын
@@Citrusautomaton Exactly,i am rooting for aspinity analog and risc-v digital technologies
@JB52520
@JB52520 Ай бұрын
👍 for neuromorphic chips
@cesarlagreca8076
@cesarlagreca8076 18 күн бұрын
Excellent summary for understanding the software and hardware difficulties in building these "liquid" networks. Thank you
@hebreathes5954
@hebreathes5954 5 күн бұрын
Would it not be best to combine all of these? A standardized base neural network interface with fluid-like restructuring to generate patterns, and spike detection so it recognizes its own patterns over time. Something that would truly mimic a brain in all senses, taking the drawbacks away by combining them while maintaining all the benefits.
@ayaanm0min
@ayaanm0min Ай бұрын
I think it's time for analogue computers to shine
@simonspoke
@simonspoke Ай бұрын
So does this mean we could not get AGI on a traditional neural network? Or would it just depend how large the data set is?
@theAIsearch
@theAIsearch Ай бұрын
I don't think the current neural network architecture can be AGI. But, we could use it to design the next generation
@simonspoke
@simonspoke Ай бұрын
@@theAIsearch Good, or bad, to know! 😂
@cmw3737
@cmw3737 4 күн бұрын
When it comes to technology, if it's technically possible and there's economic demand for it, it will happen. Given the stupendous amount of energy needed to train current models, the incentives to perfect these more efficient models will lead to rapid progress. There's no way things stay this inefficient for too long.
@korrelan
@korrelan Күн бұрын
Excellent video.
@user-wy6tq3fq8t
@user-wy6tq3fq8t 26 күн бұрын
Yes, I would like a diagram of the digital logic model, or analog devices configured, or the current microcircuits for the different classes of neuro-logic implementations in research and commercial applications, and how one is tested, as digital levels or waveform outputs.
@user-dw1cz3jv8b
@user-dw1cz3jv8b Ай бұрын
Liquid neural networks + SNN
@akzsh
@akzsh Ай бұрын
+ GNN
@chich1313
@chich1313 Ай бұрын
It's not just adding model architectures together that makes an AI powerful 😂. LNNs are too adaptive to work with these other model architectures. The video explains pretty well how they work, but do not expect LNNs to be used in LLMs for at least 8 to 10 years, especially because companies adapt slowly and this area is still in development. When conventional neural networks were officially released in 2009, the first functional model didn't appear before 2018-2019, and adopting LNNs in LLMs would potentially mean a new design for transformers. We would need to create new "liquid transformers", thus slowing the development of this "AGI" that people think about. We don't even know if this development path will eventually lead us to AGI, and if we're honest, it will be 20 to 25 years before we have commercial models using LNNs. In the end, the high adaptability of this model would also mean a new filtering algorithm and model for data, which could help both the model and the data. It is an exciting new era that we are living in, and I would be happy to develop these kinds of models in a few years, but don't expect it that soon, maybe in 15-20 years.
@Sumpydumpert
@Sumpydumpert Ай бұрын
That’s a smart rock damn
@edwinschaap5532
@edwinschaap5532 Ай бұрын
Is this about Apple Intelligence or about machine intelligence in general? 😉 Can (actual) neurons also lose potential (slowly, over time) if they don't receive spikes for some time?
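On the second question: yes, in the standard leaky integrate-and-fire (LIF) model, the membrane potential decays exponentially back toward its resting value whenever no spikes arrive, and the neuron only fires when enough input arrives close together in time. A minimal sketch (the time constant and spike weights are illustrative, not biological measurements):

```python
# Leaky integrate-and-fire neuron: the potential leaks toward rest
# between input spikes and the neuron fires on crossing a threshold.
dt, tau = 1.0, 20.0          # ms; illustrative membrane time constant
v_rest, v_thresh = 0.0, 1.0
v, spikes_out = v_rest, []

input_spikes = [1, 5, 8, 12]  # input spike arrival times in ms
trace = []
for t in range(100):
    v += (v_rest - v) * (dt / tau)  # exponential leak toward rest
    if t in input_spikes:
        v += 0.4                    # each incoming spike adds charge
    if v >= v_thresh:
        spikes_out.append(t)
        v = v_rest                  # reset after firing
    trace.append(v)

print(spikes_out)            # fires once, during the early burst
print(trace[12], trace[99])  # potential decays after input stops
```

Once the last input at t = 12 passes, the trace just decays back toward rest and the neuron never fires again, which is the "losing potential over time" behavior.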
@batautomat
@batautomat Ай бұрын
Neuromorphic chips are a good subject for a new video soon.
@austinrusso2178
@austinrusso2178 Ай бұрын
Thank you so much for this informative video. It was great to learn what is happening in AI research. At the current rate, do you think we could see AGI robots in the coming years?
@theAIsearch
@theAIsearch Ай бұрын
Thanks. I'd guess before 2030
@gpt-jcommentbot4759
@gpt-jcommentbot4759 Ай бұрын
2050, if computing doesn't reach its end and AI continues expanding
@Vpg001
@Vpg001 Ай бұрын
I think compression is underrated
@ash.mystic
@ash.mystic Ай бұрын
Liquid neural nets are an interesting application of analog computing (contrasting with discrete math/logic used by traditional neural nets). Analog computers have been making a comeback in general. I wonder if analog computer hardware of some kind could be used to run them.
@Sweenus987
@Sweenus987 Ай бұрын
I used a CNN combined with a Liquid Time-constant Network (which is part of LNNs) for my university dissertation, which seems pretty powerful itself, I was able to train a robot to follow me based on image input given the same trained environment and clothes as in the training data. It's interesting stuff
@jeffreyjdesir
@jeffreyjdesir Ай бұрын
🤯 WOAH! could you share your work? That sounds so fascinating and what I'm interested specifically, it seems like you had some kind of Real-Time feature with LTCN? I'd like to see if there's even a description in literature about this part of AI - the time window it exists in. 🤯
@Sweenus987
@Sweenus987 Ай бұрын
@@jeffreyjdesir Sure, I have an unlisted video that shows it working. It's a little jank since I didn't have the time to code in and train for smoother motion. It was specifically trained to pick up me at various locations within the frame, so if I was to the right of its frame, it would turn right by a specified amount and if I was too far it would move forward by a specified amount and so on. The description has links to the dissertation itself on Google Drive and the code on github kzfaq.info/get/bejne/kK9ioK-FzdTUooE.html
@theAIsearch
@theAIsearch Ай бұрын
That's very cool! Thanks for sharing!
@stevenewbold3616
@stevenewbold3616 19 күн бұрын
I’d argue neural networks can improve with the use of further training and/or different training data, which can change or reconfigure under-utilised weights to improve overall accuracy.
@MSIContent
@MSIContent Ай бұрын
Feels like there would be a tipping point with a liquid model where it starts out by just working, and is then tasked with making a better model based on its learned measurement of its own current performance. Given it can change and adapt, it could improve on its own design and rinse/repeat.
@gpt-jcommentbot4759
@gpt-jcommentbot4759 Ай бұрын
that requires different input data shapes
@Alex-ns6hj
@Alex-ns6hj 27 күн бұрын
@@gpt-jcommentbot4759likely its ability to reason on its own and make judgements on its own to progress to said goal
@johannhuman532
@johannhuman532 25 күн бұрын
Some nuance to be made: models are updated from time to time even if they don't get a new name or version, so "GPT-4" designates a series of training runs. About energy: processors are getting better at energy consumption, so applying a linear factor to the number of weights is a huge overestimation.
@kroniken8938
@kroniken8938 Ай бұрын
Well, how long would it take for a human brain to learn everything GPT knows? Probably hundreds or thousands of years
@jeffreyjdesir
@jeffreyjdesir Ай бұрын
If we can use Bloom's Taxonomy as any standard, it seems like something like GPT won't ever "understand" anything; just relative semantic mappings of input, which can't be the same thing as hermeneutics, ontologically. The AGI singularity should be more about when the new human intelligence arrives, with a revitalized cosmic identity (as opposed to national or tribal) that comes with Star Trek-like planetary ambitions... hopefully soon (or else).
@Me__Myself__and__I
@Me__Myself__and__I Ай бұрын
A single human brain, even though something like 1,000x as complex, could never learn all of human knowledge. Humans have a limit on how much they can store, which is why we forget things. Yet a single LLM that has 1,000x less complexity can know the sum total of all human knowledge. Which is why this comparison of an LLM to a single human brain is ridiculous.
@jeffreyjdesir
@jeffreyjdesir Ай бұрын
@Me__Myself__and__I 1. Interesting... I'd say the brain is more like 1,000,000,000x as complex, given it's what we know knowledge through and it transduces reality... it's hard to really call it a process, since consciousness is seamless with necessary reality (the world that generates perception). 2. (More of a nitpick,) but I wouldn't say LLMs "know" anything to the degree or relevance (given a Bloom's taxonomy approach) that humans do. Some humans may not have every true description of the nature of the world, but can see the "Truth" of the world in a gestalt manner that goes beyond computation and semantics into hermeneutics and teleology. What say you?
@antonystringfellow5152
@antonystringfellow5152 Ай бұрын
Yet the GPT models don't have human-level intelligence. Knowledge is not intelligence just as cheese is not electricity.
@jasonkocher3513
@jasonkocher3513 Ай бұрын
Is there anyone studying AI who can answer a fundamental question I've been wondering: What is the relationship between inference and overfitting as the size of a model goes to infinity? I recently saw that the overfitting issues come and go as the model size increases. Seems to not converge on overfitting in the limit sense as the model size gets larger.