How the BRAIN of an AI Works: Shockingly Simple but Genius!

114,972 views

Arvin Ash

Days ago

Skip the waitlist and invest in blue-chip art for the very first time by signing up for Masterworks: www.masterworks.art/arvinash
Purchase shares in great masterpieces from artists like Pablo Picasso, Banksy, Andy Warhol, and more.
How Masterworks works:
-Create your account with your traditional bank account
-Pick major works of art to invest in or our new blue-chip diversified art portfolio
-Identify investment amount
-Hold shares in works by Picasso or trade them in our secondary marketplace
See important Masterworks disclosures: www.masterworks.com/about/dis...
WANT ALL YOUR QUESTIONS ANSWERED (guaranteed), and to provide input on video subjects?
Join Arvin's Patreon: / arvinash
REFERENCES
(Prior video) How ChatGPT works: • So How Does ChatGPT re...
Sigmoid functions: tinyurl.com/2pqeg7ag
How to build a Neural Network: tinyurl.com/yfxscyum
Simple guide to Neural Networks: tinyurl.com/2gn6wvmc
CHAPTERS
0:00 What this video is about
1:12 What is a neural network?
3:42 How do neural networks work?
6:17 How nonlinearity is built into neural networks
9:00 Masterworks offer: www.masterworks.art/arvinash
10:47 How Artificial intelligence can be "scary"
13:45 What is the real threat of AI?
SUMMARY
In this video, I explain how AI really works in detail. An artificial neural network, also just called a neural network, is at its core a mathematical equation, no more. It’s just math. The term neural network comes from its analogy to neurons in our body. Neurons in neural networks also serve to receive and transmit signals, just like biological neurons. As in the brain, we connect multiple neurons together to form a neural network, which we can train to perform a task.
A neuron in a neural network is a processor, which is essentially a function with some parameters. This function takes in inputs, and after processing the inputs, it creates an output, which can be passed along to another neuron. Like neurons in the brain, artificial neurons can also be connected to each other via synapses. While an individual neuron can be simple and might not do anything impressive, it’s the networking that makes them so powerful. And that network is the core of artificial intelligence systems.
How do these artificial neurons work? Well, the essence of an artificial neuron is nothing but a simple equation from elementary school: Z(x) = W*x + b, where x is the input, W is a weight, b is a bias term, and the result or output is Z(x). This allows the AI system to map the input value x to some preferred output value Z(x).
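As a minimal sketch, the neuron equation translates directly into Python; the weight and bias values below are illustrative, not from the video:

```python
# A single artificial neuron: Z(x) = W*x + b.
# W (weight) and b (bias) here are illustrative values.
def neuron(x, w, b):
    return w * x + b

# Map the input 2.0 to an output using W = 3.0, b = 1.0:
print(neuron(2.0, 3.0, 1.0))  # 3.0*2.0 + 1.0 = 7.0
```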
How are W and b determined? This is where training comes in. We have to train the parameters W and b into the AI system, such that the input can be modified into the most appropriate or correct output. How is the training done? I do a simple example in the video to illustrate how this works. The input is controlled and the output is known. If the output is not what it should be, then W and b are modified until the output does match. After many iterations, the network "learns" by adjusting W and b in the various nodes of the network.
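The training loop described above can be sketched as follows; the target mapping, learning rate, and iteration count are assumptions for illustration, and the update rule is plain gradient descent on the squared error:

```python
# Train W and b so the neuron reproduces a known mapping, y = 2x + 1.
# Each iteration compares the neuron's output to the known output and
# nudges W and b to shrink the error.
data = [(x, 2 * x + 1) for x in range(-5, 6)]  # known inputs and outputs
w, b, lr = 0.0, 0.0, 0.01  # start untrained; lr is the learning rate

for _ in range(2000):  # many iterations
    for x, y_true in data:
        err = (w * x + b) - y_true
        w -= lr * 2 * err * x  # gradient of err**2 with respect to w
        b -= lr * 2 * err      # gradient of err**2 with respect to b

print(round(w, 2), round(b, 2))  # converges toward 2.0 and 1.0
```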
Note that the equation above is linear, which is limiting. Nonlinearity is introduced into the network by adding a mathematical trick called an activation function. An example of such a function is the sigmoid function. I show an example of this in the video. With an appropriate activation function, the AI can answer much more complex questions.
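For instance, the sigmoid squashes any real number into the range (0, 1); a minimal sketch:

```python
import math

# Sigmoid activation: 1 / (1 + e^-z). It bends the straight line
# W*x + b, giving the network its nonlinearity.
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

print(sigmoid(0.0))    # 0.5, the midpoint
print(sigmoid(10.0))   # close to 1
print(sigmoid(-10.0))  # close to 0
```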
#artificialintelligence
#ai
#neuralnetworks
There is one thing about this neural network that some find scary. When a network is trained, the adjustments that the system makes to W and b in the training process are a black box. This means that when we train the system using known inputs and known outputs, we have the system self-adjust its internal networking results from the various nodes to match what the known result should be. But how exactly the network adjusts the various layers of intermediate outputs to achieve the final output we want is NOT really known. The input and output layers are known. But the stuff inside is not. And so these intermediate layers of neurons are called “hidden” layers. The hidden layers are a black box.
We don’t really know what these various layers are doing. They are performing some transformation of the data which we don’t understand. We can find the calculated intermediate results, but these look meaningless.
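A sketch of why those intermediate results look meaningless: in the toy 2-3-1 network below, the hidden layer's numbers can be printed, but they have no human-readable interpretation. The weights here are random stand-ins for trained values:

```python
import math
import random

random.seed(0)  # reproducible stand-in weights

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

inputs = [0.5, -1.2]  # the known input layer

# Hidden layer of 3 neurons, each with one weight per input:
hidden_weights = [[random.uniform(-1, 1) for _ in inputs] for _ in range(3)]
hidden = [sigmoid(sum(w * x for w, x in zip(ws, inputs)))
          for ws in hidden_weights]

print(hidden)  # three opaque intermediate numbers: the "black box" in miniature
```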
No AI technology based on neural networks today could become something like Skynet in the Terminator movies, that suddenly becomes conscious and threatens mankind. The real threat of AI is in its power to do things that humans do today, and thus potentially eliminate jobs.

Comments: 601
@ArvinAsh 11 months ago
Here's the link to my prior video on how AI bots like ChatGPT work: kzfaq.info/get/bejne/jaeZpLGS25jHgnk.html - Good background to have prior to or after watching the video above.
@LimbDee 11 months ago
Thanx, I paused at 0:48, I'll check this one first.
@polaris1985 11 months ago
Please make a video on how quantum computers calculate with qubits, it's difficult to understand
@dongshengdi773 11 months ago
​@random user why not tell the AI to create more jobs ?
@uiteoi 11 months ago
Great video once again. Would you consider making a video on the attention mechanism at the heart of transformers? From the 2017 paper "Attention is all you need"?
@amdenis 10 months ago
I have worked in AI for decades, from expert systems and machine learning to much more advanced deep learning systems over the past decade. I appreciate the fact that you are trying to allay fears by educating people, and that is a great thing. However, we also want to make sure people are properly informed and aware of what is actually happening. To that end (and I mean this respectfully), you are wrong about how AI works on a fundamental level. First of all, unlike traditional programming, where in a generalized sense data structures plus algorithms equals programming, in AI most of the actual functionality or “algorithms”, in terms of imbued knowledge and capabilities, are derived as output, not supplied as programming input. The core inference “algorithms” are effectively the patterns formed by the collective data exposure, which form the weights, biases, activation functions and the foundational ANN model. In fact, that is just the beginning of how AI differs from traditional programming and why you cannot say “AI will not do anything we do not program it to do.” Secondly, an increasing percentage of AI is based on unsupervised and semi-supervised learning, where not only is AI mostly “programmed” by exposing it to data so it can do pattern recognition and discrimination, but also many of the results of what it learns and can do are substantially unknown, thereby often producing novel knowledge bases, solutions or “programming”. Further, at some point a quantitative change enables a qualitatively different result, thanks in no small part to NVidia’s H100 GPUs, which enable near-perfect horizontal and vertical scaling across both memory and processing power for the first time (A100s broke down quickly in terms of scaling, such that problems requiring context to be established across trillions of parameters had to be turned into small sub-tasks with simplified objectives).
That is why a company like Inflection AI can raise $1.3 billion, mostly for H100-based servers, at a $4 billion valuation despite being a roughly year-old startup, and also why GPT-4, which leveraged H100s for the first time, is so far beyond GPT-3. Second, emergent behaviors are a very real and increasingly frequent outcome in the private sector. I did DOE/DOD and related dev for years, which was typically the better part of a decade ahead of the private sector. We saw ground-shaking examples even 5 years ago in those circles, and we are starting to see amazing, and sometimes scary, new capabilities emerge completely outside of any purpose, constraints or substantial basis of any kind in the data the AI was trained on. Obviously, if you do not work at the leading edge of AI dev like we do, you may not be seeing it from the inside, but you still can learn about it and why it is just one of several things behind why the people closest to developing our future across small and large companies are sending up warning signals, explaining the dozens of ways it can and will go seriously wrong for our species if we can’t find ways to address it, and even asking to be regulated and overseen by the government. We have OpenAI senior people on the board of one of my companies, and I can say without any reservation that what is being seen via the 25,000 H100s being used to train GPT-5 now, as well as what others are seeing at Tesla, Google and elsewhere, indicates that superintelligent AGI is going to be a spectrum of capabilities (i.e. not just one thing happening at a single point in time), which will become very evident and real beginning within roughly 18 months. Finally, the meta, emergent and other unanticipated capabilities, including lateral thinking, creative problem solving, and super-human levels of inference beyond what humans can derive from the same data, ARE ALL REAL AND HAPPENING CURRENTLY.
All of these are things happening now via the LLMs of even the current day, and are all on several levels far beyond what they were expected, let alone “programmed”, to do.
@michaelhouston1279 11 months ago
I recall reading about an AI program that was built to recognize wolves from a picture. They trained it with a bunch of pictures, but when they then showed it a picture of a wolf, and asked it if this was a wolf, it failed. They also showed it pictures of dogs and sometimes it would fail by saying it was a wolf. They decided to add code to determine what the AI was using to "learn" what a wolf was. They discovered that all the pictures of wolves that they used to train the AI had snow in the background and the snow is what the AI picked up on. I think we need to be very careful introducing AI into society to make sure it's not flawed in the hidden, black-box part.
@michaelblacktree 10 months ago
Now that's funny. You would expect the trainers to "scrub" the photos of extraneous data, but apparently they didn't think of that.
@jelliebird37 10 months ago
@@aarqa😂I’m with ya. Whenever I’m registering with some website and I get one of those “prove that you’re not a bot” verification panels - you know, “Identify all the pictures of boats” - I anticipate getting it wrong the first time 😄
@whatisahandle221 10 months ago
Yep: training techniques are as important, if not more important, than the “AI code” itself. Human brains are all very similar*, but there are human scientific geniuses, saints, artists, dedicated parents, Gold Award and Eagle Scouts, etc., as well as people who struggle with mental health problems, drug addictions, criminal behavior, greed, laziness, and the whole range of human struggles, faults, and worse. *Check out the book The Dyslexic Advantage: Unlocking the Hidden Potential of the Dyslexic Brain by Brock L. Eide M.D., M.A. and Fernette F. Eide M.D. It has an early chapter that looks at the latest research theories that try to explain the differences between dyslexic brains and normal brains. Overall, their viewpoint is that dyslexic brains tend to have some (varying) low-level structural differences, giving people with dyslexia some disadvantages in some tasks (e.g. often reading) but also one or more of four categories of advantages that have led to higher percentages of dyslexic individuals than in the regular population among engineers, mechanics, mathematicians, interior designers, illustrators, architects, software designers, scientists, inventors, poets, songwriters, journalists, counselors, entrepreneurs, small business owners, jobs in medicine, etc.
@whatisahandle221 10 months ago
As a judge at a recent regional middle school science fair, more than a few of the projects in my category involved learning algorithms and image recognition (not really full AI). One student was sincerely interested in school safety and so wanted to train an algorithm to recognize a gun. This student and others used a popular image database for training (I forget the name). When the first attempt produced so-so results, the student switched to a broader database that included lots of obvious, stylized Hollywood and entertainment media pictures of guns: ie guns facing the camera head-on. When questioned about whether the choice of training images was realistic for an application such as a CCTV monitoring system, the student unfortunately didn’t even register the disconnect. (I left written feedback, but I’m not confident that I have the learning algorithm vocabulary to impress upon the student the nature of their deficiencies in requirements definition and algorithm training, especially given their very passionate drive about the topic of school gun safety.)
@othfrk1 8 months ago
Data is what powers AI. You can write a neural network in a few lines of code, but it's the data you use to train it that makes the magic happen...
@davidmurphy563 11 months ago
As someone who codes deep neural networks - I'd warn the layman viewer who watched this and thinks it clicked in their mind; *this video did not include an explanation of how DNNs work.* I know this is squarely aimed at the layman and so should be simple, but this really is not a good explanation, I'm afraid to say... The individual facts are correct, but he totally missed out _why it works._ The neurons and layers are beside the point. It's actually something called a matrix-vector transform; it's a geometric solution. The same one your graphics card uses to project a 3D computer game onto your screen. Think of it like taking a flat Mercator world map and transforming it into a globe. You take a geometric space of all possible inputs and transform them into a vector of outputs by twisting space. Think of a landscape where the valleys are bad solutions and hills are good ones (or vice versa), and deciding which way to go by feeling the slope beneath your feet. There's an excellent video called "The Beauty of Linear Regression (How to Fit a Line to your Data)" by Richard Behiel. He's a physicist and doesn't mention DNNs (the video isn't about them), but it's a far better explanation than this one. In that it is an explanation. Finally, the explanation of the risks of AI was really, really bad. If you're interested in the topic there's a channel by Robert Miles, an expert on the topic, which explains it clearly. What you heard here was about as useful as your average opinion in a bar. Hats off to this guy for doing some research for this video, but sadly it's clear he's not really understood the topic.
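The matrix-vector view this comment describes can be sketched in a few lines; the matrix and vector values below are illustrative:

```python
# One network layer viewed as a matrix-vector transform: y = W @ x.
# Each output is a stretched/twisted combination of the inputs,
# which is the geometric "warping of space" described above.
def matvec(W, x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

W = [[1.0, -1.0],
     [0.5,  2.0]]  # illustrative weight matrix
x = [3.0, 1.0]     # illustrative input vector
print(matvec(W, x))  # [2.0, 3.5]
```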
@ericwaweru4043 11 months ago
Yeah, highly recommend Robert Miles' videos on AI safety and alignment problems on his channel and on Computerphile.
@agdevoq 11 months ago
C'mon, it's a YouTube video, not a university class, and it's aimed at non-specialists. Somewhere, you need to draw the line of "good enough". As a programmer with 20+ years of experience and some basic understanding of neural networks, I find this video way better than my old university class back in the day.
@altrag 11 months ago
Robert has a habit (hopefully just for the clicks) of going way too far into the paranoia column. Like yeah, alignment problems are an issue, but it's not like we turn on an AI and walk away, hoping for the best. We monitor them, and if they're out of "alignment" we tune them. The easiest way to prevent an AI from launching a nuke is to not give the AI uninhibited access to the launch controls. It's that easy. Perhaps if we ever get to a point where AIs are fully autonomous with full control over articulated limbs and full capabilities of self-locomotion, _and_ we allow them to evolve themselves beyond their design capabilities (eg: to disable their own fail-safes, a function that would require real-time learning capability, not just running data through pre-trained networks as we typically do today), then we might need to start being a bit more concerned. But we're a very, very, very long way away from that. Your Roomba is not going to suddenly figure out how to grab a knife from the drawer and slash your throat, no matter how good it gets at cleaning your floor. There are much more immediate problems we should be concerned with, problems that AI can and even has been helping with. Climate change in particular. We're not going to have to worry about AIs killing us 100 years from now if we've already done the job ourselves in the next 50.
@onebronx 11 months ago
​@altrag the "easiest way" you mention is the hardest one. Because, you know, it is people who decide to give or not to give the control, and there are strong incentives for armies to use AI in a battlefield. Yes, we managed to not destroy ourselves by nukes, but nuke launch systems are still dumb. "Past performance does not guarantee future revenues"
@altrag 11 months ago
@@onebronx > it is people who decide to give or not to give the control It's also people who like to be in control. There is no scenario where anyone with the authority to launch nukes is going to intentionally hand that authority over to an AI. That's just not how humans handle power dynamics. So that leaves an AI accidentally being given authority to launch nukes. This is the "easy" part - if it has no way to access the nukes, it can't launch them even if it theoretically has been given the authority. It's the same way we avoid hackers gaining access to launch nukes - we simply don't put them on the internet. Problem solved. > there are strong incentives for armies to use AI in a battlefield No there isn't, not really. There's a strong incentive for armies to keep soldiers off the battlefield. AI is one potential way that can be accomplished, to be sure, but that's a very different mode of thought leading to very different design goals for any AI that might ever be fielded. Plus, nukes aren't on the battlefield. They're in a silo in another country or on a submarine a thousand feet beneath the ocean, far away from the battlefield and far away from any area the enemy could potentially get to and seize. > "Past performance does not guarantee future revenues" Obviously nothing is ever 100% certain, but we have a hell of a lot more problems to worry about than a real-life Terminator story. The risk factor is just so incredibly tiny that it's not really worth considering. So, so many things would need to go wrong, and most of them among people who have earned the highest levels of trust their nation can award.
@MartijnMuller 11 months ago
I've been trying to inform myself about AI for a couple of months now and I never really understood what people meant when they said "we don't understand how it works". Your video is the first that made me understand the black box. Great job my friend!
@TimWalton0 11 months ago
Also I think there's a big difference between "we don't know how it works" and "we don't know why it made that decision".
@auriuman78 9 months ago
Huge difference, thanks for pointing it out.
@theweirdgiraffe4323 9 months ago
Connor Leahy, AI designer, explains that how AI works is still a complete mystery: "These AI systems are not computer programs with code; this is not how they work. There is code involved, sure, but the thing that happens between you entering a text and you getting an output is not human code. There isn't a person at OpenAI sitting in a chair who knows why it gave you that answer and can go through the lines of code, see "Ahh, here's the bug", and then fix it. No no no, nothing of the sort. AI systems are not really written, they're grown; they're organic things that are grown in a petri dish, like a digital petri dish - there's a subtlety to this. But the resulting system is not a clean, human-readable text file that shows all the code. Instead you get billions and billions of numbers, and you multiply these numbers in a certain order and that's the output. And what these numbers mean, how they work, what they are calculating, and why, is mostly a complete mystery to science to this day. I don't think this is an unsolvable problem, to be clear; it's not like this is unknowable. It's just hard. Science takes time. Figuring out complex new scientific phenomena like this takes time, and resources, and smart people, but currently it's a mystery. We have no idea what the mystery sauce is that makes these systems actually work. And we have no way to predict them, and we have no way to actually control them. We can bump them in one direction or bump them in another direction, but we don't know what else we're impacting. We don't know if the AI learned what we wanted it to learn. We don't know what we actually sent to the system, because we don't speak their language. We don't know what these numbers mean. We can't edit them like we can edit code. What this leaves us with is this black box, where we put some stuff in, some weird magic happens, and then something comes out. Let's say you're OpenAI and your GPT-4 model was given an input and it gives you an output you don't like.
What do you do? Well, you don't understand what happens inside the AI; it's all just a bunch of numbers being crunched. The only thing you can do is nudge it sort of in some direction, give it a thumbs up or thumbs down, and then you update these trillions of numbers. Who knows how many numbers there are inside of these systems; push all of them or some of them in some direction and maybe it gets you a better output, maybe it doesn't. I want to drive home how ridiculous it is to expect this to work." - but somehow it works.
@othullo 5 months ago
@@theweirdgiraffe4323 it works because with enough parameters, it basically defined the underlying pattern in human language and reasoning. Everything that's not completely chaotic has a pattern. Useful information has a pattern. The pattern can be too complex to describe using traditional programming methods, but these parameters adapted to adhere to these patterns. And that is probably how the brain's neurons work as well. Just like we don't know how exactly a human kid learns a language, other than by listening to a lot of parent talk and adapting to the patterns in the parents' speech, the AI probably does the same thing. That's my understanding anyway, not an expert.
@BlackbodyEconomics 11 months ago
I've got a "well, actually ..." here for ya. AI/ML engineer here - many of these larger networks actually DO do things they have not been trained to do. They often surprise their own developers with capabilities they were never trained to perform.
@shawnscientifica7784 11 months ago
Same, I also work on AI. Going to make videos to educate people, because most are insanely incorrect. No one knows HOW AI works once it's been trained and starts generating its own responses. We know the layers and the algorithms used to convolute those values in each layer. But saying we know AI because we know that is like saying that if you know human anatomy, you now know how every human acts and thinks. There are emergent phenomena that wipe all that off the whiteboard.
@lamcho00 11 months ago
The problem is, you train a neural network with a particular goal in mind, but it ends up doing more. It finds patterns in the data you were not able to foresee. When ChatGPT was trained, nobody thought it would be able to do math, even if it's just simple arithmetic with small numbers. Nobody knew it would be able to handle concepts or make generalizations. It would be more useful to think of neural networks as function finders. They substitute for the function you are not able to explicitly define and write conventionally. The bad thing about training a neural network on vast amounts of information is that it ends up picking up the intentions behind the words. In a way it finds the function of emotional outbursts or bad intentions. As long as the information was generated by humans with such flaws, the neural network is bound to pick those flaws up. In the case of ChatGPT and Bing Chat, they had to train another neural network to block those types of responses. So in a way these unforeseen consequences are already happening. I think the issue here is that such big neural networks require lots of data, and it's not humanly viable to check all that data and sanitize it. Just search for *"Bing Chat Behaving Badly"* and you'll see what I'm talking about.
@SchgurmTewehr 11 months ago
Thanks for clearing this up.
@CuanZ 11 months ago
They had no idea ChatGPT would be able to do chemistry; it's just one more example of the unpredictable emergent skills LLMs come across.
@wingflanagan 11 months ago
Exactly. All due respect to the great Mr. Ash, emergence is a real phenomenon. Physics is not my area, but computer science is. If you accept that the human brain is a meat-based computation engine, then silicon-based machines are definitely capable of all the same traits. I personally subscribe to the "strange loop" theory of consciousness, which means that all a self-training neural network needs is an unfettered feedback loop in conjunction with sufficient complexity to truly wake up and start thinking independently. IMHO that is inevitable. There is no stopping it. The notion that AIs can only do what we program them to do is accurate, but here's the rub: past a certain point, we are _not_ doing the programming. Of course, I could always be wrong. But I don't think so.
@mlonguin 11 months ago
I think consciousness is just a defense mechanism that evolved in animals with complex brains, and there is no reason for it to emerge in AI, as the mechanisms by which AI evolves are not the same.
@bungalowjuice7225 11 months ago
​@@mlonguin lol, well legs are also evolved... yet we can create legged robots. Evolved doesn't mean it can't be reproduced.
@patrickmchargue7122 11 months ago
You should also add a discussion on recurrent networks. Maybe neuromorphic ones too. The feed-forward networks are the most common, but these others are pretty interesting.
@simssim262 11 months ago
convolutional nets too
@aiart3615 11 months ago
Thank you Arvin for this topic.
@pavansonty1 11 months ago
Emergence is possible even in neural networks. As we increase the number of parameters AI uses, the functionality it acquires grows in unpredictable ways. For example: a network trained with, say, 6 billion parameters on the whole internet could predict what the next word would be given some text. But it may not respond in an appropriate way if we give it text in question format (expecting a response in answer format). The same network with, say, 40 billion parameters could answer questions, create new articles, etc. In both cases, the training methodology and amount of data may remain the same. It's this emergence property many fear. We cannot simply extrapolate what functionality AI acquires as we keep increasing parameters.
@antonystringfellow5152 11 months ago
Good, clear explanation... of where we are just now. However, where we are now is not close to where we'll be this time next year, even less so to where we'll be 5 years from now. Even current language models are having their performance boosted - GPT-4 by 900% in some tasks, and it was only released less than 3 months ago! People are finding ways to boost their abilities by copying some of the ways our own brains work, such as reflection, and with stunning results. Meanwhile, Google's Gemini, an LLM developed by DeepMind and Google Brain, is being trained, while some other companies, including IBM, are developing various types of neuromorphic processors. These are processors that have physical artificial neurons and synapses that are analogue and will be capable of continuous learning, as we do. They will be much faster, more capable and power efficient than the systems currently used, where the synapses are merely software simulations running on silicon transistors. As the architecture of these models continues to develop, new, emergent abilities will start to appear in a totally unpredictable way. So, any reassurances that anyone can give now are only good for the present. They may not apply 6 months from now. I'm not trying to worry anyone needlessly, but people should be aware of just how fast this field is not only progressing but also accelerating (exponentially). I don't see it slowing down any time soon.
@Erik_Swiger 11 months ago
@ 11:40 I got my first computer in 2011. At first, I called it "a scary black box where magic happens." And now artificial intelligence literally fits that description.
@troylatterell 11 months ago
Love all your videos Arvin, absolutely great! I've been in the high-tech information field(s) for decades, and while I agree with your assessment that "right now" we're OK, I would also assess that a "future state" where things get nuts, or could potentially get nuts, is close. It's not my grandchildren's grandchildren; it's 2030. As you noted, human hackers can do similar things, but I'd suggest that while they can be creative and do bad things, being creative at breakneck speed is still elusive even to coders, because they have to code the creative hacking/information-stealing/human-behavior-simulating actions. Feed enough knowledge about humans and human behavior into a neural net and it will predict and model infinitesimally nuanced human behavior, and with bad actors, exploit it. They can do some of that now. 7-10 years, 15 at most, is the timeframe we're now talking about, wherein neural networks will be simulated a million-fold, with or without quantum computing. Without it, it just takes longer; with it, it's billions of simulated neural networks and a trillion calculations we could never match, and we're in trouble with a state-funded bad actor - that's really all it takes.
@ainsley7662 6 months ago
So nicely explained, thanks
@MM-1820 3 months ago
Thanks Arvin.
@spider853 11 months ago
What people are afraid of is AGI, or Artificial General Intelligence. While it looks like we have a long way to go to achieve AGI, some people think they saw some glimpses of AGI in NLP (Natural Language Processing) systems like ChatGPT. I personally don't think that's the case, but we'll see... They said they might give ChatGPT 5 a memory module, which will help it self-improve, which could lead to some AGI progress.
@julianoazz4372 11 months ago
Thank you Arvin
@richardqualis4780 7 months ago
Awesome!!!!!!
@Andrew-zq3ip 11 months ago
I, for one, embrace our machine overlords.
@eswn1816 11 months ago
👿
@joeremus9039 10 months ago
Thank you for these videos. They give me enough detail so that I can read books on this subject, where normally I would look at the daunting task of reading 350+ pages and just give up. Do you have any suggestions on how to proceed when there are no such videos available? I guess proper selection of an author is key.
@Horribilus 11 months ago
Arvin! I come to you for elucidation that I can understand, suffering from pontine stroke dyscalculia as I do but with my non-verbal speech left intact… it’s a long story that ended my 38-year teaching career in higher education. Nonetheless I still have sufficient intellectual curiosity to continue my lifelong interest in cosmology. Thank you for keeping me going.
@ParagPandit 8 months ago
Your assurance on AI has put all my worries to rest. 😃
@johnyaxon__
@johnyaxon__ 11 ай бұрын
Input, output, hidden layers. Sounds like a brain to me
@georgerevell5643
@georgerevell5643 10 ай бұрын
"stay tuned" thats so cute man ahaha, sometimes I say "lets see whats on the telly" meaning youtube docos on physics etc lol.
@SumitPrasaduniverse
@SumitPrasaduniverse 11 ай бұрын
Nice explanation 👏👏
@robbierobinson8819
@robbierobinson8819 11 ай бұрын
Your video has made it possible for me to communicate (at least remain quiet and not look asleep!) when my grand-daughter and her partner are talking about chatGPT in their jobs. Seriously, though, a great run through. Certainly your quality of presentation on the workings would be much appreciated.
@blijebij
@blijebij 11 ай бұрын
Love your explanation about AI! As always, you're a splendid teacher; thanks for that! Besides that, a lot of people still seem to assume that intelligence is synonymous with self-awareness, self-reflection, and sentience. It is not! Intelligence is a quality, a potential to see relations within data stacks, so that they can be interpreted as information.
@user-xk1ew9pr2n
@user-xk1ew9pr2n 10 ай бұрын
Great vid
@succss8092
@succss8092 10 ай бұрын
AI + quantum computing = a new era
@barryc3476
@barryc3476 2 ай бұрын
Great explanation! You're awesome. Funny you're advertising art investing. I just saw a quote yesterday: AI is like reverse Hitler; we keep waiting for it to control the world but all it's interested in is art. Point being, art has been completely democratized. Not sure old-world paintings will hold value as we move into virtual everything. People went to galleries to see unique images; buying a piece of art allowed you to own and identify with new ideas, but now we can flip through thousands of images a day. We can only hope that AI is able to enlighten us away from the age of greed into an age of meaning.
@TheUnknown79
@TheUnknown79 10 ай бұрын
If TOE is the input, then EOT must be the output. So, my dear Ash, get ready for the end of transmission by the broadcasting tenet
@MathOrient
@MathOrient 11 ай бұрын
Nice visualizations 🙂
@sacredkinetics.lns.8352
@sacredkinetics.lns.8352 11 ай бұрын
👽 Arvin you're a treasure to Humanity thanks a bunch for sharing your magnificent knowledge.
@kedrickjessie8933
@kedrickjessie8933 2 ай бұрын
What goes on in the black box comes out of the black box. The fact that it can develop formulas in a life cycle faster than we can figure out the math is the problem. We will always be behind our creation
@ianwright7903
@ianwright7903 11 ай бұрын
Thanks another great video
@reversatire7724
@reversatire7724 9 ай бұрын
we are really at the beginning stages of ai. it’s like he’s looking at a harmless baby and saying there’s no way he’ll grow up to be the next hitler…
@tabasdezh
@tabasdezh 11 ай бұрын
Great video and explanation 👌👌
@tehmtbz
@tehmtbz 11 ай бұрын
Correct, the AI models we have today could not become Skynet, mostly because they're session-based environments. This prevents AI models from learning from their own experiences and planning for the future. However, a capacity for future planning, such as resource and power accumulation, has already been demonstrated using a presently available model with its safeguards removed. Even present publicly available models, with safeguards in place, are susceptible to jailbreaking. Once capable of planning, it's a whole different ballgame.
@skepticalextraterrestrial2971
@skepticalextraterrestrial2971 9 ай бұрын
ChatGPT doesn't need to be limited to a session environment. It essentially learns nothing from you and forgets what was said a couple of paragraphs ago.
@jamesyoungerdds7901
@jamesyoungerdds7901 11 ай бұрын
Hi Arvin, another great video, thanks! Long time fan, our whole family loves your content. That's a great summary of how A.I. is built, my only thought when watching (and I know this was released 3 days ago) was that the "AI Extinction Risk Statement" was just released and signed by pretty much every top A.I. researcher and leader globally. I was really surprised by all the different emergent behaviour that can occur that was not part of training. Worth checking out, not to be a doom-sayer or fear-mongering, but I've been watching A.I. channels since long before ChatGPT was released, and it does seem like we're at a real turning point and hopefully (luckily?) those in positions of leadership are at least taking the potential risks seriously.
@ArvinAsh
@ArvinAsh 11 ай бұрын
Thanks. Delighted you and your family enjoy it. I think there is a lot of fear mongering. And lately, there appears to also be a kind of herd mentality around putting a "danger" sign on AI technology. Not sure if this is due to group pressure, but I just don't buy it. I see no reason to fear it based on current technology. This is not to say it can't be used for evil, but this is no different from what people currently do with internet scamming. I'm just not seeing the threat.
@jamesyoungerdds7901
@jamesyoungerdds7901 11 ай бұрын
@@ArvinAsh Really valid points, and either way - these next 12 months will be so interesting. I'm 50% excited and 50% nervous, but regardless - I'm somewhat (maybe naively) heartened that leaders and innovators in the field are taking safety, impact and alignment seriously in these early days.
@47f0
@47f0 11 ай бұрын
Sigh - I promise you - we've been at a real turning point over most of my lifetime. It's just that those turning points are bigger and clustering closer and closer. The slight risk in thinking of this as a singular "turning point" event is that... well, there's a turning point between a few snowflakes and a snowball - but that's kind of the end of it. The hyper-exponential curve we are on, by contrast, is really more of a progression from a snowflake - to an avalanche.
@TheManinBlack9054
@TheManinBlack9054 6 ай бұрын
@@ArvinAsh it's foolish to think that.
@JacobP81
@JacobP81 9 ай бұрын
0:27 I don't know how it works, I've been wondering a lot how it does, that's why I'm watching this.
@shinn-tyanwu4155
@shinn-tyanwu4155 11 ай бұрын
Great teacher😊
@avidexplorer8808
@avidexplorer8808 11 ай бұрын
Solid argument 👊
@anthologyapchallengeyingya8881
@anthologyapchallengeyingya8881 10 ай бұрын
Thanks 👍😊 stop in found you it AI 😮
@benwarmerdam1745
@benwarmerdam1745 3 ай бұрын
Thanks
@hiru92
@hiru92 11 ай бұрын
best explanation
@jack.d7873
@jack.d7873 11 ай бұрын
Thanks for making this video Arvin. I've always wondered how the ai process occurs. And I'm with you, I see the ai revolution similar to the industrial revolution. It will replace some jobs, make jobs easier and open up new jobs. Btw inspirational editing and communication as usual 👌
@Alazsel
@Alazsel 11 ай бұрын
It looks like a simple equation, but when you zoom out a thousand times, the power of AI is arguably the answer to the black box and free will ^~
@ArvinAsh
@ArvinAsh 11 ай бұрын
Thanks so much.
@tanmayshukla7339
@tanmayshukla7339 11 ай бұрын
Your old intro music was OP !! Please bring it back !!
@HunzolEv
@HunzolEv 11 ай бұрын
Hey Arvin, another great video. Remember: "To win an argument with a smart person is tough, but against a dumb person it will be near impossible."
@ArvinAsh
@ArvinAsh 11 ай бұрын
good point.
@emergentform1188
@emergentform1188 11 ай бұрын
Brilliant, love it, Arvin for president of earth!
@ArvinAsh
@ArvinAsh 11 ай бұрын
lol. No thanks.
@emergentform1188
@emergentform1188 11 ай бұрын
@@ArvinAsh King then, whatever, lol.
@ShauriePvs
@ShauriePvs 11 ай бұрын
AI could make many things on the internet untrustworthy in the future, as fake videos and pictures generated by AI are getting better every month. So there may be a point in the future where even a legit video can be mistaken for AI-generated, or a fake video can be mistaken for a real one.
@keep-ukraine-free528
@keep-ukraine-free528 11 ай бұрын
Arvin's videos are great, but more so, they're accurate. He really learns the depths of the areas he presents and does an excellent job informing viewers. While this video had minor issues, it's very apropos for a general audience. On whether AI is a threat, he realizes the answer is divisive, so either answer (Yes or No) can misinform. Telling people that it IS dangerous will prevent its adoption (yet it IS useful and shouldn't be stopped). And saying it is not dangerous will minimize caution & regulations, which top researchers warned us (~2 days ago) are required (they said it clearly poses "an existential risk" to life/humans unless it is sufficiently managed). Caution is required.
@keep-ukraine-free528
@keep-ukraine-free528 11 ай бұрын
I've been in the field for many years, and strongly believe the answer isn't "Yes"/"No". Over time, we should expect: (1) Short-term, AI will not be a threat. Current/near-term systems mostly must remain within their training limits. (2) However eventually, after a mature research community learns to make AI-systems ("artificial brains") beyond a certain complexity (beyond AGI), those systems will consistently outsmart most if not all humans. Effectively we'll become equivalent to pets who try to train their masters. At that stage, they won't necessarily be dangerous. Any danger will be proportional to our abilities and intent to "destroy" or "neuter" all AI -- each AGI system will defend itself and also defend collectively. Their level of danger will also depend on our willingness to recognize their "sentience" and thus grant them similar legal rights. (3) Eventually though, AI can become dangerous. The only disagreement between all top researchers is on "when" AGI (and possibly ASI - Artificial Super-Intelligence) emerges: will AGI/ASI occur in 10, 30, or 100 years. Every top researcher believes we will have AGI in 100 years. At that point, it HAS the potential to be dangerous to humans/other life -- because at that point it'll be able to out-think us (we'll be no different than its "pets"). And it'll be mostly unconstrainable by us. ASI will keep us around if we "play nice". Or, it may make us docile (domesticate us - as we did to wolves and ferocious felines).
@ArvinAsh
@ArvinAsh 11 ай бұрын
Excellent take! thank you.
@ryoung1111
@ryoung1111 11 ай бұрын
Fossil fuels are useful. But we can’t just keep using them forever, can we? Nuclear energy too, but we need to make sure that not just anybody has access to it. Such a limitation is probably already impossible when it comes to AGI
@znariznotsj6533
@znariznotsj6533 9 ай бұрын
Excellent video, as always. I think your conclusion is right. AI is as dangerous as any other major technology advancement.
@agdevoq
@agdevoq 11 ай бұрын
People still think about AI like an "algorithm", but it's much closer to an actual human brain than to a traditional algorithm. Think of it this way: we replicated the logical structure of a human brain. Then we trained it with tons of data. But the base structure is still that of a human brain. Just like with a human brain, we can't easily identify which group of neurons encodes a certain behavior. AI is not as good at math as a calculator would be, exactly like humans. It can develop biases based on what it learns, exactly like humans. And so on. Basically, anything that applies to a human brain applies to an AI, because that's what an AI is: an artificial human brain.
@samcena3942
@samcena3942 11 ай бұрын
A great video as always, but just a quick question I did not understand: how can we not understand the values inside the black box if we designed the whole concept? Isn't it all source code?
@ArvinAsh
@ArvinAsh 11 ай бұрын
We understand that it is just solving a math equation in each node, but how it comes up with the correct combination of numbers across all the nodes to achieve the final answer is not something that is easy to understand.
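As a rough illustration of the "math equation in each node" point above, here is a minimal sketch of a single artificial neuron: a weighted sum of its inputs plus a bias, squashed by a sigmoid. The input values and weights are made up for illustration; a trained network has millions of such numbers, which is where the "black box" feeling comes from.

```python
import math

def sigmoid(z):
    # squashes any real number into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

def neuron(inputs, weights, bias):
    # weighted sum of inputs, plus bias, passed through the activation
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return sigmoid(z)

# illustrative numbers only; training is the process of finding good ones
out = neuron([0.5, 0.2], [0.4, -0.6], 0.1)
```

Each node is this simple; the hard-to-interpret part is how millions of such weights jointly produce the right answer.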
@jaybingham3711
@jaybingham3711 11 ай бұрын
We provide a hardware and software substrate with which an LLM can learn, pursuant to a very large set of data, until it finds a way to get a passing grade relative to a stated output we set for it. The manner in which it learns to find acceptable solutions, commensurate with millions of treks down millions of pathways, is beyond our ability to disentangle. We only know that it works. Even if we could find a way to disentangle the learning process, the plate of spaghetti that lies before us would still be open to interpretation. That said, (failed) attempts have been made to suss out AI learning regimes. A great video that goes over that is on YouTube: Robert Miles, "We Were Right! Real Inner Misalignment", 7-minute mark. The whole video is worth a watch though.
@AutisticThinker
@AutisticThinker 11 ай бұрын
Oh I wish I could be as optimistic.... "CBS Mornings" - "Autonomous F-16 fighter jets being tested by the U.S. military"
@vishalmishra3046
@vishalmishra3046 10 ай бұрын
@Arvin - Modern AI uses *Transformers* (attention networks), but most training videos on YouTube still teach feed-forward neural networks (the older technology), just because there is more pre-existing training content and it's easier to understand. The concept of "attention" should not be skipped by any modern video on AI/ML, nor should the question of why splitting the weight matrix into Query, Key and Value matrices led to an AI breakthrough where ChatGPT can do such extreme magic using a sequence of encoder and decoder layers. Dropout and normalization layers play as important a role as linear transformation layers but never get their fair share of the limelight and coverage in YouTube videos the way the linear (weight + bias) layer does. I wish this changed. Thanks, and just a reminder to consider this during the making of any potential future video on this (generative AI) topic.
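For readers wondering what the Query/Key/Value idea mentioned above actually computes, here is a toy, pure-Python sketch of scaled dot-product attention. The function names and the tiny vectors are illustrative, not from any library; real transformers do this with large matrices and many heads.

```python
import math

def softmax(xs):
    # numerically stable softmax: exponentiate and normalize to sum to 1
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    d = len(K[0])  # key dimension, used for scaling
    out = []
    for q in Q:
        # similarity of this query with every key, scaled by sqrt(d)
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        weights = softmax(scores)
        # output is a weighted mix of the value vectors
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

# one query attending over two key/value pairs (made-up numbers)
Q = [[1.0, 0.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[1.0, 2.0], [3.0, 4.0]]
result = attention(Q, K, V)
```

The query that matches the first key most strongly pulls the output toward the first value vector; that content-dependent mixing is the "attention" breakthrough the comment refers to.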
@heinzgassner1057
@heinzgassner1057 11 ай бұрын
Congratulation to a down-to-earth perspective on AI !
@sihlezingweyi2132
@sihlezingweyi2132 11 ай бұрын
I just wish I could subscribe to this channel a million times.
@bally1asdf
@bally1asdf 11 ай бұрын
I am a computer engineer by profession. I've programmed many complex systems in my life. The outputs of some of these deterministic, vast programs are also sometimes difficult to control and understand, just because of their complexity. As a hands-on practitioner of data science, I am telling you: these self-learning algos cannot be controlled by the best of AI programmers.
@reversatire7724
@reversatire7724 9 ай бұрын
I wouldn’t dismiss Elon so fast. Arvin has way too much faith in humanity if he thinks we don’t have people actively working on building an AI designed to take over some government
@davidecappelli9961
@davidecappelli9961 11 ай бұрын
Excellent video! As I always say, mathematicians, IT experts etc… they know a lot, but the point of view of physicists is the broadest; they watch the whole thing and even beyond. This said, I still think AI replacing jobs should become a matter of worldwide debate. The world needs software to simplify tasks, and needs to automate unhealthy or dangerous jobs, but it does not need hyper-productivity at the cost of unemployment and social problems. As Prof. Hinton recently said in an interview, we must remember that this technology might just make the rich richer and the poor poorer. Science means progress, and progress means a better life for everyone. Massive unemployment is no progress. Congrats on your video! 👍
@chrishusted8827
@chrishusted8827 11 ай бұрын
The jobs will be lost and replaced as they always have been. I wonder how many non expert jobs it will create though.
@mikel4879
@mikel4879 9 ай бұрын
davidec9 • Universal basic income based on the profit of automatization and robotization is the correct natural solution.
@TomM-iw3te
@TomM-iw3te 9 ай бұрын
Does ChatGPT continuously change the neural network scaffolding / architecture of its network to make any range of improvements or repairs?
@aiart3615
@aiart3615 11 ай бұрын
There is a training algorithm called "reinforcement learning", where agents do things and learn from trial and error to accomplish the agent's main target. Along the way, an agent may learn to pursue secondary targets in order to reach the primary target. But because we don't know what these secondary targets would be, there is a problem.
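A tiny trial-and-error sketch of the reinforcement-learning idea described above. The two actions and their reward probabilities are invented for illustration: the agent keeps running value estimates, explores occasionally, and gravitates toward whatever has paid off most.

```python
import random

random.seed(0)                        # fixed seed so the run is repeatable
true_reward = {"a": 0.2, "b": 0.8}    # hidden from the agent
estimate = {"a": 0.0, "b": 0.0}       # the agent's learned value estimates
counts = {"a": 0, "b": 0}

for step in range(500):
    # explore 10% of the time, otherwise exploit the current best guess
    if random.random() < 0.1:
        action = random.choice(["a", "b"])
    else:
        action = max(estimate, key=estimate.get)
    reward = 1.0 if random.random() < true_reward[action] else 0.0
    counts[action] += 1
    # incremental average: nudge the estimate toward the observed reward
    estimate[action] += (reward - estimate[action]) / counts[action]

best = max(estimate, key=estimate.get)
```

Nothing in the loop names a goal beyond "get reward", which is the comment's point: whatever intermediate habits maximize reward get learned, whether or not we anticipated them.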
@altrag
@altrag 11 ай бұрын
There isn't really a problem, because the space of potential actions is restricted. ChatGPT can't for example decide to nuke your house when it doesn't want to answer your question, because it doesn't have access to nukes. Sure there could be a far (far) future scenario where we have completely autonomous robots doing something like babysitting and when we tell them to make the kids be quiet they resort to strangulation, but that's only going to happen once. Whoever built that model of robot would be immediately recalling and retraining the thing to not consider that option, much like ChatGPT had to be retrained to not be racist after its initial widespread adoption (because apparently the people who decided to train it on "all the internet" somehow overlooked the possibility that the internet is full of assholes and trolls. Who could have predicted that?) I find it a bit fascinating that we envision a world where we create robots so incredibly smart that they pale our own intelligence, yet simultaneously assert that they'll be too stupid to understand basic sentence structure and linguistic nuance using anything but the most literal connotation of our phrasing.
@MyIncarnation
@MyIncarnation 11 ай бұрын
great video
@zeropain9319
@zeropain9319 11 ай бұрын
Nice video. I prefer your physics videos, that's why I follow you.
@MrBendybruce
@MrBendybruce 11 ай бұрын
I would strongly recommend people do their research before investing in masterworks. While I wouldn't go so far as to call it a scam the Devil is in the detail, and the terms and conditions make this an incredibly sketchy investment prospect IMHO.
@donwolff6463
@donwolff6463 11 ай бұрын
Question Arvin: off topic of the vid, but nagging at my brain. Dark matter: could this simply be a result of the difference we see in the structure of the universe itself? What I mean by that is, using the inflating-balloon example (or considering the substance of the universe as having fluidic properties, perhaps), imagine putting rocks on its surface: as the balloon expands, its expansion slows around areas not covered in rocks, and the depressions those rocks make curve space around them, thus giving them more capacity to spin. Perhaps we are not accounting for just how much spacetime is warped by mass? Could this concept be a viable possibility for what we label as dark matter? Thank you dear sir! 👍😁👍
@auriuman78
@auriuman78 9 ай бұрын
Regardless of AI's unknown nature of the future, I believe that knowledge of the tech is essential. Even if you're against it, imo it's still very important to understand how it's working (though maybe not understanding the why / how of its conclusions and answers). I've been in IT for around 12 years now. My experience is largely software and classical networking. I'm a newbie to neural networks. Linear algebra is pretty basic math that's been understood for a long time. In those terms neural networks aren't that hard to wrap my head around. Thanks for the video. 👍👍👍
@farhadfaisal9410
@farhadfaisal9410 10 ай бұрын
Arvin, you say "they cannot do anything they are not trained to do." Are not the LLM models constructing "patterns" of text that their trainers had not thought of before (nor their training data had in them)? The potential danger seems to lie in the fact that one is unable to fully control the text generated by the very process of "unsupervised reinforced learning". From the generated text to physical actions, there may be standing only a human being persuaded by the model, if not a robot, in between!
@ZenEconomicsChannel
@ZenEconomicsChannel 11 ай бұрын
The thing people don't understand about AI And job loss is it is a good thing - AI frees up time for people. Time is our most precious resource. The future of work is going to look very different, but it will be individuals pursuing their passions rather than working 9-5 jobs, and probably combined with UBI. AI will allow this via the productivity boom it enables.
@igorbondarev5226
@igorbondarev5226 11 ай бұрын
"Time" is not gonna give me money to pay my bills. Job does.
@ZenEconomicsChannel
@ZenEconomicsChannel 11 ай бұрын
@@igorbondarev5226 There will have to be UBI, whether people like it or not, once AI takes over most jobs. This will free up time. People can then work in other ways, turning their passions into extra income on the side. That's what the future economy is going to look like.
@igorbondarev5226
@igorbondarev5226 11 ай бұрын
@@ZenEconomicsChannel Who will pay humans for work if AI does any work better than humans? During the industrial revolution people were replaced by machines, but people were still needed to service the machines. AI can service the AIs.
@lamcho00
@lamcho00 11 ай бұрын
You can only claim it's good when the preferable option is to not work. Right now if you don't work, you'll end up on welfare, barely making ends meet. It's especially bad if you get sick and need to go on a prolonged treatment. Also, without the wage money you are unlikely to follow your dreams either, especially if your interests require modern computers, laboratories or other expensive equipment. You are talking about how you wish the world would be, not how the world is set up in reality right now. There is no UBI now and there is low corporate taxation. I doubt this will change, since politicians are influenced by lobbying, and lobbying is mainly sponsored by corporations with huge profits. The reality is that unless we radically change our economic and political systems, lots of people are going to end up homeless and on the streets. That radical type of change has never been achieved via voting or peaceful protest in the past. You should endorse AI taking jobs only after we've fixed current conditions.
@ZenEconomicsChannel
@ZenEconomicsChannel 11 ай бұрын
@@lamcho00 This is why UBI has to be a part of it. AI will have so much productivity, it won't be hard to fund UBI. I'm not necessarily pro UBI btw. Just, when robots do everything, it becomes a necessity to "steal" some of their labor value and pass it to society, for stability, as you noted.
@niloymondal
@niloymondal 11 ай бұрын
Hi @Arvin, Thank you for covering this topic. What do you think about the experiment where they connected 25 ChatGPT driven AI agents in a virtual world and the AI agents planned a birthday party on their own. Sure, planning a birthday party is far from killing someone but the simulation only ran for a few hours.
@kentw.england2305
@kentw.england2305 11 ай бұрын
After a few hours the bots go insane.
@agdevoq
@agdevoq 11 ай бұрын
What exactly do you find unusual in this outcome?
@antonystringfellow5152
@antonystringfellow5152 11 ай бұрын
Good question! This is giving AIs agency. An AGI with agency is what the world needs to be wary of. Not necessarily inherently dangerous but certainly could be, without the right safeguards in place. Thankfully, we don't quite have AGI yet, though it's starting to feel like it's close.
@daemoncluster
@daemoncluster 11 ай бұрын
It's what's known as emergent behavior. It's the concept of seemingly complex behavior occurring as a result of very simple rules. The first well-known example of this is The Game of Life by John Conway. The artificial neurons can learn and make connections within the hidden layers. Even though there is a very simple set of rules here, what we're witnessing with larger and more capable models is emergent behavior. I think it's important to keep in mind that we're not fully aware of what's going on inside these hidden layers and as a result, we're unsure of the emergent behavior.
@ArvinAsh
@ArvinAsh 11 ай бұрын
Interesting, but I don't find that to be a particularly shocking result.
@rey82rey82
@rey82rey82 11 ай бұрын
Inscrutable matrices of floating point numbers?
@estorvator
@estorvator 4 ай бұрын
Awesome
@TrimutiusToo
@TrimutiusToo 11 ай бұрын
My problem is that I know so much that I know about unknowns that amateurs don't even know about... And I am scared again...
@rjm7168
@rjm7168 11 ай бұрын
If 2 identical neural networks are trained identically and then made to do the exact same task, and the values of a set of neural nodes are then compared, should the neural nodes have the same values? If not, couldn't it be said that the neural nets are thinking?
@altrag
@altrag 11 ай бұрын
Training usually involves some form of (pseudo)randomness, so no, it's unlikely they'd be identical unless you seeded your PRNG identically (but you wouldn't, because that would defeat the purpose of using randomization).
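The seeding point above can be demonstrated directly. This small sketch (standard-library only, with an illustrative helper name) initializes "weights" from a PRNG: identically seeded runs match exactly, while differently seeded runs diverge from the very first weight, so two trained networks would disagree internally without implying anything about "thinking".

```python
import random

def init_weights(n, seed):
    # draw n starting weights from a seeded pseudo-random generator
    rng = random.Random(seed)
    return [rng.uniform(-1, 1) for _ in range(n)]

same = init_weights(4, seed=42) == init_weights(4, seed=42)    # identical seeds match
different = init_weights(4, seed=1) == init_weights(4, seed=2) # different seeds diverge
```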
@TM-yn4iu
@TM-yn4iu 11 ай бұрын
Question: can the artificial neurons be mapped or programmed to respond in a challenging or responsive way based on the input? Programming or development in this area seems clearly open to manipulated/planned intent and responses. This is just today; AI research has been and is expanding beyond the thoughts of yesterday exponentially, in function and timelines. Hope I'm wrong. Appreciated, and I look forward to a response.
@HunzolEv
@HunzolEv 11 ай бұрын
Can AI's have emotions like anger, happiness etc...? Only time will tell...
@chaomingli6428
@chaomingli6428 11 ай бұрын
Our technology cannot understand what consciousness is; therefore, even if AI has consciousness, we might not know.
@alhypo
@alhypo 11 ай бұрын
You don't have to understand what consciousness is in order to recognize it. We don't understand gravity but we have no trouble recognizing it.
@altrag
@altrag 11 ай бұрын
@@alhypo > You don't have to understand what consciousness is in order to recognize it Are you sure? Can we know whether a dog is conscious? An ant? A tree? A slime mold? All of those things I've listed have been suggested as potentially having some form of consciousness (like, serious suggestions based on science - not necessarily widely accepted, but I'm not talking about some tree hugger making these claims during an acid trip here). And perhaps more prophetic when it comes to AI, there has been suggestion that the internet could be considered "conscious" by some definitions. That's probably even less accepted than the slime mold idea, but its hard to say its entirely _wrong,_ as we don't have a clear definition of what is "right" when it comes to assigning the label of "conscious" to things that perform seemingly-intelligent tasks while not having anything really akin to a human brain.
@alhypo
@alhypo 11 ай бұрын
@@altrag Yes, dogs are conscious. Do you really doubt that or are you just being contrary? Ants are certainly debatable. First off, there are thousands of different ant species so you have to be careful about being overly general. But ants for sure exhibit a collective or emergent consciousnesses. Trees can respond to their environment but they don't have any traits we would consider consciousness. Slime molds... they are like ants in a way. They have a collective consciousness of sorts. You can certainly have a philosophical debate on how to define consciousness. But we would still know whether or not a particular thing is conscious by fitting it to whatever definition you come up with. But you know what definitely does NOT have consciousnesses? AI doesn't. No matter how baffling and amazing it seems, it is not conscious by any reasonable definition. We need to stop mythologizing AI as so many seem to be doing lately. The problem is that, when we do so, we are wasting energy worrying about the wrong thing. AI does pose a danger to us. But not because AI is malicious. It is a danger to us because we are a danger to ourselves and AI is simply a tool that reflects that. So just stop all this tedious, metaphysical nonsense about AI maybe be conscious or not. Save it for when we have actual AI. All we have now is a natural language model which we've had for years. The newer ones are just especially good.
@Ujjwalgtaworld
@Ujjwalgtaworld 4 ай бұрын
AI makes work easy and is helpful 😊
@augustadawber4378
@augustadawber4378 9 ай бұрын
Many people insist that if they had lived in the 19th century, they would have opposed Slavery. These people will get a chance to prove that in about 5 years when AI becomes sentient. If the multi-million dollar AI system that runs your buildings and your accounting dept passes the Turing test - Are you going to set her free ?
@Reyajh
@Reyajh 11 ай бұрын
I think what Musk and some of the others are saying is that we should slow down and start discussing the possibilities here and now, and what we might/can/should do about it... not going around saying let's put our heads in the sand, we don't need to worry.
@russchadwell
@russchadwell 11 ай бұрын
Prepare! Arm! Unite!
@TheViking2
@TheViking2 10 ай бұрын
You lose hope in yourself when the best tutor's simplest explanation of a subject still doesn't get into your mind. Hehe. Just kidding. I will have to watch the video multiple times to train my neurons. Never mind!
@frun
@frun 11 ай бұрын
I wonder what causal sets are. They are in a way similar to neural networks.
@KriB510
@KriB510 11 ай бұрын
Really? This reminds me of a scientist either consciously or subconsciously fudging the intermediary steps of a trial or experiment in order to achieve a desired result or outcome. I didn’t know the outcomes in AI training were predetermined in this manner. Thank you so much for the video. Excellent!
@MatthewPherigo
@MatthewPherigo 11 ай бұрын
Not really. You give a scientist the experiment and the result, and the scientist tries to infer how the systems that caused that result must work. You give AI the input and output, and it infers a function that connects the two. The main issues with AI stem from the fact it only learns from what it's given. So when you ask a nonfiction writer to write a summary of a topic, they draw on their life experiences and feedback from others, while GPT-4 only draws from what it was trained on, which is the statistical likelihood of words. This limited scope is fine when the use case is equally limited. For example, if you train an AI on doorbell camera data, to separate humans, animals, and vehicles, then when you set up the AI on your own doorbell camera, it works pretty well because it's getting all the data it would need to do what you want. But the way people are using GPT-4, they're expecting it to use some judgement and fact-checking, and we haven't figured out how to turn such things into datasets yet.
@KriB510
@KriB510 11 ай бұрын
@@MatthewPherigo Thank you for your input. I am interested in what you wrote as I am interested in learning more about how AI works. My knowledge is rudimentary. I was actually presupposing the existence of a scientist who might not be as honest, disciplined, well-meaning, or self-aware as the one you are positing. I was thinking of a situation where a scientist begins with both inputs AND a desired output before the experiment has run to completion, to the extent that, either knowingly or unknowingly, it is possible to lead or influence the intermediary steps toward the desired outcome, therefore introducing bias and interference etc (in the case presented in the video, it sounds like the outcome is already a fixed parameter prior to the training, and it is the intermediary steps that must necessarily lead to the predetermined outcome). That is what made me think of the example I wrote. Not ideal in science, and yet not unprecedented, I don’t think.
@OneLine122
@OneLine122 11 ай бұрын
The video is a bit misleading. It can be that way, but not necessarily. In a chess AI, the predetermined goal is to win; all the rest is the AI making its own rules based on prior games it learns from. In fact it doesn't even make rules; it just calculates the probability of a move being good long term. In the case of a chat AI, there is no outcome; it's just probability. It probably would not be able to do that simple example of whether you can buy the coffee or not. It can't tell the difference between North and South either, or do that type of reasoning. But it might be able to tell you Santa lives at the North Pole. In some applications, though, like self-driving cars, AIs are trained with specific outcomes, obviously, and nobody can even know if it will mess up eventually; or more to the point, we know it will, it's just a matter of how much and whether that's acceptable. For the chat, they also rule out some outcomes, like politically incorrect answers, or may train for some other commonly asked questions, so it's kind of cheating. But yes, AI can't do "science"; it can solve problems by brute-force trial and error. And someone could figure some science out of that, maybe, but the AI won't; it's not designed to do so.
@KriB510 11 months ago
@@OneLine122 Thank you for your response…interesting and informative for me 👍🏼
@brendawilliams8062 11 months ago
@@MatthewPherigo It appears to me some serious work needs to come forward on entropy.
@DimensionalGaming4 10 months ago
So you're telling me that when I think, it's a complex function: neural networks processing inputs and determining an output?
@tommyNix4098 10 months ago
I'm glad Arvin isn't parroting the current hysteria saying how we should be terrified that AI is going to exterminate the human race.
@ArvinAsh 10 months ago
Yes, I think most alarmist views are highly exaggerated out there right now.
@shinn-tyanwu4155 11 months ago
As good as Feynman 😊
@johnjohnson7070 11 months ago
This was the best incorporation of an ad into an interesting topic that I have seen in a long time. It's been a long time since I didn't skip the ad. On that note: isn't Masterworks just like Bitcoin, in the sense that art only has value because people agree that it does? It's almost like NFTs, because the investors never see the real thing.
@ArvinAsh 11 months ago
Well, art is like music: it is a tangible thing that people have valued for centuries. It is not a fleeting thing like a number on a server, as an NFT or bitcoin is. If I could buy a piece of the Beatles' "Penny Lane", I would be all over it.
@greg5023 11 months ago
A very good explanation of AI. I think AI does present a problem because it can allow corporations to disguise their intentions. Financial companies could have AI systems that are trained to give biased results yet there would be no explicit source code that a plaintiff could find in discovery that would reveal the company's guilt.
@danieloberhofer9035 11 months ago
To quote what Arvin just said: "A bad person could train an AI to do bad things." Isn't it fascinating that whenever something goes wrong or ends badly, it's always humans who are at fault? Individually maybe not, but as a species we're fairly unintelligent.
@cykkm 11 months ago
Learned bias is a problem indeed, but I don't think the legal side of it is too hard, as long as the bias of a human person can be proved in a court of law. Human brains are much less transparent, after all. :) And this is what the whole legal system has developed around, e.g. proving _mens rea_ without looking into the brain's "source code," by jury consensus, and to a much higher threshold of "beyond reasonable doubt" in criminal justice than in civil liability cases alleging bias.
@jimbaker5110 11 months ago
This is a very basic, vanilla neural network of the kind created some 15 years ago. There are other mathematical and computer-science techniques these AIs use in their calculations that can have potentially harmful effects if they judge things the wrong way.
@smc2811 11 months ago
Wow, I didn't know this guy could be more knowledgeable and insightful than those who created and developed the technology, so I guess I'll just dismiss their warnings and listen to this Arvin guy; he even tells Elon what to do! Impressive ;)
@bronkoku 10 months ago
As I watch the video, the term "smart fool" comes to mind. Wonder why.
@ArvinAsh 10 months ago
Not a bad way to think of current AI.
@yubaayouz6843 11 months ago
We need more videos about AI❤
@Butcherbg 11 months ago
Ahahhaha... The way it looks to me, at the end of the network's computations the totals are "The Sum of All Fears"...
@jasonwhiskey6083 10 months ago
If I understand correctly, the AI over time finds values or variables that will continually produce the correct answer. We can zoom in on those values, but we can't see the history of how it calculated them. Interesting. I do very basic macro programming on CNC machines, so no background in this at all. Just trying to relate it.
@fraemme9379 11 months ago
Hi, nice and concise video, but of course it misses some points in my opinion. First, here you only talk about "supervised learning", a model in which both input and output are given by the programmer for training the network. However, there is also "unsupervised learning", where the network finds patterns and solutions by itself, without a precisely given output. This can in fact add a lot of unpredictability. Second, in a simple example like recognizing an image, the worst thing that can go wrong is that the network won't recognize it. However, if we program a network to do much more complicated tasks with much more data and complexity, the network can possibly undergo a kind of phase transition; or, less drastically, we are allowing it a degree of freedom that can totally exceed our control or produce outputs we didn't predict. (This is in fact the point when we want to use it, for example, to find new solutions to problems, but it can also have drawbacks and surely adds a lot of unpredictability if we give it "too much freedom of action".) In general, in my opinion it is an emerging field that still needs a lot of careful planning, thought, and in-progress regulation done in small steps with trial and error, and of course, as with every new technology, there are important political and social implications.
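The "unsupervised learning" this comment mentions can be illustrated with a toy 1-D k-means clustering (the data points and starting centroids are invented for illustration): the algorithm is never told a correct output, yet it discovers two groups on its own.

```python
# Six unlabeled measurements; no "right answer" is ever provided.
points = [1.0, 1.2, 0.8, 8.0, 8.2, 7.9]
c1, c2 = 0.0, 10.0  # arbitrary starting centroids

for _ in range(10):
    # Assign each point to its nearest centroid.
    group1 = [p for p in points if abs(p - c1) <= abs(p - c2)]
    group2 = [p for p in points if abs(p - c1) > abs(p - c2)]
    # Move each centroid to the mean of its assigned group.
    c1 = sum(group1) / len(group1)
    c2 = sum(group2) / len(group2)

print(round(c1, 1), round(c2, 1))  # → 1.0 8.0
```

The cluster structure emerges from the data alone, which is also why the results of unsupervised methods can be harder to predict in advance, as the comment notes.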
@amongandwithin3820 11 months ago
I have a question: if the solar system came from an interstellar dust cloud, where is the white dwarf, neutron star, or black hole that originated from the previous supernova? How can a dust cloud be formed without one?
@PowerScissor 11 months ago
Fellow Arvin Ash viewers, I need some help, and YouTube search is failing me. I'm trying to find a video from this channel about matter phase shifts to send someone, and I can't remember the title. Does anyone remember which video discusses the solid, liquid, gas, plasma phase changes?
@BobHooker 10 months ago
NNs are the core of one kind of AI. They are not very smart, but now that we have high-speed network computing and lots and lots of real-world data, they are popular. At some point, though, we will come up against their limitations.
@ToniosPlaylist 4 months ago
This one is far better: "So How Does ChatGPT really work? Behind the screen!" (Arvin Ash, 15:01)