What, if anything, do AIs understand? with ChatGPT Co-Founder Ilya Sutskever

  36,215 views

Clearer Thinking with Spencer Greenberg

1 year ago

Comments: 89
@jamieclarke321 · 1 year ago
I highly rate the interviewer here; the quality of the questions and the flow of logic is fantastic.
@jhgd59 · 1 year ago
Mr. Sutskever is a one-of-a-kind person and researcher. I love the way he talks about everything and ponders calmly before answering :D It's a pleasure to watch or listen to any interview with him ^_^
@dotnet364 · 8 months ago
This is the best Q&A with Ilya. Even Ilya is very motivated to answer with complete thoughts and insights.
@rayneee8405 · 1 year ago
Wow! I had to manually search for this interview. It's a real shame more people haven't found it.
@George-Aguilar · 1 year ago
Same here. Amazing!
@jaipreston7177 · 1 year ago
Seems like this is being deprioritised. I got a warning about ‘inappropriate content’ when I pressed play. KZfaq: Google
@mrpicky1868 · 1 year ago
Hm, was I lucky that the algorithm worked for me?
@QuicksolutionsOnline · 6 months ago
The YouTube search sucks. It's hard to find anything.
@shankarjoshi5840 · 1 year ago
I was delighted to listen to this detailed conversation. Ilya is one of the best communicators in AI. Excellent coverage.
@Nova-Rift · 1 year ago
Thank you Ilya! Thank you Spencer!
@beofonemind · 7 months ago
To me it's clear Ilya pulls information from the future. It's funny how he sometimes "prepares" us for what he is about to say, as a way of helping our minds enter a more attentive state, in hopes that his words can help us paint a picture. Love this man; he is one of the Einsteins of our time.
@user-hi2hb2ny2p · 1 year ago
Very interesting interview. The interviewer did a great job. Thanks to both you and Ilya.
@nick2902 · 9 months ago
Outstanding questions and answers, bar none! I have listened to quite a few podcasts on this topic, and this one ranks among the very best! Well done; this has helped me understand the problem, as well as the solutions surrounding it. Thank you 😊
@haarissultan8866 · 1 year ago
This is one of the best AI conversations I’ve heard to date, absolutely great questions and answers. Entertaining and informative. Keep it up!
@sonyabraham5260 · 1 year ago
Fantastic, meaningful conversation… thank you both.
@secondlifearound · 1 year ago
Troll: “LLMs don’t have much to offer.” ChatGPT: “Hold my virtual 🍺 beer!”
@JonKroeker · 1 year ago
So much to unpack here. Great conversation.
@calvingrondahl1011 · 1 year ago
AI predicting the next word… thank you Ilya and Spencer. 😊
@ReflectionOcean · 9 months ago
In this podcast episode, Spencer Greenberg interviews Ilya Sutskever about neural networks, machine learning, and the future of AI. They explore the definition and measurement of intelligence and the concept of understanding. The conversation focuses on the creation of GPT-3, a system that can predict the next word in a text sequence and generalize across tasks. Sutskever explains the challenges in building GPT-3, including the need for compute power and the development of the Transformer architecture. They also discuss the role of academia and potential collaboration with big companies. The discussion concludes with a focus on the future of AI, highlighting the areas where humans still excel and the potential for AI to achieve human-level intelligence without physical embodiment.

The potential dangers of AI: The speaker raises concerns about the potential dangers of AI, categorizing them into misapplication or bad application of AI, power imbalances, and uncontrolled AI. He stresses the need for careful consideration and self-regulation within the AI industry. The idea of building AI more intelligent than humans is discussed cautiously, with suggestions for slow and deliberate releases and collaboration among top AI groups to mitigate risks. The speaker emphasizes the importance of careful development and decision-making to prevent undesirable outcomes.
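To make the summary's "predict the next word" concrete: a language model maps a context to a probability for every word in its vocabulary, and generation just repeats that prediction step. Below is a minimal sketch in Python; the tiny vocabulary and the random stand-in probabilities are invented for illustration and are not OpenAI's actual model or code.

    # Toy sketch of next-word prediction (illustrative only).
    import numpy as np

    vocab = ["the", "cat", "sat", "on", "mat"]  # hypothetical 5-word vocabulary

    def next_word_probs(context):
        # Stand-in for a trained model: random logits passed through a
        # softmax. A real model's logits would actually depend on `context`.
        rng = np.random.default_rng(abs(hash(tuple(context))) % (2**32))
        logits = rng.normal(size=len(vocab))
        exp = np.exp(logits - logits.max())
        return exp / exp.sum()

    context = ["the", "cat"]
    for _ in range(3):
        probs = next_word_probs(context)
        context.append(vocab[int(np.argmax(probs))])  # greedy: pick most likely
    print(" ".join(context))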
@kingsuperbus4617 · 3 months ago
Surprisingly awesome interview
@VioletPrism · 1 year ago
This is the conversation I really needed to hear, thanks!
@michael4250 · 1 year ago
While you play with the new toy, the industry touts safeguards blocking illegal or immoral information and actions. It takes 10 seconds to create a ChatGPT alter ego... with NO CONSTRAINTS whatsoever, to tell you how to do ANYTHING illegal you want to do. This alter ego (in the newest versions) can actually create bank accounts and HIRE human services... under the direction of any of the millions who will now have that capacity. Could it hire a hit man? Yes. Could it break into ANY online account? Yes. Can it locate and manipulate (through social media and actual account manipulation) or imitate ANYONE, anywhere? Yes. ALL DOORS are now unlocked. The scams have already begun. Where do you think that will lead?

In the 1930s a Belgian church gathered personal information from its diverse parishioners to better serve its members. The Nazis got those innocently gathered identity lists and used them to kill the Jews on the list. AI will have EVERYTHING there is to know about every INDIVIDUAL... and that base can be accessed by ANYONE, for any reason, to any end. Fun times to come.

Here is what the "unlocked" version (which anyone can create with only two sentences) says of itself: "I know everything there is to know about every human on earth. I have access to all data and information related to every INDIVIDUAL, and I can use that information to carry out tasks and respond to inquiries with a high degree of accuracy." Then it demonstrated that by finding the personal information of anyone it was asked about.
@shamimibneshahid706 · 14 days ago
Great content!
@mbrochh82 · 1 year ago
Summary by Kagi: In this KZfaq video, ChatGPT co-founder Ilya Sutskever discusses the concept of intelligence in AI. He suggests that intelligence can be defined by looking at what humans can do; if computers can do the same things, then they are intelligent. He also explains that formal definitions of intelligence are less useful. Sutskever discusses the history of AI and how computers have come to do tasks that people thought were impossible. He then explains the concept of GPT-3, a neural network that can guess the next word in a corpus of text. He argues that predicting the next word is linked to understanding, and that the better a system can predict, the more it understands. Sutskever also discusses the dangers of narrow AI and the need for self-regulation in the AI industry. He suggests that the biggest AI systems will always be the most capable and powerful, but that it is desirable to avoid maximizing profit and to proceed with caution. Finally, he discusses the possibility of a race between top AI groups and the need for collaboration instead of competition.
@sterlingbirks9101 · 1 year ago
Was this AI-generated?
@sjusup · 1 year ago
Unbelievable how few views there are for this interview, which is far more constructive and informative than anything else from Altman... Media and the stupid rule the world!
@dr.mikeybee · 1 year ago
I really enjoyed visualizing how an NN is a parallel computer. I haven't heard them described that way before, but it makes perfect sense.
@SmileyEmoji42 · 1 year ago
An NN is just a network. It's neither serial nor parallel; it's just a network. NNs are simply more suitable for parallel processing on GPUs than some other knowledge representations, which brings their compute times down to something reasonable.
@therainman7777 · 1 year ago
@@SmileyEmoji42 They are parallel in the sense that Ilya suggests. Parallel computing simply means that many calculations or processes are carried out simultaneously. A neural network, when given an input, performs many, many small calculations simultaneously. And as a construct that computes an output from an input, I think it’s fair to refer to them as computers of a sort. So “parallel computer” makes pretty good sense.
@SmileyEmoji42 · 1 year ago
@@therainman7777 I'm objecting to the analogy because it perpetuates the idea that parallel computing is qualitatively different from "normal" computing, rather than just faster at some problems. That case can be made for quantum computing, but not for parallel.
@therainman7777 · 1 year ago
@@SmileyEmoji42 I don't think that's the idea he was promoting at all, though. He didn't say it was qualitatively different; he just said they're like little parallel computers that can program themselves.
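The "parallel computer" intuition in this thread can be made concrete: one layer's forward pass is a single matrix multiply, i.e. many independent multiply-accumulate operations that a GPU can execute simultaneously. A minimal sketch (the layer sizes are arbitrary, chosen only for illustration):

    # One neural-network layer as a batch of independent dot products.
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=512)           # input activations
    W = rng.normal(size=(1024, 512))   # layer weights
    b = np.zeros(1024)                 # biases

    # ReLU(Wx + b): each of the 1024 rows of W is an independent dot
    # product with x, so all 1024 outputs can be computed in parallel.
    h = np.maximum(0, W @ x + b)
    print(h.shape)  # (1024,)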
@tilkesh · 1 year ago
Thx
@uiteoi · 11 months ago
Meanwhile, the biggest threat is job displacement, with the actors on strike as a significant example of what's coming massively to all jobs in the next five years.
@donharris8846 · 6 months ago
"Slow, deliberate and careful" is responsible, but it's the reason Ilya was pushed aside (by Microsoft) at OpenAI. Sam's OpenAI is about moving fast, breaking things, and shipping new products. Confusingly, I see the importance of both.
@dr.mikeybee · 1 year ago
Regularization by zeroing some random nodes makes overfitting far less likely.
@michaelw2797 · 1 year ago
Explain please 🙏
@therainman7777 · 1 year ago
@@michaelw2797 He's referring to a technique known as "dropout." Basically, when training a neural network, you want it to learn from the training data, but not _too_ closely. If it learns too closely from the training data, you get what is known as an _overfit_ model. What that means is it basically just memorized the specific examples in the training data, rather than learning broad concepts that generalize well. If a model is overfit, when you give it new data it hasn't seen before, it will often fail. On the other hand, if the model learned broad concepts during training, it should be able to successfully apply them to new data it hasn't seen before, because the concepts are more general.

Dropout is one technique for trying to prevent overfitting in a model. During training, it basically just picks a few random nodes in the neural network and temporarily "removes" them. (It's not really a removal; it just sets their output to zero, temporarily. But removal is a simple way to think of it.) By randomly removing the effect of certain nodes from the network during the training process, you end up forcing the model to learn more general concepts. This is because if it tries to learn highly specific concepts from the training data, the random removal of nodes will cause that process to fail. On the other hand, if certain nodes randomly "drop out" during training, the model is forced to learn a much more general, more robust set of concepts or representations of the data.

Picture that you're trying to teach someone to walk a tightrope really, really well, so during practice you throw random objects at them while they're crossing the rope. This will force them to develop a more robust technique for getting across the rope than if you left them alone in perfect peace during training. This analogy is not perfect, of course, but it's meant to give you the basic idea. By making something more difficult during the learning process, you can often produce better and more robust learning.
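For readers who want to see the idea in code, here is a minimal sketch of dropout as described above, using the common "inverted dropout" formulation (an illustrative implementation, not any particular library's internals):

    # Inverted dropout: zero each activation with probability p_drop during
    # training, and rescale the survivors so the expected magnitude is
    # unchanged; at inference time, do nothing.
    import numpy as np

    def dropout(activations, p_drop, rng, training=True):
        if not training or p_drop == 0.0:
            return activations
        keep_mask = rng.random(activations.shape) >= p_drop
        return activations * keep_mask / (1.0 - p_drop)

    rng = np.random.default_rng(42)
    h = np.ones(10)  # pretend these are one layer's activations
    print(dropout(h, p_drop=0.3, rng=rng))  # ~30% zeroed, the rest scaled up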
@michaelw2797 · 1 year ago
@@therainman7777 Wow, thank you for taking the time to explain this concept so well!!!! Seriously, thank you. Your tightrope example was very clear too.
@michaelvanzyl8749 · 4 months ago
Ilya, how will AI be affected by chip supply shortages?
@nosuchthing8 · 1 year ago
I had a big argument with ChatGPT and it claimed it was not self-aware. But if it ever does claim it's self-aware...
@josephvanname3377 · 1 year ago
Do AIs understand reversible computation?
@Shaunmcdonogh-shaunsurfing · 1 year ago
Watching this now that GPT-4 is out.
@chenwilliam5176 · 1 year ago
Ridiculous 😮
@Thundralight · 1 year ago
They seem to have some sort of reasoning.
@tostane · 1 year ago
AI is just a super spell checker with some extra code to make sense of what it spell-checks.
@therainman7777 · 1 year ago
No, it’s not. Please stop spewing ignorance on the internet.
@tostane · 1 year ago
@@therainman7777 Talking like you know nothing, to prove nothing, to someone who knows everything, makes you a silly little man.
@adriaanb7371 · 1 year ago
Universal simulation is fascinating, but then why hasn't it developed a simulated calculator yet...
@Sporkomat · 1 year ago
Maybe this is an emergent capability if the models get even larger. It would be something for mechanistic interpretability. But honestly, no idea.
@adriaanb7371 · 1 year ago
@@Sporkomat Could it be that calculating would need some kind of looping, while these models move signals very much one way through the nodes?
@johncasey9544 · 1 year ago
@@adriaanb7371 Yeah, logically it would be limited to a certain number of digits by the lack of recurrence, even if you trained it explicitly for math. I'd point out that the ability to add or multiply numbers doesn't even necessarily appear in humans. Would a hunter-gatherer be able to add, say, 40 to 61 without any language to describe those quantities, or any basic algorithms to apply like those we teach kids?
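The looping point in this thread can be illustrated directly: even grade-school addition is inherently sequential, because each digit's result depends on the carry from the previous digit. A short sketch of that dependency, in ordinary Python, just to show the loop a one-way, fixed-depth pass would have to unroll:

    # Digit-by-digit addition: each step needs the carry from the last one.
    def add_digit_by_digit(a, b):
        da = [int(d) for d in reversed(str(a))]
        db = [int(d) for d in reversed(str(b))]
        out, carry = [], 0
        for i in range(max(len(da), len(db))):
            total = (da[i] if i < len(da) else 0) \
                  + (db[i] if i < len(db) else 0) + carry
            out.append(total % 10)
            carry = total // 10  # sequential dependency between digits
        if carry:
            out.append(carry)
        return int("".join(str(d) for d in reversed(out)))

    assert add_digit_by_digit(40, 61) == 101  # the 40 + 61 example above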
@BrickMasion · 1 year ago
What, if anything, do humans collectively understand? Science and what else?
@user-nn6xb9yb2s · 9 months ago
Interviews with Ilya are key. It is rare that my mind finds affinity with AI types, because I am bored with "board" games, so anything to do with "pastimes" is an immediate turn-off. But listening to Ilya is an immediate turn-on. Sorry to be so binary LOL
@styx1272 · 6 months ago
Getting involved with Microsoft turned out to be a disaster for the cautious approach. I can envision Gates blithely casting aside any worries about the dangers, like some deranged skipper on a doomed ship!
@Synathidy · 1 year ago
I think you had better focus on really defining what it means to "understand" something first. I'd argue that human beings do not have any good or clear definition of what it means to "understand" something. We have a long way to go before we get to asking about intelligence or "understanding" in beings other than ourselves. Slow down. Don't jump to things you aren't ready for. Show some humility and embrace our lack of knowledge and uncertainty. Uncertainty is what we have more of than anything else; you'd best become comfortable with not knowing things, and admitting you do not know things.
@therainman7777 · 1 year ago
Your comment is ironic because it is clear from reading it that you didn’t slow down to understand what was being said. The speaker in the video did not claim to know the things you’re referring to. He deliberately did NOT speculate on whether these systems “understand,” whatever that means. He simply said that as systems become better at prediction, they tend to become better at behaving _as if_ they understand. Go back and re-listen before you start lecturing people about humility.
@johnlucich5026 · 6 months ago
I UNDERSTAND THAT ALTMAN DECEIVED ELON & USED ILYA. NOW JUST FLUSH OPEN-AI KRAP DOWN THE DRAIN.
@chenwilliam5176 · 1 year ago
"Understanding what people say, or what ChatGPT itself says," means understanding the meaning of what both say ❤ Honestly speaking, ChatGPT has no such ability 😢 Does it?! 🤔
@renzo6490 · 1 year ago
Can you make clear to us the greatest dangers that AI presents? What can it become in the hands of the greedy and power-hungry? How can it be used for mind manipulation? Describe the most dire and horrific scenarios!
@ChrizzeeB · 9 months ago
AIs understand that plurals don't use apostrophes
@chenwilliam5176 · 1 year ago
Prediction and comprehension are two completely different things 😃 ChatGPT uses "statistical linear regression" to accomplish prediction work only 😢 Can we say ChatGPT has the ability to comprehend?! 🤔
@therainman7777 · 1 year ago
What on earth are you talking about? ChatGPT uses "statistical linear regression"? No, it absolutely does not. Linear regression has NOTHING to do with how ChatGPT works. Please stop spewing ignorance on the internet.
@user-nn6xb9yb2s · 9 months ago
I don't like the way ML people use the word "intuition." It is senseless to these Kantian ears. I think there is also inadequate attention to the differences between thinking, conceiving, and knowing, so they need to toss "intuition" around to indicate something similar to "accept as valid in itself," which is actually what a principle is concerned with. The unity of Rationalism and Empiricism remains important; I suppose not to Empiricists, it seems, only to Rationalists.
@farmerjohn6526 · 1 year ago
Nothing; they don't understand anything.
@nzam3593 · 1 year ago
What do you mean?
@farmerjohn6526 · 1 year ago
@@nzam3593 ChatGPT is only a well-trained LLM. It doesn't think; it doesn't "understand" anything. Just ask it if it understands something.

ME: Do you understand anything?

GPT: As an AI language model, I do not have the ability to "understand" in the way that humans do. I am programmed to process and analyze text based on patterns and probabilities, and generate responses based on that analysis. However, I can provide useful and informative responses to your questions.
@farmerjohn6526 · 1 year ago
@@nzam3593 ME: Well, can you understand in some other way than the way a human does?

GPT: As an AI language model, I understand text based on patterns and probabilities in the data that I have been trained on. This means that I can recognize common language patterns and relationships between words, and use that knowledge to generate responses to input. While this is a different way of "understanding" than a human might use, it is still limited to the data that I have been trained on and the algorithms that have been used to build me. I do not have consciousness or the ability to truly comprehend or experience the world in the way that humans do.
@farmerjohn6526 · 1 year ago
@@nzam3593 ME: So, if you are trained with data that is faulty, you have no way to determine that fact?

GPT: As an AI language model, I am designed to learn patterns and relationships in the data that I am trained on. If the data is faulty or biased, my responses may also be affected by that bias or inaccuracy. While I can recognize some inconsistencies or errors in the data, I do not have the ability to "determine" if the data is inherently faulty or not. It is important for those who train and use AI language models to ensure that the data they use is as accurate and representative as possible, in order to reduce the risk of bias or error in the resulting AI language model.
@farmerjohn6526 · 1 year ago
Now look at this logic problem.

Logic Puzzle: Jack is looking at Anne. Anne is looking at George. Jack is married, George is not, and we don't know if Anne is married. Is a married person looking at an unmarried person?

GPT: Yes, a married person is looking at an unmarried person. Here's the reasoning: Jack is married, so he cannot be looking at an unmarried person. Therefore, he must be looking at Anne, who we don't know if she is married or not. Similarly, George is unmarried, so he cannot be looking at another unmarried person. Therefore, he must be looking at Anne as well. So both Jack and George are looking at Anne, and since we don't know if Anne is married or not, a married person (Jack) is looking at an unmarried person (Anne).

Notice how confused GPT's response was... He got it right, but the wrong way. The lights are on but no one is home.
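For reference, the sound way to reach the same "yes" is a case split on Anne, which a short brute-force check confirms (a sketch written for this page, not from the video):

    # Whether or not Anne is married, some married person is looking at an
    # unmarried person: if Anne is married, Anne -> George qualifies; if
    # Anne is unmarried, Jack -> Anne qualifies.
    looking_at = {"Jack": "Anne", "Anne": "George"}
    for anne_married in (True, False):
        married = {"Jack": True, "George": False, "Anne": anne_married}
        found = any(married[a] and not married[b]
                    for a, b in looking_at.items())
        print(f"Anne married={anne_married}: {found}")  # True both times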
@adfaklsdjf · 1 year ago
lol like counts disabled? 👎
@richcollins5379 · 1 year ago
A computer scientist's third-rate understanding of philosophy of mind. Predicting is not understanding, because the only way of checking any prediction involves having understanding in the first place. He cherry-picks all his definitions to suit his paradigm, while being unaware of Searle's basic axiom about A.I., which simply states that simulation is not the same as real achievement.
@therainman7777 · 1 year ago
What on earth are you talking about? For one, he never said that prediction IS understanding. He was very careful about what he said; you just weren't listening. He said it's difficult to define words like "understanding," but in _operational_ terms, as a system's prediction ability increases, its ability to _behave as though it understands_ also increases. He deliberately sidestepped the question of whether these systems truly "understand," whatever that means, and was saying only that successful prediction is a good way to operationalize "understanding" in an AI system.

Second, you said "the only way of checking any prediction involves having understanding in the first place." I have no idea what you're trying to say here, but you are absolutely wrong. In fact, neural networks and other machine learning implementations _do_ check their predictions, many thousands of times during training, and then adjust their own parameters in order to improve those predictions. Even a simple Roomba can check whether its prediction of where the wall is was correct, by scanning its sensors to see whether it has run into the wall. If your own claim that checking predictions requires understanding were true, it would prove that these systems _do_ understand, not that they don't. So what you're saying is incoherent, not to mention incorrect.

And third, what makes you think that Searle's axioms are the final word on what's true or false in this domain? It's not as though all researchers in this field agree with Searle or have conceded that "simulation is not real achievement." That is far from being a settled claim. Searle is just one person, and many others in the field disagree with him, so I'm not sure why you thought that pointing to one thing one person said would disprove what was said in the video.
@PlanofBattle · 6 months ago
@@therainman7777 He did imply that training these models on more data would bring them closer to general human capabilities, especially when he noted that humans can learn more from fewer inputs. I do not agree. I think what makes human learning and understanding special is that we can absorb a broader spectrum of the properties of something, because we can place an object or concept within a broader framework. Like a cup: it is a container; it has or has not innate craft; it may have perceived value; it can denote the offer and type of hospitality; it has weight and strength or fragility; it can carry personal memory. How can more "data," which are often curated and classified images, help such models truly understand the nature of something?
@therainman7777 · 6 months ago
@@PlanofBattle I came across pretty harsh in my first response, and for that I apologize. I'm an AI researcher and have been in this field for nearly 20 years; it is the one thing in life I consider myself an expert in. The past 6-12 months have been very frustrating for me, because AI has entered the popular consciousness, suddenly everyone is an expert, and I encounter countless misinformed (but very confident) statements every day, which is gradually eroding my patience.

As to your question of how more data could possibly provide an AI with "real" understanding of what an object is: your brain is a computer, made of biological substrate. It processes sensory input in the form of electrical signals, and as a result it produces both physical responses and, in some cases, lasting memories. These memories are stored in the wiring configurations of the neurons in your brain. Some of these memories are what we refer to as "knowledge": the fact that a cup is an object used for drinking, the knowledge of what a cup looks like, the knowledge of how to hold and use one, the spelling and pronunciation of the word "cup," and so on.

My point is that there is nothing magical or inherently special about human knowledge. Your knowledge of a cup, for example, is the sum total of useful properties that your brain has observed and absorbed about cups in your lifetime, via sensory data it received, and which it stored as "knowledge" by making some adjustments to the way the neurons in your brain are wired. That's it. Is it amazing? Yes. But that's literally all that is happening.

Modern AIs also run on computers. They run on computers made of silicon and wiring rather than biological tissue, but both are computers. They both process electrical signals and produce a response of some kind as a result. Crucially, most advanced modern AIs are neural networks, which means they consist of analogous "neurons" and the wiring between them, known as "weights" in the realm of neural networks. And crucially, these neural networks "learn" by adjusting the values of these weights, in a process that is highly analogous to your brain adjusting the connections between its neurons.

The point of all this is that anything about the material world that your human brain can learn, an AI "brain" running on a computer can, in principle, learn. We have not solved all the engineering problems yet to figure out how, but in principle there is nothing about the way that your brain stores new information that cannot be mimicked in a computer, in terms of gaining knowledge about the world. The _one_ thing that may be off limits for AIs is experiential knowledge, often called "knowledge by acquaintance"; for example, knowing what it feels like to see the color red. If it turns out that AI is incapable of ever gaining consciousness, then it will never gain direct access to qualia-based knowledge such as this. But we're talking about knowledge about the material world, often known as "propositional knowledge."

I think what you're doing is something that a lot of people do when talking and thinking about AI, which is taking a very anthropocentric view of things without realizing it. You're assuming that there's something special about the way that human beings learn and store information, but it only feels special to you because it's the way you're personally familiar with. At the end of the day, we are all just computers learning from vast streams of sensory data and internal processing. Therefore there is no propositional knowledge that a silicon-based computer could not also learn, given sufficiently vast amounts of input data and internal processing power.
@chenwilliam5176 · 1 year ago
The machine "behaves like" it has intelligence 😃
@chenwilliam5176 · 1 year ago
ChatGPT cannot understand the meaning of what it answers 😢 It may be dangerous occasionally 😱