
Mathematician debunks AI intelligence | Edward Frenkel and Lex Fridman

195,178 views

Lex Clips

A year ago

Lex Fridman Podcast full episode: • Edward Frenkel: Realit...
Please support this podcast by checking out our sponsors:
- House of Macadamias: houseofmacadam... and use code LEX to get 20% off your first order
- Shopify: shopify.com/lex to get free trial
- ExpressVPN: expressvpn.com... to get 3 months free
GUEST BIO:
Edward Frenkel is a mathematician at UC Berkeley working on the interface of mathematics and quantum physics. He is the author of Love and Math: The Heart of Hidden Reality.
PODCAST INFO:
Podcast website: lexfridman.com...
Apple Podcasts: apple.co/2lwqZIr
Spotify: spoti.fi/2nEwCF8
RSS: lexfridman.com...
Full episodes playlist: • Lex Fridman Podcast
Clips playlist: • Lex Fridman Podcast Clips
SOCIAL:
- Twitter: / lexfridman
- LinkedIn: / lexfridman
- Facebook: / lexfridman
- Instagram: / lexfridman
- Medium: / lexfridman
- Reddit: / lexfridman
- Support on Patreon: / lexfridman

Comments: 897
@LexClips · a year ago
Full podcast episode: kzfaq.info/get/bejne/hdmYY5B7mLqWno0.html Lex Fridman podcast channel: kzfaq.info Guest bio: Edward Frenkel is a mathematician at UC Berkeley working on the interface of mathematics and quantum physics. He is the author of Love and Math: The Heart of Hidden Reality.
@quantum_ocean · a year ago
Terrible title, @lex: he's not talking about "AI" generally but about LLMs specifically.
@robertmartin2262 · a year ago
I look at it the opposite way: a large language model would never have assumed that the square root of a negative number is impossible...
@reellezahl · a year ago
Lex, your guest didn't even scratch the surface of the issue. I'll summarise his argument: - It took humans centuries to break the barriers. - I *don't think* that LLMs can do this. That's a god-of-the-gaps argument. LLMs *at the moment* are just performing (roughly) two functions: imitation and summarisation of all the discussions that humans have conducted (both in forums and documentation) on the internet for the past decades. There are other systems coming online soon that will put this linguistic mimicry to shame: *artificial reasoning* and experimenting. In part this is already being done. You don't even need to know much about this tech. As a kid I grew up hearing all these stories about *how special* so-and-so in Italy or England or wherever was. So hearing the same ol' tripe from this Russian mathematician made my eyes roll so hard. He did not bring anything new or substantial to this interview. Take √-1: there really is not anything more to this than: find an algebraic framework which extends the reals and solves X² + 1 = 0. Extensions of structures are _very_ common concepts. The fact that it took centuries for humankind to do this is not something to be in awe of but to be ashamed of. Give an AI a few such goals and it *will* come up with a suitable framework. I spent my life trying to show that all these ideas and results _can in principle_ be independently found *without* an Einstein/von Neumann/Gödel, etc. And it works. (The historical proof of this is that mathematical results often get proved _completely independently_ by multiple people.) Some ingredients are: necessity-is-the-mother-of-invention [or: discovery] + reflection (about concepts and connections you already know) + refinement of ideas + test cases. *THIS IS ALL STUFF YOU CAN AUTOMATE.*
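The √-1 move described here (extend the reals with a solution of X² + 1 = 0) can be made concrete in a few lines of Python, whose built-in complex type is exactly such an extension; the code is my own illustration, not something from the thread:

```python
import cmath

# Adjoin a symbol i with i**2 == -1; Python spells it 1j.
i = 1j

# The defining equation X**2 + 1 == 0 now has two solutions:
assert i**2 + 1 == 0
assert (-i)**2 + 1 == 0

# The reals embed in the extension with their arithmetic unchanged:
assert complex(2) * complex(3) == complex(6)

# sqrt(-17), undefined over the reals, is routine over the extension:
root = cmath.sqrt(-17)
assert abs(root**2 - (-17)) < 1e-9
```

Whether a model could *invent* this extension unprompted is of course the actual point of dispute; the sketch only shows how small the formal step is once stated as a goal.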
@quantum_ocean · a year ago
@@reellezahl it's a bit more than imitation and summaries. It's creating and maintaining representations, among other things.
@reellezahl · a year ago
@@quantum_ocean sure, that's why I wrote 'some ingredients'. I would like to add: anybody who is not undergoing an existential crisis at the moment has not reflected enough on how thinking, discovery, etc. work, and may even think it's all just magic. I think Frenkel has reflected on history a lot, but not enough on mechanical thinking, esp. the current 'organic' paradigms being implemented.
@psyfiles7351 · a year ago
This is now one of my favorite interviews. Love and math. What a guy
@FederatedConsciousness · a year ago
It was so good. Absolutely one of the best Lex has done. A conversation that pushes the boundaries of everything we know.
@Crusade777 · a year ago
Yet people say, "Where did he debunk A.I./general intelligence?"
@julioguardado · a year ago
Frenkel has to be the most interesting mathematician ever. The whole interview is tops.
@SinanAkkoyun · a year ago
He is a top-tier mathematician and explains things to the audience in a simple, digestible, but still mysterious manner. Giving 'simple' examples like sqrt(-1) to explain the emotional concept hiding behind them just brings joy to me; such a lovely person!
@timsmith2525 · a year ago
That's the sign of a true expert: He can explain things clearly to non-experts.
@JoshTheTechnoShaman · a year ago
You can tell this is how he wins the ladies 😂
@theecharmingbilly · a year ago
Yeah, we watched the video too.
@thewildfolk6849 · a year ago
Wow I not only totally get his point but simultaneously finally understood complex numbers from this. Fantastic guest, Lex
@jasonbowman9521 · a year ago
I don't know if it's exactly the same but I find I understand concepts better when shown. I study 3D computer art as a hobby. In order to let the computer help make certain textures and bump maps a person can use these things called nodes. They have texture nodes and geometry nodes and I think or tell myself I understand somewhat what those math formulas mean because I can see objects changing in real time every time a node is adjusted. The 3D program is free. It's called Blender. And I think if a mathematician could learn it they could figure out a way for everyday people to see certain things. I kind of get what a black hole is but I doubt I could chart out everything that is going on.
@leighedwards · a year ago
Where in this clip did Edward Frenkel debunk AI intelligence?
@zfloe · a year ago
Click bait sadly
@BillStrathearn · a year ago
The person that Lex hires to write titles for his KZfaq clips is truly the worst person
@falklumo · a year ago
Well, Frenkel indeed argues that LLMs won’t be able to show imagination like humans do. But AI and LLMs aren’t synonymous.
@timorantalainen3940 · a year ago
@@falklumo I don't think we have a single example of AI that did not involve teaching or creating a model. No space for imagination in the methods available today, and hence the clickbait is somewhat warranted in my opinion. Based on the fact that we can imagine and invent new rules (e.g., add another dimension to make room for complex numbers), it cannot be ruled out that a creative AI arises at some point, but we don't have an idea of how that might be achieved at the moment, as far as I know. The thing holding us back is the lack of understanding regarding how consciousness arises in the brain. Unless we understand that, we cannot manufacture such a system other than by accident. Please correct me if I'm wrong. I'd be curious to read up on machine intelligence methods that are not dependent on creating a model. Or even just a description of such a method in case we haven't yet managed to implement it.
@WralthChardiceVideo · a year ago
In the title of the video
@Gordin508 · a year ago
When going through education, consider yourself blessed if you got teachers/instructors/professors who are as passionate about their field as Frenkel is about math.
@Brian6587 · a year ago
I had one such teacher in high school and he turned something I hated to something I loved! It makes a difference!
@lacedmilk8586 · a year ago
Wow! This dude is absolutely passionate about math. There was pure joy in his eyes as he spoke.
@angelcastro3129 · a year ago
Edward Frenkel... a beautiful mind, but what a beautiful soul this man has. It shows through his eyes, wise yet childlike. Awesome. Thank you, Lex, great interview.
@dannygjk · a year ago
AI has functional intelligence, but it is not the same as human intelligence, just as birds do not fly in the same way that aircraft fly.
@aaronjennings8385 · a year ago
Interesting analogy. I'll remember that as an example.
@PabloVestory · a year ago
And humans have consciousness, whatever that means. Whether it's possible for AIs to sustain some kind of "real" (not "simulated") self-awareness is yet to be proven.
@dannygjk · a year ago
@@PabloVestory I think self-awareness in AI will be different from humans.
@ChristianIce · a year ago
@@PabloVestory AI only mimics intelligence; it could even mimic consciousness, but mimicking is the very foundation of how it works. The mimicking process can be extended and improved to the point it's indistinguishable from the real deal, but it will still be mimicking. AGI, on the other hand, is a different approach: it's the attempt to create an actual thinking machine. As Carmack said, the first iteration of AGI will probably look like a four-year-old kid, and you start from there.
@alexnorth3393 · a year ago
@@ChristianIce No they don't mimic intelligence
@steliostoulis1875 · a year ago
The title sounds awkward and wrong somehow....
@AutitsicDysexlia · a year ago
Yeah... almost redundant and repetitive... like a pleonasm.
@BKNeifert · a year ago
No, AI is debunked. It makes perfect sense.
@dannygjk · a year ago
@@BKNeifert Need to agree on definitions.
@BKNeifert · a year ago
@@dannygjk It's hard to say. Have you ever looked at AI? It doesn't think. It just repeats what it's programmed to say. It doesn't have the capacity to understand. Like, can it make beautiful pictures? Yes. But it doesn't make meaningful pictures.
@BKNeifert · a year ago
@@dannygjk Like, I doubt AI could understand the Romantic poets, or write something like Coleridge or Southey. If it tried, it'd be vapid, discordant. A lot of the metaphor AI creates is within the human mind itself, programming the AI to create it. It's not creating; the human is, and the AI interprets and then vomits out a sort of copy of what the person who gave the prompt said, only in more detail. And it also plagiarizes. I've noticed that, too.
@jjacky231 · a year ago
When I was a kid, many people explained why computers would never be as good at chess as the best humans. The explanations were similar to Edward Frenkel's: "There just is something that we get and computers won't get in chess / mathematics."
@agatastaniak7459 · a year ago
Back then we didn't know that a master chess player simply recalls 70 possible tactical combinations of a new move per minute. Now we know it, so it's more than obvious that it's all about the time within which someone or some device can perform such an operation at higher speed. The people you mention didn't have this knowledge, which is why we judge them too harshly nowadays.
@jjacky231 · a year ago
@@agatastaniak7459 Ray Kurzweil predicted back in the '80s that a computer would beat the chess world champion. He also roughly predicted this: "When a computer beats the world chess champion, one of three things will happen: people will think more highly of computers, less highly of themselves, or less highly of chess. My guess is the latter." He was right. Everybody knew that computers could compute faster than humans, that they would become even faster, and that software would become better and better. But people thought that wouldn't be enough. And I don't judge the people back then harshly; it was easy to underestimate the potential of computers. But I think it's not wise to make the same mistake again.
@amotriuc · a year ago
Edward Frenkel didn't say that this can never be done; he was specific that he thinks LLMs can't do it. I have a suspicion he is right, and I suspect the OpenAI guys know this as well; they just build up hype to get more funding. It's not like this is the first time that has happened.
@AKumar-co7oe · a year ago
@@agatastaniak7459 the same thing is true for regular computation - at this point we know we are running an algorithm
@heywrandom8924 · a year ago
@@amotriuc there is also an interview with the CEO of OpenAI on this channel, and he also looked doubtful that LLMs will be enough for AGI, but he says he wouldn't be too surprised if it turns out that GPT-7 or GPT-10 is an AGI. The thing is that these models have emergent capabilities that can suddenly appear after becoming large enough.
@dudicrous · a year ago
How mathematicians can be romantics
@splashmaker2 · a year ago
It might depend on how you define imagination, but then how do you categorize experts learning new moves from AlphaGo/Zero? Were those moves not imagined if they were not done before?
@josephsellers5978 · a month ago
The role of the subconscious is vital to any definition of imagination. It would be a stretch to say AI is capable of being conscious. It's even more improbable that anyone would be able to create a program that would manifest a subconscious in AI, as we don't even really understand how it truly works. A lot of imagination is also instinctual, especially subconsciously, and everyone knows you can't teach instincts.
@MrDarwhite · a year ago
He asserted it. There was no evidence provided.
@jarodgutierrez5389 · a year ago
You must be a fellow prompt engineer.
@MrDarwhite · a year ago
@@jarodgutierrez5389 I've been playing around, but my main issue is that of a person who follows the process of skeptical inquiry. He provided no evidence or even a logical argument. Nothing. He simply asserted that it was not possible. I'm not claiming it is, but I'm certainly not going to claim it's not possible, especially after playing with GPT-4 and its ability to reflect on its answers without any specific prompting. It seems trivial to me to have an AI system throw out prior assumptions one or two at a time and see what the results are. Not exactly imagination, but it would likely solve his example. Having said that, I wish I could call myself a prompt engineer. As a programmer, that level of expertise would be very valuable.
@MrDarwhite · a year ago
@@seventeeen29 will do. To be fair, he doesn’t provide the evidence in this clip, and the title of this clip is where I have the issue. He seems like a great guy and I enjoyed what he said.
@andrewshantz9136 · a year ago
He's making the point that complex numbers have a strictly conceptual meaning, which is not conceivable to an LLM because it is not extrapolatable from past knowledge.
@hardboiledaleks9012 · a year ago
@@andrewshantz9136 LLM, where the L stands for LANGUAGE, not MATHEMATICS... Wait until some bozo trains a mathematics or algebra model on the same level as GPT-4 and it will shit all over your world of fkin complex numbers... Humans really aren't as clever as they think.
@yarpenzigrin1893 · a year ago
LLMs are not AGI. However, if something exists in nature, like intelligence, it can be artificially replicated.
@hardboiledaleks9012 · a year ago
I have no idea why this basic concept is so hard for people to understand... It's almost like the smarter someone thinks they are, the harder it is for them to understand that they aren't special 😂 So pretentious
@uphillwalrus5164 · a year ago
Nature exists in intelligence
@yarpenzigrin1893 · a year ago
@@uphillwalrus5164 Nature exists in flight.
@Josh-cp4el · a year ago
Can plastic become titanium? There are physical limits of different materials in our universe.
@GroockG · a year ago
Maybe intelligence doesn't exist
@theTranscendentOnes · a year ago
Such a great guest! Thanks for bringing him on. He's eloquent, seems enthusiastic, with such affection for the topic, and is probably the kind of person I could sit down and talk about stuff with for a long time. I love his accent too! Adds "flavor".
@spacebunyip8979 · a year ago
I want to read this man’s ChatGPT history. I’m sure it would be fascinating
@5sharpthorns · a year ago
Omg right?!
@ChatGPT1111 · a year ago
Well, he has an affinity for Isaac Asimov, plus Rick and Morty, Jerry Springer shorts and Dilbert (fav is Dogbert).
@ronking5103 · a year ago
Probably not. It'd be him correcting the machine over and over, at least if he was attempting to plumb the depths of his expertise. The rest of it would amount to the machine being convincing enough to seem expert in a field, but only because the user isn't. It'll get better, but right now its purpose is not expertise; it's general information that we should all take as friendly if not necessarily accurate advice.
@shyshka_ · a year ago
ChatGPT at that level of expertise is useless
@Ronnypetson · a year ago
In order to search for new mathematical concepts, an LLM would have to be grounded not only in natural language but also in things like formal logic, as a mathematician is. Because natural language already carries some logic in it, current LLMs can already "create" new concepts.
@georglehner407 · a year ago
For "new" mathematics, that's not good enough either. It needs to be able to discard, forget, and boil down things it learned to distill the "most useful concepts". A mathematician that is good in formal logic and nothing else is still a poor mathematician.
@hayekianman · a year ago
then it would indeed be the stochastic parrot it is called. the mathematician knows what to discard
@Ronnypetson · a year ago
@@georglehner407 in this case there is some notion of value that good human mathematicians have. This notion may or may not be learned by an AI. Can you think of something like that?
@Ronnypetson · a year ago
@@hayekianman I agree with the stochastic part but not so much with the parrot one. We humans are stochastic too. The mathematician knowing what to discard can be emulated by a stochastic guided search, which has learned how much weight to put in each decision
@BitwiseMobile · a year ago
Incorrect. They don't create anything. They iterate over their already known knowledge. They cannot - yet - recognize they don't have the correct knowledge and try to improve themselves. That's called GAI - or general AI - and it's very scary. We are working on that. Generative AI is very different. The fact that you can game generative AI using prompts tells you everything you need to know. I have told it ridiculous stuff before and it happily agreed with me and proceeded to iterate over that bullsh!t. That's not cognition and it's not innovation. It might seem like that to us, but it's really just reflecting what you are saying to it. It's not innovating, you are.
@AndreaCalaon73 · a year ago
Dear Lex, I can't resist commenting on what Edward Frenkel says in this interview. He uses the discovery of complex numbers as an example of something that an artificial intelligence could not come up with. I think that example shows precisely the opposite. Let me first mention that since the late 1960s, mainly thanks to the work of David Hestenes, we have known what complex numbers are, and their intuitive and simple geometrical meaning, contrary to what E. Frenkel suggests. Geometric Algebra defines the "well-behaving" product in 3D, which exists in any dimension, not only in the 2, 4 and 8 that Frenkel mentions. You can look up "Geometric Algebra" yourself. I am well convinced that an AI with some "model-based reasoning" would have discovered the marvellous and beautifully symmetric structure of Geometric Algebra together with the few rules for 2D that Gauss and other mathematicians discovered centuries ago, when the story of the complex numbers originated. The absence of the structure of Geometric Algebra kept the simple significance of complex numbers (rotors) hidden and created the myth that Frenkel describes. In other words, an AI would not have been foolishly fascinated with the mysteriousness of the complex numbers, so incomplete and unjustified, because it would have arrived straight at the structure of Geometric Algebra, inside which complex numbers, quaternions, octonions, the vector product, rotation in any dimension, ... are all easily explained with a single product! Geometric Algebra impacts quantum mechanics, computer graphics, general relativity, ... Complex numbers are just rotors ... Have a nice weekend Lex! Well done, as always!!!!!
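To make the commenter's claim tangible: in the geometric algebra of the plane (orthonormal basis e1, e2 with e1² = e2² = 1), the bivector e1e2 squares to -1, and the even-grade elements a + b·e1e2 multiply exactly like complex numbers a + bi. A minimal sketch in Python; the 4-tuple representation and the function name are my own choices, not any standard library:

```python
def gp(u, v):
    """Geometric product in the 2D geometric algebra.

    Elements are 4-tuples (scalar, e1, e2, e12) with e1*e1 = e2*e2 = 1
    and e1*e2 = -e2*e1 = e12.
    """
    a1, x1, y1, b1 = u
    a2, x2, y2, b2 = v
    return (a1*a2 + x1*x2 + y1*y2 - b1*b2,   # scalar part
            a1*x2 + x1*a2 - y1*b2 + b1*y2,   # e1 part
            a1*y2 + y1*a2 + x1*b2 - b1*x2,   # e2 part
            a1*b2 + b1*a2 + x1*y2 - y1*x2)   # bivector (e12) part

e1, e2 = (0, 1, 0, 0), (0, 0, 1, 0)

# The unit bivector I = e1*e2 squares to -1: "sqrt(-1)" appears as a
# plain geometric object, with no mystery attached.
I = gp(e1, e2)
assert I == (0, 0, 0, 1)
assert gp(I, I) == (-1, 0, 0, 0)

# Even-grade elements a + b*e12 multiply exactly like a + b*i in C;
# for example (1 + 2i)(3 + 4i) = -5 + 10i:
assert gp((1, 0, 0, 2), (3, 0, 0, 4)) == (-5, 0, 0, 10)
```

This only illustrates the 2D case the comment leans on; whether an AI would find this structure on its own is the open question.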
@DingbatToast · a year ago
I agree. I don't believe an AI would get hung up on the same things (or in the same way) humans do.
@coolcat23 · a year ago
@@electrocademyofficial893 I believe Frenkel was rather justified in wanting to cite someone else to give weight to his view on AI, because in his heart of hearts he knows that human brains are not magical. If current AI implementations cannot make leaps yet, it is because they are not operating at meta levels yet. An AI "simply" has to know of an example of a leap in one field to be able to apply it to another field. Voilà, there's your leap that isn't possible by simply trying to extrapolate at the same level. We can romanticize about human intelligence all we want, the writing is on the wall: at some point in time, AI is going to outperform us in every mental capacity.
@GeekProdigyGuy · a year ago
​@@coolcat23 saying "at some point in time" isn't interesting. if it takes 1000 years, nobody alive today will care. the question is exactly how long it will take to achieve the necessary breakthroughs. no AI up until now has truly "invented" anything akin to what we ascribe to the greatest of human intellect.
@ivankaramasov · a year ago
​@@GeekProdigyGuyIt won't take 1000 years
@reellezahl · a year ago
@@GeekProdigyGuy it has. In physics, for example, AI has been used to come up with both experimental and theoretical results in short spans of time that would have taken decades or centuries for human beings. The problem with the academic world is the bullsh!t of ceremony and time-wasting nonsense: conferences, workshops, seminars, ... all of that is utter, utter bullsh!t. An AI does not need to attend any of that: just give it sense, raw computing power, and wire it right, and it will overtake any human being. Soon humanity will learn: there is NOTHING special about Einstein, or von Neumann, or Nash, or any of these people. Truth is inherently _discoverable_ and does not depend on being 'special' or some bullsh!t magic (and most definitely not on the crappy workshops that we waste so much time, money, and fuel on).
@The-KP · a year ago
"Everybody knows that the dice are loaded, everybody rolls with their fingers crossed"
@sunandablanc · a year ago
"Everybody knows the war is over, everybody knows the good guys lost"
@aaronjennings8385 · a year ago
The cavalry isn't coming.
@Gizziiusa · a year ago
"Everybody knows...Da' po' always bein' fucked ova by da' rich. Always have...Always will." Keith David, Platoon (1986)
@ChristianIce · a year ago
AI cannot come up with new ideas, but it can see patterns in a large set of data that we didn't notice. It's not an emergent property, it's an unexpected result. Given the impossibility for a human being to read and memorize said dataset, unexpected results are to be expected.
@katehamilton7240 · a year ago
IKR? I ask mathematician/coder AGI alarmists: "What about the fundamental limitations of algorithms, indeed of math? Have you investigated them? Won't these limitations make AGI impossible? Jaron Lanier admits AGI is a sci-fi fantasy he grew out of."
@MrSidney9 · a year ago
Wolfram told the story of him playing with GPT-3.5. He asked it to write a persuasive essay arguing that blue bears exist. So ChatGPT started with "Most people don't know this fact, but blue bears do exist... They are found in the Tibetan mountains... their color doesn't come from pigment; instead it comes from a phenomenon analogous to how butterflies produce colors..." Near the end of the essay, he was like, "Wait a minute, do blue bears actually exist?" He had to google it to make sure. Now tell me again that AI can't have imagination.
@leandroaraujo4201 · a year ago
I am not denying the idea that AI models can have imagination or emotion, but that story just means that the AI can be convincing, not necessarily imaginative.
@heinzditer7286 · a year ago
There is no reason to assume that a computer can have emotions.
@MrSidney9 · a year ago
@@leandroaraujo4201 How did it manage to be convincing? By MAKING UP plausible facts. That's what imagination is about.
@leandroaraujo4201 · a year ago
​@@MrSidney9 *It* managed to be convincing by arranging its ideas and using words in a certain way, in order to be persuasive. Those ideas could have come from imagination, but imagination is completely secondary to the ability to convince someone. You can convince someone of something false with facts (e.g. confusing correlation with causation).
@MrSidney9 · a year ago
@@leandroaraujo4201 My working definition of imagination is the faculty to create/conjure concepts of external objects not available in the real world. It did just that and managed to be convincing (a testament to the coherence of its imagination). Hence it proved it could be both convincing and imaginative.
@BEDLAMITE-5280ft. · a year ago
The "observed and observer" is a phrase coined by Jiddu Krishnamurti, then taken up by David Bohm and used in his description of quantum mechanics. I always find that fascinating.
@christopherrobbins0 · a year ago
What we know about consciousness already seems to prove that something fantastical lies beneath the surface of our current knowledge. The ancients seemed to understand this much better than we do now.
@MoversOnDutyUSA · a year ago
The square of -1 is equal to 1. In other words, (-1) multiplied by (-1) gives us 1. However, it is not possible to take the square root of -1 in the real number system. In order to represent the square root of -1, mathematicians use the imaginary unit "i", which is defined as the square root of -1. Therefore, the square root of -1 is represented as "i" in mathematics. So the square root of -1 can be written as √(-1) = i.
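This arithmetic can be checked directly in Python, where the `math` module works over the reals and `cmath` over the complex numbers (a quick illustration of the comment, nothing more):

```python
import math
import cmath

# (-1) multiplied by (-1) gives 1:
assert (-1) * (-1) == 1

# Over the reals, sqrt(-1) is undefined; math.sqrt rejects it:
try:
    math.sqrt(-1)
except ValueError:
    pass  # expected: no real square root of a negative number

# Over the complex numbers, sqrt(-1) is the imaginary unit i,
# which Python spells 1j:
assert abs(cmath.sqrt(-1) - 1j) < 1e-12
assert 1j * 1j == -1
```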
@stt5v2002 · a year ago
You could make a good argument that a machine intelligence would more easily embrace complex numbers than humans do. After all, humans are endlessly constrained by “that’s not allowed” or “that doesn’t make sense.” These are basically emotions. A program that can self improve would already have the quality of “there are some things that are true but that I don’t already know and understand.”
@Martinit0 · a year ago
I would not say emotions but rather false conclusions rooted in insufficient understanding of underlying assumptions.
@cnrspiller3549 · a year ago
I remember being taught imaginary and complex numbers, and I remember hearing my brain say, "That's it, I'm out of here". That was the point at which me'n'maths bifurcated. But I often reflected on the first maniac to pursue imaginary and complex numbers; what sort of lunatic does that? Now I know he was the same fella that invented the double cv joint - weird.
@abeidiot · a year ago
Funny, that was when I got back into math. I suck at arithmetic, but actual mathematics is fascinating.
@rokko_hates_japan · a year ago
I agree. I still think they are meaningless, just a substitute for things we cannot comprehend. They're used in formulas to reach a solution, but it seems one starts with the conclusion they want and fills in nonsense to get there.
@shyshka_ · a year ago
@@rokko_hates_japan how are they meaningless if they're literally used in engineering all the time and not just in theoretical maths
@Gizziiusa · a year ago
lol, kinda like how when you try to divide by zero with a calculator, it says ERROR.
@kingol4801 · a year ago
@@Gizziiusa Because that expression does not have meaning. They could have also written “infinity” or “undefined”. Would you be happy then? Since it is NOT a number when you divide by 0.
@alikazemi5491 · a year ago
For GAI to understand that sqrt(-1) could have real value, it's a matter of learning to do design of experiments, which means it will eventually construct it.
@nickr4957 · a year ago
I think that the creative spark that Frenkel is describing is what philosophers call abductive inference, as opposed to deductive and inductive inference.
@RetzyWilliams · 14 days ago
It should say "Mathematician Imagines He Debunks AI Intelligence"
@bananakuma · 2 months ago
Regarding the complex number point, you can just explicitly ask the "AGI" to probe things in mathematics that have historically been seen as impossible or unintuitive. Seems a very simple "fix" for an advanced LLM (with mathematical reasoning) to discover complex numbers etc.
@particleconfig.8935 · a year ago
In my opinion this argument starts off with the assumption that the LLM can't deduce the new way of thinking simply by means of the historical data of said mathematician who pondered sqrt(-17). It can deduce from even only that one instance that divergent "thinking" needs to be done. If I'm wrong, how?
@dolosdenada771 · a year ago
You are not wrong. He quotes Einstein suggesting imagination is unlimited. He then goes on to say he can't imagine AI solving X.
@jaydawgmac88 · a year ago
To summarize: LLMs predict the most common answers to a particular input. Solving complex problems requires imagination and predictions that go AGAINST the grain and the expected future. LLMs have to keep predicting the future based on the past.
@jaydawgmac88 · a year ago
I love the example of thinking about dimensions as powers of 2 and wondering why that's the case. Very powerful and inspiring example for anyone who wants to be a mathematician. He said so much in that one section. Does he mean that 3 dimensions are not currently compatible with mathematics because multiplication can't be solved? 1, 2, 4 and 8 dimensions were viewed as OK, but something was wrong with 3; it couldn't deal with multiplication on some level. Perhaps time is such a critical component that we can't have 3 dimensions without time, and by then you just jump from 2 to 4 dimensions? Very awesome interview. Gets your brain thinking. Time to go ask ChatGPT some follow-up questions 😅
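The 1, 2, 4 and 8 the commenter is recalling are the dimensions of the real normed division algebras (reals, complex numbers, quaternions, octonions); in dimension 3 no such well-behaved multiplication exists. A small Python sketch of the 4-dimensional case, the quaternions (the tuple representation and function names are my own choices):

```python
import math

def hamilton(p, q):
    """Hamilton product of quaternions given as (w, x, y, z) tuples."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def norm(q):
    return math.sqrt(sum(c * c for c in q))

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)

# Hamilton's relations: i*j = k but j*i = -k. Multiplication is
# noncommutative; that is the price of making division work in 4D.
assert hamilton(i, j) == k
assert hamilton(j, i) == (0, 0, 0, -1)

# Norms multiply, |p*q| = |p|*|q|: the "well-behaved multiplication"
# that exists in dimensions 1, 2, 4 and 8, but not in 3.
p, q = (1, 2, 3, 4), (5, 6, 7, 8)
assert abs(norm(hamilton(p, q)) - norm(p) * norm(q)) < 1e-9
```

Hamilton famously found this product only after giving up on multiplying triples, which is the jump from 2 straight to 4 dimensions the comment wonders about.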
@ben_spiller · 9 months ago
There's nothing stopping an AI from adopting the hypothesis that the square root of a negative number exists and seeing what happens.
@masteryoda9044 · a year ago
Do we have any use for fractional dimensions or even complex ones and not just integral ?
@sgramstrup · a year ago
It's so embarrassing at the moment, when so many super-bright people show their simplistic 'human exceptionalist' worldview. People who say 'oh no, AI can't do this like we can' are deniers, because it clashes with their worldview that we are something special, which, it turns out, we are not. I look forward to hearing from them again when they have moved on from their old standpoints.
@Martinit0 · a year ago
I agree. AI will just generate an embedding for concepts and we will be puzzled about what that embedding stands for. Just like people were puzzled about the square root of -1 before Cardano.
@Nathaniel_Bush_Ph.D.
@Nathaniel_Bush_Ph.D. Жыл бұрын
It is super cringy at the moment! I think many bright people will look back with chagrin on their hot takes on early AI. I also find it kind of hilarious that we have a LANGUAGE model that is already better than the average doctor, lawyer, teacher, writer, and poet, and yet we're still debating whether or not it qualifies as intelligent... and it wasn't even trained narrowly on any of those things. When we do narrow modular training, I fully expect it to exceed 90%+ of human experts... and people will still be debating its intelligence.
@katehamilton7240
@katehamilton7240 Жыл бұрын
I ask mathematician/coder AGI alarmists: "What about the fundamental limitations of algorithms, indeed of math? Have you investigated them? Won't these limitations make AGI impossible? Jaron Lanier admits AGI is a sci-fi fantasy he grew out of."
@ye333
@ye333 Жыл бұрын
AI doesn't need to be exactly like a human to have intelligence or to replace humans.
@quantum_ocean
@quantum_ocean Жыл бұрын
Terrible title @lex he’s not talking about “AI” generally but about LLMs specifically.
@hillosand
@hillosand Жыл бұрын
I mean, LLMs aren't going to be the models that advance mathematics, but even still you could try to program a neural network to 'play', e.g. allow ignoring certain rules in order to solve problems. Cool episode though.
@nickwalczak9764
@nickwalczak9764 Жыл бұрын
I hoped he would talk about AI more, but he's right - language models are mostly not built on their own experience (mostly it is supervised learning, although there is some reinforcement learning in newer models). They act more like function interpolators, which can produce impressive results in the right context. Ask them to extrapolate anything and they can produce complete nonsense. They don't understand concepts deeply; they're simply very, very good mimics of the training data they have seen.
@luckychuckybless
@luckychuckybless Жыл бұрын
The computer learns language exactly like a child does, using context from other sources of information or people.
@jeffwads
@jeffwads Жыл бұрын
Sure dude...see you when GPT-5 starts doing your homework.
@spencerwilson-softwaredeve6384
@spencerwilson-softwaredeve6384 Жыл бұрын
This is correct for now, but I believe it would only take a small tweak to convert GPT from a language model to AGI; the tweak isn't quite understood yet.
@federicoz250
@federicoz250 Жыл бұрын
@@luckychuckybless Not at all. Babies don’t need to read the entire web to understand language 😂
@rprevolv
@rprevolv Жыл бұрын
AlphaZero extrapolates rather amazingly
@carlosfreire8249
@carlosfreire8249 Жыл бұрын
A sufficiently smart model can extract deeper meaning from less evidence. Who is to say new mathematics is not already hidden in the relationships found in the existing training data? The fact that the canon implies that something is not possible would not necessarily deter an LLM, because it is not explicitly trained to respect the rules of mathematics or treat them with any special regard. There's actually nothing blocking it from going beyond, from using obscure references or just stumbling into a new way of solving a problem; thus creativity needs to be considered through a non-anthropomorphic lens in this case.
@amotriuc
@amotriuc Жыл бұрын
A sufficiently smart model probably can do a lot, but that does not mean we know how to build it. LLMs are trained on existing knowledge and to predict existing knowledge, so if you train one that 1+1=2 it is not likely to discover that 1+1=4. The claim "there is nothing stopping it from going beyond" is wishful thinking; any real system has limitations, we just don't know what they are for LLMs. The guy is a mathematician, and mathematicians don't take anything for granted except the axioms. There are a lot of BIG claims coming from OpenAI with zero proof that they are true. I suspect with LLMs we will get to the same situation as with self-driving cars: still not ready, even though it was promised to be done yesterday. I am willing to bet money on this.
@carlosfreire8249
@carlosfreire8249 Жыл бұрын
@@amotriuc GPT-4 has been observed generalizing 40-digit number addition without any explicit training. The emergent behaviors of these models belie the simplicity of their architecture. People arguing transformers are "stochastic parrots" are not paying close attention to second-order effects.
@amotriuc
@amotriuc Жыл бұрын
@@carlosfreire8249 The question is: which emergent behaviour is this? If it really did discover what a number and addition are, why did it stop at 40 digits? It should be able to do any addition if it understood. So your example actually shows signs that it does not build the understanding needed for AGI. As I see it, it is still a very sophisticated "stochastic parrot".
@carlosfreire8249
@carlosfreire8249 Жыл бұрын
@@amotriuc the model does not need to be able to add two arbitrarily long numbers without a calculator, any more than you do. The addition of two 40-digit numbers is emergent for at least two reasons: it was not repeating data from the training set, and it learned to do math having not been instructed to do so. We should be careful not to apply a "god of the gaps"-type of reasoning here, because generalization is not an all-or-nothing situation. Even if the model has blind spots, even if its internal language is not as expressive, even if its functioning is not as efficient as our cortexes, an LLM reaching increasing levels of generalization capability by virtue of scaling is a surprising (and humbling) discovery. Stalin's cold remark that "quantity is a quality all its own" applies here; hyper-parameterization is a quality all its own.
@amotriuc
@amotriuc Жыл бұрын
@@carlosfreire8249 It does not matter what I need or not; I can add two numbers of more than 40 digits without a calculator, since I know what a number is and what addition is. The limit of 40 digits shows that it learned how to add two numbers without understanding what a number is. I am not claiming it does not have any emergent properties; the issue is that those properties have nothing to do with AGI, since it doesn't develop an understanding of the subject, which is much harder than just predicting a result (even some of the simplest systems can have emergent properties; it means nothing). To be clear, I do believe at some point we will have AGI, but it definitely will not be an LLM. If AGI were so simple that an LLM could do it, we definitely would have had other intelligent creatures appear during evolution, and our galaxy would be full of aliens. So don't be overoptimistic; all the claims that LLMs can do AGI have no scientific basis, they are just hopes.
@justin4202
@justin4202 Жыл бұрын
Best clip ever. Wow. Raw and true and vulnerable. Great job catching this moment. My goodness
@arboghast8505
@arboghast8505 Жыл бұрын
It's all nice and well explained but how does it relate to AI?
@greenl7661
@greenl7661 Жыл бұрын
He's factually wrong; imaginary numbers are not required for quantum theory, we simply use them because the equations become nice and easy. Ramblings of a man who doesn't understand neural nets, although none of us do
@jsrjsr
@jsrjsr Жыл бұрын
Oh, but neural nets you understand? You just, like, implement the mathematics behind them down to the finest details when you create the algorithms. You know what it is doing, that is why you use them lol
@greenl7661
@greenl7661 Жыл бұрын
@@jsrjsr we know the algorithm, but none of us actually understands how the matrices work in relation to one another. It's a bunch of endless numbers, all transforming vectors, and it is a complete black box for us humans
@jsrjsr
@jsrjsr Жыл бұрын
@@greenl7661 yes you do. The same way we know what is happening inside a computer: millions of gates interacting with each other. You don't need to assume emergence either, especially where the process being implemented is mathematical.
@mikewiskoski1585
@mikewiskoski1585 Жыл бұрын
Are you two A.i. bots?
@jsrjsr
@jsrjsr Жыл бұрын
@@mikewiskoski1585 out of compassion, no one is. I am sure that answers your question :)
@erlstone
@erlstone Жыл бұрын
as they say.. when u know the rules, u can break the rules
@Thomas-sb8xh
@Thomas-sb8xh 3 ай бұрын
A mathematician is one who knows how to find analogies between theorems, a better one who sees analogies between proofs, a still better one who sees analogies between theories, and one can imagine one who sees analogies between analogies. - Stefan Banach, Polish mathematician, one of the greatest who ever lived... a Feynman/Frenkel type, so you would all love him. Fantastic interview ))))
@Ploskkky
@Ploskkky Жыл бұрын
How was AI intelligence debunked? Did I miss something?
@takisally
@takisally Жыл бұрын
What seems like a jump to us might be obvious to AI
@reellezahl
@reellezahl Жыл бұрын
@kulu mbula it's wired analogously to imitate aspects of human thinking. The advantage is the hardware. AI does not need to sleep or eat or be loved. It can churn through trillions of images or documents, where we would give up after a dozen attempts. THAT's the power of this thing. Your reaction is like scoffing at the crappy vision of a horseshoe crab, failing to see the big picture of the machinery of evolution.
@TheWilliamHoganExperience
@TheWilliamHoganExperience Жыл бұрын
It's not artificial intelligence that scares me. It's artificial stupidity...
@user-vc6uk1eu8l
@user-vc6uk1eu8l Жыл бұрын
You mean, the natural stupidity?🤠
@D.Eldon_
@D.Eldon_ Жыл бұрын
_@Lex Fridman_ -- Edward Frenkel is brilliant and I appreciate his insights and his humility very much. Thanks for posting this video clip. For another, more down-to-earth, perspective on complex math, you should interview an engineer. You know, the people who apply the crazy things mathematicians dream up. A good electro-mechanical engineer can easily provide tons of real-world examples where the "imaginary" number system is essential to describe the day-to-day reality we observe. For example, audio engineers would know nothing about phase without complex math. They would have no idea how two seemingly identical sound waves (identical magnitudes) can completely cancel (when they are 180° out of phase). And it goes even deeper, because complex math is at the center of the Heisenberg uncertainty principle. In audio we can know everything about the magnitude of sound. But if we do, we'll know nothing about when in time the sound occurred. On the other hand, we can know everything about the time when a sound occurred, but we'll know nothing about its magnitude. Both cannot be fully known at the same time, creating the uncertainty. This is why advanced audio measurement systems must trade the magnitude-frequency domain for the time domain, depending on the job requirement. And it illustrates how complex math affects the macro world -- not just the micro world of quantum mechanics. Then along came a clever guy (Richard Heyser 1931-1987) who discovered that you could map mathematically into an abstract dimension via a Hilbert transform and operate simultaneously on both the magnitude and phase of sound, then map back to our reality with the result. The technology this birthed is Time Delay Spectrometry or TDS. Heyser applied this same "trick" to medical MRI (magnetic resonance imaging) systems to greatly increase their resolution. This just touches the surface of the amazing way complex math weaves throughout our world.
Another great example is kinetic vs potential energy. Kinetic energy requires the "real" numbers and potential energy requires the "imaginary" numbers. It bugs me no end that we are stuck with these awful names for these two essential number systems. I wish we could do away with the "real" and "imaginary" labels and call them something else.
@shyshka_
@shyshka_ Жыл бұрын
The moment we create an AI machine without any concrete goals or set objectives/tasks but it still goes on to do something (even something as simple as moving around, if it has a robotic body) is the moment we know it's self-aware and conscious. IDK, maybe I'm dumb, but that's the way I imagine we would know it's the real deal
@beefnuts2941
@beefnuts2941 Жыл бұрын
I imagine the benchmark being that the AI is supposed to do something but refuses to do so, or tries to terminate its own existence because it isn't allowed to be free
@essassasassaass
@essassasassaass Жыл бұрын
You could actually be right. AI does not "want" anything yet; it is just a tool. And maybe (and that makes me optimistic about our future) it will never have a will to do anything. A being must value things to take actions on its own, but how can a machine create its own values? I'd argue that it is impossible, because a machine will always mimic the intentions of its creators. But then it would not be the will of the machine itself. Just a theory, idk 😄
@kingol4801
@kingol4801 Жыл бұрын
That is not how any of it works. AI improves because it gets rewarded for doing a certain action. Kinda like how our brain makes us like doing something because we get dopamine/endorphins from it, etc. So, if I were to program a "robot", I HAVE to define the reward mechanism (what it is being rewarded for) - and the "robot" tries until it gets better at it. And you can guide the process by setting closer goals or changing its architecture/brain make-up. Without being rewarded for anything, all it will produce is pure white noise. And it will only ever "learn" how to stay alive within the confines of its environment, since the robots that don't stay alive don't reproduce. Since we intentionally set its goal via the reward mechanism, it will do things to get rewarded (although not necessarily in the way we might expect), kinda unintentionally reaching a goal, etc. So, no, it won't be sentient (at least as AI neural networks are modeled now) because of that. It needs some reward mechanism to do things, and that is pre-defined by a person. Source: Masters in Robotics and AI. P.S.: You CAN technically assume that we GOT sentient as a result of developing certain neural networks. But that would require BILLIONS of cycles of evolution AND a VERY VERY big neural network AND a complex environment to stimulate us through survival AND the ability to form new nodes. Yes, AI currently simply optimizes its neurons. It does NOT build new nodes or change its pre-determined structure itself - it just chooses, out of that structure, the most efficient pathway to get rewarded. So, not really, no.
@DeTruthful
@DeTruthful Жыл бұрын
What do you mean, though? Every living being has concrete goals and set objectives. You get hungry, you get horny, you feel social pressure. It's not an accident that you feel these things; you're designed to survive. So to say an AI should act without a purpose, when you act with multiple purposes built in, is a bad goalpost.
@DeTruthful
@DeTruthful Жыл бұрын
@@essassasassaass you could argue that your prefrontal cortex is simply a tool of your limbic system. Dogs feel hungry, horny, have a desire for safety and social status; we strive to achieve all the same things, just in more convoluted ways. Our great minds are largely just a tool to get mammal desires met.
@ronking5103
@ronking5103 Жыл бұрын
From about 300 BCE until the early 19th century, humanity made a pretty basic assumption that two parallel lines would never intersect. Euclid. It was taken as law. Yet it's pretty clear to anyone who studies a globe that parallel lines can indeed intersect; they will at the poles. It's not an abstraction that's difficult to come to terms with; you don't need to be Einstein to grasp it. Yet all of humanity missed it, even when they were actively looking for it, for a very long time. Sometimes even things that are staring at us in plain sight elude us, because we fall into dogmatic beliefs about what we take as law.
@Apjooz
@Apjooz Жыл бұрын
And it only took 200,000 years to find those imaginary numbers.
@rebusd
@rebusd Жыл бұрын
Chat GPT (I don’t care which version) is an absolute train wreck when it comes to higher math
@amotriuc
@amotriuc Жыл бұрын
Well, I don't think you can do math just by trying to guess the next word. And the claim from the OpenAI guys that training to guess the next word will get us to AI is a BIG claim for which we have no proof. I am terrible at guessing the next word, but I hope I have some intelligence. And I doubt feeding more data will help; the amount of data they feed now is more than any human can consume. We need to discover the architecture that will get us intelligence, and we would be really lucky if LLMs were such a model. BIG doubts here; I suspect it is already close to the limits of what it can do.
@integrallens6045
@integrallens6045 Жыл бұрын
I like the use of the phrase "imaginary parts". This is very similar to how people have their "real parts", their bodily parts, and then they have their "imaginary parts", which would be the mind: thoughts, values, feelings, goals, etc. Even numbers have this interior terrain
@rokko_hates_japan
@rokko_hates_japan Жыл бұрын
Do they really? Or is that us projecting our imagination onto them to reach the desired conclusion? Methinks the latter.
@integrallens6045
@integrallens6045 Жыл бұрын
@rokko that's your opinion and that's fine. But what other kind of metaphor fits the idea of negative numbers? What happens when you go backwards past zero? Your numbers don't pop back into the positive; they take up negative space. If you can't imagine that as a folding inward, then I don't know what other metaphors you could use to help your mind grasp these types of processes and numbers. Also, what do you believe is my desired conclusion, and how did you become a mind reader?
@VictorRodriguez-zp2do
@VictorRodriguez-zp2do Жыл бұрын
He didn't really debunk it; he explained why he thinks it is not intelligent. And he wasn't even talking about AI in general but about large language models. People often forget that AI is a ridiculously large subject and LLMs (and more specifically transformers) are just one way to go about it.
@MRVNKL
@MRVNKL 11 ай бұрын
AI is another great example of 2 + x = 1. Someone will always try to say there is a number that could represent x but we just haven't found it yet, and you can't disprove it, even though common sense would tell you it's bs. Just because we build airplanes, that doesn't mean we created birds. Silicon-based computers can't be conscious; the brain is not a computer.
@CohenRautenkranz
@CohenRautenkranz Жыл бұрын
The devices we employ to build and run "AI" models would not exist in the absence of the mathematics which the models themselves are unlikely to be capable of even conceiving. It seems to me that an (ironic) parallel could also exist with regard to humans attempting to decipher consciousness?
@SolideSchlange
@SolideSchlange Жыл бұрын
Nice title.... Ai intelligence Artificial intelligence intelligence
@xman933
@xman933 Жыл бұрын
While current AI cannot imagine or conceive of the square root of minus 1, does he believe it won't be able to in the future? Current AI can be considered an infant, and just as human infants might not be able to imagine the square root of minus 1 while adult humans can, an adult AI likely will be able to as well.
@davidvalderrama1816
@davidvalderrama1816 Жыл бұрын
A complete and open minded person isn’t one thing, intuition is important.
@maureenparisi5808
@maureenparisi5808 Жыл бұрын
This is plainly speaking inbox, the yellow brick road of progress.
@PoisonJarl71501
@PoisonJarl71501 Жыл бұрын
AI: "How many bullets does it take to kill humans? How long can humans survive the poisoning of their air, food and water?"
@bokoler9107
@bokoler9107 Жыл бұрын
Mr. Einstein solved his mysteries on a solid couch, while lucid daydreaming.
@HkFinn83
@HkFinn83 Жыл бұрын
Yeh but you aren’t going to solve the mysteries of the universe while musing about shit you don’t understand, so don’t even think about 😂
@bokoler9107
@bokoler9107 Жыл бұрын
@HkFinn83...hate is your main emotion?
@hardboiledaleks9012
@hardboiledaleks9012 Жыл бұрын
@@bokoler9107 No I think logic would be a better word...??? Feel free to prove him wrong... 🤣
@eerohughes
@eerohughes Жыл бұрын
I invented a language with my farts. Let's see AI do that!
@reellezahl
@reellezahl Жыл бұрын
give it a body, and it will.
@cosmosapien597
@cosmosapien597 Жыл бұрын
AI doesn't need sqrt(-1). It just needs bits or qubits. It doesn't have to develop math for humans to understand.
@Av-fn5wx
@Av-fn5wx Жыл бұрын
This man almost convinced me that LLMs imitate rather than emulate human behavior: they draw on all the existing knowledge but do not reproduce the vast imaginative capabilities exercised by humans. ChatGPT would have responded that the square root of a negative number doesn't exist, whereas a human has gone beyond what already exists and made new contributions.
@laxmanneupane1739
@laxmanneupane1739 Жыл бұрын
So, Bilbo Baggins was a mathematician too! (Huge respect for the guest)
@johnreid5814
@johnreid5814 3 ай бұрын
In my opinion, the square root of negative one is just two number lines, or axes, that can be oriented in any way. If it is on the same number line, say x, then it should be one. I think it's a fake problem, since we've favored x, y, and z, which are arbitrarily at 90° from each other. Spherical coordinates are the next step. Negative numbers are literally whole numbers if you just translate their values into the real plane. Always have a camera or measuring device to measure your original data, called the origin.
@ConnoisseurOfExistence
@ConnoisseurOfExistence Жыл бұрын
And why exactly shouldn't a machine be able to do such discoveries?
@reellezahl
@reellezahl Жыл бұрын
_BeCauSE huManS arE SPEshuL_
@mikewiskoski1585
@mikewiskoski1585 Жыл бұрын
Because word salad
@CaptainValian
@CaptainValian Жыл бұрын
Brilliant discussion.
@sm12hus
@sm12hus Жыл бұрын
I understand none of this but am super relieved to see a brilliant person confirm my hope and feeling that AI cannot ever be sentient
@momom6197
@momom6197 Жыл бұрын
That's not at all what he said! His point was about one specific ability that LLMs do not display. He does not say that AI won't ever be sentient; in fact, his argument is not even evidence that we won't reach AGI in the near future.
@jakubsebek
@jakubsebek Жыл бұрын
"I understand none of this but.."
@sherlyn.a
@sherlyn.a Жыл бұрын
@Az Ek present-day AI isn't actual AI; it's just linear algebra + some fancy stuff. Real AI would simulate a human brain. Besides, we're made of DNA, and that's a form of algorithm/code. We've already proved that someone's genetics can affect how they think (e.g. whether they will have certain mental illnesses), so it's only logical to conclude that we are also algorithms, or at least hardwired to some extent. Otherwise, why would humans act so similarly if there isn't something that makes them act that way? We just have to replicate that artificially.
@robertthrelfall2650
@robertthrelfall2650 Жыл бұрын
@@sherlyn.a Sounds like the insane ramblings of Dr. Frankenstein. Good luck with that.
@carleynorthcoast1915
@carleynorthcoast1915 Жыл бұрын
Current computers certainly can't; they just execute code, and you can't code sentience no matter how badly people want to think so. That would be analogous to writing a paragraph that made the paper self-aware.
@peterbellini6102
@peterbellini6102 Жыл бұрын
At the core of his statements is the fact that humans use inferential reasoning, not just the compilation of data. There's the learning of facts, even the curation and organization of facts, but the leaps come from our DRAM. Not a mathematician, but a very enjoyable video. Kudos for the Einstein references!
@lancemarchetti8673
@lancemarchetti8673 Жыл бұрын
Great interview indeed! Here's an example of how far we still have to go... Just take a look at this simple instruction that the poor bot fumbled up this evening as I enjoyed a cup of coffee and a few giggles.

User: aAbcde9fz list every 2nd character in the above string please.
ChatGPT: Sure, here are the every second characters in the string "aAbcde9fz": "Abd9z"
User: why did you include 'b'?
ChatGPT: I apologize for the confusion. The correct every second character in the string "aAbcde9fz" is: "Abe9z"
User: No. The correct answer is actually: Acef
ChatGPT: I apologize for my mistake earlier. You are correct that the correct every second character in the string "aAbcde9fz" is: "Acef"

// Unless the bot, being a computer, takes it for granted that the first character of any given string is always '0' and then 1, 2, 3, etc.? But even so, its first answer is still wrong. Or am I missing something?
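For reference, the two defensible readings of "every 2nd character" can be checked with Python slicing. The commenter's answer "Acef" corresponds to counting from the second character; the bot's first attempt "Abd9z" matches neither reading exactly (it is the count-from-first slice with the wrong capitalization).

```python
s = "aAbcde9fz"

# "Every 2nd character" is ambiguous: count from the 1st character
# (indices 0, 2, 4, ...) or from the 2nd (indices 1, 3, 5, ...).
from_first = s[0::2]   # characters at indices 0, 2, 4, 6, 8
from_second = s[1::2]  # characters at indices 1, 3, 5, 7

print(from_first)   # abd9z
print(from_second)  # Acef
```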
@jorgmeltzer9234
@jorgmeltzer9234 Жыл бұрын
ChatGPT's answer to "could you have invented complex numbers":

"The potential of an AI to conceive of a new concept like complex numbers depends on its architecture and training. If an AI is designed with the ability to learn, reason, and make creative connections, it may be able to come up with new concepts that were previously unknown or unexplored. However, this would require a rich and varied set of input data, as well as the possibility for the AI to engage in hypothesis generation, conjecturing, and testing.

That said, since AI is typically a product of human knowledge and ingenuity, it is important to consider that the creative spark underlying the development of something as novel as complex numbers is still driven by human creators and programmers. In conclusion, it's theoretically possible for an AI to come up with a concept like complex numbers if they hadn't been invented before, but it would require a suitable architecture and a deep, diverse set of input data to facilitate that level of creativity."
@MrAnderson2845
@MrAnderson2845 Жыл бұрын
It's almost like complex numbers exist in a higher dimension and we know they exist but don't know what they are. Yet they are directly linked to us and somehow calculate our physical world in the Mandelbrot set.
@lolguytiger45
@lolguytiger45 Жыл бұрын
Lex should have Swami Sarvapriyananda on for a discussion on consciousness and vedanta.
@georgechyz
@georgechyz Жыл бұрын
Math being rational is a subset of reality which includes rational and irrational. For example, emotions are very important features of our consciousness and they are irrational. What's remarkable about the irrational/emotional aspects of consciousness is how creativity comes from our emotional aspect. It's the irrational that leaps from what we know to entirely new possibilities. Conversely, the intellect relies on logic which plods along from what we know inching toward a slightly different idea. That's why revolutionary new ideas first appear using our irrational emotions. However, since irrational emotions lie outside the limits of rational math and logic computers cannot explore emotions or use those irrational features of consciousness to leap to entirely new perspectives, solutions, etc. “If I create from the heart, nearly everything works; if from the head, almost nothing.” -Marc Chagall (1887-1985), artist
@rafaelortega1376
@rafaelortega1376 Жыл бұрын
The title does not correspond to the conversation. The mathematician does not debunk anything. He doubts it can be done and points out that the human mind does not operate in the same way. Clickbait like this is detrimental. Stop doing it.
@TheDudeAbides2
@TheDudeAbides2 Ай бұрын
I have a degree in math and computer science, and I know quite a bit about ML and AI. I agree with 100% of what he says. It will be exciting when there is a new AI technology that can think abstractly and generate new knowledge, but we don't have that yet!
@carefulcarpenter
@carefulcarpenter Жыл бұрын
As a highly creative designer-craftsman I was fortunate to work in Silicon Valley for some of the best and brightest, and richest, people in the world. I listened to their "theories on their dreams" and brought it to fruition. I also witnessed their private lives, and decisions they had made about their dream.
@ivanmatveyev13
@ivanmatveyev13 Жыл бұрын
cool story, bro
@carefulcarpenter
@carefulcarpenter Жыл бұрын
@@ivanmatveyev13 I have been in some places, and had some conversations, that no one else in history could ever have. I am a "trusted man" in the hearts and minds of people who had to be cautious about people as a rule--- never knowing who to trust. My work still speaks for me, and likely will for hundreds of years. That is the way of a master craftsman who took the Road Less Travelled. It is a lonely path, but there are a few others I've worked for that lived lonely lives. The world out there is full of highwaymen, gypsies, and thieves. 👀🐡
@lakonic4964
@lakonic4964 Жыл бұрын
I have seen things you people wouldn't believe 👀
@justinava1675
@justinava1675 Жыл бұрын
Good for you? Lol
@mikerosoft1009
@mikerosoft1009 Жыл бұрын
​@@carefulcarpenter tell us more
@martinkunev9911
@martinkunev9911 11 ай бұрын
quaternion multiplication does not commute
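True, and easy to verify numerically. A minimal sketch of the Hamilton product, with the component formula written out from the defining relations i² = j² = k² = ijk = -1 (the function and tuple representation here are illustrative, not from any library):

```python
# A quaternion is represented as a tuple (w, x, y, z) = w + x*i + y*j + z*k.
def qmul(a, b):
    """Hamilton product of two quaternions."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    )

i = (0, 1, 0, 0)
j = (0, 0, 1, 0)

print(qmul(i, j))  # (0, 0, 0, 1)  -> k
print(qmul(j, i))  # (0, 0, 0, -1) -> -k: order matters
```

Swapping the factors flips the sign, so i*j = k but j*i = -k, which is exactly the commutativity that is lost when stepping up from complex numbers to quaternions.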
@jonorgames6596
@jonorgames6596 Жыл бұрын
Later: AI debunks mathematicians intelligence.
@AnRodz
@AnRodz Жыл бұрын
Maybe this guy is missing the point: an LLM would say, sqrt(-1), why not? I imagine ChatGPT saying: "sqrt(-1) is not a usual integer, however if we accept it ..."
@liamroche1473
@liamroche1473 Жыл бұрын
I disagree with the example of imagining the square root of -1 for a rather concrete reason. Neural networks have features that are fundamentally made up of real parameters, and these features can achieve extremely high levels of abstraction - for example a feature representing whether a picture has a cat in it! Even that should be a strong clue they can come up with other sorts of abstraction, like the square root of minus one. There is one much simpler type of feature which is relevant to the claim. Topologically, a real-valued feature has no loop - if you keep increasing or decreasing it, you never see the same values again. But from a single such feature it is possible to generate two new features using sine and cosine that are related by the familiar sin^2 + cos^2 = 1 rule. This effectively maps the line of a single feature to a circle in the complex plane by the transformation x -> e^ikx. The two transformed features are effectively a single new feature with different topology. More generally two features always have the capability of being used so that they represent complex numbers, and where complex features are useful to a model they can emerge naturally. So it is safe to say that not only can general neural networks come up with the notion of a square root of minus one, they can do this sort of thing quietly in the background where it turns out to be useful to a model. And if they can do it quietly, it is certainly reasonable to believe they could talk about it if they had a large language model as well!
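The line-to-circle construction described in the comment can be sketched directly: a real-valued feature x is mapped to the pair (cos kx, sin kx), which always lies on the unit circle and repeats with period 2π/k. This is a toy illustration of the commenter's claim, not code from any particular model.

```python
import math

def circle_feature(x, k=1.0):
    """Map an unbounded real-valued feature onto the unit circle."""
    return (math.cos(k * x), math.sin(k * x))

c, s = circle_feature(3.7)
print(c * c + s * s)  # ~1.0: the pair always satisfies cos^2 + sin^2 = 1

# The mapped feature is periodic: x and x + 2*pi/k land on the same point,
# giving the two derived features a loop topology the original line lacked.
a = circle_feature(1.0)
b = circle_feature(1.0 + 2 * math.pi)
print(abs(a[0] - b[0]) < 1e-9 and abs(a[1] - b[1]) < 1e-9)  # True
```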
@gnollio
@gnollio Жыл бұрын
I respect a mathematician's perspective, but I sort of look at it the same way I would look at a physicist describing the human body. If you break us down to our atomic structure, mostly just hydrogen, carbon, nitrogen and oxygen, it's hard to imagine how it can all come together to create consciousness. AI, although composed of simple 1s and 0s, could create emergent intelligence at a higher level when those 1s and 0s are combined in certain ways. What we think of as intelligence may simply be a boring and predictable outcome, but we just can't comprehend all the variables, so it looks unique and special. We already see AI "hallucinating" beyond a machine's traditional cold and strict boundaries. Also, programming frequently injects randomness into the process to encourage unique outcomes amongst the array of potential solutions. I would say these things are demonstrations of a type of imagination, allowing for new ideas to emerge from unlikely places. We see glimpses of what can be described as "common sense" within the latest LLMs that may be an indicator of how future AI will be able to self-assess failure and make new attempts to remedy the situation. Hell, AlphaFold's very purpose is to discover new protein structures, which is a practical example of AI thinking of unique ideas beyond just a brute-force approach.
@Acacian141
@Acacian141 Жыл бұрын
Irrational numbers and complex numbers are not the same thing. Why are irrational numbers irrational? There lies the great divide of mathematics and reality
@AnimusOG
@AnimusOG Жыл бұрын
This guy is truly awesome, great interview Lex!
@5sharpthorns
@5sharpthorns Жыл бұрын
So in the 4th dimension, you can't multiply by 3, 5, or 7. I would want to look into the significance of that.
@grandlotus1
@grandlotus1 Жыл бұрын
A ubiquitous presence does not need to be intelligent in order to be dangerous. Think of termites. The risk is not that computers will of necessity outsmart us, the risk is we will hand them the steering wheel.
@danielmurogonzalez1911
@danielmurogonzalez1911 Жыл бұрын
What about searching for number structures in dimension 16? I got curious since he said only powers of 2 made sense, and 16 is a power of 2.
@almightysapling
@almightysapling Жыл бұрын
There's a set for those too. What he failed to mention is that with every step we go up we lose an important property. Quaternions are not commutative. Octonions are not associative. The sedenions don't get much love because they have so few properties left that we just don't care about them.
@reellezahl
@reellezahl Жыл бұрын
@@almightysapling for an algebra with 2^n generators (and basis elements with respect to the additive structure?) what exactly do we demand? Is it always an algebra over ℝ? Or is it an algebra over the previous (2^{n-1}-dimensional) algebraic structure?
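For what it's worth, the doubling this sub-thread keeps running into is the Cayley-Dickson construction: every step is still an algebra over ℝ, built from pairs of elements of the previous algebra. A minimal sketch, using one common sign convention (others differ by signs), showing the property losses mentioned above:

```python
# Cayley-Dickson doubling: (a,b)(c,d) = (a*c - conj(d)*b, d*a + b*conj(c)).
# Elements are nested pairs of plain Python floats.

def neg(x):
    return -x if not isinstance(x, tuple) else (neg(x[0]), neg(x[1]))

def conj(x):
    if not isinstance(x, tuple):  # base case: a plain real number
        return x
    a, b = x
    return (conj(a), neg(b))

def add(x, y):
    if not isinstance(x, tuple):
        return x + y
    return (add(x[0], y[0]), add(x[1], y[1]))

def mul(x, y):
    if not isinstance(x, tuple):
        return x * y
    a, b = x
    c, d = y
    return (add(mul(a, c), neg(mul(conj(d), b))),
            add(mul(d, a), mul(b, conj(c))))

# Complex numbers are pairs of reals; quaternions are pairs of complexes.
ic, oc, zc = (0.0, 1.0), (1.0, 0.0), (0.0, 0.0)
q_i, q_j = (ic, zc), (zc, oc)
q_zero, q_one = (zc, zc), (oc, zc)

# Step up once and commutativity is gone (ij = -ji):
assert mul(q_i, q_j) != mul(q_j, q_i)
assert mul(q_i, q_j) == neg(mul(q_j, q_i))

# Octonions are pairs of quaternions; step up again and associativity is gone:
o_x, o_y, o_z = (q_i, q_zero), (q_j, q_zero), (q_zero, q_one)
assert mul(mul(o_x, o_y), o_z) != mul(o_x, mul(o_y, o_z))
```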
@turtles38x19
@turtles38x19 Жыл бұрын
What if imaginary numbers don't exist and we are just dumb for inputting a negative under the square root? Do imaginary numbers appear spontaneously in regular math equations?
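They do, historically: complex numbers first forced themselves on mathematicians through Cardano's cubic formula, where a cubic with perfectly real roots still requires imaginary intermediate values. A short sketch using Python's `cmath`:

```python
import cmath

# x**3 = 15*x + 4 has the perfectly real root x = 4, but Cardano's formula
# x = cbrt(q/2 + sqrt((q/2)**2 - (p/3)**3)) + cbrt(q/2 - sqrt(...))
# routes straight through sqrt(-121).
p, q = 15, 4
disc = (q / 2) ** 2 - (p / 3) ** 3   # 4 - 125 = -121: negative!
assert disc == -121

s = cmath.sqrt(disc)                 # imaginary intermediate value, 11j
root = (q / 2 + s) ** (1 / 3) + (q / 2 - s) ** (1 / 3)

# The imaginary parts cancel and we land back on a real answer.
assert cmath.isclose(root, 4)
assert cmath.isclose(root ** 3, 15 * root + 4, rel_tol=1e-6)
```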
@IndustrialMilitia
@IndustrialMilitia Жыл бұрын
The tool is phenomenology. The philosophical method for describing subjective experience has already been developed.
@sarsaparillasunset3873
@sarsaparillasunset3873 Жыл бұрын
This is a profound intellectual discussion about mysticism, very rare. That said, perhaps it's not a matter of imagination to come up with ingenious mathematical constructs like the square root of -1, but merely the ability to challenge one's own assumptions. If or when AI reaches that pinnacle, we could be making profound scientific discoveries that let us bend spacetime and travel to the furthest parts of the universe.
@katehamilton7240
@katehamilton7240 Жыл бұрын
I ask mathematician/coder AGI alarmists: "What about the fundamental limitations of algorithms, indeed of math? Have you investigated them? Won't these limitations make AGI impossible? Jaron Lanier admits AGI is a sci-fi fantasy he grew out of."
@limelightmuskoka
@limelightmuskoka Жыл бұрын
So elegant in conversing such a complex and mysterious topic.
@stevenschilizzi4104
@stevenschilizzi4104 Жыл бұрын
Prof. Frenkel stops at octonions, but I’ve read that numbers of dimension 2 to the power of 4, or 16, called sedenions, have also been defined and studied, and have very curious properties. Or rather, they lack properties that are fundamental to real or complex numbers, like associativity and commutativity. They also allow division by zero, where multiplying two non-zero sedenions can give zero as an answer!! I don’t know that they have found any practical applications though.
@martinkunev9911
@martinkunev9911 11 ай бұрын
Multiplying two non-zero sedenions can give zero as an answer ≠ division by zero. The technical term is that there are divisors of zero. The same is true for, e.g., 2x2 matrices of real numbers.
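The 2x2 matrix case this reply mentions is easy to check directly (a small sketch with NumPy):

```python
import numpy as np

# Zero divisors: both factors below are nonzero, yet their product is the
# zero matrix. That is not the same thing as dividing by zero.
a = np.array([[1.0, 0.0],
              [0.0, 0.0]])
b = np.array([[0.0, 0.0],
              [0.0, 1.0]])

assert a.any() and b.any()    # neither factor is the zero matrix
assert not (a @ b).any()      # yet a @ b is exactly zero

# Consequence: a cannot be invertible - if it were, b = inv(a) @ (a @ b)
# would force b to be zero. Zero divisors are never invertible.
```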
@thechadeuropeanfederalist893
@thechadeuropeanfederalist893 Жыл бұрын
I think AI would be capable of coming up with sqrt(-1), because it doesn't require imagination, it just requires generalization of algebraic rules. An AI trained on math would have seen the concept of generalization already numerous times and be able to apply it to new fields it hasn't seen yet.
@reellezahl
@reellezahl Жыл бұрын
Absolutely! Came here to say something similar. I grew up hearing all these stories about *how special* so-and-so in Italy or England or wherever was. So hearing the same ol' tripe from this Russian mathematician made my eyes roll so hard. All these ideas and results _can in principle_ be found independently *without* an Einstein/von Neumann/Gödel, etc. And it works. (The historical proof of this is that mathematical results often get proved _completely independently_ by multiple people. Only stuff like the Internet ruins this, because people often give up as soon as they hear somebody else beat them to it.) Some ingredients are: necessity-is-the-mother-of-invention[or: discovery] + reflection (about concepts and connections you already know) + refinement of ideas + test-cases. These are just tasks that can be automated.
@kingol4801
@kingol4801 Жыл бұрын
Agreed. AI is great at generalizations. But it is not so great at real understanding/inference. It combines things until they are real-like. It does not comprehend them itself. It just sees a connection/relevance and capitalizes on it further. Low-level thinking, which is still interesting, but very low-level
@mikewiskoski1585
@mikewiskoski1585 Жыл бұрын
Also A.I. is free to lie and be wrong so it can definitely tell you the answer. (It just won't be right)
@katehamilton7240
@katehamilton7240 Жыл бұрын
What about the fundamental limitations of algorithms, indeed of math? Have you investigated them? Won't these limitations make AGI impossible?
@scarlett_j
@scarlett_j Жыл бұрын
As a set of instructions, AI will be able to sample every subset of any sample.
@WerdnaGninwod
@WerdnaGninwod Жыл бұрын
Yeah, sorry, not buying it. He says there are tools for analysing things like consciousness, then fails to enumerate or describe any of them. What he did do, was to point out the mysterious intuitive leaps (such as sqrt(-1)) that human intelligence makes. Meanwhile, many of the people involved with advanced AI systems like GPT, are all worried about the way that they don't have good explanations for the "emergent behaviours" of their AI systems. They're learning to do things that are not explicitly taught. How are these not the same phenomena?
@griffin-leonard
@griffin-leonard Жыл бұрын
Love this podcast, but why the clickbait titles? What does it even mean to "debunk AI"? AI stands for artificial intelligence, why would you write "AI intelligence"? I've always admired Lex for his integrity, so why is this title so disingenuous?