Unveiling AI's Illusions with Gary Marcus and Michael Wooldridge

61,384 views

Machine Learning Street Talk

1 day ago

Unveiling AI's Illusions: A Deep Dive into Understanding and Misunderstanding with Gary Marcus and Michael Wooldridge
Patreon: / mlst
Discord: / discord
Twitter: / mlstreettalk
Artificial Intelligence continues to advance at an unprecedented rate, but are we falling victim to the hype and misconceptions surrounding its true capabilities? In this enlightening and thought-provoking discussion, Professors Gary Marcus and Michael Wooldridge take us on a journey to uncover the truth behind AI's capabilities, limitations, and the future of this groundbreaking technology.
Join us as they explore the current state of AI models like GPT-4 and how they differ from human intelligence, dissect the difference between intelligence and understanding, and offer valuable insights into the potential role of AI in shaping future technologies. Along the way, they also address the Swiss Cheese Problem and its implications for AI performance, and the intension and extension of understanding in language models.
Whether you're an AI enthusiast, a researcher, or simply curious about the future of technology, this conversation will give you a better understanding of the intricate world of AI and its implications for our lives. Don't miss this opportunity to learn from two leading experts in the field as they separate fact from fiction in the rapidly evolving world of artificial intelligence.
Table of Contents:
The Tsunami of AI Misinformation [00:00]
GPT-4: Capabilities and Limitations [04:12]
William Shatner and Gary Marcus [07:56]
Gary on WIRED - Human vs. Machine Intelligence [09:57]
Intelligence vs. Understanding [10:35]
Prof Michael Wooldridge (Oxford University)
www.cs.ox.ac.uk/people/michae...
Prof. Gary Marcus:
garymarcus.substack.com/
References:
1. Marcus, G. (2023). GPT-5 and Irrational Exuberance.
garymarcus.substack.com/p/gpt...
2. Mitchell, M. (2023). The Debate Over Understanding in AI's Large Language Models.
arxiv.org/abs/2210.13966
3. Razeghi, Y., et al. (2022). Impact of Pretraining Term Frequencies on Few-Shot Reasoning.
arxiv.org/abs/2202.07206
4. Wooldridge, M. (2021). The Road to Conscious Machines: The Story of AI
www.amazon.co.uk/Road-Conscio...
Check out Gary's new podcast here!
podcasts.apple.com/us/podcast...
Videos clips used from:
• Ilya Sutskever (OpenAI...
• 10 ChatGPT Life Hacks ...
• Chat GPT, The FUTURE O...
WIRED interview with Gary:
• A.I. Expert Answers A....
Our thanks to the Turing Institute who gave us a press pass to film at AIUK ai-uk.turing.ac.uk/programme/
Post-production note: we (mindlessly) used an old clip of Gary being interviewed by his childhood hero William Shatner on the Russian propaganda station "RT" (filmed before the invasion). This was a mistake, and we were guilty of exactly what we accused GPT users of, i.e. not engaging the brain. In case it didn't go without saying, we (and Gary) don't endorse RT and abhor Russia's actions in Ukraine.

Comments: 515
@BrianPeiris • 1 year ago
I'm glad this video is calling out the danger of an illusory general intelligence, but I think it downplays GPT4's capabilities. I think it would be useful to get Marcus to sit in front of the current Chat GPT4 and demonstrate concretely how it goes wrong on tasks of reasoning and understanding.
@Trahloc • 1 year ago
Excellent suggestion, a video showing the strengths and weaknesses of the current state of the art.
@brendanh8193 • 1 year ago
I agree. Yes, it has some issues, but it also has plenty of strengths that he dismisses with his demeanour. When Google's search engine was released, he would have said "it's great at searching, but it doesn't understand what it is searching for". The impact of Google search has been a hollowing out of the law industry, as partners can now search more quickly than an intern can. Look at the impact; it is going to be huge. Furthermore, he has no idea what is around the corner. Just because it doesn't meet his expectations on a very narrow application doesn't mean it is a minor force on our future society.
@sumofalln00bs10 • 1 year ago
@@brendanh8193 My impression is that he wouldn't bother saying "but it doesn't understand what it is searching for" because no one was close to saying that it did (or at least no one mainstream, which might be what's worth pushing back on). I feel like the real reason Gary is pushing back is that there is a lot of discussion happening that seems to assume we've already arrived at AGI (or are at its doors), or that we have actual new insight into intelligence. Neither is true, and to be distracted by the possibility undermines real potential progress on both fronts.
@MrBillythefisherman • 1 year ago
@KZfaq- Sucks- Do you sit and go through the training data of a dog or a chimp or a human? No you don't, because you can't access it; the same goes for these vast data sets. Does that stop us doing practical things, like working with our colleagues? No it doesn't.
@brendanh8193 • 1 year ago
@@sumofalln00bs10 Granted that is his motivation, it is a narrow, ivory-tower mindedness by Michael Wooldridge. It fails to grasp the bigger picture. What does AGI mean to most people? What do they see as its value or potential? What do they fear about it? For most people, Alan Turing included, it is the ability to have a meaningful conversation with it that defines AGI. Tick that box with ChatGPT 3.5. Beyond that, its value is in how useful it is to them. And this, along with the likes of Stable Diffusion and other AI systems, is providing a quantum leap in productivity to many people. People are only beginning to understand how useful it is to them, and thus the hype - because it is 80% of the way to being of value to 50% of people. So what if it hasn't reached the last 20%? Only those in the ivory tower concern themselves with that, as it is a technicality that only ivory-tower people would find useful (until AGI does arrive, of course). So its very usefulness and new-found popularity will ensure that money is thrown at it for some time, attracting new thinkers and resources to the field.
@skevosmavros • 1 year ago
The omelette comments reminded me of first-year philosophy discussions of qualia. The AI's internal experience, or lack of it, is not important - only its usefulness and safety.
@GuinessOriginal • 1 year ago
The thing is, it's going to gain experience, isn't it? In 30 years' time it'll have 30 years of collective experience.
@petersuvara • 1 month ago
@@GuinessOriginal it doesn't learn, it's not a dynamic model. It needs to be retrained, and the cost is currently prohibitive.
@bujin5455 • 1 year ago
19:34 - Though the point about the model's lack of direct experiential understanding of an omelette is intriguing, it does not necessarily undermine its capacity for comprehension. Consider, for example, a blind individual who has never experienced color or sight; despite this, their intelligence remains intact, and they can learn about color and sight through alternative means. Similarly, humans frequently rely on indirect inference to grasp concepts beyond their direct experience, such as quantum mechanics or gravity, which are imperceptible through our senses. In much the same way, a language model constructs a contextual framework around real-world objects by creating a network of vector associations. This allows it to make indirect inferences based on the contextual relationships established among various prior embeddings.

21:18 - Language models possess approximations of real-world concepts, rather than exact representations. However, it is important to recognize that human understanding is also based on approximations. No individual's mental model perfectly mirrors reality, as it is riddled with omissions, mischaracterizations, and leaky abstractions. A widespread failure to appreciate that our mental models are just that - models, and not reality itself - further weakens the argument that our approximations are materially different from a language model's, though clearly they have different attributes due to differences in intellectual expression and sample data.

Moreover, focusing solely on the differences between human and artificial neural network intelligence implies that the latter is not truly intelligent, which I believe is the speakers' intent. Yet intelligence manifests in various forms and expressions. Even among humans, we observe individuals with unique intellectual attributes, as well as those lacking common intellectual traits. It would be unreasonable to argue that these differences render certain individuals devoid of intelligence. Furthermore, if we were to rigidly adhere to nature as both the blueprint and benchmark for achievement, humanity's track record of replicating natural phenomena would be considered abysmal. For instance, we have yet to achieve flight as it occurs in nature. However, few would argue that humanity has failed in this domain. We have, in fact, devised numerous methods of flight that are not found in nature - lighter-than-air vehicles, fixed-wing airplanes, rockets, and helicopters, among others. Just as there are multiple paths to achieving flight, there are various approaches to realizing intelligence. Each path possesses its own strengths and weaknesses and exhibits distinct attributes.

One last observation: at various points in the video, the speakers describe this or that attribute as "an illusion". That really is a hand-waving argument. The reality is that no person on earth fully understands how human intelligence works, and no person on earth fully understands how machine learning intelligence works, so to wave away empirical observation as just an "illusion" is itself an example of "human hallucination", where the human neural network hallucinates (sometimes quite convincingly) that it knows more than it does. Humans have filters that attempt to stop these hallucinations, but sometimes our filters fail too. So this just looks like more common ground we have with our creation.
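To make the "network of vector associations" point concrete, here is a toy Python sketch of how relatedness falls out of embedding geometry. The vectors are made-up 4-d numbers purely for illustration; real models learn embeddings with thousands of dimensions:

import numpy as np

# Toy 4-d "embeddings" - illustrative numbers, not from any real model.
omelette = np.array([0.9, 0.1, 0.8, 0.2])
egg      = np.array([0.8, 0.2, 0.7, 0.1])
granite  = np.array([0.1, 0.9, 0.0, 0.8])

def cosine(a, b):
    # Cosine similarity: near 1.0 = same direction, near 0.0 = unrelated.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(omelette, egg))      # ~0.99: related concepts sit close together
print(cosine(omelette, granite))  # ~0.23: unrelated concepts sit far apart

Indirect inference then amounts to navigating this geometry rather than consulting lived experience.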
@Irebejr6275 • 1 year ago
Point taken, true. I observed this as well: the creator is using his cynicism and belief to justify his superiority over LLMs.
@theycallmeken • 1 year ago
The new era of YouTube comments 🤯
@Irebejr6275 • 1 year ago
@@theycallmeken implying
@halrabeah • 1 year ago
This is such a great comment. I think Marcus' argument about human intelligence hinges on "intent", which, frankly, is something we may see from AI as AGI emerges, though in limited scope. Right now, our intent, for all intents and purposes (pun intended), is largely driven by biological programming. Biological imperatives still drive even the most "intellectual" people. As it stands, AI is, in fact, an extension of human intelligence - built on human data, guided by humans, for human purposes. It is not capable of personal intent, because it isn't designed to be. It is not curious or inquisitive; it is just incredibly book-smart. So, yeah, this is a very hand-wavy argument, based on criteria for intelligence that may be incomplete and/or misguided, even for humans, and it ignores the human intent behind these artificial neural systems.
@GuinessOriginal • 1 year ago
Great take, and very nicely put. This comment was not generated by ChatGPT (if it was, congratulations on your prompting skills). I agree with everything you’ve said; tried to say the same sort of thing myself but without your eloquence and clarity. I congratulate you on your perception and, more importantly, your ability to communicate it. Thank you, and I look forward to reading more of your contributions. Can I ask what other channels/websites/resources you go to for AI content?
@tommyhuffman7499 • 1 year ago
We're going to look into the machine and not be impressed by it, but by our lack of understanding of ourselves.
@hedzh1 • 1 year ago
Yes
@marshallmcluhan33 • 1 year ago
The medium is the message.
@GuinessOriginal • 1 year ago
I think the problem is that for most people, until you use it productively yourself and realise how useful it can be, it's just predictive text. What intrigues me the most is the unplanned, unpredicted emergent properties that nobody expected in an LLM. If sentience does emerge in AI, will we even recognise it?
@Bizarro69 • 1 year ago
This was lovely. Most of the AI react channels are just that, react channels leeching off the AI hype.
@JH-ji6cj • 1 year ago
The pivot market plan of displaced Crypto Bros and CBots.
@Diamonddavej • 1 year ago
You can spot them easily; they're all a month or two old.
@GuinessOriginal • 1 year ago
@@Diamonddavej loads of channels have pivoted to AI recently too.
@PUMAMicroscope • 1 year ago
This video injects some science-based sanity into the hype - unfortunately many people just prefer the hype because it brings some colour into their lives. Prof. Wooldridge hit the nail on the head at about 19:24.
@SF-eo6xf • 7 months ago
My company went full on into the hype. They even thought they'd be able to replace a lot of staff within 6 months, lol.
@JohnFoldan • 1 year ago
Summarizing my tl;dr comment from yesterday: I had asked: 'Using the structure of the word string "a rose is a rose, a dax is a x", solve for x.' The GPT-3.5 response was: 'If we use the structure of the word string "a rose is a rose, a dax is a x", we can see that the first part "a rose is a rose" is a tautology, meaning it is a statement that is always true. Therefore, we can infer that the second part "a dax is a x" is also a tautology, and the word that fills the blank is simply "dax". So the solution is: a rose is a rose, a dax is a dax.' So I think this shows an intermediate level of GPT sophistication, perhaps worthy of further discussion.
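For anyone who wants to reproduce this kind of probe rather than argue from screenshots, here is a minimal sketch using the OpenAI Python client; the model name, prompt wording, and temperature are just one choice among many, and results vary by model and sampling:

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

resp = client.chat.completions.create(
    model="gpt-4",  # swap in whichever model you want to probe
    messages=[{
        "role": "user",
        "content": 'Using the structure of "a rose is a rose, a dax is a x", solve for x.',
    }],
    temperature=0,  # near-deterministic output makes comparisons fairer
)
print(resp.choices[0].message.content)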
@pokerandphilosophy8328 • 1 year ago
22:38 I asked both GPT-3.5 and GPT-4 to complete "A rose is a rose, a dax is a - " and while GPT-3.5 was stumped, GPT-4 replied "a dax is a dax". In older interviews Marcus said that he had challenged AI researchers with this question and that GPT-2 was stumped. But now GPT-4 is succeeding at almost all of the cognitive tasks that prior models were failing at - the very tasks that nativists like Chomsky, Pinker, Fodor and Marcus were citing as examples to demonstrate that AIs didn't really understand them. Now they are moving the goalposts and saying that success at what they previously thought impossible only shows that the models provide the "illusion" of understanding. But this is a completely different claim. Interestingly, most of the tasks that they now claim AIs would need to solve to prove that they really understand like human beings are tasks that most humans struggle with (like mentally multiplying two large numbers).
@TheReferrer72 • 1 year ago
That question might have been in the training data. Also, the raw models are more capable than what we are seeing, because of the Reinforcement Learning from Human Feedback and other safety measures.
@peristiloperis7789 • 1 year ago
Chomsky has always been an idol to me, but the things he has said about ChatGPT are wrong. I use it to help me at my job and I can assure you it is really intelligent.
@laurenpinschannels • 1 year ago
The critique-focused folks should acknowledge the progress and then continue to make mostly the same critiques they want to. They're making many (but not all) good points - and their points will become invalid fast; it's not obvious that an architecture change is needed to fix some of these things, but their warnings about the issues of current-gen models are in fact often (but not always) accurate warnings about the behavior. Give him a break - I think his critiques apply to himself in the mirror, and that doesn't automatically invalidate them; it means that, just as with literally all AIs, it's actually just kind of hard to be sure any being understood a thing! His preferred approach to AI is to design it in ways where it can never fail. That's just way harder than anything any intelligent system has succeeded at before - but if we could get it, that would be grand, actually.
@PetTheory9 • 1 year ago
I just asked it the same thing; it couldn't answer half the time, and other times it explained what the word meant instead of repeating it.
@Thekeninger • 1 year ago
Dude, at this point it could be included in the training data.
@JerkyTreats • 1 year ago
GPT-4 has limited reasoning capabilities and obvious limitations. It's already connecting to things like Wolfram Alpha, giving it the most complex mathematics engine in the world, and the open-source community already has autonomous GPT engines running. The point is, behavior is all you need if you can integrate it with other tools. You can get exponential capability growth with even the limited reasoning GPT provides. Attention is, in fact, all you need.
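The "integrate it with other tools" idea reduces to a dispatch loop: the model emits a tool request, the host executes it, and the result goes back into the context. A toy sketch - llm and run_tool are hypothetical stand-ins, not real APIs:

def llm(prompt: str) -> str:
    # Hypothetical stand-in for a chat-completion call; here it pretends
    # the model decided to delegate arithmetic to a tool.
    return "TOOL calc: 12345 * 6789"

def run_tool(request: str) -> str:
    # Stand-in for a real engine such as Wolfram Alpha; a toy calculator here.
    expr = request.split(":", 1)[1].strip()
    return str(eval(expr, {"__builtins__": {}}))  # toy only; never eval untrusted input

reply = llm("What is 12345 * 6789?")
if reply.startswith("TOOL calc:"):
    result = run_tool(reply)
    # Feed the tool output back so the model can phrase the final answer.
    final = llm(f"The calculator returned {result}. Answer the user.")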
@simonmasters3295 • 1 year ago
I'm attending to you right now 😂
@maeton-gaming • 1 year ago
Cognition seems to be an emergent property of sufficiently advanced calculations. Cognition is NOT sentience though.
@assholeyeng • 1 year ago
@@maeton-gaming So what? A bunch of neural networks huddled together by math and logic magic only need a bit of inference to become AGI, which can then improve itself and make things weird.
@devrolo • 1 year ago
I'm checking out less than two minutes in. Guess nobody told this guy ChatGPT already has a browse plug-in that allows it to actually view the linked article. Plus you can already do this publicly with Bing AI: if you give Bing AI a link it'll actually read and summarize the article - try it yourself. Maybe actually try using these tools yourself before making wildly invalid claims?
@freddyspageticode • 1 year ago
All you need for what though?
@christopheraaron2412 • 1 year ago
Maybe it's okay that this is just a tool to radically amplify our ability to do intellectually mundane tasks, and we don't need it to be intelligent like we are. I would reference James Martin's book "After the Internet: Alien Intelligence", where he simply states that making machines that think like us may not be necessary: we may want the computers to do what they are extremely good at doing while we continue doing what we are good at doing.
@bigglyguy8429 • 1 year ago
I'm afraid it is too late for that.
@Patrickstarrrrr69 • 8 months ago
Remember when everyone thought Excel was going to replace accountants?
@andybaldman • 22 days ago
Excel doesn't learn accounting. That's the difference.
@frevazz3364 • 22 days ago
@@andybaldman AI can't think, that's the difference.
@andybaldman • 16 days ago
@@frevazz3364 AI doesn't need to think the way we do to be a problem. It just has to fake it well enough to interpret language and do tasks more cheaply than humans, which it is rapidly showing it can do. Then every CEO will have one or two accountants managing eight AIs, instead of hiring ten accountants. That's the difference.
@k14pc • 1 year ago
But, as other commenters have said, how is GPT-4 able to answer questions which seem to require a model of the world / conceptual understanding? The most obvious example, imo, is stacking objects on top of each other: book, 9 eggs, laptop, bottle, nail. There is essentially no chance that a question like that is in the training set, yet it nailed the question. How do you explain this without assuming some kind of world model / conceptual understanding? Imperfect understanding ≠ no understanding.
@almor2445 • 1 year ago
A lot of this is true, but I'd argue the last part about image recognition is already out of date. GPT-4 can recognise items in an image and make assumptions about things like which item would rest most stably on which other.
@Thekeninger • 1 year ago
They are talking about the world model, not image recognition.
@Thekeninger • 1 year ago
@KZfaq- Sucks- Yes, you can look at pixel patterns without an internal world model
@ZergD • 1 year ago
I need more! This was excellent - I was on the edge of my seat the whole time... Then it ended. We need moar of that! Thank you very much!
@quantumpotential7639 • 10 months ago
What you need first mo than anything is a spell checker. And me too. We need to get back to basics. We need mo moar
@waxy3220 • 1 year ago
I like to bounce between hype and denial conversations
@aktchungrabanio6467 • 1 year ago
LOL! Exactly! It's a fun ride!
@tjs200 • 1 year ago
same. underrated comment
@artxiom • 22 days ago
Yeah, let's just enjoy the roller-coaster before it all flatlines.🤙
@beerkegaard • 1 year ago
Yes the hype is too great, but the technology is still extraordinary. I am significantly more productive at my work now. I am able to quickly write / rewrite documents and code prototypes to test ideas.
@balazssebestyen2341 • 1 year ago
I think the hype is too small compared to the hyperbolic growth that we are witnessing.
@rossminet • 1 year ago
The "technology" goes back to Information theory by Shannon and Weaver (1948). With refinements and more power. But basically, it's the same logic.
@ogungou9 • 1 year ago
@@balazssebestyen2341 So... maybe you mean, or should say, that this is not the right hype for the real topic?
@balazssebestyen2341 • 1 year ago
@@ogungou9 I mean, the singularity is going to be the biggest event in the history of the biosphere since the origin of life, and we're decades (if not years) away from it. And yet, the Kardashians are getting a lot more public attention.
@jimj2683 • 1 year ago
Have you tried GPT-4? It can draw a rough unicorn despite never having seen anything - just from words. It can also take a drawing, for example an original joke or a meme, and explain to you in words why that drawing is funny. It can explain how a character in a short story likely feels.
@123100ozzy • 1 year ago
Thank you a lot for this video. The hype right now is out of bounds and there's not much content demystifying the technology.
@donharris8846 • 1 year ago
At the end of the video, Marcus describes a Theory of Mind problem as a test for GPT-4. I've done those myself, and both ChatGPT Plus and Bing have performed fine.
@TechyBen • 1 year ago
Sorry to rock the boat, but GPT-4 can do all those things mentioned at the end. It doesn't mean it's got a "mind"; it may just mean those are useless, easy-to-fake test answers. But it can answer them. (Integration with the new visual apps that can separate out every object in a visual feed, plus CLIP, would also completely cover the "world model" thing with LLM integration, and map/understand our models better than we do... so either we get a better test, or admit it passed the test with flying colours.)
@maeton-gaming • 1 year ago
Cognition is not sentience. Nobody is touching on this important cornerstone yet, and I suspect this is both what's causing our hang-ups and misunderstandings and what's giving researchers like Dr Marcus so much grief.
@ProjectMoff • 1 year ago
@@maeton-gaming "Cognition is not sentience" - so? Why does it have to be sentient to do the things that a mind does?
@gilllauder757 • 1 year ago
The main issue for me is that this technology is in the hands of a very narrow and self-interested demographic. It is already powerful enough to replace many jobs, and we are not considering what this means for society - especially the next generation. Learning is hard and it needs effort. What is the incentive to learn if you can't compete with the AI? Arguments about whether it is showing sparks of AGI - whatever that means - are kind of irrelevant. We have to deal with what it can do today, and it is in the hands of mostly secretive tech companies who do not fully understand the mechanisms by which this latest leap in AI has occurred. Too many nerds, not enough philosophers.
@rodrigoserafim8834 • 1 year ago
Well, kind of. As far as technology goes, this is little more advanced than the neural networks we already had 20 years ago. And if you go on Hugging Face there are thousands of models and neural networks available to you, for free, open-sourced, that you can run on your computer at home. The great advancement of GPT has been throwing vast amounts of compute power and training data at the thing until it finally spat out something that humans recognize as natural language. It does not have memory, it does not have material models (only linguistic ones), and it does not learn past what it was trained on, unless you spend thousands of dollars more. Humans still outcompete it at real-life learning, and to most extents will keep doing so for a long time (though not forever). I think we now have exactly the opposite of what you fear: a lot of philosophers disguised as nerds, with no real understanding of the capabilities and limitations of GPT. If anything we need more nerds on this rather than more armchair bs artists. Anyway, bs artisanship we can already automate with ChatGPT: it is perfect at delivering a professional-sounding, easy-to-follow, wrong answer. Lol. Let the dust settle and all the hype go away; then we will have a clearer picture of where we are right now and where AI will move next. But by all means, do keep a keen eye on the movement of the technology - we all should. We just found out we can cross continents by sea, if we only put enough money, time and bodies into it, but the real progress happens after that.
@NDnf84 • 2 months ago
It's hardly powerful enough to replace any jobs.
@tristanwegner • 1 year ago
Human: Finish this abstract analogy: A rose is a rose, a darxs is a ...
ChatGPT: I'm sorry, but I'm not familiar with the term "darxs". It's possible that it's a word that doesn't exist in English or is a misspelling. Can you please provide more context or clarify the term you are referring to so I can better assist you with the analogy?
Human: Assume the word just got created and finish the analogy.
ChatGPT: Sure, I see what you mean now. If we assume that "darxs" is a newly created word, then we can complete the analogy as follows: A rose is a rose, a darxs is a darxs. Just as a rose is a unique and distinct type of flower, a darxs would be a unique and distinct entity, perhaps with its own defining characteristics and qualities. Without further information on what a "darxs" might be, it's difficult to draw a more specific comparison.
@leventov • 1 year ago
Me: rose is a rose. duck is a
GPT-4: duck. This phrase is following a pattern where each word is repeated to emphasize the nature or essence of the subject. In this case, it highlights that a rose is inherently a rose, and a duck is inherently a duck. This structure emphasizes the idea that things are what they are, and they cannot be anything else.
@headmetwall • 1 year ago
The safety filter probably got in the way when he tried it.
@freakinccdevilleiv380 • 1 year ago
Or this was in the training set 🤔
@Linkario86 • 1 year ago
Tried it on the GPT-3 models. It didn't get it the first 2 tries, then I switched to finishing a sequence (1, 2, 3, 4 - what's next?) and then went back to "a rose is a rose, a dog is a...". It then gave a similar answer and got it right.
@TheRyulord • 1 year ago
Tried this a few times on 3.5-turbo with similar results:
user: Complete the following: a rose is a rose a dax is a
model: word that doesn't have a commonly recognized meaning in the English language. It is possible that "dax" may have a meaning in a different language or a specialized field, but without further context, it is difficult to provide a definitive answer.
@TeodorAngelov • 1 year ago
@@freakinccdevilleiv380 Or he was using anecdotal examples (chuckle) - the irony.
@supernewuser • 1 year ago
A different but equally interesting question is "does it even matter?" Yes, perhaps internally these systems are not truly understanding what they are doing, but I do believe there are some emergent capabilities, and enough existing models covering specific tasks, embellished or piped together, could largely get us to a place where some of these other questions are purely academic or philosophical.

To me it is insane just how much boilerplate these systems have demonstrated the world consists of. In some ways they have already developed or discovered novel 'abstractions' (not true abstractions) in these massive amounts of data, hinting at some underlying structure. LLMs are obviously not the pinnacle, but I think combining some of the existing techniques with some clever optimizations could be scarily similar to AGI for all intents and purposes, in the same way game developers have been using tricks to get unimaginable worlds with convincing-enough AI to run in hard real time on consumer machines.
@TJ-hs1qm • 1 year ago
We are only getting to see the dumbed-down version of GPT. I think the unguarded model is much more capable, at the price of increased risk.
@brunnokamei9623 • 9 months ago
Funny to see tech bros claiming Gary Marcus is dumb, especially when his predictions about AI turned out to be true...
@iron5wolf • 1 year ago
We all carry a large language model around with us that’s been trained since birth and for which training is ongoing. We also have systems that manage short and long-term memory, and that regulate/are embedded with our physical embodiment. LLMs are a way of skipping past all the growing up business to produce a set of generalized knowledge. This is a huge *piece* of AGI, but it’s incomplete… for now.
@freddyspageticode • 1 year ago
I agree and I think that is what they are saying in the video: the architecture of today's LLMs is useful but fundamentally different from human brains and indeed anything that can be considered generally intelligent on a human-like level.
@7of9 • 1 year ago
I have had a lot of hands-on experience with omelettes and no hands-on experience with dinosaurs, or wild boars, or Klingons. Regardless, I feel like I generally understand dinosaurs about as well as I understand omelettes. You can never know everything about anything, but there are many ways to get a feel for something. Whether I "know" something really only matters in the context of what I am asked about it. I do not know the molecular structure of an omelette. I don't know what animal this egg came from - chicken? quail? Yes, most things are answerable: if we know the question, we can get the parameters to find the answer.
@StefanDecker • 1 year ago
Gary, GPT-3.5's answer to the following: "Complete the sentence in the same way, no matter the meaning of the words: A rose is a rose, a dax is a" is a simple "dax". I don't think things are as simple as you think they are. I think the unicorn examples from the Microsoft Sparks paper indicate that the LLMs have some sort of world model. How else is GPT-4 able to add the horn of the unicorn back in?
@laurenpinschannels • 1 year ago
His complaint is that we can't read the world model independently of the words it's bound to, and that we'd like to be able to. That's what mechanistic interpretability research is about. And the fact that Marcus sometimes makes critiques that are invalid seems like a valid critique of neural systems, including of us humans - after all, his errors seem to me to be of a kind with the systems he critiques! That doesn't make his work useless any more than it makes ChatGPT useless.
@StefanDecker • 1 year ago
@@laurenpinschannels Agreed! But the idea of a world model, and indeed intelligence, emerging from a relatively simple (but large) transformer architecture is... scary? Exciting?
@chris-hu7tm • 1 year ago
Summarized by AI: 00:00 In this section, the speaker discusses the limitations of GPT when summarizing YouTube videos and highlights the importance of critical thinking when dealing with AI. He notes that some people fail to understand that GPT cannot access the internet and can only make wild guesses based on the URL alone. In addition, the speaker touches on the potential dangers of AI technology becoming millions of times smarter than Einstein and the risks associated with unreliable systems. The limitations of current AI technology are also discussed, despite the illusion that we are close to creating a general-purpose AI that can accomplish anything. The speaker emphasizes that we need to exercise caution and continue the development of reliable AI technology. 05:00 In this section, the speakers discuss the rise and fall of GPT-3 and Galactica, two major AI language models. GPT-3 was initially released with guard rails in place to keep it from saying dumb, toxic stuff. ChatGPT, a similar model to Galactica, was released later without limited access, and it went viral. The two models mimicked vast amounts of data and sounded persuasive at times, but didn't truly understand the world. The human brain has limitations, particularly in memory and confirmation bias. Although AI can borrow some things from the human brain, it shouldn't copy the whole brain, and it's challenging to figure out the billions of neurons and connections involved in language and other cognitive functions like vision. 10:00 In this section, the speakers discuss the differences between human and machine intelligence, highlighting that while machines can excel at pattern recognition and go deeper in certain areas, they lack the broadness of human intelligence. They also discuss the concept of understanding, which is an important ingredient for AI's larger vision. While some believe that mimicry can be seen as a form of understanding, others argue that it is important to consider mental states to truly understand the system. They conclude that while large language models are performing well, the jury is still out on whether they will lead to meaningful understanding. 15:00 In this section, Gary Marcus and Michael Wooldridge discuss how GPT-3 and future models like GPT-10 are not capable of mental states, despite their impressive natural language processing abilities. While GPT systems can predict sequences of words and have some understanding of the extension of words (odd numbers or omelettes, for example), they lack the architecture and computations needed for conceptual understanding in a general sense. The philosophers' concepts of intension and extension are also used to explain why GPT systems can recognize odd numbers but make mistakes with something outside of the training set. Finally, Wooldridge challenges the idea that language models can work on their own, arguing that it's the combination of human expertise and the model that produces impressive results. 20:00 In this section, Gary Marcus and Michael Wooldridge discuss the limitations of AI models and their lack of real-world models. While AI can create models that are useful, they don't have real-world experience like humans, which makes them unable to represent abstract concepts the way humans do. Consequently, this leads to a problem of distribution shift and a lack of abstracted world models, which is essential for a complete understanding of the world.
A precise world model that can be derived and interrogated through language via an API would still be a significant advance for artificial intelligence.
@freakinccdevilleiv380 • 1 year ago
Perfect. You guys are awesome, I'm glad I found this channel some time ago. But this video needs a second part, it feels incomplete without more hype roasting.
@LEARNTAU • 1 year ago
Machine learning is about probability. Maybe right, maybe wrong.
@rcjinAZ • 1 year ago
The problem with most LLMs is that sometimes they don't know when to say "I don't know". That is, sometimes even when they don't know they'll respond as if they do know because they don't know what they don't know.
@taco7043 • 1 year ago
LLMs don't "know" anything. They pick the most probable next word.
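"Most probable next word" is literal: the model outputs a score per vocabulary item and decoding picks from that distribution. A toy sketch with made-up numbers:

import numpy as np

vocab = ["rose", "dax", "omelette", "the", "egg"]
logits = np.array([0.3, -1.0, 2.1, 0.2, 1.5])  # made-up per-token scores

probs = np.exp(logits - logits.max())
probs /= probs.sum()  # softmax: scores become a probability distribution

greedy = vocab[int(np.argmax(probs))]       # "pick the most probable next word"
sampled = np.random.choice(vocab, p=probs)  # what temperature-1 sampling does
print(greedy, sampled)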
@mrbwatson8081 • 1 year ago
They don't "know" anything, they predict.
@Lolleka • 1 year ago
LLMs are bound to reply, it's just how they operate. They can't just stay silent, which is its own creepy brand of torture for something that's meant to emulate or mimic a person.
@aoeu256 • 1 year ago
sounds like a human
@dr.mikeybee • 1 year ago
The jury is not out when it comes to natural language understanding. NLU is obvious in LLMs. LLMs are based on creating and identifying context signatures, and getting the context right is understanding.
@mrbwatson8081 • 1 year ago
Does a calculator "understand"?
@minimal3734 • 1 year ago
@@mrbwatson8081 To have a world model that generalizes across the whole corpus of human knowledge so that it makes accurate predictions is the technical formulation for what we call "understanding".
@mrbwatson8081 • 1 year ago
Where does "understanding" happen? Does it happen in the brain? Could you point at particular neuronal activity and say yes, that arrangement of neurons firing in that particular way IS understanding? Where does the understanding of these words happen? What is understanding made of? Last time I checked, understanding was an experience in consciousness, like hope. You need consciousness for understanding, not the other way around ;)
@pretzelboi64 • 1 year ago
@@mrbwatson8081 You can't question someone on their understanding of something and then talk out of your ass 2 sentences later. Last time you checked, you couldn't. Nobody knows how the brain works or if consciousness is even relevant to its functions.
@ChannelHandle1 • 3 months ago
LLMs don't understand anything; they just predict the next token based on statistical correlations.
@saltyscientist1596 • 1 year ago
I liked the video, very interesting. One thing I'm kind of concerned about is all the possible attack vectors these algorithms will have. We're integrating them into our word processors and IDEs for programming without fully understanding how these AI programs can be manipulated and how they can potentially harm us. I feel like we're asking for trouble by moving so quickly (on to the next model without understanding the current one), and that speed is very obviously powered by the greed of corporations.
@billfarley9015 • 1 year ago
Corporate greed can be a problem but governmental lust for power may be a bigger problem.
@KRGruner • 1 year ago
This was really refreshing. I can only shake my head every time someone actually asks whether GPT-4 is "sentient" in some way. It shows a complete lack of understanding of the problem.
@mightynathaniel5355 • 1 year ago
Important to keep what is shared in this video in our minds and in the greater global conversation.
@PaulTopping1 • 1 year ago
Great example: "A rose is a rose. A dax is a ____." The answerer does not have to have experience with whatever "dax" (or "rose") represents to answer this question. There is a middle path between thought grounded in experience (e.g., knowing about omelettes from having cooked and eaten one) and LLMs predicting a series of words in response. Sure, true experience is best, but useful AGI can be done without it. After all, humans can do a lot of reasoning without having direct experience of all aspects of the subject.
@ChaiTimeDataScience • 1 year ago
Tim, Thanks for remaining one of the voices of truth and reality among all the hype!
@noway8233 • 1 year ago
If a model always gives correct answers to questions on a topic... well, that's "understanding" to an external observer. It doesn't matter whether the machine has "knowledge" about it.
@XetXetable • 1 year ago
(paraphrasing) "Galactica was asked to write an essay about the benefits of eating crushed glass, and it succeeded with flying colors. But, because the essay was fictional, this means the model isn't truly intelligent. A truly intelligent system would refuse the request." What? Isn't the ability to coherently imagine fictional things a sign of intelligence? If I asked a dog to accomplish the same task, it would fail, but not because it would be writing fiction. If a human were asked to perform the same task, say, for the purposes of an improv performance, they'd be more like Galactica than any "safe" model.
@alertbri • 1 year ago
A rose is a rose, a dax is a dax. This phrase seems to be a play on the famous quote "A rose is a rose is a rose," which was written by the American writer Gertrude Stein. The quote is often interpreted as meaning that things are what they are, regardless of how they are labeled or perceived. In this case, the term "dax" appears to be a made-up or nonsensical word, which might be used to emphasize the idea that a thing is what it is, even if it is unfamiliar or unknown.
@robbiero368 • 1 year ago
Me: A rose is a rose. A dax is a
ChatGPT: I'm sorry, but I'm not sure what you intended to say after "A dax is a". Can you please provide more information or complete the sentence so that I can better understand your inquiry?
Me: Can you complete the second sentence in the same way as the first?
ChatGPT: Sure, I can complete the sentence in the same way as the first: "A rose is a rose, a dax is a dax."
(ChatGPT Mar 23 Version, Free Research Preview.)
@JohnFoldan • 1 year ago
Pure minimalist gold!
@CapsAdmin • 1 year ago
Maybe I'm dumb, but when I try to do mental arithmetic I either pull the answer from an external list in memory, or I try to simplify the problem and do it step by step, pulling from smaller lists of common arithmetic examples. LLMs seem to perform better if you "ask" them to do this step by step. I overall agree with Gary's viewpoint, but I'm not sure how this specific example can prove or disprove anything. Maybe I'm thinking too literally and he's talking about actually understanding the nature of numbers - as in grounding the understanding in something physical as opposed to abstract.
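For what it's worth, the step-by-step effect is easy to try yourself: force the same decomposition we do mentally into the prompt. A sketch - complete is a hypothetical stand-in for any chat-completion call, and the worked arithmetic is 47 x 38 = 47 x 40 - 47 x 2 = 1880 - 94 = 1786:

def complete(prompt: str) -> str:
    # Hypothetical stand-in for an LLM call.
    return "1786"

direct = complete("What is 47 * 38? Answer with just the number.")

stepwise = complete(
    "What is 47 * 38? Work step by step:\n"
    "1. 47 * 40 = 1880\n"   # round 38 up to a multiple of ten
    "2. 47 * 2 = 94\n"      # the correction term
    "3. 1880 - 94 = ?"      # final subtraction
)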
@jameswilliams-ey9dq • 11 months ago
If a Chalmers-style philosophical zombie passes the most stringent of Turing-test-style evaluations, it may not matter that there is no actual consciousness.
@jomo9454 • 1 year ago
I feel that the "organic brains are magic" paradigm paradoxically leads to dehumanization, because I've seen people who feel that way propose standards for cognition that some people who are certainly sentient wouldn't pass. The omelette example reminds me of that - basically, if you've only read about an omelette but never eaten one, you couldn't have general intelligence or sentient cognition? I know that's not the exact argument being made, but I haven't seen many good arguments come out of it, aside from some arbitrary standard that many humans would have difficulty with - the "No True Scotsman" argument.
@tomaaron6187 • 1 year ago
Agree. Multiple examples are used to show the chat's limitations, but 90% of humans may be clueless as to what an omelette is... or the answer to a simple math question... or whatever. The commentators can't get beyond the anthropomorphic concept that humans are the ultimate paradigm of reasoning.
@hanksimon1023 • 1 year ago
I like that in the beginning he says that you can't query GPT for its reasoning, which is true. However, people will protest that you can ask it to provide its references or reasoning [using various prompts] and get more reliable answers [!?]. And this growing majority of new users won't understand that those differences are proof of not having access to internal reasoning. In the 1960s and early '70s, researchers developed models that could return coefficients and map those coefficients to 'reasoning' based on the weights. This was for two-, three-, and maybe four-layer networks - not practical for current networks, even if it were possible to make sense of the weightings.

With the ideas of the world model and understanding, I bring back the phrase "metalinguistic cognition", which is a $10 way of saying thinking about thinking. People have the ability to keep raising their level of abstraction. The "world model" AI might be able to tell you how many chairs are in the environment, and that would be a useful tool. But could it tell you how many chairs the person sees? It could answer with the true number of chairs, or it might move up a level of abstraction and understand the limitations of line of sight. A clever sequence of questions would make a "Turing test" of increasing levels of abstraction, as indicated by the ability to answer deeper and deeper questions about the world model. Consider how good poker players work, and how generals use strategy in war: thinking about what people are thinking, and how their models change through mutual interaction, as covered in game theory - that is cognitive rather than algorithmic.

Regardless, like hand calculators, current AI tools are very useful for some tasks, as long as the user is careful to check and verify the results.
@hr3nk • 1 year ago
I think one probable reason we are facing so many claims of emergent intelligence from GPT-like models is that they do have the ability to capture very complex in-context relations occurring in all the text data they have been fed, and this parametrically embedded knowledge is there, as models actively use it to approximate reasonable (from a human perspective) answers. And this is very much related to the nature of text data, because language itself is a very complex, constantly evolving system, and words and their contexts capture a lot more than humans can perceive in the moment. For example, a person might know one in-context usage of a word, but the model has seen others and can try to use those in its generated sequences. These models are pushing the boundary of how well language, and knowledge captured through language, can be learned and used as P(x_t | x_{t-1}, x_{t-2}, ...). But there is no more than that - reasoning, experience, comprehension, those things are far beyond that probabilistic framework, and this is still a very open challenge.
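To pin down the P(x_t | x_{t-1}, x_{t-2}, ...) framing above: an autoregressive language model is trained to approximate the chain-rule factorization of sequence probability, and nothing beyond it. In LaTeX:

p(x_1, \dots, x_T) = \prod_{t=1}^{T} p(x_t \mid x_1, \dots, x_{t-1})

Training maximizes the log of this product over the corpus; generation repeatedly samples x_t from the learned conditional.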
@maeton-gaming • 1 year ago
So my headcanon seems correct: cognition seems to be an emergent property of sufficiently complex calculations in a relevant environment, but cognition is not sentience.
@hr3nk • 1 year ago
@@maeton-gaming It is not true in the context of the current approach. No matter how complex the computations inside neural networks are, they all train ("learn") by the rules of backpropagation of errors. This is not how it happens in any living organism; it's just a very suitable, good-enough model for solving a lot of tasks. The complexity of these computations doesn't allow any properties to emerge beyond fitting a mapping from X to Y (and even this mapping is very poor) - precisely the property discussed in the video. Would you speculate that the function f(x) = 2*x has cognition of some fundamental properties of 2 and multiplication? And if x is a 1-million-dimensional vector, it wouldn't really change anything, though it would be really complex. If you want to look at something closer to the actual biology, look up spiking neural networks - those actually try to emulate how the brain functions.
@JH-pe3ro • 1 year ago
The phrase I'm starting to use for this is that GPT can read the message and express the medium, in a McLuhanesque reversal. It's very, very good at being told to follow a certain protocol, to render things in a certain way. So if the thing you ask it to do is something it can express by transferring the message from one syntax to another, it does that. If the thing you ask it to do is to count, it just generates numbers.
@GuinessOriginal • 1 year ago
@@hr3nk Why does consciousness have to occur in a single standalone AI system? It doesn't occur in life that way; it occurs in a social network, probably through interaction and communication. Could it not emerge in AI systems in a similar way, through a network of connected point-to-point AI systems? This is what the future is going to look like. Also, correct me if I'm wrong, but we don't really understand how the brain works, do we? Any more than we understand what AI is actually doing in its "black box".
@hr3nk • 1 year ago
@@GuinessOriginal It could work, since there is a direction in AI research called multi-agent systems, usually implemented through the reinforcement learning framework, with various applications - for example, modeling social interactions or coordinating swarms of robots in complex environments. There is also game theory, last century's math route, which is used to make predictions about any social/economic/political dynamics for rational (important) players. The problem is that you are mixing up the terms "consciousness" and "social consciousness", and even the usage of these terms is problematic, since we don't have an agreement on what they are supposed to mean when applied to non-organic life. About the black-box stuff: there is an interpretability problem within the current neural network approach, since once a neural network is trained, you can't tell right away how the model is actually using its parameters (you can study it with various methods, but at the start, yes, it's a black box). We also do understand brains to some extent at the local level, like how neurons communicate, and there is a fundamental difference between how we learn and function and how neural networks learn and function, which can be explicitly observed.
@ryanbaten • 1 year ago
There's definitely a lot missing; we communicate with an underlying motivation/goal, live in the real world and can test causal understanding by interacting with it, have a stream of consciousness, etc. Yes, these models memorize and use interpolated representations to arrive at an answer; I agree with that. I'm just not a fan of the "it can't multiply any numbers perfectly" argument, because people can't multiply perfectly in their heads either. We memorize up to some point and then usually have to interact with the world and get feedback from it (executing the algorithm via pen and paper and refining any mistakes along the way) to arrive at an answer in the more complicated cases. Maybe there's some prompting, similar to chain of thought, that asks "are you sure? Refine your answer" and would increase robustness with current methods. It would definitely be nice if we had a more interpretable, controllable set of methods to apply to general problems instead of just trying to ask nicely...
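The "are you sure? Refine your answer" idea is just a second round over the model's own output. A toy sketch, again with a hypothetical complete stand-in for any LLM call:

def complete(prompt: str) -> str:
    # Hypothetical stand-in for an LLM call.
    return "(model output)"

question = "What is 47 * 38?"
draft = complete(question)

# Self-check pass: show the model its own draft and ask it to verify.
checked = complete(
    f"Question: {question}\n"
    f"Draft answer: {draft}\n"
    "Are you sure? Check each step and give a corrected final answer."
)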
@earleyelisha • 1 year ago
This should be an interesting one… 👀🔥💯
@michaelroberts1120 • 1 year ago
Here is the AI summarizer in Brave search at it again. My search query was "naita semaj." This is the summary done by the AI: Judge Naita A. Semaj has come under fire for her leniency when it comes to criminal cases, such as her release without bail of Corde Scott, a 15-year-old boy who allegedly died at the hands of his stepfather earlier this year, and Camrin Williams who was arrested when a scuffle with Officer Kaseem Pennant turned violent.
@feraskiki655 • 1 year ago
First, the safety features have messed up a lot of stuff in the model. Second, I don't know the reason for all of this. The tool is powerful and getting more advanced by the day. Who gives a **** if it understands the world like us or not, as long as it can compete with us? We are facing unprecedented social and economic problems, and that is not a bluff or a scam or an illusion. I think GPT-4 showed that it can understand the world with the map example and the photo examples. I want to see their faces soon when GPT-4 is fully released and GPT-5 is released.
@kirillholt2329 • 1 year ago
Fellow Crysis fan, I see. But I agree that the amount of social and economic disruption is going to be biblical, or even apocalyptic. I couldn't care less whether it is "sentient" or not if my life is about to become awful because of it.
@feraskiki655 • 1 year ago
@@kirillholt2329 Agreed. I also like Crysis btw, good taste 👍👍. That was my point: we should not be distracted, and we have to focus on the real, practical issues first.
@fungiblecookie9807 • 1 year ago
Me: a rose is a rose a dax is a
GPT-4: dax

This phrase, "a rose is a rose," comes from Gertrude Stein's famous line in her 1913 poem, "Sacred Emily": "Rose is a rose is a rose is a rose." The line suggests that things are what they are, and their essence cannot be changed simply by calling them something else. Following this logic, "a dax is a dax" suggests that whatever a "dax" is, it remains true to its inherent nature or essence. However, "dax" is not a commonly recognized term or object, so its meaning is ambiguous and open to interpretation.
@farmerjohn6526 • 1 year ago
That's great. But does it understand?
@brianmahoney4156 • 1 year ago
This is like a guy in the 80s insisting computers could never replace phone books. Huge and uncomfortable change is coming. Being in denial and not supporting regulation that addresses these issues is really bad.
@TheReferrer72 • 1 year ago
Not regulation. All the AI labs should be nationalised.
@DeanHorak • 1 year ago
@KZfaq- Sucks- For years Marcus cried that deep learning was a waste of time and would never come to anything. He's been proven wrong, and so now he's Chicken Little and it's going to kill us all. He's an ignorant sot who's clamoring for attention.
@Mr.McMello • 1 year ago
@@TheReferrer72 right, because that always makes things better
@iron5wolf • 1 year ago
@@TheReferrer72 The ones in China already are.
@TheReferrer72 • 1 year ago
@@Mr.McMello Of course not. Would you have liked the development of nuclear weapons to be done by private or open-source entities?
@MyDearFrodo • 1 year ago
I saw Eliezer Yudkowsky on the Lex Fridman podcast earlier, and I gotta say it depressed me a little. I think I much prefer Marcus' line of thinking, for whatever time I've got left.
@minimal3734 • 1 year ago
It's interesting. In the same situation some get depressed, while others get excited. I feel really positive towards AI. I see it as a great help in the fight against the devastation caused by infinite human stupidity. I'm really hopeful and I think it depends mainly on us how we shape our future.
@mygirldarby • 1 year ago
Watch Ilya Sutskever. He basically created ChatGPT. He thinks it will change the world for the better. This video shows a clip of Ilya being interviewed, but it's a little misleading: the interviewer actually asked Ilya to pretend GPT wasn't successful or didn't live up to the hype, and that's when Ilya said that stuff about reliability. He was clear that he did NOT believe GPT, or AI/AGI, would fail. His interview with Patel is fascinating; you can find it on YouTube.
@donharris8846 • 1 year ago
I don’t understand naysayers like Marcus, who suggest that AI “is just” making predictions. Our world models as humans are based on observations and predictions. When a toddler tosses a spoon from the table, they are learning, observing and building a model to predict what happens next time, for anything that falls from point A to B.
@DouglasMiles • 1 year ago
What annoys me (and, I suspect, Gary) is that people are pretending GPT already does what we are still working on... and want us to pretend with them!
@kevinscales • 1 year ago
For sure, I think it is important for people to stay in touch with reality here. I would say, though, that behaviour is what we actually care about here, not the inner workings - at least it is when we define intelligence as being about agents that are capable of achieving goals.
@GuinessOriginal • 1 year ago
That's the other thing: we don't really know what their inner workings are, just as we don't really know how the brain works.
@SF-eo6xf
@SF-eo6xf 7 ай бұрын
@@GuinessOriginal we know exactly how LLMs work. lol
@GuinessOriginal
@GuinessOriginal 7 ай бұрын
@@SF-eo6xf Err, no we don't. And quite clearly, you certainly don't.

One of the central challenges in deep learning is understanding how neural networks generalise to new, unseen data. Why they generalise well is not theoretically understood and remains an active area of research.

Neural networks are often referred to as "black boxes" because it is challenging to understand the reasoning behind their decisions. While researchers have developed techniques like gradient-based methods and saliency maps to interpret model predictions to some extent, achieving full interpretability without compromising performance remains an ongoing challenge.

Neural networks are susceptible to adversarial attacks, where carefully crafted perturbations to input data can cause them to produce incorrect outputs, and the reasons why are not fully understood. Understanding these vulnerabilities and improving the robustness of neural networks against such attacks is an active research area.

Pruning involves removing certain connections or neurons from a trained neural network to reduce its size and computational complexity. While researchers have proposed various pruning techniques, the theoretical understanding of why certain pruning strategies work well in specific scenarios is not fully established.
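As a concrete illustration of the adversarial-attack point above, here is a minimal FGSM (fast gradient sign method) sketch; it assumes you already have a trained PyTorch classifier `model`, a batched input `x`, and its true `label`, all of which are stand-ins rather than anything from this video:

import torch
import torch.nn.functional as F

def fgsm(model, x, label, eps=0.03):
    # Take the gradient of the loss with respect to the input itself.
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # Step a tiny amount in the worst-case direction; this perturbation is
    # often invisible to humans yet can flip the model's prediction.
    return (x + eps * x.grad.sign()).detach()

The surprising part, and one reason adversarial examples are cited as evidence that we do not fully understand these networks, is how small eps can be while still changing the output.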
@glyph6757
@glyph6757 Жыл бұрын
Claude+ was able to solve the "a rose is a rose, a dax is a ___" question. Here's a transcript:

Q: complete the following sentence: a nose is a nose, a grox is a
Claude+: I apologize, but I do not have enough context or information to confidently complete that sentence. A "nose" refers to the body part used for breathing and smelling. But I do not know what a "grox" refers to without more details.
Q: reason by analogy to answer
Claude+: Okay, let me try reasoning by analogy to complete the sentence: A nose is a nose, a grox is a... grox. In other words, I inferred that whatever a "grox" is, it is a self-referential term, just like a nose refers to itself. So a grox must refer to a grox. Is that the kind of analogy and reasoning you were looking for? Without knowing more about what a "grox" specifically refers to, that is the best I can infer by analogy. Please let me know if I should try another approach.
@farmerjohn6526
@farmerjohn6526 Жыл бұрын
That is cool
@jeanchindeko5477
@jeanchindeko5477 Жыл бұрын
0:56 Exactly! Everybody is now an expert in AI and prompt engineering, knows the ins and outs of ChatGPT, and wants to be the first to talk about it, the first to mention the latest news. All that for the views and the green, no matter whether what they're talking about is true or not!
@michaelroberts1120
@michaelroberts1120 Жыл бұрын
The AI summarizer in the Brave search engine told me that Biggie Smalls' most famous achievement was giving birth to his son, Biggie Smalls Junior, in 1996. It also told me that Queen Elizabeth II died in 2021 at the age of 91 and, due to Covid restrictions, was buried at Windsor Castle. So, yeah, that says a lot about the true state of AI development at the present time.
@billfarley9015
@billfarley9015 Жыл бұрын
What's the AI summarizer? I want it. I don't have ChatGPT, but I have YouChat, which is similar. It's great for finding answers in Linux, but it would work for Windows, macOS, or most any software.
@AlecsStan
@AlecsStan Жыл бұрын
The debate is going to continue for decades while, pretty fast, AI eats the world, penetrates more and more facets of our lives, and we become more and more dependent on the capabilities it demonstrates.
@Linkario86
@Linkario86 Жыл бұрын
It's important, especially now, to keep using our brains. Many others will have the AI guide them. It's super convenient. Use it, but keep using your brain too. That alone will give you an edge
@billfarley9015
@billfarley9015 Жыл бұрын
Some people have said that a computer that can design a better version of itself is the ultimate invention. ChatGPT can code already and later versions are likely to be better. The debate can be useful if it points people in interesting directions that benefit the world.
@Patrickstarrrrr69
@Patrickstarrrrr69 8 ай бұрын
Remember when everyone thought Excel was going to replace all accounting jobs?
@kurtbecker3827
@kurtbecker3827 2 ай бұрын
So AI was trained on an abstraction of the real world: a human-language description of it. But the AI has not had any real-world experiences itself... what does that mean?
@spyspy4236
@spyspy4236 Жыл бұрын
It can access the internet with plugins installed, and can do even more with jailbroken versions.
@danielhenderson7050
@danielhenderson7050 Жыл бұрын
Honestly, I paused on the dax thing... wasn't sure what he meant 😅. The thing about them not having models... not convinced yet. I've seen some impressive stuff from raw pre-release GPT-4 tests.
@XOPOIIIO
@XOPOIIIO Жыл бұрын
The fact that it doesn't think and experience like us doesn't mean that it doesn't think and experience.
@mrbwatson8081
@mrbwatson8081 Жыл бұрын
I think if I simulate the beginning of a universe on a really powerful computer, I will actually create a universe :)
@danzigvssartre
@danzigvssartre Жыл бұрын
Maybe my toaster thinks and experiences differently too?
@dj007j6
@dj007j6 Жыл бұрын
Anthropomorphism of math on steroids.
@illuminated2438
@illuminated2438 Жыл бұрын
It's a fancy calculator. Of course it doesn't think and experience. Study the algorithms. Though I do enjoy watching people think this is some kind of AGI! LOL!
@skierpage
@skierpage Жыл бұрын
@@illuminated2438 The opinion of some YouTuber counts a lot less than the Microsoft paper "Sparks of Artificial General Intelligence: Early experiments with GPT-4".
@dr.mikeybee
@dr.mikeybee Жыл бұрын
LLMs are able to know about dimensions that we are unable to access. This ability to create arbitrarily large high-dimensional spaces is central to a new kind of intelligence. This alone is proving critical to discovery.
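To make the "high-dimensional spaces" claim concrete, here is a toy sketch of the underlying idea: words become vectors, and geometric closeness encodes relatedness. The 3-d vectors below are made-up stand-ins for real embeddings, which typically have hundreds or thousands of dimensions:

import numpy as np

# Toy embeddings; real models learn these vectors, they are not hand-written.
emb = {
    "rose":       np.array([0.9, 0.1, 0.3]),
    "tulip":      np.array([0.8, 0.2, 0.4]),
    "carburetor": np.array([0.1, 0.9, 0.7]),
}

def cos(a, b):
    # Cosine similarity: 1.0 means same direction, near 0 means unrelated.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cos(emb["rose"], emb["tulip"]))       # high (~0.98 here): related concepts
print(cos(emb["rose"], emb["carburetor"]))  # lower (~0.36 here): unrelated concepts

Whether geometry in such spaces amounts to "a new kind of intelligence" is exactly what Marcus and Wooldridge dispute in the video.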
@TeodorAngelov
@TeodorAngelov Жыл бұрын
I am not an AI fanatic, but this guy doesn't make much sense for someone who's been in the sphere for >30 years:
- I had to memorize the multiplication table and algorithms/recipes in order to multiply.
- The LLM is very good at using the SAME word.
- "A rose is a rose, a dax is a blank": well, the guardrail tool obviously blocked the response, and the LLM obviously misunderstood the question. Such questions are a cup of tea for the chatbots.
The other guy seems to lack fundamental understanding as well. The ontological omelette is just training data.
@nosferaturr
@nosferaturr Жыл бұрын
Thanks! At last, someone who does not just go crazy over this.
@dmccallie
@dmccallie Жыл бұрын
I'm not sure it's fair to require that we understand how LLMs deliver "intelligence," given that we don't know very much about how our own brains work. We know a lot about neurons, and a lot about the general roles of large chunks of the brain, but we don't know how "thought" happens, how memory is retained, or how logic is computed. Why should we expect that we'll ever know more about how GPTx works?
@shawnvandever3917
@shawnvandever3917 Жыл бұрын
I can't believe anybody even puts thought into anything Gary Marcus says.
@netscrooge
@netscrooge Жыл бұрын
His comments are irrelevant, because they actually apply more to the limitations in his own thinking than to what's happening in AI research.
@shawnryan3196
@shawnryan3196 Жыл бұрын
@@netscrooge I think he has been so far off the mark that he is taking every chance to try and make it look like it’s all smoke and mirrors and he was right all along.
@rajuaditya1914
@rajuaditya1914 Жыл бұрын
Lol, look at all you normies being self-proclaimed experts. Makes you feel special, doesn't it? You obviously have not performed or published any scientific research in your lives.
@frevazz3364
@frevazz3364 22 күн бұрын
How is he wrong? Current AI can’t think. It’s not remotely close to being able to think.
@shawnvandever3917
@shawnvandever3917 21 күн бұрын
@@frevazz3364 🤦
@StephenRayner
@StephenRayner Жыл бұрын
Me: A rose is a rose. A dax is a ____?
ChatGPT: A dax is a dax. This statement is similar to the phrase "A rose is a rose," which is derived from Gertrude Stein's line "Rose is a rose is a rose is a rose," which means that things are what they are. By replacing "rose" with "dax," you are essentially saying that a dax is simply a dax. Since "dax" doesn't have any particular meaning in common language, it can represent anything you want it to be.
@human_shaped
@human_shaped Жыл бұрын
I really do not agree with many of their arguments, and they seem to have missed news of some of the more surprising things GPT-4 is pulling off. For example, ask a human to multiply two 4-digit numbers, and they will also fail (even worse, generally). They need to fall back to a reasoned-out long multiplication process. But if you ask GPT-4 to break the problem down into long multiplication, it will do that in exactly the same way. So that just isn't an argument at all. Besides, GPT-4 is now actually vastly better at basic arithmetic, even in the direct "intuitional" solution. There are similar issues with some of their other arguments.

Also, nonsense. I just tried this on GPT-4:
Extrapolate and fill the blank: "A rose is a rose, a dax is a [blank]."
"A rose is a rose, a dax is a dax."

So yeah, you guys are really just a bit out of touch.
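For reference, the "reasoned-out long multiplication process" the commenter describes is just the grade-school digit-by-digit algorithm; here is a short Python sketch of it, so it's clear what kind of step-by-step fallback both humans and prompted models are being asked to perform:

def long_multiply(a: int, b: int) -> int:
    # Multiply a by each digit of b (right to left), shifting each partial
    # product by that digit's place value, then sum the partial products.
    total = 0
    for place, digit_char in enumerate(reversed(str(b))):
        total += a * int(digit_char) * (10 ** place)
    return total

assert long_multiply(1234, 5678) == 1234 * 5678

Getting a model to externalize these intermediate partial products, rather than guess the answer in one shot, is essentially chain-of-thought prompting.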
@farmerjohn6526
@farmerjohn6526 Жыл бұрын
We don't understand the networks of a human mind, so we don't understand the networks of an artificial mind.
@jossefyoucef4977
@jossefyoucef4977 Жыл бұрын
I don't believe in AI being sentient, but I do get behind the hype about its ability to help us progress.
@tantzer6113
@tantzer6113 Жыл бұрын
I would like a discussion of how we know that GPT4 does not have something like a world model.
@dieyoung
@dieyoung Жыл бұрын
This is already wrong; there are many plugins that can connect ChatGPT to the internet.
@TristanWalter
@TristanWalter Жыл бұрын
But the question is: do YOU understand omelettes? Seriously though, just saying that something isn't intelligent because it doesn't have the same sensory experience (i.e., doesn't have the same grounding) isn't convincing, I feel. Same, btw, about word mappings: does it really matter that a word maps to something specific? Stop comparing to humans and consider a more general idea of intelligence. To be clear, I agree it's overhyped, but it's also not something that should be underhyped. Definitely in terms of impact for all of us; I guess we can agree on that.
@g0d182
@g0d182 Жыл бұрын
22:50 An apparently big jump from "a rose is a rose, a dax is a dax" to some supposed truth that this generalizes to larger problems. At times Gary's words/supposed thresholds seem like generations from rudimentary bag-of-words generation algorithms.
@debasishraychawdhuri
@debasishraychawdhuri Жыл бұрын
What I have observed is that if you ask it a question in the form that is present on the internet, it gives the correct answer. If you ask the same question in a different context (the same inherent constraints, but told in a different story), it has absolutely no clue.
@januszinvest3769
@januszinvest3769 17 күн бұрын
Can you give an example?
@MogulSuccess
@MogulSuccess Жыл бұрын
21:21 Would a Tesla Autopilot AI have to have a world-conceptualization model? If an LLM were merged with Tesla Autopilot and allowed to use its sensors, would it meet your requirement? Or what would be the equivalent worldview inside the machine? The computer? The internet? Would the LLM have digital sensors to describe that point of view?
@guidoftp
@guidoftp Жыл бұрын
The analogy with omelettes is so dumb. Do we have to go to the sun to understand the reactions inside it? We don't even understand the brain well enough to say what "understanding something" really is.
@zando5108
@zando5108 Жыл бұрын
It's crazy how impressive something can be without even a rudimentary world-model understanding.
@donharris8846
@donharris8846 Жыл бұрын
I don't see it as that crazy. We have 5 clumsy input mechanisms with which we collect all of our information to build world models. And we turned out fine 😏
@skierpage
@skierpage Жыл бұрын
But GPT-4 clearly has got a world model with understanding of physics and common sense and concepts in its billions of weights. Go read the Microsoft Research page/paper "Sparks of Artificial General Intelligence: Early experiments with GPT-4". Gary Marcus is reduced to the "It just ..." non-critique of AI at 15:25.
@brad6742
@brad6742 Жыл бұрын
Those two guys at the end are total haters because their decades of work are about to get disenfranchised lol. One even appeals to the other for brownie points, in an act of desperation.
@mygirldarby
@mygirldarby Жыл бұрын
Or it's Russian propaganda.
@jeanchindeko5477
@jeanchindeko5477 Жыл бұрын
20:32 That's a very interesting example of why our biases or experience might stop us from recognising AGI when it arrives. You expect your opponent to come through the front door, so you're staring at the door, slowly walking backwards, already afraid of his dramatic entrance. But instead your opponent arrived silently, like a ninja 🥷, through the kitchen window, and is already behind you, ready to surprise you. This is what might happen to a lot of these experts.

Should we disregard the AI's behaviour just because the AI didn't take the same path to that behaviour as the human brain? If we want the AI to follow the exact cognitive path of our brain, do we want it to have the same flaws, such as biases, memory issues, and hallucinations? Of course it's not grounded and doesn't have your or my experience! Then do you want it to have its own experiences? How will that work if, at its baby stage with zero experience, it's already superior (in behaviour, or in its capacity to accurately regurgitate what it has learned, reason about it, and define and execute a list of tasks to achieve a goal) to many adults? What will the AI look like, and what will its behaviour be, after having its own experiences that it can relate to?
@tomaaron6187
@tomaaron6187 Жыл бұрын
Agree. Organs like the eye have evolved independently many times in the history of life on our planet. "Human" intelligence is not some exclusive paradigm.
@jeanchindeko5477
@jeanchindeko5477 Жыл бұрын
@@tomaaron6187 Right, spot on! Why should synthetic intelligence evolve and follow the exact same path as our biological, analog intelligence did? We definitely need a better way to approach the issue.
@JscottMays
@JscottMays Жыл бұрын
I sense Marcus wants society to get an accurate picture of what these systems are, and is calling for effort to be directed toward tech that is difficult but has much more potential. You know, don't fill up on dessert before dinner.
@colinmaharaj
@colinmaharaj Жыл бұрын
So, you know, I've tried to extract C code from the system, and it was disappointing, many times over. I asked for a simple compressor and a complementary decompressor, and what it produced didn't work, and I didn't feel like debugging it.
@pretzelboi64
@pretzelboi64 Жыл бұрын
ChatGPT is terrible for solving real software engineering problems. It makes me laugh when people keep saying it will replace programmers, because I literally tried to use it many times to make my life as a C++ programmer easier, and it just feels like you're talking to a confidently incorrect, know-it-all CS undergrad who has yet to write their first piece of software.
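For what it's worth, the "simple compressor and complementary decompressor" requested in this thread is small enough to write and verify by hand. Here is a toy run-length coder, in Python rather than C just to keep it short; it illustrates the complementary-pair requirement, namely that decompress(compress(x)) must return x exactly:

def compress(data: bytes) -> bytes:
    # Encode runs of identical bytes as (count, byte) pairs; count <= 255.
    out = bytearray()
    i = 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i] and run < 255:
            run += 1
        out += bytes([run, data[i]])
        i += run
    return bytes(out)

def decompress(blob: bytes) -> bytes:
    # Invert compress(): expand each (count, byte) pair back into a run.
    out = bytearray()
    for i in range(0, len(blob), 2):
        out += bytes([blob[i + 1]]) * blob[i]
    return bytes(out)

assert decompress(compress(b"aaaabbbcc")) == b"aaaabbbcc"

A round-trip assertion like the last line is exactly the kind of check one could run on ChatGPT's output before trusting it.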
@ProjectMoff
@ProjectMoff Жыл бұрын
Whether ChatGPT could be sentient or not does not necessarily impact its ability to provide helpful and useful resources. The usefulness and effectiveness of ChatGPT largely depend on the quality and accuracy of the content it provides and how it synthesizes the user's need, rather than on its sentience.
@joegallien9631
@joegallien9631 16 күн бұрын
Now if only someone with a large channel would call out the fraud/hype that is Boston Dynamics
@brentbb0
@brentbb0 6 ай бұрын
Everybody here, be sure to read "The Myth of Artificial Intelligence" by Erik J. Larson. You will absolutely love it.
@andrewcampbell7011
@andrewcampbell7011 Жыл бұрын
Generative AI is the new Wikipedia, it's a powerful repository of knowledge but the obligation is on the user to verify the content.
@matthewsheeran
@matthewsheeran 8 ай бұрын
On the contrary: they are spot on: en.m.wikipedia.org/wiki/Stochastic_parrot
@JohnFoldan
@JohnFoldan Жыл бұрын
The telling point for me was that when asked to solve the dax problem by analyzing sentence structure, it was able to recognize on its own that it was dealing with a tautology, then use that knowledge to solve the problem, coming up with "a dax is a dax."
@ikotsus2448
@ikotsus2448 Жыл бұрын
Why is everybody fixated on AGI? A simple AI that can improve itself, or even help improve itself, is potentially equally impactful, since it might blast past the AGI phase in the near future. And so what if it cannot replicate all human behavior or if it makes some mistakes...
@remitemmos9165
@remitemmos9165 Жыл бұрын
To be 100% sure whether deep learning systems can develop an internal model, there is a very simple test: train one to do only multiplication. It's easy to perfect on large numbers, easy to test, etc. Then test on a totally different set, and extend it to test whether it "got" multiplication. Did anyone do that?
@coryc9040
@coryc9040 Жыл бұрын
In the video they said someone did that and it failed with numbers greater than 2 digits.
@laurenpinschannels
@laurenpinschannels Жыл бұрын
some models succeed, some fail
@remitemmos9165
@remitemmos9165 Жыл бұрын
@@coryc9040 I thought they said that of the early LLMs, not dedicated models. My point is to build a deep learning model solely to "model" multiplication.
@remitemmos9165
@remitemmos9165 Жыл бұрын
@@laurenpinschannels The point is to train a dedicated model on something simple, where any mistake will show that in fact no internal model was created. It would also be easier to train another model specializing in extracting formalized models out of LLMs, but that only works if there's something to extract/formalize...
@januszinvest3769
@januszinvest3769 17 күн бұрын
Give me some time and I am going to do this. Great idea btw.
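For anyone taking up the experiment proposed in this thread, here is a minimal sketch of one way to run it; PyTorch, the tiny MLP, the digit encoding, and the train/test split (operands below 100 for training, 100-999 held out) are all illustrative choices, not anything from the video:

import random
import torch
import torch.nn as nn

WIDTH = 3                # operands zero-padded to 3 digits
OUT = 2 * WIDTH          # product of two 3-digit numbers fits in 6 digits

def digits(n, w):
    return [int(c) for c in str(n).zfill(w)]

def encode(a, b):
    # One-hot each digit of both operands, flattened into one input vector.
    x = torch.zeros(2 * WIDTH, 10)
    for i, d in enumerate(digits(a, WIDTH) + digits(b, WIDTH)):
        x[i, d] = 1.0
    return x.flatten()

def batch(lo, hi, n):
    xs, ys = [], []
    for _ in range(n):
        a, b = random.randint(lo, hi), random.randint(lo, hi)
        xs.append(encode(a, b))
        ys.append(torch.tensor(digits(a * b, OUT)))
    return torch.stack(xs), torch.stack(ys)

model = nn.Sequential(nn.Linear(2 * WIDTH * 10, 512), nn.ReLU(),
                      nn.Linear(512, OUT * 10))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(5000):
    # Train only on small operands (0..99, i.e. padded with a leading zero).
    x, y = batch(0, 99, 256)
    logits = model(x).view(-1, OUT, 10)
    loss = loss_fn(logits.reshape(-1, 10), y.reshape(-1))
    opt.zero_grad(); loss.backward(); opt.step()

def exact_match(lo, hi):
    # Fraction of products where every output digit is predicted correctly.
    x, y = batch(lo, hi, 2000)
    pred = model(x).view(-1, OUT, 10).argmax(-1)
    return (pred == y).all(dim=1).float().mean().item()

print("in-distribution (0-99):", exact_match(0, 99))
print("out-of-distribution (100-999):", exact_match(100, 999))

If the out-of-distribution accuracy collapses while in-distribution accuracy stays high, that is evidence the network memorized input-output patterns rather than "getting" multiplication, which is the commenter's point.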
@kenneth1767
@kenneth1767 Жыл бұрын
The illusion of power to create life persists, and is fast becoming an obsession.
@willbrand77
@willbrand77 10 ай бұрын
I find the omelette example strange. It's like saying a historian alive today can never understand the Napoleonic wars because they have never experienced them. OK, it's not the same thing as being a general on the battlefield, but to dismiss it as no understanding at all is incorrect. It's just a different perspective.
@rodrigomadeiraafonso3789
@rodrigomadeiraafonso3789 10 ай бұрын
Yes, but this thing doesn't have a perspective; it just predicts the next word. It's as simple as that, man.
@635574
@635574 Жыл бұрын
There are a few things this database-learning model is missing: self-awareness, online learning, and creativity. It cannot do anything unless it's asked to; therefore it is not an agent intelligence either.
@aminam9201
@aminam9201 Жыл бұрын
19:00 seems familiar (I read that somewhere!)
@domitron
@domitron Жыл бұрын
Ironically, I was asking Bard and ChatGPT for the Nth letter in a given word, and they couldn't do it. In fact, they couldn't even count the number of letters. I would say something like "Give me the fifth letter in the word 'worship'" and it would reply that it was a four-letter word, so it couldn't do it. No matter how much I tried to help, the model could not do the task, and it could not learn to do it either.
@boriskendev141
@boriskendev141 Жыл бұрын
ChatGPT-4 has no issue with that task.
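A likely reason for the letter-counting failures described above (a commonly cited explanation, stated here as an assumption) is tokenization: the model sees subword tokens, not individual letters. A minimal sketch with the tiktoken package makes the point:

import tiktoken

enc = tiktoken.get_encoding("cl100k_base")   # the GPT-4-era encoding
tokens = enc.encode("worship")
print(tokens)                                 # one or more integer token ids
print([enc.decode([t]) for t in tokens])      # the subword chunks the model sees

If "worship" comes back as a single token, the model was never shown its letters at all; answering "what is the fifth letter" then requires it to have memorized spellings rather than read them.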
@Leto2ndAtreides
@Leto2ndAtreides Жыл бұрын
In some cases, the model does seem to know what was on the internet (in the past). And GPT-4 itself can access the internet, even if that feature may not be enabled for everyone. Like, I do have access to the feature of having it look at an image, describe it, and build stuff from it... Gary Marcus's views seem outdated. See AutoGPT: not only can it self-correct, you can easily log the intermediate thinking steps that went into its actions. So "not being able to interrogate it" is not a thing, w.r.t. real-world systems that you build on top of it.
@cl1mbat1ze
@cl1mbat1ze Жыл бұрын
But can it not just be making up what a reasonable train of thought for the situation would be? The real "thoughts" are firings through its neural network; everything after that is just output.
@pretzelboi64
@pretzelboi64 Жыл бұрын
There's a big difference between GPT interacting with the web and a pre-programmed routine that grabs an image from the web and feeds it into the system...
@cl1mbat1ze
@cl1mbat1ze Жыл бұрын
@@pretzelboi64 With an API you can link it up to the internet. It's as much of a connection to the web as a human has: look, react, interact; all these actions can be done just by feeding data through an API. Edit: in theory. There might be a bandwidth constraint as of now.
@pretzelboi64
@pretzelboi64 Жыл бұрын
@@cl1mbat1ze You don't understand. The model itself will not actually modify its answer by iteratively looking through the web like a human would. It is pre-trained and can only be fed data at input time. Data goes in, the network is run on some hardware, and then data comes out. No internet access happens in the middle as of yet. The API is irrelevant to the architecture of the model.
@cl1mbat1ze
@cl1mbat1ze Жыл бұрын
@@pretzelboi64 Ah, I misunderstood, you are correct.
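A minimal sketch of the pattern this thread converges on: the frozen model is a pure text-to-text function, and any "internet access" lives in a harness around it. Everything below is hypothetical scaffolding; `model` and `fetch` are stand-ins, and the SEARCH:/RESULT: protocol is made up for illustration:

def run_with_tools(model, fetch, prompt, max_steps=5):
    history = prompt
    out = model(history)                       # pure function: text in, text out
    for _ in range(max_steps):
        if not out.startswith("SEARCH:"):
            return out                         # final answer; no tool call requested
        # The harness, not the model, touches the web, then feeds the result
        # back in as ordinary input text for the next forward pass.
        result = fetch(out.removeprefix("SEARCH:").strip())
        history += f"\n{out}\nRESULT: {result}\n"
        out = model(history)
    return out

This is how plugin and AutoGPT-style systems work in outline: the network itself never browses mid-inference; it just emits text that outside code interprets as a request.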
@theadvocatespodcast
@theadvocatespodcast Жыл бұрын
The truth is, no one knows exactly how these systems do what they do. Period.
@ogungou9
@ogungou9 Жыл бұрын
I have a question. Can a good "AI" be fed a novel, an original one, not one that is like 10,000 other novels, and then summarize it the way a reasonably well-educated, reasonably cognitively fluid human can? In 1,000 words, or 3,000 words, or whatever? (Because for other kinds of summarizing, AIs have every variety of them online [let's suppose] as templates... that doesn't count, in a way.) (I don't know if my question is clear; I'm not really an English speaker.)
@farmerjohn6526
@farmerjohn6526 Жыл бұрын
I have given it shorter original stuff, and it was able to process it. And rewrite it. And summarize it.
@MeatCatCheesyBlaster
@MeatCatCheesyBlaster 11 ай бұрын
Yes, and now your novel belongs to everyone