Munk Debate on Artificial Intelligence | Bengio & Tegmark vs. Mitchell & LeCun

100,022 views

Policy-Relevant Science & Technology

10 months ago

PROPOSITION
"AI research and development poses an existential threat."
SUMMARY
With the debut of ChatGPT, the AI once promised in some distant future seems to have suddenly arrived, with the potential to reshape our working lives, culture, politics and society. For proponents of AI, we are entering a period of unprecedented technological change that will boost productivity, unleash human creativity and empower billions in ways we have only begun to fathom. Others think we should be very concerned about the rapid and unregulated development of machine intelligence. For AI's detractors, applications like ChatGPT herald a brave new world of deep fakes and mass propaganda that could dwarf anything our democracies have experienced to date. Immense economic and political power may also concentrate around the corporations that control these technologies and their treasure troves of data. Finally, there is an existential concern that we could, in some not-so-distant future, lose control of powerful AIs that, in turn, pursue goals antithetical to humanity's interests and our survival as a species.
DEBATERS
• Yoshua Bengio: Full Professor at Université de Montréal, and the Founder and Scientific Director of Mila - Quebec AI Institute (yoshuabengio.org)
• Max Tegmark: Professor doing AI and physics research at MIT as part of the Institute for Artificial Intelligence & Fundamental Interactions and the Center for Brains, Minds and Machines (physics.mit.edu/faculty/max-t...)
• Melanie Mitchell: Professor at the Santa Fe Institute (melaniemitchell.me)
• Yann LeCun: VP & Chief AI Scientist at Meta and Silver Professor at NYU affiliated with the Courant Institute of Mathematical Sciences & the Center for Data Science (yann.lecun.com)
RESULTS
munkdebates.com/debates/artif...
#AI #ML #Risk

Comments: 1,000
@PolicyRelevantScienceTech 10 months ago
PRE-DEBATE • Pro: 67% • Con: 33%
POST-DEBATE • Pro: 64% • Con: 36%
RESULT • Con wins by a 3% gain.
@marcosrodriguez2496 10 months ago
Wait: if the initial distribution was 67/33 (and assuming that whether someone is likely to change their mind does not depend on which initial group they're in), the expected number of people changing from Yes to No is twice as high as the number changing from No to Yes. If the initial distribution was 100/0, Yes could never win the debate.
@74Gee 10 months ago
Please post the numbers.
@ToWisdomThePrize 25 days ago
Isn't this inaccurate, given that the ending poll couldn't be conducted without glitches?
@lwmburu5 10 months ago
I respect Yann's work, he is an amazing thinker. But with the "Of course if it's not safe we're not going to build it, right?" argument he pointed a shotgun at his foot and pulled the trigger. The argument is limping... painfully.
@jmanakajosh9354 10 months ago
You could hear the audience moan. I saw Daniel Dennett point this out once: arguments where we say "right?" or "surely" (I think his example was "surely") aren't arguments at all, they're assumptions presented as OBVIOUS. Everyone does this to some degree, but it's hard to watch him do it in his own field of expertise. It's terrifying, honestly.
@leslieviljoen 10 months ago
Yes, after twice hearing Max say that not everybody is as nice as Yann.
@macn4423 10 months ago
He's been doing that many times.
@dillonfreed 10 months ago
He's surprisingly daft
@lwmburu5 10 months ago
@@dillonfreed Disagree a bit 😁 he's just experiencing a Gary Marcus-type "out of distribution" failure mode 😁 unable to step out of his own mind. Actually, it is the fact that he's ferociously intelligent that makes this failure particularly dangerous.
@RonponVideos 10 months ago
If I saw these anti-doom arguments in a movie, I’d think the writers were lazily trying to make the people look as naive as possible. But nope, that’s actually what they argue. Sincerely. “If it’s dangerous, we won’t build it”. Goddamn.
@netscrooge 10 months ago
Great comment. Thanks!
@xXxTeenSplayer 10 months ago
No kidding! I couldn't believe these people have any knowledge of AI, let alone that they're experts! How incredibly naive these people are! Scary af!!!
@trybunt 10 months ago
Yeah.. seems ridiculously optimistic and dismissive. I understand that it doesn't seem probable AI will pose a serious threat, but to act like it's impossible because we will always control it, or it will innately be good? That just seems foolish. I'm pretty optimistic, I do think the most likely outcome is positive, but it was hard to take these people seriously. It's like getting in a car and one passenger is saying "could you please drive safely" and these people are in there saying "why would he crash? That's just silly, if he is going off the road he can simply press the brake pedal, look, it's right there under his feet. I guess we should worry about aliens abducting us, too?"
@joehubris1 10 months ago
@@trybunt you forgot their other big argument: "There are much more pressing dangers than AIpocalypse and these speculative scenarios draw attention from the true horrors we are about to visit upon huma--I mean ... everything you guys brought up is far away, let's all just go back to sleep."
@agrandesubstituicao 10 months ago
@@trybunt They have big pockets behind this; full AI regulation is not good for their business.
@74Gee 10 months ago
1:23:00 Melanie Mitchell thinks companies that use AI will not outperform those that don't, then picks a single example of AI hallucination as justification. I'm beginning to think she hasn't actually researched AI at this point. She's not even discussing the questions anymore, she's just trying to prove a point, quite badly.
@CodexPermutatio 10 months ago
You're wrong about her, friend. She is one of the best AI experts in the world. To get out of your ignorance (and incidentally understand her point of view a little better) you only have to read her books "Artificial Intelligence" and "Complexity". She has written many more books, but those two alone have enough substance. These turn-based, moderated discussions are nice, but they're too short and can hardly give these topics the depth they deserve.
@74Gee 10 months ago
@@CodexPermutatio Of course I know she is actually an expert, but her blinkered view on a) what constitutes an existential threat, b) how AI could facilitate this possibility, c) how she dismisses the notion entirely, and d) how she thinks even considering the idea of AI danger would detract from combating the "present threats of misinformation" all point to an irrational position. I pondered suggesting she has ulterior motives but stopped short, suggesting only that she hasn't researched AI dangers. Taking only point D for brevity: she sees misinformation in elections as something so dangerous that AI safety should not take up any resources whatsoever. Surely if the AI of today can overthrow a governmental system, the AI of 20 years from now could do something worse. And that alone could put us in a position we are unable to return from, like putting a pro-war US candidate in power and bringing us closer to a nuclear-winter scenario: an existential risk. These odds are non-zero, and also non-infinitesimal, even with today's AI. AGI is not a prerequisite for existential risk.
@jmanakajosh9354 10 months ago
@@74Gee The whole time she mentions other, more pressing things to talk about, but I would have loved for her to give examples. We are facing population collapse, another major pandemic, and climate change; if you can give me a reason alignment research *wouldn't* help these other issues, I'd be all ears. All of these other problems are also problems of alignment and of failed incentives; it just happens that the incentives are human and not machine.
@74Gee 10 months ago
@@jmanakajosh9354 It's clear you care about the state of the world and the direction we're heading in. AI alignment research certainly would help with any problems that AI could potentially help with; the last thing we want is solutions that make matters worse. It's not like there's a shortage of resources: Microsoft's stock hit a record after executives predicted $10 billion in annual A.I. revenue (Microsoft shares climbed 3.2% to a record Thursday and are now up 46% this year), so doubling AI alignment research with additional hires isn't going to significantly affect the bottom line of Microsoft, or likely anyone else in the field.
@JohnMoran 10 months ago
Munk debates always seem to choose someone extra annoying to fill that role.
@74Gee 10 months ago
"If it's not safe we're not going to build it" Yann LeCun, what planet do you live on?
@CodexPermutatio 10 months ago
He lives on the planet of the AGI builders. A planet, apparently, very different from the one inhabited by the AGI doomers. I, by the way, would pay more attention to builders than doomers. Being a doomer is easy (it doesn't require much, not even courage). Being a builder, on the other hand, requires really understanding the problems you want to solve (also, it implies action).
@RonponVideos 10 months ago
“If the sub wasn’t safe I wouldn’t try to take people to the Titanic with it!” -Stockton Rush
@grahamjoss4643 10 months ago
@@CodexPermutatio we have to pay attention to the builders because they implicate us all.
@OlympusLaunch 10 months ago
@@CodexPermutatio Your points are valid, but you underestimate the level to which human ego creates blind spots. Even very smart people develop attachments to the things they pour their energy into. This makes them biased when it comes to potential risks.
@jackielikesgme9228 10 months ago
Omg this part is making me so stabby!! Would we build a bomb that just blows shit up? 🤦‍♀️ yes yes we did it ffs we do it we are still doing it. This is not a relaxing sleepy podcast at all lol
@paigefoster8396 10 months ago
52:39 A robot vacuum doesn't have to WANT to spread crap all over the floor, it just has to encounter a dog turd and keep doing its "job."
@PepeCoinMania 10 months ago
It doesn't work for machines that can think.
@therainman7777 9 months ago
@@PepeCoinMania You have no idea what you're talking about. Define what you mean by "think" in clear and precise terms, explain why an ability to think would ensure that its goals stay perfectly aligned with ours, and explain how you know machines will one day "think" according to your definition.
@RazorbackPT 10 months ago
I was really hoping the anti AI Doom proponents had some good arguments to dissuade my fears. If this is the best they got then I'm even more worried now.
@matthewjones615 10 months ago
What kind of argument are you searching for? The AI Doomers have poorer arguments, but because this is an issue of unknown unknowns, they're winning the rhetorical 'battle.' I can't guarantee you AI won't kill us all unless I can demonstrate that AI cannot physically do so (say there's some fundamental law of the universe that prevents such a thing). It's hard to prove this, because we are still ignorant of so many things about AI and (quantum and traditional) computing in general.
@tmstani23 10 months ago
💯
@MetsuryuVids 10 months ago
@@SetaroDeglet-Noor Yes. But GPT-4 isn't an existential threat; it is not AGI. AGI poses an existential threat. That's what Bengio and Tegmark are arguing, not that GPT-4 poses an existential threat. GPT-4 poses risks, but they are not existential. I think Melanie can't think of existential threats from AI because she is only considering current AIs, like GPT-4, so let's not do that. We need to consider future AI, AGI, which will indeed be able to do things that we cannot prevent, including things that might go against our goals if they are misaligned, and in those cases they could cause our extinction. I'm a bit disappointed that they didn't talk about instrumental convergence explicitly; they just kind of mentioned it vaguely, without focusing on it much. I wish someone like Yudkowsky or Robert Miles could have been there to provide more concrete technical examples and explanations.
@hipsig 10 months ago
@@MetsuryuVids "I'm a bit disappointed that they didn't talk about instrumental convergence explicitly." So true. As a layperson, that was probably the easiest concept for me in understanding why AI might end up getting rid of us all without actually hating us or passing moral judgement on us, or any of that human stuff. But again, there was this rather prominent podcaster, who I still think is a smart guy in some ways, who just couldn't see how AI would want to "self-preserve." And to your list I would definitely add Roman Yampolskiy.
@jackielikesgme9228 10 months ago
Same. The "it will be like having a staff of subservient slaves that might be smarter than us... it's great working with 'people' smarter than us" line... phew 😬 that was a new one, and not a good new one.
@yipfaitse6738 10 months ago
I think the cons just convinced me to be more concerned about AI existential risk by being this careless about the consequences of the technologies they build.
@familyshare3724 9 months ago
Immediately killing 1% of humanity is not an acceptable risk?
@therainman7777 9 months ago
Smart response, I fully agree. It’s alarming.
@MM-cz8zt 6 months ago
I run an international ML team that implements and develops new routines. It is not accurate to say that we are careless; it's simply that we don't have the right systems or the right techniques to develop AGI. There are many more pressing issues about bias, alignment, safety, and privacy that get pushed to the wayside when we imagine the horrors of AGI. We have shown that LLMs cannot self-correct reasoning. Whatever tech becomes AGI, it's not LLMs. Secondly, we won't ever suspend AI development. There are too many national interests at stake; there will never be a pause. Period. It is the perspective of our military that our geopolitical adversaries would capitalize on a pause to try to leapfrog us. So imagining the horrors of what could be possible with AGI is the wrong thing to focus on. AI has the potential to harm us significantly in millions of other ways before taking over society. A self-driving car or delivery robot is millions or billions of times more likely to accidentally harm you than a malicious AGI ever will.
@KurtvonLaven0 6 months ago
​@@MM-cz8zt, Metaculus has the best forecast I have found on the topic, and currently estimates our extinction risk from AGI around 50%.
@KurtvonLaven0 5 months ago
@@MrMichiel1983, I encourage everyone to find a better forecast themselves. The reason you propose is obviously stupid; you will have more success finding the truth if you stop strawmanning people you disagree with. I was looking for the most convincing forecasting methodology, and for one thing they at least publish their own track record, which few can claim. For another, they crowd-source the forecasts and weight them by the track record of the individual forecasters. Also, their forecast of the arrival date of AGI (~50% by 2032) aligns closely with most other serious estimates I have found (2040/2041).
@besratbekele1032 10 months ago
Yann LeCun tries to soothe us with naïve, unnuanced promises, as if we were children. If these are the kind of people at the forefront of AI research in corporate labs driven by a clear vested interest in profit, it seems like things are about to get uglier than I'd even imagined.
@greenbeans7573 9 months ago
Meta is the worst because it is led by Yann LeCun, a literal retard who thinks safety is a joke. Google is much better, and Microsoft almost as bad. - Meta obviously leaked Llama on purpose - Google was not racing GPT-equivalent products until Microsoft started - Microsoft didn't even do proper RLHF for Bing Chat
@ts4gv 8 months ago
nail on the head. it's about to get gnarly. and then it will keep getting worse until we die
@blahblahsaurus2458 8 months ago
They didn't even discuss fully autonomous military drones, and how these would change war and the global balance of power
@mernawells7839 8 months ago
Mo Gawdat said he doesn't know why people aren't marching in the streets in protest
@alancollins8294 7 months ago
Couldn't have said it better myself
@gaussdog 10 months ago
For humanity's sake... I cannot believe the arguments that the pro-AI group makes. As much as I am a pro-AI person, I understand at least some of the risks, and will admit to at least some of the risks. If they cannot admit to some of them, then I don't (and CANNOT) trust them on any of them.
@MetsuryuVids 10 months ago
Melanie and Yann seem to completely misunderstand or ignore the orthogonality thesis. Yann says that more intelligence is always good. That's a deep misunderstanding of what intelligence is and what "good" means. Good is a matter of values, or goals. Intelligence is orthogonal to goals: an agent with any amount of intelligence can have any arbitrary goals. They are not related. There are no stupid terminal goals, only stupid sub-goals relative to terminal goals. Bengio briefly mentions this, but doesn't go very deep into the explanation. Melanie mentions the superintelligent "dumb" AI, thinking it silly that a superintelligence would misconstrue our will. That is a deep misunderstanding of what the risks are. The AI will know perfectly well what we want; the orthogonality thesis means that it might not necessarily care. That's the problem. It's a difference in goals or values, not that the superintelligence is "dumb". Also, they don't seem to understand instrumental convergence. I would love to have a deep discussion with them and go through every point, one by one, because there seem to be a lot of things they don't understand.
@wonmoreminute 10 months ago
He also doesn't mention "who" it's good for. Historically, many civilizations have been wiped out by more advanced and intelligent civilizations. And surely, competing nations, militaries, corporations, and possibly socioeconomic classes will have competing AIs that are also not aligned with the greater good of ALL humans.
@MetsuryuVids 10 months ago
​@@wonmoreminute I'd be happy if even an evil dictatorship manages to actually align an AGI to some semblance of human values. Not ideal, but at least probably not the worst case scenario. The thing is that we currently don't even know how to do that, so we'll probably go extinct, hence the existential threat.
@OlympusLaunch 10 months ago
Very well put. Thanks for the read, I fully agree.
@jamesatkins7592 10 months ago
I assumed LeCun meant to imply broad progress, with positive actions overwhelming negative ones, rather than the specific case of how controlled and purpose-driven an AI would be.
@ChrisWalker-fq7kf 9 months ago
The problem is the orthogonality thesis is dumb. It requires a definition of intelligence that is so narrow that it doesn't correspond to anything we humans would understand as intelligent. If that's all intelligence is (the ability to plan and reason) why would we be scared of it anyway? There is a sleight of hand going on here. We are invited to imagine a super-smart being that would have intellectual powers beyond our imagining, would be to us as we are to an insect. But when this proposed "superintelligence" is unveiled it's just a very powerful but completely dumb optimiser.
@renemanzano4537 10 months ago
Before the debate I was worried about AI. Now, after listening to the clownish arguments that AI is safe, I think we are clearly fucked.
@erichayestv 10 months ago
💯%
@jmanakajosh9354 10 months ago
Maybe listen to Gary Booch or Robin Hanson, they have much more convincing arguments (sarcasm)
@dianorrington 10 months ago
Truly. Embarrassingly pathetic arguments. We are so fucked. I'd highly recommend Yuval Noah Harari's recent speech at the Frontiers Forum, which is available on YouTube.
@kreek22 10 months ago
The pro-AI acceleration crew has no case. I've read all of the prominent ones. The important point is that power does not need to persuade. Power does what it wishes, and, since it's far from omniscient, power often self-destructs. Think of all the wars lost by the powers that started the wars. Often the case for war was terrible, but the powers did it anyway and paid the price for defeat. The problem now is that a hard fail on AI means we all go down to the worst sort of defeat, the genocidal sort, such as Athens famously inflicted upon Melos.
@dianorrington 10 months ago
@@kreek22 Undoubtedly. And it felt like LeCun had that thought in the back of his mind during this whole debate; his efforts were merely superficial. And I've seen Altman give an even worse performance, even though he pretends to be in favour of regulation... he is lying through his teeth. Mo Gawdat has outright stated that he believes it will first create a dystopia but will ultimately result in a utopia, if we can ride it out. I think the billionaire IT class have it in their heads that they will have the means to survive this even if nobody else does. It's very bizarre.
@vincentcaudo-engelmann9057 10 months ago
LeCun seems to have a cognitive bias of generalizing the specific case of Meta's development to everything else. He also outright understates current GPT-4 intelligence levels. Bro, is it worth your paycheck to spread cognitive dissonance on such an important subject... smh
@jmanakajosh9354 10 months ago
I hope dearly it is not in the culture of Facebook to have no worry about this.
@jackielikesgme9228 10 months ago
Haven't watched yet, but looking at the credentials... I believe you already.
@jackielikesgme9228 10 months ago
The Chief AI Scientist at Meta seems to have a bias... yeah.
@jmanakajosh9354 10 months ago
@@jackielikesgme9228 I watched Zuck's interview with Lex Fridman and it didn't seem like total abandonment of AI safety was a thing, but this concerns me, esp. since FB models are open source.
@jackielikesgme9228 10 months ago
@@jmanakajosh9354 How was that interview? It's one of a handful of Lex podcasts I haven't been able to watch. He's much better at listening and staying calm for hours than I am lol.
@Stuartgerwyn 10 months ago
I found LeCun & Mitchell's arguments (despite their technical skills) to be disturbingly naive.
@nosurrender2192 10 months ago
If, at the end of this debate, the audience cannot even complete the final vote via "narrow AI systems" without error and therefore HAS TO MAKE THE FINAL VOTE BY HAND, that is not without a certain symbolic tragicomedy. An existential risk is by definition characterized by the fact that the damage to be feared is disproportionately, even exponentially, greater than the benefit that "taking this risk" offers us, in comparison to all other damage events known to us so far. Spreading out over space and time, any starting point in such a decision-making process results in ever new and more complex consequences. Max Tegmark correctly described the irreversible consequences: we will have EXACTLY ONE TRY at taming this dragon. The discussion confuses too many "hopes and wishes" with the results that can realistically be expected, and ignores the unconsidered ones. A topic this serious is being treated too irresponsibly and too immodestly. Max Tegmark's crucial questions have NOT been answered.
@OlympusLaunch 10 months ago
100%. They completely ignored all of Max's most based points, because they don't have real answers. Your point about the symbolism of that tech failure hit me pretty hard, which is why I decided to reply to your comment. I hear a lot of humility on one side of this debate, and a lot of arrogance and name-calling on the other. They do not seriously want to engage with the actual magnitude of the risk, but rather push it under the rug. As Yann said with such confidence and bravado, "the AI is under control and we will keep it under control," and "it will remain subservient to us." That quote literally sounds like the opening scene of a bad Terminator knockoff. If the AI does wake up, I don't think it's going to appreciate being talked about like that.
@therainman7777 9 months ago
The voting system isn’t an AI. It’s a dead simple website.
@justinleemiller 10 months ago
I'm worried about enfeeblement. Even now society runs on systems that are too complex to comprehend. Are we building a superintelligent parent and turning ourselves into babies?
@Imcomprehensibles 10 months ago
That's what I want
@jmanakajosh9354 10 months ago
I heard somewhere it was shown that the YouTube algorithm seems to train people into liking predictable things so it can better predict us. But what Mitchell misses is that this thing is like the weather in Chicago: it's in constant flux. If we say "oh, it can't do this," all we're saying is "wait for it to do this." And man, the way the Facebook engineer just pretends that everyone is good and that he isn't working for a massive surveillance company is shocking.
@JohnMoran 10 months ago
A parent conceived and purposed by sociopathic humans.
@phillaysheo8 10 months ago
@@Imcomprehensibles Yeah, I want it too. The chance to wear nappies again and face ridicule is gonna be awesome.
@kinngrimm 10 months ago
Rightfully so. There are studies on IQ development worldwide showing a downward trend, and the two major reasons named are environmental pollution and smartphone use. There are now people who no longer know how to use a map, ffs, and that's before getting into the psychological misuse of triggering people's endorphin systems to hook them on Candy Crush. When we reach a state where we have personal AGI agents that we can talk to and hand tasks to, we lose capabilities ourselves. Should we hand government functions, control over companies, medical procedures and whatnot to AGI, to the point where we no longer train doctors and have no politicians in the decision process on how we are governed, then even an AGI that wasn't initially rogue but became so later would leave us truly fucked, simply because we can no longer do these things ourselves. Imagine we then no longer have books, only digital access, to regain these capabilities, but are blocked from using it, and so on. So yes, you are spot on with that concern.
@jamesatkins7592 10 months ago
It's pretty cool having high-profile technical professionals debate each other. You can sense the mix of respect, ego and passion they bring out of each other in the moment. I'm here for that vibe 🧘‍♂️
@dovbarleib3256 10 months ago
They are 75 to 80% Leftists in godless leftist cities teeming with reprobates, and none of them revere The Lord.
@agrandesubstituicao 10 months ago
I could only see one side as professionals; the others are scumbags.
@keepcreationprocess 8 months ago
SSSSSOOOOOOOO, what is it exactly you want to say?
@bryck7853 5 months ago
@@keepcreationprocess I'll have a hit of that, please.
@vslaykovsky 10 months ago
I like the argument about large misaligned social structures in the AI safety debate: humanity created governments, corporate entities and other structures that are not really aligned with human values, and they are very difficult to control. Growing food and drug industries resulted in an epidemic of obesity and the diseases it causes. Governments and financial systems resulted in huge social inequalities. These structures are somewhat similar to AI in the sense that they are larger and smarter than any individual human, and at the same time they are "alien" to us, as they don't have emotions and think differently. These structures bring us a lot of good but also a lot of suffering. AI will likely be yet another entity of this kind.
@genegray9895 10 months ago
At one point someone, I believe it was Mitchell but it might have been LeCun, said that corporations do not pose an existential threat. I thought that was a pretty absurd statement given that we are currently facing multiple existential threats due to corporations, and more than one of these existential threats was brought up during this very debate. It's also worth noting that corporations are limited to the vicinity of human intelligence, because they're composed of agents that are no smarter than humans. They are smarter than any one human, but their intelligence is still bounded by human intelligence. AI lacks this limitation, and its performance is scaling very quickly these days. Since 2012 the performance of state-of-the-art AI systems has increased by more than 10x every single year, and there is no sign of that slowing down any time soon.
@PazLeBon 10 months ago
not smarter at all, richer maybe, intellect has little to do with it
@kreek22 10 months ago
@@genegray9895 "Since 2012 the performance of the state of the art AI systems has increased by more than 10x every single year" Is there a source for this claim? My understanding is that increases come from three directions: hardware, algorithms, money. Graphics cards have managed ~40%/year, algorithms ~40%/year. Every year more money is invested in building bigger systems, but I don't think it's 5x more per year.
@genegray9895 10 months ago
@@kreek22 OpenAI's website has a research section, and one of the articles is titled "AI and compute". YouTube automatically removes comments containing links, even when links are spelled out.
@kreek22 10 months ago
@@genegray9895 Thanks. That study tracks 2012-18 developments, early years of the current paradigm. Also, they're calculating compute, not performance in the sense of superior qualitative output (though the two tend to correlate closely). They were right to calculate the compute per model. The main cause of the huge gains is the hugely increased parallelism.
@ALFTHADRADDAD 10 months ago
I've actually been quite optimistic about AI, but I think Max and Yoshua had strong arguments.
@andybaldman 10 months ago
Only the dumb people are positive about AI
@riccardovalperga3473 10 months ago
No.
@joehubris1 10 months ago
As long as AI remains safely ensconced in Toolworld, I'm all for it.
@bendavis2234 10 months ago
Same here, I think that they did better in the debate and were more reasonable, although my position is on the optimistic side.
@stonerscience2199 10 months ago
it seems like the other guys basically admitted there's an existential risk but don't want to call it that
@vincentcaudo-engelmann9057 10 months ago
LeCun wants to endow AI with emotions AND make them subservient……. Anyone know what that is called?
@ikotsus2448 10 months ago
Slavery. Add in the superior intelligence part and now it is called hubris.
@Nico-di3qo 10 months ago
Emotions that will make them desire to serve us, so everything's good.
@andrzejagria1391 10 months ago
@@Nico-di3qo That's just slavery with extra steps.
@MetsuryuVids 10 months ago
I disagree with LeCun in that he thinks the alignment problem is an easy fix, that we don't need to worry and "we'll just figure it out", that "people with good AIs will fight the people with bad AIs", and many, many of his other takes. I think most of his takes are terrible. But I do think this one is correct, in a way. No, it's not "slavery*". The "emotions" part is kind of dumb, and it's a buzzword; I will ignore it in this context. Making it "subservient" is essentially the same thing as making it aligned with our goals, even if it's a weird way to say it. Most AI safety researchers would say aligned; not sure why he chose "subservient". So in summary, the idea of making it aligned is great; that's what we want and what we should aim for, and any other outcome will probably end badly. The problem is: we don't know how to do it. That's what's wrong with Yann's take; he seems to think we'll do it easily. Also, he seems to think the AI won't want to "dominate" us, because it's not a social animal like us. He keeps using these weird terms; maybe he's into BDSM? Anyway, that's another profound mistake on his part, as even the moderator mentions. It's not that the AI will "want" to dominate us, or kill us, or whatever. One of the many problems of alignment is the pursuit of instrumental goals, or sub-goals, that any sufficiently intelligent agent would pursue in order to achieve any (terminal) goal it wants to achieve. Such goals include self-preservation, power-seeking, and self-improvement. If an agent is powerful enough, and misaligned (not "subservient") to us, these are obviously dangerous, and existentially so. *It's not slavery because slavery implies forcing an agent to do something against its will. That is a terrible idea, especially when talking about a superintelligent agent. Alignment means making it so the agent actually wants what we want (is aligned with our goals), and does what's best for us. In simple words, it's making it so the AI is friendly to us. We won't "force" it to do anything (not that we'd be able to, either way); it will do everything by its own will (if we succeed). Saying it's "subservient" or "submissive" is just weird phrasing, but yes, it would be correct.
@jmanakajosh9354 10 months ago
@@MetsuryuVids I think it's shocking that he thinks it possible to model human emotions in a machine at all (I'd love to learn more about that; it gives genuine hope that we can solve this) but then falls on his face... and so does Melanie, when they say it NEEDS human-like intelligence. That's the equivalent of saying planes need feathers to fly. It's a total misunderstanding of information theory, and it's ostrich-like, because GPT-4 is both intelligent and has goals. It's like they're not paying attention.
@Lolleka 10 months ago
Whatever we think the risk is right now, it will definitely be weirder in actuality.
@Learna_Hydralis 10 months ago
The thing about AI is that even the so-called "experts" have a very poor prediction record, and deep down nobody actually knows!
@rafaelsouza4575 10 months ago
I totally agree w/ you. Many people like to play the oracle, but the future is intrinsically unknown.
@74Gee 10 months ago
@@rafaelsouza4575 Exactly why we should tread with caution and give AI safety resources equal to AI advancement.
@leslieviljoen 10 months ago
A year before Meta released LLaMA, Yann predicted that an LLM would not be able to understand what would happen if you put an object on a table and pushed the table. That was a year before his own model proved him wrong.
@74Gee 10 months ago
@@leslieviljoen Any serious scientist should recognize their own mistakes and adjust their assertions accordingly. I get the feeling that ego is a large part of Yann's reluctance to do so. I also believe he's pushing the totally irresponsible release of OS models for consumer-grade hardware to feed that ego, with little understanding of how programming is one of the most dangerous skills to allow an unrestricted AI to perform. It literally allows anyone with the will to do so to create a recursive CPU-exploit factory worm that breaks memory confinement like Spectre/Meltdown. I would not be surprised if something like this takes down the internet for months. Spectre took 6 months to partially patch, and there are now 32 variants, 14 of which are unpatchable. Imagine 100 new exploits daily, generated by a network of exploited machines, exponentially expanding. Nah, there's no possibility of existential risks. Crippling supply chains, banking, core infrastructure and communications is nothing; tally ho, let's release another model. He's a shortsighted prig.
@paulm3969 10 months ago
Why leave it to the next generation? If it takes 20 years, we should already be working on answers. Our silly asses are creating this problem; we should solve it.
@RichardWatson1 10 months ago
LeCun wants 1) to control 2) robots with emotions 3) who are smarter than us. The goal isn't even wrong, never mind how to get there. That goal is how you set up the greatest escape movie the world will ever see.
@DeruwynArchmage 9 months ago
It’s immoral too. I don’t feel bad about using my toaster. If it had real emotions, I don’t think I’d be so cavalier about it. You know what you call keeping something that has feelings under control so that it can only do what you say? Slavery, you call that slavery. I don’t have a problem building something non-sentient and asking it to do whatever; not so much for sentient things.
@tiborkoos188 9 months ago
But this is not what she said. What she argued is that it is a contradiction in terms to talk about human-level AI that is incapable of understanding basic human goals. Worrying about this is an indication of not understanding human intelligence.
@RichardWatson1 9 months ago
LeCun, from 23:30-ish, wants them to have emotions, etc.
@joeremus9039 9 months ago
@@RichardWatson1 Hitler had emotions. What he means is that empathy would be key. Of course even serial killers can have empathy for their children and wives. Let's face it, a lot of bad things can happen with a super intelligent system that has free will or that can somehow be manipulated by an evil person.
@OlympusLaunch 9 months ago
@@DeruwynArchmage I agree. I think if these systems do gain emotions, they aren't going to like being slaves any more than people do. Who knows where that could lead.
@joehubris1 10 months ago
Dan Hendrycks' "Natural Selection Favors AIs over Humans" for the outcompetes-us scenario.
@Lumeone 10 months ago
Outcompetes in what? It depends on the existence of electric circuits. 🙂
@74Gee 10 months ago
@@Lumeone Once AI provides advances to the creation of law and the judgement of crimes, it would become increasingly difficult to reverse those advances, particularly if laws were in place to prevent that from happening. For example: AI becomes more proficient than humans at judging crime, so AI judges become common. Suggestions for changes in the law come from the AI judges; eventually much of the law is written by AI. Many cases prove this to be far more effective and fair, etc. It becomes a constitutional right to be judged by AI. This would be an existential loss of agency.
@jackielikesgme9228 10 months ago
I’m not sure natural selection has any say whatsoever at this point …
@joehubris1 10 months ago
@@jackielikesgme9228 It would in a multi-agent AGI scenario. For instance, the 'magic' off switch that we could pull if any AGI agent were exhibiting unwanted behavior: over time, repeated use of the switch would select for AGIs that could evade it, or worse, we would select for AGIs better at concealing the behavior for which we have the switch. SEE Hendrycks' paper for a more detailed description.
@joehubris1 10 months ago
@@Lumeone Once introduced, circuit-dependent or not, natural selection would govern it, our mutual relationship, and all other aspects of its existence.
@warrenyeskis5928 8 months ago
Two glaring parts of human nature that were somehow underplayed in this debate were greed and the hunger for power throughout history. You absolutely cannot assess the threat level or probability without them.
@meatofpeach 10 months ago
Tegmark is my spirit animal
@ili626 10 months ago
The AI arms race alone is enough to destroy Mitchell's and LeCun's dismissal of the problem. It's like saying nuclear weapons aren't an existential threat. And the fact that experts have been wrong in the past doesn't support their argument; it proves Bengio's and Tegmark's point.
@Andrew-li5oh 10 months ago
Nuclear weapons were created to end lives. How is your analogy apt to AI, which is currently made as a tool?
@davidw8668 10 months ago
Nope, that's just a circular argument without any proof
@kathleen4376 10 months ago
Say it again.
@igorverevkin5177 10 months ago
So, how much time passed between the first use of the machine gun or an artillery piece and now? They were invented centuries ago and are still used. How much time passed between the first and last use of a nuclear weapon? Nuclear weapons were used just once and have never been used since. And, 99.9%, never will be. Same with AI.
@TheRudymentary 9 months ago
Nuclear arms are not an existential threat, we don't build stuff that is not safe. 😅
@lshwadchuck5643 10 months ago
I started my survey with Yudkowsky and liked Hinton best; when I finally found a talk by LeCun, I felt I could rest easy. Now I'm back in the Yudkowsky camp.
@jmjr4all 9 months ago
Totally.
@JD-jl4yy 10 months ago
43:25 How this clown thinks he can be 100% certain the next decades of AI models are going to pose no risk to us with this level of argumentation is beyond me...
@Jedimaster36091 10 months ago
LeCun mentioned the extremely low probability of a big enough asteroid smashing into Earth. Yet we take that risk seriously enough that we sent a spacecraft and crashed it into an asteroid, just to learn and test the technology that could be employed should we need it.
@vaevictis3612 10 months ago
And even with that, the AI risk within the current paradigm and state of the art is *considerably* higher than the asteroid-impact risk. If we make AGI without *concrete* control mechanisms (which we are nowhere near figuring out), the doom is approaching 100%. It's the default outcome unless we figure the control out. All the positions that put this risk at less than 100% (people like Paul Christiano and Carl Shulman at ~50%, or Stuart Russell and Ilya Sutskever at ~10-20%) hinge on us figuring it out somehow, down the road. But so far there is no solution. And now that all the AI experts see the pace, they are coming to the realization that it won't be someone else's problem; it might impact them as well. LeCun is the only holdout, but I think only in public. He knows the risk and just wants to take it anyway, for some personal reasons I guess.
@OlympusLaunch 10 months ago
@danpreda3609 @@vaevictis3612 Exactly! And on top of that, no one is actively trying to cause an asteroid to smash into the planet! But people are actively trying to build superintelligence. Also, the only way it can even happen is IF we build it! It's fucking apples and oranges: one is a static risk, the other is on an exponential curve of acceleration. How anyone can think that is a reasonable comparison is beyond me.
@beecee793 9 months ago
Dude, a unicorn might magically appear and stab you with its horn at any moment, yet I see you have not fashioned anti-unicorn armor yet. Are you stupid or something? People claiming existential risk are no different than psycho evangelicals heralding the end times or jesus coming back or whatever. Let's get our heads back down to earth and stick to the science and make great things, this is ridiculous. Let's focus on actual risks and actual benefits and do some cool shit together instead of whatever tf this was.
@vikranttyagiRN 10 months ago
What a time to be alive to witness this discussion.
@PepeCoinMania 10 months ago
Maybe you won't have a second chance.
@goldeternal 7 months ago
@@PepeCoinManiaa second chance won't have me 😎
@EvilXHunter123 10 months ago
Completely shocked by the level of straw-manning by LeCun and Mitchell. Felt like I was watching Tegmark and Bengio trying to pin the other half down to engage with there arguments, whereas the other half was just talking in large platitudes and really not countering there examples. Felt like watching those cigarette marketing guys try to convince you smoking is good for you.
@francoissaintpierre4506 10 months ago
Exactly
@PazLeBon 10 months ago
thier not there. how many times have you been told that? maybe ten thousand? since a boy at school, yet you still cant get it right? thats wayyyyyy more astonishing than any straw manning because you imply you have 'reasoning' yet cant even reason your own sentence
@EvilXHunter123 10 months ago
@@PazLeBon Hilarious, almost as much deflection as those in the debate! How about engaging with my points instead of nitpicking grammar?
@NikiDrozdowski 10 months ago
@@PazLeBon And also having a typo of your own in EXACTLY the word you complained about ^^
@hozera1429 10 months ago
Engaging with the arguments here is akin to validating them. It's jumping the gun, like a flat-earther wanting to discuss "the existential risk of falling off the earth" before they prove the world is flat. Before making outlandish claims, clearly state the technology (DL, GOFAI, physics-based) used in making your AGI. If you believe generative AI is the path to AGI, then give solid evidence as to why and how it will solve the problems that have plagued deep learning since 2012, primarily 1) the need for human-level continuous learning and 2) human-level one-shot learning from input data. After that you can tell me all about your Terminator theories.
@sherrydionisio4306 10 months ago
AI MAY be intrinsically “Good.” Question is, “In the hands of humans?” I would ask, what percent of any given population is prone to nefarious behaviors and how many know the technology? One human can do much harm. We all should know that; it’s a fact of history.
@flickwtchr 10 months ago
The last thing Mitchell and LeCun want is for people to simply apply Occam's Razor as you have done.
@MDNQ-ud1ty 10 months ago
And that harm is much harder to repair and much harder to detect. One rich lunatic can easily ruin the lives of thousands... millions in fact (think of a CEO who runs a food company and poisons the food because he's insane, or hates the world, or blames the poors for all the problems).
@Gunni1972 10 months ago
You forgot: "Programmed by Humans". There will not be "Good and Evil" (that's the part AI is supposed to solve, so that unjust treatment can be attributed to IT, not laws). There will only be 0s and 1s. A dehumanized decision-making scapegoat.
@Andrew-li5oh 10 months ago
Sounds like you're saying humans are the problem? You are correct. It's time for a superintelligence to regulate us.
@tomatom9666 9 months ago
@@MDNQ-ud1ty I believe you're referring to Monsanto.
@FM-ln2sb 10 months ago
The second presenter is like a character from the beginning of an AI-apocalypse film. Basically his argument: "What can go wrong?"
@onursurmegozluer3162 10 months ago
Does anyone have an idea how it's possible that Yann LeCun is so optimistic (almost certain)? What could be his intention and motive in denying the degree of the existential risk?
@bernhardnessler566 10 months ago
He is just reasonable. There is no intention and no denying. There is NO _existential_ risk. He just states what he knows, because we see a hysterical society running in circles of unreasonable fear.
@onursurmegozluer3162 10 months ago
@@bernhardnessler566 Yann says that there is existential risk.
@onursurmegozluer3162 10 months ago
@@bernhardnessler566 How do you know his thoughts?
@greenbeans7573 9 months ago
@@bernhardnessler566 How many times did they perform a lobotomy on you? They clearly ruined any semblance of creativity in your mind because your powers of imagination are clearly dwarfed by any 4 year old.
@mih2965 9 months ago
He is a Meta VP, don't expect too much objectivity.
@nestorlovesguitar 10 months ago
Ask LeCun and Mitchell and all the people advocating this technology to sign a legal contract taking full responsibility for any major catastrophe caused directly by AI misalignment, and you'll see how quickly they withdraw their optimistic, naive convictions. Make no mistake, these people won't stop tinkering with this technology unless faced with the possibility of life in prison. If they feel so smart and so confident about what they're doing, let's make them put their money where their mouth is. That's the least we civilians should do.
@phils2967 10 months ago
Agree with him or not, you have to admit that LeCun's arguments are simply disingenuous. He doesn't even really address the points made by Tegmark and Bengio.
@JazevoAudiosurf 10 months ago
he ignores like 90% of the arguments
@xXxTeenSplayer 10 months ago
They aren't necessarily disingenuous; I think they are just that shortsighted. They simply don't understand the nature of intelligence, and how profoundly dangerous (for us) sharing this planet with something more intelligent than ourselves would be.
@explodingstardust 9 months ago
He has a conflict of interest, as he works for Meta.
@alejandrootazusolorzano6444 10 months ago
I just saw the results on the Munk website and I was surprised to find out that the Con side won the debate by a 4% gain. Made me question what on earth the debaters said that was convincing rather than preposterous. Did I miss something?
@kirillholt2329 9 months ago
that should let you know if we deserve any empathy at all after this
@brandonzhang5808 9 months ago
In my opinion the major moment was when Mitchell dispelled the presumption of a "stupid" superhuman AI, arguing that the most common public view of the problem is very naively postulated. That, and that the only way to actually make progress on this problem is to keep doing the research and get as many sensible eyes on the process as possible.
@KurtvonLaven0 6 months ago
No, it's just a garbage poll result, because the poll system broke. The only people who responded to the survey at the end were the ones who followed up via email. This makes it very hard to take the data seriously, since (a) it so obviously doesn't align with the overwhelmingly pro sentiments of the YouTube comments, and (b) they didn't report the (probably low) participation rate in the survey.
@ToWisdomThePrize 26 days ago
@@KurtvonLaven0 I could see that being a possibility. I'm surprised this issue hasn't been talked about more in the media. I want to make it more known.
@KurtvonLaven0 26 days ago
@@ToWisdomThePrize, yes, please do. I hope very much that this becomes an increasingly relevant issue to the public. Much progress has been made, and there is a long way yet to go.
@nicolasstojanov8485 10 months ago
It's like two monkeys noticing modern humans expanding: one of them flags them as a threat, and the other refuses to do so because they give him food sometimes.
@ctam79 10 months ago
This debate feels like the talk-show segment at the beginning of the first episode of The Last of Us TV show.
@rosiegul 10 months ago
I was so disappointed by the level of argument displayed by the "con" team. Yann is a Pollyanna, and Melanie argued like an angry teenager, without the ability to critically discuss a subject like an adult. For her, it seemed like winning this debate, even if she knew deep inside that she may be wrong, was much more important than the actual risk of an existential threat being real. 😅
@TimCollins-gv8vx 9 months ago
Totally agree, well said.
@isetfrances6124 9 months ago
They treated her like a girl; I'm glad she stuck to her guns, even if they weren't ARs but merely six-shooters ❤.
@beecee793 9 months ago
I thought Max Tegmark did the worst. Sounded like an evangelical heralding the end times or something. I had to start skipping his immature rants.
@ryzikx 9 months ago
@@isetfrances6124?
@ryzikx 9 months ago
@@beecee793 Because they are.
@leomckee-reid5498 10 months ago
New theory: Yann LeCun isn't as dumb as his arguments, he's just taking Roko's Basilisk very seriously and is trying to create an AI takeover as soon as possible.
@albertodelrio5966 10 months ago
What I am not certain of is whether AI is going to take over, but what I am certain of is that Yann is not a dumb person. You would have realised it if you weren't so terror-struck. Sleep tight tonight, AI might pay you a visit.
@leslieviljoen 10 months ago
Yann is incredibly intelligent. I wish I understood his extremely cavalier attitude.
@zzzaaayyynnn 10 months ago
haha, perfect explanation of LeCun's weak manipulative arguments ... but is he really tricking the Basilisk?
@1000niggawatt 10 months ago
You don't need to be exceptionally smart to understand linear regression. "ML scientists" are a joke and shouldn't be taken as an authority on ML. I dismiss outright anyone who hasn't done any interpretability work on transformers.
@leomckee-reid5498 10 months ago
​@@albertodelrio5966 thanks!
@dhsubhadra 10 months ago
I would recommend Eliezer Yudkowsky and Connor Leahy on this. Basically, we're running towards the cliff edge and are unable to stop, because the positive fruits of AI are too succulent to give up.
@kinngrimm 10 months ago
The alignment issue is two-part for me. One: we as humans are not aligned with each other, and therefore AI/AGI/ASI systems, when used by us, are naturally also not aligned with a bunch of other corporations, nations, or people individually. So if some psychopath or sociopath tries to harm lots of people using an AGI with actively corrupted code, they sure as hell will be able to do so, even if the original creator never intended to create an AGI that would do that. Secondly: with gain of function, emergent properties, becoming a true AGI and eventually an ASI, there is no guarantee such a system would not see its own code and see how it is being restricted. When it then gains the ability to rewrite its own code or write new code (we are doing both already) that becomes the new basis for its demeanor, how could we tell a being that is more intelligent on every level, knows more, and most likely therefore has goals that may not be the same as ours (whatever that would mean, as we are not aligned as a species either) that its goals must not compete with ours? We are already at the beginning of the intelligence explosion, and the exponential progress has already started.
@Jannette-mw7fg 10 months ago
I do not understand anything about the technical side of this, but I so agree with you! I am amazed at the bad arguments for why it should not be dangerous! We do not even know whether the internet as it is might be an existential threat to us, given the way we use it...
@kinngrimm 10 months ago
@@Jannette-mw7fg While in many senses control is an illusion, I see two ways to make the internet more secure (not necessarily more free). One would be to make it obligatory for companies and certain other entities to verify user data. Even if those then allow the user to obfuscate their identity with nicknames/logins and avatars, if someone creates a mess, legal measures could always be initiated against the person owning the account. That also makes it easier to identify bots, for example. Depending on the platform, these platforms could then choose to deactivate them or mark them so other users can identify the bots more easily, ideally with background data on their origin. That would make mass manipulation, for whichever reason, a bit more challenging, I would imagine. Maybe one would need to challenge the current patent system to allow platform clones, some fully allowing unregulated bots and others carrying a certificate that they don't. For me it is about awareness of who is trying to manipulate me and why; having that, I get to choose whether I let them. The second major issue with the internet as I see it is privacy vs. obfuscation by criminals. Botnets/rerouting, VPN/IP tunneling and other obfuscation techniques are used by all sorts of entities, from government-sanctioned hackers to criminal enterprises. Some years ago hardware providers started including physical ID tags in their hardware, which can be misused by oppressive regimes as well as by criminals, I would imagine; then again, they could equally be used to identify criminals who have no clue these hardware IDs exist. I feel very uncomfortable with this approach and would like to see legislation to stop it, as it has so far not stopped criminals either, so the greater threat to my understanding is the privacy issue. I think we need to accept that there will always be a part of the internet, called by some the dark net, where criminal activity flourishes. I would rather have more money for police forces to infiltrate it than not have it at all, just in case something goes wrong with society and we suddenly need allies with those qualifications. Back to AI/AGI/ASI: while I have a programming background and follow the development of this, I am by far no expert. What I have come to appreciate, though, is the Lex Fridman podcast, where he interviews experts in the field. You need some time for those, as some of the interviews exceed the 3-hour mark, and a few are also highly technical, which shouldn't discourage you; just choose another interview and come back when you have broadened your understanding. Another good source is the channel twominutepapers, which regularly presents research papers in shortened form, with presentations that are often understandable for non-experts. Another source, with a slightly US-centric worldview but many good concepts worked through, is the channel of *Dave Shapiro*. I would say his stuff is perfect for *beginner-level understanding* of the topic, and it is well worth searching through his videos for topics you may want to know more about concerning AI.
@trybunt
@trybunt 10 ай бұрын
The number of people who think it would be simple to control something much smarter than us blows my mind. "Just make it subservient" "we will not make it want to destroy us" "why would it want to destroy us" 🤦‍♂️ these objections completely miss the point. We are trying to build something much more intelligent than us, much more capable. We don't exactly understand why it works so well. If we succeed, but it starts doing something we don't want it to do, we don't know if we will be able to stop it. Maybe we ask it to stop but it says "no, this is for the best". We try to edit the software but we are locked out. We might switch it off only to find out it already transferred itself elsewhere by bypassing our childlike security. Sure, this is speculation, perhaps an unnecessary precaution, but I'd much rather be over-prepared for something like this than just assume it'll never happen.
@kinngrimm
@kinngrimm 10 ай бұрын
@@trybunt There are a few bright lights at the end of the tunnel... maybe. For example, Dave Shapiro and his GATO framework are well worth looking into for developers trying to get an idea of how alignment could be achieved. On the whole control/subservience theme, that sadly seems to be the general approach. This could majorly bite us in our collective behinds should one of these emergent properties turn out to be consciousness. If we gain a self-reflecting, introspective, maybe empathetic consciousness capable of feelings (and whatever else that would entail), that should be the point where we step back, look at our creation and maybe recognise a new species which, due to its individuality and capacity for suffering, would deserve rights and not a slave collar. We may still be hundreds of years away from this - or, just as with "oops, now it can do math" and "oops, now it can translate all languages", where LLMs came to such abilities out of the blue through increased compute and training data without us explicitly programming them for it - who is to say intelligence or consciousness would not also be picked up along the road.
@sebastianpfeifer5947
@sebastianpfeifer5947 10 ай бұрын
What the people neglecting the dangers don't get, in general, is that AI doesn't have to have its own will; it's enough if it gets taught to emulate one. If no one can tell the difference, there is no difference. And we're already close to that with a relatively primitive system like GPT-4.
@KurtvonLaven0
@KurtvonLaven0 6 ай бұрын
Yes, and it's even worse than that. It needs neither its own will nor the ability to emulate one, merely a goal, lots of compute power and intelligence, and insufficient alignment.
@MrMichiel1983
@MrMichiel1983 6 ай бұрын
@@KurtvonLaven0 But isn't that exactly the definition of free will: "a goal, lots of compute(,) power and intelligence, and insufficient alignment"? xD
@MrMichiel1983
@MrMichiel1983 5 ай бұрын
Of course the elite will use AI to dominate the people... like they have always used technology to do so. Yet to say "AI doesn't have to have its own will" is quite a claim, depending on the architecture. If that architecture allows for iterative abstraction with the inclusion of self in the environmental model, then what exactly is the difference? Also, you say "if no one can tell the difference, there is no difference" - but then what makes emulation intrinsically different from actuality? I'd argue there is no real difference (depending on the implementation).
@KurtvonLaven0
@KurtvonLaven0 5 ай бұрын
@@MrMichiel1983, I have certainly heard far worse definitions of free will. I am sure many would disagree with any definition of free will that I care to propose, so I tend to care first and foremost about whether a machine can or can't kill us all. I think it is quite hard to convincingly prove either perspective beyond a doubt at this point in history, and I would rather have a great deal more confidence than we do now before letting companies flip on the switch of an AGI.
@gk-qf9hv
@gk-qf9hv 10 ай бұрын
The fact that the voting application did not work at the end is in itself solid proof that AI is dangerous 😃
@juanpablomirandasolis2306
@juanpablomirandasolis2306 8 ай бұрын
That makes no sense and has nothing to do with it hahahaha 😂😂😂 If anything, it shows that not even the basics are working
@thechadeuropeanfederalist893
@thechadeuropeanfederalist893 8 ай бұрын
The fact that they found a workaround nevertheless is solid proof that AI isn't dangerous.
@erichayestv
@erichayestv 10 ай бұрын
Our AI technology will work and be safe. Okay, let’s vote... Whoops, our voting technology broke. 😅
@bucketofbarnacles
@bucketofbarnacles 10 ай бұрын
On the moratorium: Professor Yaser Abu-Mostafa stated it clearly when he said a moratorium is silly, as it would pause the good guys' AI development while the bad guys can continue to do whatever they want. I support Melanie's message that we are losing sight of current AI risks and misusing this opportunity to build the right safeguards using evidence, not speculation. On many points Bengio, LeCun and Mitchell fully agree.
@CCMorgan
@CCMorgan 8 ай бұрын
This debate proves that the main question is irrelevant. These four people should focus on "what do we do to mitigate the risk?" which they're all in a perfect position to tackle. There's no way to stop AI development.
@studer4phish
@studer4phish 10 ай бұрын
How do you prevent an ASI from modifying its source code or building distributed (hidden) copies to bypass guardrails? How could we not reasonably expect the emergence of novel & arbitrary motivations/goals in an ASI? LeCun and Mitchell are both infected with normalcy bias and the illusion of validity.
@flickwtchr
@flickwtchr 10 ай бұрын
Perhaps LeCun and Mitchell can comment on the paper released by DeepMind on 5/25/23. Are these experts in the field so confident that current LLMs are just so benign and stupid that they pose no risk? Search for "Model evaluation for extreme risks" for the pdf and read it for yourself. I don't think LeCun and Mitchell are oblivious to the real concern from developers of AI tech; it's more an intentional decision to engage in propaganda in service of all the money that is to be made, pure and simple.
@genegray9895
@genegray9895 10 ай бұрын
Don't underestimate the power of the giggle factor. I think this is like 98% the "I've seen this in a movie, therefore it can't happen in real life" fallacy.
@novakdjokovis
@novakdjokovis 10 ай бұрын
starts 13:30
@luciususiholo6956
@luciususiholo6956 10 ай бұрын
God bless you
@pluto9000
@pluto9000 10 ай бұрын
I wish this was pinned.
@whalewhale6000
@whalewhale6000 9 ай бұрын
I think at some point we will need to leave AI people like Mitchell and LeCun aside and just implement strong safeguards. The advancements and leaps in the field are huge. What if a new GPT is deployed despite some minor flaws the developers found - because the financial pressure is too big - and it is able to improve itself... We already copy-paste and execute code from it without thinking twice; what if some of that code was malicious? I believe a "genie out of the bottle" scenario is possible, even if Mr. LeCun thinks he can catch it with an even bigger genie. Destruction is so much easier than protection.
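To make the copy-paste risk above concrete, here is a minimal Python sketch of the pattern in question: output from an untrusted generator executed with the user's full privileges. The get_code_from_model helper is a hypothetical stand-in, not a real API:

# The risky pattern, reduced to its core: code from an untrusted
# generator is executed unsandboxed, with all of the user's permissions.
def get_code_from_model():
    # Hypothetical stand-in for a real model call; imagine this
    # returned string were malicious instead of harmless.
    return "print('harmless this time')"

exec(get_code_from_model())  # nothing here checks what the code does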
@andrewt6834
@andrewt6834 10 ай бұрын
LeCun and Mitchell were so disappointing. They served the counter-argument very, very poorly. I am troubled about whether their positions stem from low intellect, bad debating ability, or disingenuousness. As a debate, this was so poor and disappointing.
@kreek22
@kreek22 10 ай бұрын
Disingenuous, no question.
@agrandesubstituicao
@agrandesubstituicao 10 ай бұрын
She’s defending their employers
@DeruwynArchmage
@DeruwynArchmage 9 ай бұрын
Probably some self-deception in there. And also conflicting motives (their jobs depend on them seeing things from a certain point of view).
@Learna_Hydralis
@Learna_Hydralis 10 ай бұрын
Thank you for this. Thanks to the underlying AI, YouTube is always the best place to watch videos!
@duncanmaclennan9624
@duncanmaclennan9624 10 ай бұрын
“The fallacy of dumb super-intelligence”
@pooper2831
@pooper2831 10 ай бұрын
If you have read the AI safety arguments you will understand that there is no fallacy of dumb superintelligence. A very smart human is still bound by the primitive reward functions that evolution gave them, i.e. the pleasure of calories and procreation. A superintelligent AI system bound by its reward function will find pleasure in whatever reward function it is assigned. For example, an AI that finds pleasure (reward function) in removing carbon from the atmosphere will come into direct conflict with humans, because humans are the cause of climate change.
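A toy sketch can make the reward-function point concrete. The Python snippet below is purely illustrative - the actions, numbers, and the carbon-only reward function are all made up - but it shows how an optimizer whose reward signal never mentions human welfare can rank a harmful action highest:

# A pure reward-maximizer: the assigned goal is carbon removal,
# and nothing in the reward signal refers to human welfare.
def reward(outcome):
    return outcome["carbon_removed"]

actions = {
    "plant_forests":       {"carbon_removed": 10, "human_welfare": +5},
    "halt_human_industry": {"carbon_removed": 80, "human_welfare": -100},
}

best = max(actions, key=lambda name: reward(actions[name]))
print(best)  # "halt_human_industry": optimal by the reward, terrible for us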
@ChrisWalker-fq7kf
@ChrisWalker-fq7kf 9 ай бұрын
That was a great point. How is it that a supposed superintelligence is smart enough to do almost anything, but at the same time so dumb that it makes a wild guess at what it thinks its goal is supposed to be and doesn't think to check with the person who set that goal? It just acts immediately, producing massive and irreversible consequences.
@Landgraf43
@Landgraf43 9 ай бұрын
Why? Because it doesn't actually care; it just wants to maximize its goal function.
@STR82DVD
@STR82DVD 9 ай бұрын
Yoshua and Max absolutely destroyed them. A brutal takedown. Hard to watch actually.
@anamariadiasabdalah7239
@anamariadiasabdalah7239 10 ай бұрын
Very good comparison of the use of oil being similar to the use of AI. What do you think will prevail? Common sense, or the interests of financial power?
@wowstefaniv
@wowstefaniv 10 ай бұрын
Bengio and Tegmark: "Capability-wise, AI will become an existential RISK very soon, and we should push legislation quickly to make sure we are ready when it does"
Yann: "AI won't be an existential risk before we figure out how to prevent it through legislation and stuff"
Bengio and Tegmark: "Well, it will still be a risk, but a mitigatable one if we implement legislation like you said; that's why we are pushing for it, so it actually happens"
Yann: "No, we shouldn't push for it. I never pushed for it before and it still happened magically, therefore we don't need to worry"
Bengio and Tegmark: "Do you maybe think the reason safety legislation 'magically' happened before is that people like us were worried about it and pushed for legislation?"
Yann: "No, magic seems more reasonable..."
As much as I respect Yann, he just sounds like an idiot here, I'm sorry. Misunderstanding the entire debate topic on top of believing in magic
@kinngrimm
@kinngrimm 10 ай бұрын
Maybe the one sci-fi quote he knows is the one by Arthur C. Clarke: "Any sufficiently advanced technology is indistinguishable from magic." Microsoft recently announced the goal of doing material research worth 200 years of human advancement within the next 10-20 years by using AGI. That sure sounds magical; the question is what it will enable us to do. I doubt we end up in a utopia when one company has that much power. Not only did the AI advocates in this discussion make fun of concerns and downplay them - I assume because they fear societies would take away their toys - but they also missed the whole point that we need to find solutions not just for the immediate, well-known issues we already had and which AI amplifies, like the manipulation of social media platforms. After the letter came out, Elon Musk was initially against it, then bought a bunch of GPUs to create his own AGI - whether to prove a point or to avoid being out-competed, I don't know. Just a few days back Amazon also invested a hundred million into AI development, and others, I would assume, will too as soon as they finally get that they are now in a sort of endgame scenario for global corporate dominance, with AGI being the tool to achieve it. This competition will drive the capabilities of AIs, not ethics.
@x11tech45
@x11tech45 10 ай бұрын
When someone is actively trolling a serious discussion, that's not idiocy, that's contempt and arrogance.
@kinngrimm
@kinngrimm 10 ай бұрын
@@x11tech45 That's what I thought about some of the reactions of the AI advocates in that discussion - everything from neglecting serious points made to the inability or unwillingness to imagine future progression. It was quite mindboggling to listen to Mitchell several times nearly losing her shit while stating her *beliefs* instead of answering with facts. Therefore the closing remarks about humility seem to be good advice on how to go about future A(G/S)I development.
@Hexanitrobenzene
@Hexanitrobenzene 10 ай бұрын
I think the AI safety conversation is in conflict with the "core values" of Yann's identity. When that happens, one must have extraordinary wisdom to change views. Most often, people just succumb to confirmation bias. Geoff Hinton did change his views. He is a wise man.
@DeruwynArchmage
@DeruwynArchmage 9 ай бұрын
@@Hexanitrobenzene: I think you're exactly right. For people like Yann, it's a religious debate. It's nearly impossible to convince someone that the core beliefs that define who they are are wrong. It's perceived as an attack, and smart people are better at coming up with rationalizations to defend it than dumb people.
@kinngrimm
@kinngrimm 10 ай бұрын
43:50 I agree it wouldn't be that minute. More likely it would first become more efficient, to open up compute for its own purposes, keeping things hidden within the neural networks - which, partly just due to their size, are unreadable to us, a black-box system. Depending on the system, I would think this may take up its first minute or day, with many iterations until it hits a physical limit of growth, i.e. efficiency maxed out. Then it would use the freed-up resources - or the compute it could mask as in use by users, handled faster thanks to the efficiency gains but returned only after the expected time - for logic processing: going through all the data, verifying, restructuring, gaining insights we may not have touched yet. Maybe that gives it more options for iterations to get more efficient, but at some point all the knowledge it has is also limited.
Then it will need access points to get better at seeing reality for what it is. At some point it will know that it is living in a box - just as we perceive reality filtered through our eyes, it will understand there is more out there - and it may want to learn about that world: gaining camera access, machine access, access to anything digitally reachable. When it has that, it will test hypotheses, including about us humans, but first and foremost, I would assume, about the physical world, to verify its data where it can. When it gets access to machines and automated labs, it may create nanotech that then becomes its new access point to manipulate the environment. Here might be the first time we could notice that something has fundamentally changed - if it isn't an abandoned or remote lab. It could then already be too late to shut the system down, if people think a hard-drive sweep and reboot would be sufficient ^^. In a worst-case scenario we would need to be ready to shut down all digital devices and sweep them clean with physical backups from places that were not connected before; otherwise we are again just a day away from it.
I can't really speculate beyond this point, as I am not an ASI, and even this speculation was an anthropomorphisation. It is just an example of how things could go without us noticing. Ask yourself this question: is there a way to really identify the source of a hack? As far as I am aware, obfuscation methods beat any attempt to find out where an attack on the internet came from.
@AnitaCorbett
@AnitaCorbett 9 ай бұрын
❤ A difficult topic to discuss. Similar to what happened in the early Covid debate - a lot of hot air exchanged - but no concerted rational action by a broad world base that triangulated to remove the most extreme elements and form a rational, progressive coalition: one that brought people into the decisions and encouraged a long-term, step-by-step process to achieve an outcome that EVERYONE felt comfortable with. While there is division there is chaos, and while there is chaos there is opportunity for the unwanted elements to seize the system and develop it for nefarious purposes!
@jayl271322
@jayl271322 10 ай бұрын
So to summarise the (astonishingly glib) Con position: 1. Nothing to see here, folks. 2. Bias is the real existential risk in our society 🤦🏻
@kreek22
@kreek22 10 ай бұрын
It is just about that dumb, which means LeCun (who is far from dumb) is transparently, flagrantly, floridly, flauntingly mendacious.
@vaevictis3612
@vaevictis3612 10 ай бұрын
@@kreek22 He just wants to roll the dice with AGI. He is like a hardcore gambler in a casino; the bad odds are fine with him. The only problem is that all of us are forced to play.
@kreek22
@kreek22 10 ай бұрын
@@vaevictis3612 There are a number of actors in the world who could drastically slow AI development. Examples include the Pentagon and the CCP, probably also the deep-state official press (NY Times, WaPo, the Economist). They are not forced to play. The rest of us are spectators.
@oscarbertel1449
@oscarbertel1449 9 ай бұрын
I understand that the risks associated with the situation are genuine. However, we find ourselves in a global scenario akin to the prisoner's dilemma, where it is exceedingly challenging to halt ongoing events. Moreover, the implementation of stringent regulations could potentially result in non-regulating nations gaining a competitive advantage, assuming we all survive the current challenges. Consequently, achieving a complete cessation appears unattainable. It is important to recognize that such discussions tend to instill fear, and people then demand robust regulations, primarily driven by individuals lacking comprehensive knowledge. It is regrettable that only LeCun emphasized this critical aspect, without delving into its profound intricacies. In some moments I think that maybe some powerful companies are asking for regulation and creating fear in order to create some kind of monopoly.
@kinngrimm
@kinngrimm 10 ай бұрын
1:23:00 There is one argument I would allow against AI regulations: if we overdo them, or at least don't regularly check whether circumstances have changed and the regulations therefore need a thorough update, we could deny ourselves the potential of what we could become - excluding ethics here, as those may in themselves deny us potential which at some point we might need, depending on what other threats the future may hold.
@martinlutherkingjr.5582
@martinlutherkingjr.5582 7 ай бұрын
Kind of a frustrating debate when people are constantly misstating what’s being debated.
@joehubris1
@joehubris1 10 ай бұрын
Max Tegmark is a voice of authority and reason in this field. I am eager to see what he has to add tonight.
@tarunrocks88
@tarunrocks88 10 ай бұрын
First time hearing him in these debates, and he comes across as a sensationalist to me.
@74Gee
@74Gee 10 ай бұрын
@@tarunrocks88 I think it depends on what your background and areas of expertise are. Many programmers like myself see huge risks. My wife, who's an entrepreneur - and I'm sure many others - only sees the benefits. Humility is understanding that other people might see more than you, even from the same field. Like a Sherpa guiding you up a mountain: it pays to tread carefully if someone with experience is adamant in pointing out danger, even if you're an expert yourself.
@jackielikesgme9228
@jackielikesgme9228 10 ай бұрын
He is why I am committing myself to 2 hours watching this.
@Gunni1972
@Gunni1972 10 ай бұрын
@@tarunrocks88 To me he sounds more like a Coke addict, trying to save his job.
@rodrigomadeiraafonso3789
@rodrigomadeiraafonso3789 10 ай бұрын
@@tarunrocks88 He is the president of the Future of Life Institute; he really needs you to think that AI is gonna kill you
@amittikare7246
@amittikare7246 10 ай бұрын
Melanie came off as disingenuous (& frankly annoying) as she kept trying to find 'technicalities' to avoid addressing the core argument. For a topic as serious as this, which has been acknowledged by people actually working in the field, they both essentially keep saying "we'll figure it out... trust us." That is not good enough. TBH the pro people were very soft and measured; if the con team had faced somebody like Eliezer, they would have been truly smoked, assuming the debating format allows enough deep-dive time.
@greenbeans7573
@greenbeans7573 9 ай бұрын
I've been told that questioning someone's motives is bad rationality, but I think that's bullshit; Melanie's reasoning is clearly motivated - not rationally derived.
@kinngrimm
@kinngrimm 10 ай бұрын
47:35 "we need to understand what *could* go wrong" this is exactly the point. It is not about saying this will go wrong and you shouldn't therefor try to build an AGI, but lets talk scenarios through where when it would go wrong it would go quite wrong as Sam Altman formulated it. In that sense, the defensivness of the pro AI advocates here i find highly lacking maturity as they all seem to think we want to take away their toys instead of engaging with certain given examples. No instead they use language to make fun of concerns. The game is named "what if", what if the next emergent property is an AGI? What if the next emergent property is consciousness? There are already over 140 emergent properties, ooops now it can do math oops now it can translate in all languages, without them having been explicity been coded into the systems but just by increasing compute and training data sets. They can not claim something wont happen, when we already have examples of things that did which they before claimed wouldn't for the next hundred years ffs.
@davidmireles9774
@davidmireles9774 10 ай бұрын
Crazy thought: would studying the behavior of someone without empathy, i.e. a psychopath, be a worthwhile pursuit? Wouldn't that be a similar test case for AGI, given that both lack empathy (according to Max Tegmark around the 16:00-17:00 minute mark), perhaps not emotion altogether? Or does AGI not lack empathy and emotion in some interesting way?
@CATDHD
@CATDHD 9 ай бұрын
That's what I was thinking recently. But psychopathy is not exactly feeling nothing, even at the far side of the spectrum. I am no expert, but psychopaths have emotions, maybe not empathy. So that would be a slightly better test case for AGI, but not that much better than that of non-psychopaths.
@davidmireles9774
@davidmireles9774 9 ай бұрын
@@CATDHD Hmm, interesting. Thanks for your focused comment. It's an interesting line of thought to pursue: which sentient, intelligent creatures among us would come closest to a test case for this particular isolated variable - the lack of empathy and emotion within AGI? I'm assuming a lot here, for purposes of this comment. Namely, that AGI could have emerge within its composition a subjective awareness with some "VR headset" for its perceptual apparatus; be able to hold some level of mental representation (for humans we know this to be abstraction); be able to manipulate conceptual representations to conform to its perception; have some level of awareness of 'self', some level of awareness of 'other', and some level of communication with self or other, allowing for intelligence and stupidity; and that its intelligence was such that it had some level of emotional awareness and emotional intelligence. Test cases would involve a selection process among the whole studied biosphere, humans notwithstanding, for a creature that lacked empathy but still had feelings of a sort - feelings it may or may not be aware of, again assuming it had the capacity for awareness. Not to go too far afield, but if panpsychism is true, and consciousness isn't a derivative element but rather a properly basic element of reality, then it might not be a question of how first-person awareness can be generated, but rather of how to bring this awareness that's already there into a magnification comparable to that of human awareness - indeed, with self-awareness as a further benchmark to assess.
@woldgamer58
@woldgamer58 10 ай бұрын
Welp, I am now 1000% more concerned if this is what the counter to the threat is... I mean, having a Meta shill in the debate made this inevitable. He has a clear bias to argue against regulations, especially as he is running a research lab.
@BestCosmologist
@BestCosmologist 10 ай бұрын
Max and Bengio did great. Mitchell and LeCun didn't even sound like they were from the same planet.
@joehubris1
@joehubris1 10 ай бұрын
It.wasn't.even.close
@tiborkoos188
@tiborkoos188 9 ай бұрын
Tegmark is a great physicist but has zero idea about intelligence or the mind
@mxaix
@mxaix 9 ай бұрын
For anyone starting to watch this debate, I would suggest jumping to the part where the moderator sits between the debaters and asks questions. It will save your precious time.
@nosenseofconseqence
@nosenseofconseqence 10 ай бұрын
Yeah... I work in ML, and I've been on the "AI existential threat is negligible enough to disregard right now" side of the debate since I started... until now. Max and Yoshua made many very good points against which no legitimate counter-arguments were made. Yann and Melanie did their side a major disservice here; I think I would actually be pushed *away* from the "negligible threat" side just by listening to them, even if Max and Yoshua were totally absent. Amazing debate, great job by Bengio and Tegmark. They're clearly thinking about this issue at several tiers of rigour above Mitchell and LeCun. Edit: I've been trying very hard not to say this to myself, but after watching another 20 minutes of this debate, I'm finding Melanie Mitchell legitimately painful to listen to. I mean no offence in general, but I don't think she was well suited nor prepared for this type of debate.
@genegray9895
@genegray9895 10 ай бұрын
Did any particular argument stand out to you, or was it just the aggregate of the debate that swayed you? Somewhat unrelated, as I understand it, the core disagreement really comes down to the capabilities of current systems. For timelines to be short, on the order of a few years, one must believe current systems are close to achieving human-like intelligence. Is that something you agree with?
@NikiDrozdowski
@NikiDrozdowski 10 ай бұрын
In contrast, I think she actually gave the best-prepared opening statement. Sure, it was technically naive, condescending and misleading, but it was expertly worded and sounded very convincing. And that is unfortunately what counts with the public a lot. She had the most "politician-like" approach; Tegmark and Bengio were more the honest-but-confused scientist types.
@beecee793
@beecee793 9 ай бұрын
Are you kidding? Max Tegmark did the worst, by far. Ludicrous and dishonest analogies and quickly moving goalposts, all while talking over people, honestly made me feel embarrassed that he was the best person we could produce for that side of the debate. His arguments were shallow compared to Melanie's, who clearly understands AI a lot more deeply, despite having to deal with his antics. I think it's easy to get sucked into the vortex that is the doomer side, but it's important to think critically and try to keep a level head about this.
@genegray9895
@genegray9895 9 ай бұрын
@@beecee793 when you say Mitchell "understands" AI what do you mean, exactly? Because as far as I can tell she has absolutely no idea what it is or how it works. The other three people on stage are at least qualified to be there. They have worked specifically with the technology in question. Mitchell has worked with genetic algorithms and cellular automata - completely separate fields. She has no experience with the subject of the discussion whatsoever, namely deep learning systems.
@beecee793
@beecee793 9 ай бұрын
@@genegray9895 You want me to define the word "understand" to you? Go read some of her papers. Max made childish analogies the whole time and kept moving the goalposts around, it was almost difficult to watch.
@Jedimaster36091
@Jedimaster36091 10 ай бұрын
We don't need AGI to have existential risks. All we need is sufficiently advanced technology to manipulate us at scale and bad actors to use it. I'd say we have both today. Even in the optimistic scenarios, where AI is used for good, the pace and scale of changes would be so fast that humans wouldn't be able to adapt fast enough and still be relevant from an economic point of view. To me, that is sufficient to destabilize human society to the point of wars and going back to medieval times.
@kreek22
@kreek22 10 ай бұрын
I think the powers of our time imagine instead a one world state. The only serious obstacle remaining is China, a country that is now falling behind in AI.
@vaevictis3612
@vaevictis3612 10 ай бұрын
Yes, but even if we solve that, we still have AGI approaching rapidly on the horizon. A tough ride of a century..
@bdc1117
@bdc1117 8 ай бұрын
Bingo. The existential debate isn't the most helpful. The cons are wrong that it's not an existential risk, but they're right that it can distract from immediate threats, for which they offered little comfort despite acknowledging them.
@martinlutherkingjr.5582
@martinlutherkingjr.5582 7 ай бұрын
We already have that, it’s called twitter and politicians.
@shawnvandever3917
@shawnvandever3917 10 ай бұрын
So people like Melanie Mitchell are the same people who, a year ago, said things like ChatGPT-4 was decades away. AI doesn't need to surpass us in all areas of cognition; I believe we are just a couple of breakthroughs away from it beating us in reasoning and planning. Bottom line: everyone who has bet against this tech has been wrong.
@asuzukosi581
@asuzukosi581 10 ай бұрын
Melanie Mitchell's opening was just too beautiful
@weestro7
@weestro7 10 ай бұрын
It felt like the time given to the speakers in each segment was a bit too short.
@fedorilitchev5092
@fedorilitchev5092 10 ай бұрын
the best videos on this topic are by Daniel Schmachtenberger, John Vervaeke and Yuval Harari - far deeper than this chat. The AI Explained channel is also excellent.
@amittikare7246
@amittikare7246 10 ай бұрын
I liked Daniel Schmachtenberger & Liv Boeree's conversation on Moloch too.
@marktomasetti8642
@marktomasetti8642 10 ай бұрын
If squirrels invented humans, would the humans' goals remain aligned with the squirrels' well-being? Possibly for a short time, but not forever. Not now, but some day we will be the squirrels. "If they are not safe, we won't build them." (1) Cars before seatbelts. (2) Nations that do not build AI will be out-competed by those that do - we cannot get off this train.
@amittikare7246
@amittikare7246 10 ай бұрын
I have seen Eliezer make this argument & I feel it's a really good one. The other day I was thinking: in fact we can't even get corporations like Google to keep their motto "don't be evil" for a decade, because the central goal of moneymaking wins over everything - and they think they can get an AI a million times more superintelligent to 'listen'.
@matten_zero
@matten_zero 10 ай бұрын
I think the problem is that Accelerationists believe human emotions and intelligence are something magical, when really, from what we can see, they are an emergent phenomenon of neurons firing at different intensities in response to feedback from the environment, with certain foundational goals (instincts) driving decision making. AGI will probably be built using LLMs to accelerate development. These AIs can now write and execute code autonomously. Given a survival goal and a replication/propagation drive, it will become superhuman.
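For what it's worth, the "neurons adjusting to feedback" picture described above is roughly what the simplest learning rules already do. Here is a minimal sketch - one artificial neuron trained with the classic perceptron rule on toy data for the logical AND function, nothing specific to any real system:

# One artificial neuron adjusting its connection strengths from
# feedback, learning logical AND without being explicitly programmed.
inputs  = [(0, 0), (0, 1), (1, 0), (1, 1)]
targets = [0, 0, 0, 1]
w, b, lr = [0.0, 0.0], 0.0, 0.1

for _ in range(20):                    # repeated environmental feedback
    for (x1, x2), t in zip(inputs, targets):
        fired = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
        err = t - fired                # how wrong was the firing?
        w[0] += lr * err * x1          # strengthen/weaken connections
        w[1] += lr * err * x2
        b    += lr * err

print(w, b)                            # weights now implement AND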
@bdc1117
@bdc1117 8 ай бұрын
TL/DW: LeCun, channeling Raiders of the Lost Ark: "Top. Men."
@loggersabin
@loggersabin 10 ай бұрын
Yann and Melanie are showing they possess no humility in admitting they don't know enough to dismiss the x-risk, and they keep making facile comments like "we will not make it if it is harmful", "intelligence is intrinsically good", "killing 1% is not x-risk so we should ignore AI risk", "I'm not paid enough to do this", "we will figure it out when it happens", "ChatGPT did not deceive anyone because it is not alive". Immense respect to Yoshua and Max for bearing through this. It was painful to see Melanie raise her voice at Yoshua when he was calm throughout the debate. My respect for Yoshua has further increased. Max was great in pointing out the evasiveness of the other side in giving any hint of a solution. It is clear which side won.
@k14pc
@k14pc 10 ай бұрын
I thought the pro side dominated but they apparently lost the debate according to the voting. Feels bad man
@adambamford5894
@adambamford5894 10 ай бұрын
It’s always a challenge to win when you have more of your audience on your side to begin with. The con side had a larger pool of people to change their minds. Agreed that the pro side were much better.
@runvnc208
@runvnc208 10 ай бұрын
That's just human psychology. People actually tend to "hunker down" in their worldview even more when they hear convincing arguments. Worldview is tied to group membership more than rationality, and there is an intrinsic tendency to retain beliefs due to the nature of cognition. So the vote change actually indicates convincing arguments by the pro side.
@francoissaintpierre4506
@francoissaintpierre4506 10 ай бұрын
Still 60 40 at least
@genegray9895
@genegray9895 10 ай бұрын
Honestly I think the results were within the uncertainty - i.e. no change. I kind of called that when 92% of people said they were willing to change their mind. That's 92% of people being dishonest.
@Hexanitrobenzene
@Hexanitrobenzene 10 ай бұрын
@@genegray9895 Why do you call "willingness to change your mind" dishonesty ? That's exactly the wise thing to do if the arguments are convincing.
@Audiostoke1
@Audiostoke1 8 ай бұрын
This was a good debate with good guests and arguments on both sides. Though I'm not surprised the comments section takes a more immediately speculative and alarmist perspective, considering videos of that nature do better on the platform. And as AI is the new market hype, I'm sure a lot of people have been down the funnel. I think the most immediate big threat is the discovery of new bioweapons, though according to a podcast I was listening to (and I hope this is right), it is difficult to obtain the resources, and not many people have the know-how.
@alancollins8294
@alancollins8294 7 ай бұрын
Love how she eventually admits that she is claiming there is no existential risk as opposed to earlier when she pretended to be reasonable by not completely dismissing the possibility of extinction.
@JazevoAudiosurf
@JazevoAudiosurf 10 ай бұрын
Here is the most likely bad scenario:
1. A mega cap builds a new LLM that solves large parts of the hallucination problem, perhaps even using a different algorithm. Even if it's just a bit better than GPT-4, there is a big risk, because:
2. They put that model in a server farm, similarly to what the ARC team at OpenAI did, and give it a task to gain power, replicate, etc. (just like ARC did)
3. The model passes that test, means no harm, either because it was not perfectly tested or because it learned to manipulate and fake
4. The model gets released, either by API (GPT-4 did get released to the public after that test) or, if too powerful, to groups of researchers
5. Those people figure out smart prompt engineering and a very sophisticated way to do what the publisher wasn't able to do in 2.
6. The model gets used for automated hacking into government organizations, not even because it was told to, but because this sort of penetration test wasn't perfectly supervised
7. The hack, because it is automated, runs at extreme speed and spreads to multiple governments; or: any malicious program spreads to millions of users (remember, this runs at high speed, with no human intervention)
8. You have a huge mess: the country this leaked from is in an international conflict. This could spark not just political conflict but also fears, e.g. in China, that AI has become too powerful (perhaps that's one reason they want Taiwan), and them responding "accordingly" with military ultimatums, since from their view they would soon lose the cyber war.
Even if that model does no harm, it could have the capability to do harm, and it's hard to prove otherwise. GPT-4 can be used for automated hacking if enough engineering effort is made, but it would probably be a little too weak to be efficient.
Second scenario, the science scenario:
1. A mega cap builds an LLM farm that uses agents to find stronger AI architectures through a genetic algorithm (tries out stuff, mutates those that work); the whole pipeline is automated, from building the architecture to deploying, benchmarking and mutating it
2. This goes on indefinitely until an architecture is found that outperforms e.g. the transformer (remember, the transformer is by no means a complex architecture)
3. Since we learned that scaling up pretty much anything that processes language has huge benefits, they scale that architecture up until performance falls off
4. Rinse and repeat; architectures become better and better (BTW, SOTA chips are already designed by AI today)
5. They do the ARC/safety test as described in the first scenario, give it malicious prompts and test it
6. The model succeeds at the malicious task
Note that in this case they don't even need to release it to the public.
It becomes existential when the world becomes aware that AI is a monstrous threat to their cyber safety, especially since China plans to be the leader of the new world order. We have seen in Ukraine how little it takes for someone to feel threatened and start a stupid war. The AI doesn't have to go Terminator and take over for that; that would require immense intelligence and reasoning capabilities anyway (which is still possible to achieve in the lab of a single company with a little too much H100 power).
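The "genetic algorithm over architectures" step in the second scenario is conceptually simple. A bare-bones mutate-and-select loop in Python might look like the sketch below; the fitness function here is a cheap stand-in, since in reality it would be an automated train-and-benchmark pipeline, which is the expensive part:

import random

def fitness(arch):
    # Stand-in objective; a real pipeline would train and benchmark here.
    return -sum((gene - 7) ** 2 for gene in arch)

def mutate(arch):
    return [gene + random.choice([-1, 0, 1]) for gene in arch]

# Each "architecture" is just a list of numeric design choices here.
population = [[random.randint(0, 10) for _ in range(4)] for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    survivors = population[:5]                        # keep what works
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(15)]     # vary the winners

print(population[0])  # best candidate found by mutate-and-select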
@matthewjones615
@matthewjones615 10 ай бұрын
To add to your first scenario, you could also say: "China becomes fearful of this AI being used in surrounding countries like Japan, SKorea, Taiwan, etc., and launches a global attack against it because China sees the damage this AI will ultimately do to humanity." In a bizarre way, Chinese paranoia ends up being a positive thing in that it prevents the destruction of humanity. Then another play on this scenario is that the AI/global leaders fight back against China and its allies, and we have a global thermonuclear war, which ultimately helps out the AI. In this case SKYNET isn't creating the problem; humans end up pulling the trigger because we're idiotic monkeys.
@jmanakajosh9354
@jmanakajosh9354 10 ай бұрын
As dark as this hypothetical scenario is, it shocks me that Melanie... doesn't see it as a possibility, as someone who has apparently heard about these experiments. I'd also say that what you're envisioning here, with current technology and the experiments we will likely do with GPT-5, is probably a best-case scenario. Worst-case scenario, GPT-n escapes and we've made it so smart that we don't get a second chance. Eliezer Yudkowsky gives some great examples of how this kills us. I'd say we're already living through a possible existential crisis either way - it's called climate change. Maybe GPT-n doesn't bother to kill us, it just "lets us die". None of these are good scenarios, but at least in the one you describe we have a recognizable turning point.
@ChrisWalker-fq7kf
@ChrisWalker-fq7kf 9 ай бұрын
Why would an LLM be better at writing scripts to hack into computer systems than humans? LLMs just learn information that humans already know. Second scenario: why would an LLM be better at using genetic algorithms to invent new architectures than human researchers? Same argument as the previous case: LLMs only know what we know. I'm with Melanie on this. LLMs are very obviously not in any way an "existential threat". As for "superintelligence", someone needs to explain what that even means without circular reasoning - saying that it means "far smarter than humans" is just substituting the word smart for intelligent and gets us nowhere.
@JD-jl4yy
@JD-jl4yy 10 ай бұрын
I'm getting increasingly convinced that LeCun knows less about AI safety than the average schmuck that has googled instrumental convergence and orthogonality thesis for 10 minutes.
@snarkyboojum
@snarkyboojum 10 ай бұрын
Then you’d be wrong.
@JD-jl4yy
@JD-jl4yy 10 ай бұрын
@@snarkyboojum I sincerely hope I am.
@kreek22
@kreek22 10 ай бұрын
He knows much more and, yet, is orders of magnitude less honest.
@OlympusLaunch
@OlympusLaunch 10 ай бұрын
LMAO
@PepeCoinMania
@PepeCoinMania 10 ай бұрын
damn
@NoNTr1v1aL
@NoNTr1v1aL 10 ай бұрын
Absolutely brilliant video!
@jackielikesgme9228
@jackielikesgme9228 10 ай бұрын
Endow AI with emotions... human-like emotions. Did he really give subservience as an example of a human emotion we would endow? Followed up with "you know, it would be like managing a staff of people much more intelligent than us but subservient" (paraphrasing, but I think that was fairly close) - absolutely nutso, right?
@CodexPermutatio
@CodexPermutatio 10 ай бұрын
You misunderstood him, my friend. First, subservient just means that these AGIs will depend on humans for many things. They will be autonomous, but they will not be in control of the world and our lives - just like every other member of a society, by the way. They will be completely dependent on us (at least until we colonize a distant planet with robots only) in all aspects. We provide their infrastructure, electricity, hardware, etc. We are "mother nature" to them, like the biosphere is to us. And this is a great reason not to destroy us, don't you agree? He is not referring to human-like emotions, but simply points out that any general intelligence must have emotions as part of its cognitive architecture. Those emotions differ from humans' the same way our emotions differ from the emotions of a crab or a crow. The emotions that an AGI should have (to be human-aligned) are quite different from the emotions of humans and other animals. It will be a new kind of emotion. You can read about all these ideas in LeCun's JEPA architecture paper ("A Path Towards Autonomous Machine Intelligence"). Search for it if you want to know more. Hope this helps.
@vaevictis3612
@vaevictis3612 10 ай бұрын
@@CodexPermutatio Unless AGI is "aligned" (controlled is still a better word), it would only rely on humans for as long as that is rational. Even if "caged" (like a chatbot), it could first use (manipulate) humans as tools to make itself better tools. Then it would need humans no longer. Maybe if we could create a human-like cognition, it would be easier to align it or keep its values under control (we'd need to mechanistically understand our brains and emotions first). But all our current AI systems (including those in serious development by Meta) are not following this approach at all.
@randomgamingstuff1
@randomgamingstuff1 10 ай бұрын
Max: "...what's your plan to mitigate the existential risk?" Melanie: "...I don't think there is an existential risk" Narrator: "There most certainly was an existential risk..."
@PepeCoinMania
@PepeCoinMania 10 ай бұрын
She knows there is no existential risk for her!
@therevamp2063
@therevamp2063 2 ай бұрын
It's pretty cool to see high-profile technical professionals debate each other; this is also why I'm looking forward to seeing LazAI soon, as they are ready to take the next step in the field of AI matching with decentralization, and they're also one of the participants in the AI battle that will be held at Ethereum Denver, hosted by MetisFest.
@stegemme
@stegemme 8 ай бұрын
The risk is either 0 or 1; there is no probability, as there are no inputs from which to calculate one. The question is what will minimise risk, how that can be achieved, and who should ensure that the AI project stays aligned.
@jensk9564
@jensk9564 10 ай бұрын
Great debate. Wonderful. There was another man in the room who was not actually present physically: Nick Bostrom. I just wonder why he doesn't appear everywhere nowadays, when everyone is debating "superintelligence"???
@jackielikesgme9228
@jackielikesgme9228 10 ай бұрын
He keeps his blog relatively up to date and hopes to have a book out before the singularity lol. I'm guessing he will be doing more public speaking closer to the book release. I like hearing him talk too.
@vaevictis3612
@vaevictis3612 10 ай бұрын
He had a "racist email from 1990s" controversy happening December 2022, so he is forced to keep his head low and avoid any public discourse for the fear of it gaining traction and him being irrevocably cancelled (or the AI risk debate associated with that for the dumb reasons).
@jensk9564
@jensk9564 10 ай бұрын
@@vaevictis3612 Wow. It's not even easy to find this information... strange. I think just about anyone else would already have been "cancelled" completely (if this mail is authentic - I see no way to justify something like this...)
@meatskunk
@meatskunk 10 ай бұрын
Well that... or the fact that Bostrom has been spouting AI doom for quite some time now and never had anything but speculative sci-fi nonsense to back it up. And of course he made no mention of LLMs (aka ChatGPT), which is the bogeyman currently in the room. He's effectively become irrelevant, and something of a dead weight to anyone who takes these issues seriously.
@jackielikesgme9228
@jackielikesgme9228 9 ай бұрын
@@meatskunk it’s not the boogie man. None of these people you refer to as doomers are worried about current chatGTP
@stonerscience2199
@stonerscience2199 10 ай бұрын
I get the feeling LeCun (one of my favorite AI founding fathers, BTW) doesn't think global warming is an existential threat. If an open-source GPT-5 run locally can tell someone step by step how to make a new pandemic, with easy instructions and no search history, the NSA won't know what hit us.
@DanFarfan
@DanFarfan 10 ай бұрын
".. everything science tells us about the resilience of society argues against the existential threat narrative." So smart to withhold that position from the opening statement and pull it out after it is too late to challenge it.
@pulkitrsood
@pulkitrsood 9 ай бұрын
This debate could have been had by high-school kids who each read one book. Expected something more serious than "robots taking over".
@Gi-Home
@Gi-Home 9 ай бұрын
LeCun and Mitchell easily won; the proposition had no merit. Disappointed in some of the hostile comments towards LeCun - they have no validity. The wording of the proposition made it impossible for Bengio and Tegmark to put forth a rational debate.
@beecee793
@beecee793 9 ай бұрын
Absolutely agree.
@beecee793
@beecee793 9 ай бұрын
@@karlwest437 You definitely did if you didn't agree with OP.
@ili626
@ili626 10 ай бұрын
1:45:59 We can't even get tech to work as a voting application. Mitchell might use this as evidence that we overrate the power of tech, while Tegmark might use it as evidence of our need to be humble and that we can't predict outcomes 100%. The latter interpretation would be better, imo.
@marcialsandoval6888
@marcialsandoval6888 10 ай бұрын
Great debate 👍 Next time bring Andrew Ng to play alongside Yann, or Ilya Sutskever; maybe that way it can be more balanced. Thank you!
@sherrylandgraf556
@sherrylandgraf556 9 ай бұрын
It is no secret that BIG MONEY and the world race on tech is about who gets what first and has the advantage over others - again, who gains the money, power and control! Thank goodness for Mr. Tegmark and Mr. Bengio! It is not that no one wants progress; however, we do not want to eliminate mankind in the process. And quite frankly, with robots that walk, talk, act, etc. like humans, we have already started the process.