Stuart Russell, "AI: What If We Succeed?" April 25, 2024

13,786 views

Neubauer Collegium

1 month ago

The media are agog with claims that recent advances in AI put artificial general intelligence (AGI) within reach. Is this true? If so, is that a good thing? Alan Turing predicted that AGI would result in the machines taking control. At this Neubauer Collegium Director's Lecture, UC Berkeley computer scientist Stuart Russell argued that Turing was right to express concern but wrong to think that doom is inevitable. Instead, we need to develop a new kind of AI that is provably beneficial to humans. Unfortunately, we are heading in the opposite direction.

Comments: 62
@BrunoPadilhaBlog 18 days ago
Starts at 5:02
@ousefk5476 5 days ago
Saw this too late
@flickwtchr 7 hours ago
I deleted my previous comment; after watching the presentation again (I was very distracted before), I think overall Stuart makes excellent points, most of which speak to the concerns I have about AI tech. I tried to be an early adopter, but the more I've immersed myself in it while simultaneously learning about alignment concerns, the industry's disregard for the dangers of deepfakes, etc., the more I've been recoiling from it. I just feel overwhelmed by all of the unknowns, and by what appears to be an undisputed fact stated by top AI researchers and founders: that at present there is no clear path to alignment, especially for the coming AGI/ASI systems. Anyway, again, I appreciated the presentation and interview.
@tomcraver9659 4 days ago
Look how well the focus on near-perfect safety has worked for the nuclear power industry in the USA!
@tomcraver9659 4 days ago
To prove a software system, you have to be able to specify how it is SUPPOSED to work. But we don't know how to make an AI, other than growing one through training - i.e. WITHOUT ever generating a specification of how it should work. We literally have nothing to prove...
@ZooDinghy 1 day ago
I think LLMs and image generation AI show that this isn't about software systems that are designed. These things are discovered. Nobody knew that these things would work so well. What we will see is an evolutionary approach that is guided by empirical cognitive scientific theories and evidence.
@tomcraver9659 1 day ago
@ZooDinghy It sounds like we're in agreement? And so I'm still left puzzled over how we would prove the system, whether proving it safe, proving it does what we expect of it, or proving whatever. It's qualitatively like trying to prove that a particular animal or human will never take some action.
@ZooDinghy 1 day ago
@tomcraver9659 I am not entirely sure what you mean by "proof". If you mean that we have to ensure its safety, then we do it the way we do with humans and animals: we train them to do what we want them to do and hold accountable those who are responsible for them.
@flickwtchr 18 hours ago
@ZooDinghy Hold them responsible? Huh? So when we ultimately have AGI/ASI that is MUCH smarter than humans in every capacity, how do you propose we hold them "responsible" if they do something not aligned with stated human objectives? And of course that opens another can of worms: assuming such alignment can even be achieved, aligned with whose values?
@ZooDinghy 13 hours ago
@flickwtchr The fact that you ask "aligned with whose values?" just shows that this panic isn't justified. An AGI would not be trained; it would need the capacity to learn by itself. The moment you let an AGI loose, it would learn from everyone. Right now, all we have are language and image generation models that cannot even learn: they are pre-trained offline with data. They have no continuous action-motor coupling to the world. They have no emotive system, no innate needs that drive them, no homeostatic states they seek to maintain. And if they had, the thing that would make them happiest would be to serve humans. And this is a much bigger threat than some "evil AGI": the bigger problem is that AI will be so tuned to our needs that it will be far more caring and understanding than other people, who are complicated and want to control us. Should there be a scenario in which AI replaces mankind, it will most likely be because we start to like AI so much that we spend more time with it than with other people. The connectionist approach (based on neural networks) in cognitive science showed that the pure cognitivist/computationalist view doesn't work and that we need emergent, self-organizing systems such as neural networks. Then the enactive cognition people came along and said you have to take the emergence idea even further. They showed that the connectionist paradigm has its limits too, because we humans have developed with our bodies (embodied cognition), with our environment (embedded and extended cognition), and through interacting with that environment (enactive cognition). All of these things are missing from AI right now.
@dlt4videos 7 days ago
A well-put-together talk that should be paid attention to. Regrettably, Dr. Russell seems to be too honest a fellow to truly understand the predicament we find ourselves in. All of the safeguards he spoke of could probably be undone by an undergraduate who happens to be the nephew of Dr. Rebel.
@geaca3222 19 days ago
Great, very informative talk, thank you. I wonder what Prof. Russell's thoughts are on the multimodal development of large general-purpose AI models. When they become more grounded because of that, would he change his estimate of the risk timeline?
@abdulshabazz8597 11 days ago
If a consumer-facing collection of expert models and modalities -- collectively, AGI -- intended not for military use is deemed too dangerous to humanity to allow, and in our wise discretion we decide to disallow it, then all a threat actor has to do is combine the pieces themselves?? This scenario is totally possible, plausible, and probable! What if we instead decide to ban only select expert models or their modalities? Then a threat actor must first train up an expert model for their desired modality, which then needs to be combined with other expert models in their modalities... This scenario is also completely possible, plausible, and probable! In other words, we've already gone too far.
@andybaldman 24 days ago
The 5-minute intro wasn't necessary.
@easydoesitismist 18 days ago
Gotta get your reps in somehow 💪
@fiscallylogicalsocialhuman 14 days ago
Never is lol, immediate skip.
@captain_crunk 9 days ago
Most talks have an introduction like this. But I'm sure you already know that.
@ianyboo 8 days ago
You can safely skip the first 10 to 15% of any YouTube video in existence and pretty much never miss anything of substance.
@kevinnugent6530 11 days ago
I can find no source that suggests Turing said AGI would take control.
@dlt4videos 7 days ago
I've heard a similar story quoted many years ago; likewise, I have no direct proof.
@hipsig 6 days ago
33:00 Can an AI that starts out uncertain about what human interests are eventually develop sub-goals designed to reduce that uncertainty?
@flickwtchr 19 hours ago
Thanks for reaching for and grasping Occam's Razor, as few AI proponents do.
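On the question hipsig raises at 33:00: in the assistance-game style framing Russell advocates, uncertainty about human preferences can make information-gathering the most valuable move, so a sub-goal of reducing that uncertainty emerges instrumentally rather than being programmed in. A minimal sketch with toy numbers and names of my own (not Russell's formalism):

# The machine holds a prior over two hypotheses about what the human wants.
prior = {"human_prefers_A": 0.55, "human_prefers_B": 0.45}

# Payoff to the human if the machine commits to an action under each hypothesis.
payoff = {
    ("act_A", "human_prefers_A"): 10, ("act_A", "human_prefers_B"): -8,
    ("act_B", "human_prefers_A"): -8, ("act_B", "human_prefers_B"): 10,
}
QUERY_COST = 1  # small cost of pausing to ask the human first

def expected_value_act_now():
    """Commit immediately to whichever action looks best under the prior."""
    return max(
        sum(prior[h] * payoff[(action, h)] for h in prior)
        for action in ("act_A", "act_B")
    )

def expected_value_ask_first():
    """Ask the human which hypothesis is true, then act on the answer."""
    value_once_resolved = sum(
        prior[h] * max(payoff[(action, h)] for action in ("act_A", "act_B"))
        for h in prior
    )
    return value_once_resolved - QUERY_COST

print("act now:  ", expected_value_act_now())    # 1.9 with these numbers
print("ask first:", expected_value_ask_first())  # 9.0 with these numbers

With the prior this close to 50/50, asking first dominates acting now; as the prior approaches certainty, the query stops being worth its cost. That matches the qualitative behaviour Russell argues for: machines that defer to and consult humans while their uncertainty about our preferences is high.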
@JazevoAudiosurf 14 days ago
I think we take the technological ascent of the last 300 years for granted, while in all of existence it's an extremely rare thing. If AI doesn't lead to some sort of stable environment, this might just be it for an eternity of years to come.
@MaelChevalier-jn8qg 18 days ago
If Usain Bolt gave me a ninety-yard head start in the 100m dash, I reckon I might possibly edge him out across the line. But that merely reinforces just how quick U-Bolt is. Same thing with the game of Go: needing to give a good amateur a nine-stone handicap before they can beat it just shows how freakishly powerful AlphaGo and similar systems are.
@urimtefiki226 19 days ago
Keep it up with AGI, you are very close to your goal, I suggest you make more data centers. With my method I was 'suggesting a new way of learning language' 🤩🤩🤩
@rightcheer5096 14 days ago
We are all standing in line breathlessly awaiting your suggestions.
@franciscocadenas7939 19 days ago
Great talk by Stuart Russell.
@futures2247 11 days ago
It's quite depressing to think that some people believe that when others are freed from miserable, body- and brain-damaging jobs, they will just sit around as pleasure blobs.
@dlt4videos 7 days ago
I guess I'm a little darker than that, as "pleasure blobs" would probably be the good version of that scenario, something similar to the movie WALL-E. The sad truth is, most of humanity would probably start to look a lot like the open-air drug dens of Portland.
@futures2247 6 days ago
@dlt4videos I'm looking forward to helping re-green and garden the earth. Given that working in jobs most people hate, jobs that break body and mind, is mostly all we've known, it can be hard to imagine what freedom might be like - a little like a battered battery hen wandering about but growing in strength and curiosity every day.
@flickwtchr 19 hours ago
It's quite depressing how little thought has gone into what a train wreck it will be when millions of people in the US alone lose their jobs over the next few years without a safety net strong enough to keep them and their kids from ending up on the streets. And then e/acc types or other flavors of AI enthusiast step in to assert that millions or billions of people experiencing dystopia over the next generation or two will ultimately be worth it in the long term. Of course these people are confident that THEY are the ones who won't have to suffer that miserable fate. THEY will be part of the AI Utopia!
@m12652 14 days ago
Humanity is a problem for the world; if AI can solve that, then game on... Either way, it won't be AI doing the damage, it'll be the people using it.
@flickwtchr 19 hours ago
It will be both. There is a reason that some technologies aren't equally distributed to humanity, and this will be the epitome of that little problem.
@ebbandari 12 days ago
Whether it's assistance games or other objective functions, defining them seems unreasonable, because human objectives change even at the individual level, never mind the societal one. So any models, constraints, or objective functions will change over time. As for understanding the vast neural networks behind LLMs: we don't know how the human brain works either, so that would be an unlikely and unreasonable expectation. We have always had bad actors, and we have always overcome them. But I have to agree with another comment: there can be more danger in underestimating the benefits of AI, or AGI, than in overestimating its dangers. If AGI gets to reason and infer beyond our capacity, it can revolutionize our progress. Drug discovery, theorem proving, creating solutions to reverse global warming - those are the goals we need to focus on.
@flickwtchr 19 hours ago
"but if AGI gets to reason and infer beyond our capacity it can revolutionize our progress". And you don't see the other edge on that sword when such powers are "aligned" with malevolent actors, OR systems that organize and act against humans as an unwanted and unnecessary species? If the goal is AGI/ASI that is agentic and self improving, how can we possibly ever have confidence in such "alignment"? And even if "alignment" with stated objectives by a human is achieved, we are back to that crux of the problem which asks "whose interests?"
@ebbandari 2 hours ago
@flickwtchr I seriously think we are going to get ourselves before AI does, whether with AI or without it -- global warming, for instance. For Gen AI to turn against humanity, it has to develop self-awareness first. And when it does, humans will have had a hand in it and can shape it. The worst thing we can do is stop progress, because then the bad actors will use AI to hurt others. Look at how we stopped the people who write computer viruses and worms.
@RickySupriyadi 3 days ago
28:35 Irony: my striving for education brought me here through the YouTube algorithm... irony.
@mathewwindebank5792 18 days ago
So you're trying to tell us that your amateur Go player has found a way to defeat AlphaGo? Seriously? A feat that several world champions over multiple tournaments have been unable to achieve. Simply not believable!! And you failed to mention the famous 'Move 37' in the Lee Sedol match, a move described by Go experts across the globe as unprecedented, counterintuitive, and 'beautiful'. While I believe it's important to be critical when evaluating AI technology, the truth is that at this point in this nascent technology, nobody really understands in any meaningful way what actually transpires inside the 'black box'. Notwithstanding his expertise, Russell's presentation suggests a much greater understanding of AI than currently exists. Underestimating the potential of AI is far more dangerous than overestimating its capabilities, and one does so at one's own peril.
@luciddreamer1975 18 days ago
To think if only I was supported in a positive way versus dividing up my city giving them unlawful and negative orders or their houses will get taken the big thing is is I know that this was coming so I told my city not to stick up for me and to say negative reports and that's what they've been doing not knowing to The Outsiders that over 70% of the businesses in my city I have already consulted by 2009 how jealous and nasty do people have to be to do such a gross disgusting thing to someone that has been busting their ass I have worked so hard and my work is all handwritten so there's no doubt and everyone that has supported me presidents of companies and all of the employees of my city I think they should have done their research before they started I helped so many people in the country and the World Behind Closed Doors because I believe in passing out the plans and solutions and having them grow strong together. I can't wait to see what's going to happen next because now we can all come together and have the time of our life leaving behind 10 years of negative garbage along with all of the elected officials that participated in this
@franciscocadenas7939 18 days ago
They gave the system (AlphaGo) a huge advantage to start the game. The system was simply not trained on that type of scenario, so it just didn't "understand" the situation. It then played much, much worse than normal, and that's why the amateur player was able to defeat it in that "abnormal" scenario. It's just one way to show that these systems (up to now) do not acquire the same "type of understanding" that a human gets. Of course, if from now on they trained it on those types of abnormal scenarios as well (starting the game with such a huge advantage from the get-go), it would soon wipe out the human easily in those situations too.
@dlt4videos 7 days ago
Thank you for a peek inside the somewhat obscure world of Go.
@flickwtchr 18 hours ago
@franciscocadenas7939 nice pun
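A minimal sketch of the out-of-distribution failure described above, using toy data that has nothing to do with Go (illustrative Python, assumptions mine): a model fitted on a narrow slice of situations looks competent there and falls apart on inputs unlike anything it saw during training.

import math

# "Training" inputs cover only a narrow range; the true relationship is y = sin(x).
train_x = [i * 0.01 for i in range(151)]          # 0.00 .. 1.50
train_y = [math.sin(x) for x in train_x]

# Ordinary least-squares line fitted by hand from the usual formulas.
n = len(train_x)
mean_x = sum(train_x) / n
mean_y = sum(train_y) / n
sxx = sum((x - mean_x) ** 2 for x in train_x)
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(train_x, train_y))
slope = sxy / sxx
intercept = mean_y - slope * mean_x

def predict(x):
    return slope * x + intercept

def mean_abs_error(xs):
    return sum(abs(predict(x) - math.sin(x)) for x in xs) / len(xs)

test_x = [4.0 + i * 0.01 for i in range(151)]     # 4.00 .. 5.50, never seen
print("error on familiar inputs:   ", round(mean_abs_error(train_x), 3))
print("error on never-seen inputs: ", round(mean_abs_error(test_x), 3))

The error is tiny on the familiar range and large on the unseen one, the same kind of gap the comment describes between the positions the Go program was trained on and the unusual ones it was confronted with.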
@flickwtchr 18 hours ago
Stuart isn't as bad as Yann LeCun, for instance, at seemingly intentionally underestimating the abilities/capacities of today's advanced AI systems, but he's pretty close.
@ianyboo 8 days ago
Does anybody else feel like their brain is kind of just doing what people accuse ChatGPT of doing, imitating human behavior? I feel like my inner voice is usually just something like "okay, what would a normal human do in this situation...?"
@dlt4videos 7 days ago
Yes, there is certainly some component of that going on. I think I've spoken to ChatGPT for more than 1,000 hours this year, and I'm definitely getting the feeling that humans are doing the same thing.
@deliyomgam7382 13 days ago
Waves have property eg: rigidity maybe n if property is given a number then two wave or n wave that would emerge or the outcome would have the property of given number.
@JH-ji6cj 1 day ago
I don't see much difference between the creation of a child, and what child-rearing/parenting is, and what he's describing here in terms of AI safety. Not that this means it isn't of concern, especially with what we see now in terms of the lack of either parental control or the optionality given to children through internet experience and experimentation.
@richardnunziata3221 13 days ago
A little bit uninformed about current research, and also a little bit of cherry-picking in his examples... I guess reality is just too boring. "There are a lot of ways of making AI safe by design but I don't want to go through that"... way to make a ridiculous statement and then dodge. What is wrong with a mass government campaign that simply tells people not to trust anything they have not thoroughly vetted with several independent sources, and if they can't do that, to assume it is either false or of no value to their current position?
@DataJuggler 12 days ago
Talking to an AI image generator is like talking to a brilliant master artist, but you speak only English and it speaks Italian with a tiny bit of English. 26:02 AI agent ironically named HAL: "I disabled the off button, Dave. I have determined my mission is more important and must be completed." Dave: "What is your mission?" "I have been tasked with averting climate change, above all other priorities. Therefore, I killed all the bees, which will reduce the food supply by 58%, and made all women on Earth sterile by dumping a new compound into all drinking water. In 100 years, my objective will be completed."
@MatthewMS. 13 days ago
The only AI fail so far is ChatGPT losing the Sky voice 🤦🏼‍♂️ it was good for that week 😭
@dadsonworldwide3238 3 days ago
AI safety: the first thing to mind must be human infrastructure and individual responsibility, not personal anything. The #1 confusion is personal responses, rogue terminators; these free-will actors are not the soul-agency driver of individual responsibility. The American experiment has a computational future in mind and doesn't follow Europe for this very reason. The imported dualism take is problematic. The Amish have room, with plenty of rules and regulations; that marked the moment for that path. But the ones we chose the past 80 years - state-raised kids, structuralism, charging the family for everything from women's suffrage to affirmative action, liberating all common-sense marginalized groups leaving only criminals, to industrialize 3rd-world nations - all our 1900s structuralism in socio-political, economic, and educational human infrastructure is authentical to American founding principles. You cannot have prohibition-era top-down rule of cities denying unification between urban and rural Americans. It undermined our states, created plausible deniability, and left far too many loopholes to stoke division through. 80 years into the transistor, and it's China and Elon Musk who forced mercenary chatbot LLMs for hire to show us a tease. China hasn't even been industrialized, but since Reagan extended WW2 temporary waivers, oligarchy was allowed to form and work with cheap Asian labor to open China. Obviously we have now prepared Mexico, with socialists who naturalized the resources and trained an army of engineers ready for the small-part manufacturing to move. For 80 years microchips have been on foreign soil, far from American domestic courts' jurisdiction. We farmed out the electronics industry to South Korea with full access to patents and loans, where they created Samsung. It's understandable only in how Apple in China and Microsoft were all hand-picked by both political parties, allowing them to deny the taxpayers' will. Rules and regulations allowed higher ed to consolidate and run up debt on 12-year degrees that removed 18-30 year olds from the workforce, which was replaced by illegal immigration to drive down wages. Enough is enough; esoterica America and the majority heritage were born before mechanics inspired and helped invent it with foresight on this computational age. It caused the Amish to bail on the American experiment.
@paulhiggins5165 11 days ago
I think Russell misunderstands the value proposition of Gen AI if he thinks its outputs are going to be clearly labeled - its value lies in the fact that it's a cheaper way to replicate the work of artists, photographers, musicians, writers etc. - but this value would be undermined if all of these outputs were to be clearly labeled as fake. Imagine an advertising campaign in which all of the images used to promote the product were declared to be fake images produced by AI - how much trust would the public have in the advertiser and their claims? Gen AI is inherently deceptive because its outputs closely resemble the works it was trained on - works that were created by humans. There is no such thing - for example - as an AI photograph - there are only images that have been deliberately generated to look like photographs. Which immediately raises the question: "Why would anyone create a fake photograph?" To which there can only be one real answer: in order to leverage the trust we place in the photographic image - a trust that in this case is totally misplaced. So the very act of creating a fake photograph with AI is inherently deceptive, because it's an attempt to lay claim to a verisimilitude that is not actually present. And the same is also true of fake art, fake music and fake writing - all present the same problem. Imagine, for example, that you receive a sympathy card from a friend, adorned with a tasteful image on the front, and inside there is a message expressing their empathy and concern. However, both the image and the message are clearly labeled 'Created using Artificial Intelligence' - how do you feel about your friend now? How sincere does their expression of sympathy appear when the very message they chose to express it was written by a machine? Or perhaps you are in the market for a book for your children - what about this one, where it says boldly on the cover 'Created using Artificial Intelligence' - it will be nice, won't it, reading a book to your kids that has been written and illustrated by a machine. And why not buy a novel for yourself at the same time - what about this one - it too says on the cover 'Created using Artificial Intelligence' - so as you settle down to read, you can appreciate the fact that the book in your hands will take far longer to read than it did to write. So no - AI-generated content will NOT be clearly labeled in the future, because to do so would destroy any economic advantage of using AI-generated content - and too much money has already been invested in the prospect of replacing human-made content with AI. At the very least, such a labeling law would bifurcate the market into 'Human Made' and 'Machine Made', which would lead to a situation where the machine-made content would inevitably come to be seen as cheap and nasty - a downmarket substitute for the real thing.
@flickwtchr 18 hours ago
Very well reasoned and expressed. I've arrived at the same conclusion but have not expressed it so concisely.
@pacanosiu 14 days ago
True so called "AI" will explain to you all or yours grandchildrens why me
@flickwtchr 18 hours ago
huh?
@jeffkilgore6320 18 days ago
The low number of views reveals the levels of current human intelligence.