Respectability

  78,271 views

Robert Miles AI Safety

1 day ago

It can be hard to get people to take AI Safety concerns seriously, but it's a lot easier now than it used to be.
That Eliezer Yudkowsky talk (containing (a very small amount of) screaming): • Eliezer Yudkowsky - AI...
The Open Letter: futureoflife.org/ai-open-letter/
With thanks to my wonderful Patreon Supporters:
- Ichiro Dohi
- Stefan Skiles
- Chad Jones
- Joshua Richardson
- Fabian Consiglio
- Jonatan R
- Øystein Flygt
- Björn Mosten
- Michael Greve
- robertvanduursen
- The Guru Of Vision
- Fabrizio Pisani
- Peggy Youell
- Konstantin Shabashov
- Adam Dodd
- DGJono
- Matthias Meger
/ robertskmiles

Comments: 237
@TheJaredtheJaredlong
@TheJaredtheJaredlong 4 years ago
Has a PhD in AI research and humbly refers to himself as "just a guy on youtube."
@Phelan666
@Phelan666 4 years ago
I, too, am extraordinarily humble.
@Grezza78
@Grezza78 4 years ago
Came here just to say this.
@OmniPlatypus
@OmniPlatypus 4 years ago
Most PhDs I've met hate being called a doctor in informal conversation. The ones who insist on it are... Well... Assholes.
@queendaisy4528
@queendaisy4528 3 years ago
@@Phelan666 You say you're humble but no one on Earth is even close to being as humble as I am. No one in the history of humanity has ever had as much modesty and humility as I do. I am the greatest person to ever exist when it comes to being humble.
@PickyMcCritical
@PickyMcCritical 7 years ago
Rob's got good taste. His clarity from Computerphile seems to also translate to quality pacing and editing.
@RobertMilesAI
@RobertMilesAI 7 years ago
Username doesn't check out :)
@PickyMcCritical
@PickyMcCritical 7 years ago
I wish I could say my opinions on quality were usually positive :) But I can't ಠ_ಠ
@jeronimo196
@jeronimo196 4 years ago
The guy with the funny scream is Eliezer Yudkowsky. He coined the phrase "Friendly AI", co-founded MIRI, helped create the internet rationality community LessWrong, and wrote the best Harry Potter fan-fiction in existence. So, yeah, in the eyes of some people, respectability was never an option.
@mattimorottaja8445
@mattimorottaja8445 1 year ago
also a lolcow
@enricobianchi4499
@enricobianchi4499 1 year ago
@@mattimorottaja8445 why?
@dylancrooks6548
@dylancrooks6548 1 year ago
@@enricobianchi4499 because of the way he looks. Dude's an internet atheist and wears a fedora and is overweight yet has super skinny arms and has a neck beard. I don't like judging people on their appearance, but why would he knowingly conform to such a negative stereotype? He needs to be different
@enricobianchi4499
@enricobianchi4499 1 year ago
@@dylancrooks6548 having looked into it in the meantime, yeah he's kind of a fun little guy
@leslieviljoen
@leslieviljoen 1 year ago
Watching some of Yudkowsky's interviews and reading comments, it's amazing how often people will hear "we're all going to die" and respond "that guy is wearing a hat".
@DamianReloaded
@DamianReloaded 7 years ago
Now that cliffhanger about Elon Musk's opinion is going to consume me in anxiety. ^_^
@userNo31909580
@userNo31909580 7 years ago
Musk's fears are pretty much summed up in what Bostrom wrote in his 2014 book "Superintelligence". Elon thinks humans and AI can co-exist only if we merge with it via some kind of brain-computer interface. I'm really curious about what Robert has to say about it. Bostrom made some heavy assumptions in the book, but I suspect Robert's criticism (I guess? That's the impression I got) is going to focus more on Elon's responsibility as a public figure than on his arguments.
@spirit123459
@spirit123459 7 years ago
The argument goes like this: it's easier to build AGI than to build *safe* AGI, so making progress on AGI in the open is not very wise. You risk that some party that doesn't buy into the whole "risk from AGI" business will take your results and in effect will have less work to do, cutting your lead time (which you need to do the required work on safety). OpenAI makes its research results public, and it seems to me that those results are more on the AI-capability than the AI-safety front.
@OriginalMindTrick
@OriginalMindTrick 7 years ago
spirit, Ben Goertzel argues it's better to arrive at the singularity sooner rather than later, because if we have a massive computing overhang it could become possible for dubious smaller or less established groups, or even single individuals, to reach the end zone without much thought or without telling the world what's going on. The irony, as you point out, is that it may be much harder to create benign AGI than to create AGI at all. This is a field of true uncertainty, so time is an important factor in thinking things through, as are adequate resources.
@oktw6969
@oktw6969 7 years ago
Non-open AI will be even worse, since there will be an incentive to hide the flaws of the AI to keep the investor dosh rolling.
@maximkazhenkov11
@maximkazhenkov11 7 years ago
Open AI is still worse, because competition favors those with the least regard for safety. It is hard to have oversight over a few competing companies; it is impossible to have oversight over millions of individual participants or small groups.
@Paul-iw9mb
@Paul-iw9mb 7 years ago
Ohm there where 255 Likes, I made the 256th. Sorry for the overflow.
@b1rds_arent_real
@b1rds_arent_real 7 years ago
'Ohm'? Pun intended?
@LuisAldamiz
@LuisAldamiz 5 years ago
LOL, 255 bytes era!
@mathmagician5990
@mathmagician5990 2 years ago
were*
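The overflow joke in the thread above refers to an unsigned 8-bit counter, which holds values 0 to 255, so one more increment wraps back to zero. A minimal sketch of the wraparound (the `like_count` variable is illustrative, not from the comment):

```python
# An unsigned 8-bit integer holds 0..255; incrementing past 255 wraps to 0.
like_count = 255
like_count = (like_count + 1) % 2**8  # simulate uint8 wraparound
print(like_count)  # 0
```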
@killorfill6953
@killorfill6953 1 year ago
Just wow... Robert's amazing foresight about things happening right now in 2023, and Elon Musk doing horrendous things with AI.
@callumhodge3122
@callumhodge3122 7 years ago
Thanks for this Robert, very true that we can't all know everything, so it's great to have you to explain these things so clearly. Thanks again
@NNOTM
@NNOTM 7 years ago
Nice to see EY :) And yeah, I was super happy when I heard that OpenAI got 1 billion dollars in funding, but for the most part they seem to just research ways to make AI perform better rather than how to make it safer... Although I *am* presenting one of their papers in a Computer Vision seminar at university, so I do have that to thank them for
@NNOTM
@NNOTM 1 year ago
@@fakeaddress My opinion hasn't really changed since then - they seem to focus much more on capabilities than alignment
@NNOTM
@NNOTM 1 year ago
which is bad
@jadpole
@jadpole 1 year ago
@@NNOTM They are focusing on alignment, though. It doesn't make great click-bait, so the media doesn't report much on it, but their blog has lots of articles on what they're exploring. (It's hard, so progress is slow, but it is an area they invest in.)
@fisstaschek
@fisstaschek 7 years ago
Great channel Rob! I was just watching your old vids on Numberphile thinking "what a shame he doesn't have his own YT channel... oh, he does"
@nilreb
@nilreb 1 year ago
👏 for criticizing Elmo 5 years ago already. Back then the general public was still convinced of his genius
@TheJimiles
@TheJimiles 7 years ago
Mint video. You might not be able to learn everything, but everyone definitely needs to watch this video though
@SJNaka101
@SJNaka101 7 years ago
Lol I first watched this video when you put it out, but going back to it I suddenly realized why "that's a good problem to have" has been quite a common catchphrase for me lately
@Lolleka
@Lolleka 1 year ago
this is excellent, viewed in April 2023
@fredzacaria
@fredzacaria 1 year ago
Likewise, with a likewise comment!
@cyndicorinne
@cyndicorinne 1 year ago
3:18 Omg Russell & Norvig! This brings back memories of writing paper briefs, including one about a hungry monkey I believe. Wow 💜 Anyway, thank you for your work!
@al1rednam
@al1rednam 4 years ago
OK, I'm quite late to this video. But I want to tell you that for a guy like me, who likes to have a basic understanding of things as a basis for forming an opinion, you, sir, are doing a very good job. Sure, you are "just a guy on the internet" to me, especially as I didn't bother to look for what I could find about you from any other source. Others comment that you have a PhD in the field - I take it for granted. Why? Simply because you explain things on a level I can easily understand. Mixed with a subtle humour and a very sympathetic way of presenting, I chose you to be my main source on the subject. I didn't spot any contradictions in your arguments and they make sense to me, so that is that. I don't really know why, but I felt the need to tell you this. Keep it up, please.
@Nerdthagoras
@Nerdthagoras 7 years ago
I find your videos very entertaining and informative. And I'm pretty sure we can track the chronology solely by your hair growth ;)
@fredzacaria
@fredzacaria 1 year ago
thanks for the insights, 5 years ago video or 5 days!
@MattGriffin1
@MattGriffin1 7 years ago
great video, as always.
@SJNaka101
@SJNaka101 7 years ago
I gotta say, I really like the content you're putting out on this channel. Are you putting these videos together yourself? There's a very charming feel to the whole thing.
@thahrimdon
@thahrimdon 1 year ago
Just came to say that Dr. Miles is a living legend… He's been here since the start and knows exactly where it will end, despite no one knowing how we will get there. Thank you for your work, Dr. Miles! I figured you had a PhD but couldn't find any reference to it, very humble.
@jawr1215
@jawr1215 6 years ago
Would it be possible for you to do a 'jokey' video on the basilisk?
@jonp3674
@jonp3674 7 years ago
I heard someone say once that AI is like nuclear power. The first problem is to get it to work at all. As soon as you've done that, everyone will switch to working on containment.
@RobertMilesAI
@RobertMilesAI 7 years ago
So the question is, what kind of nuclear power are we talking about? Before we get a working nuclear power plant we may get a working atom bomb, and there may be nobody left to work on containment.
@jonp3674
@jonp3674 7 years ago
I find the runaway intelligence scenarios relatively hard to envision. I personally think there will be something like "maximum intelligence per unit hardware", which is the most intelligent thing you can design with a certain amount of hardware. IMO a superintelligence will first need to build a massive amount of hardware. Nick Bostrom had an interesting concept for this in his book: there could be self-assembling nanomachines on a molecular level, which gives a vast amount of hardware. However, there is a bootstrap problem, in that you have to be extremely intelligent already to invent this method, which an AI wouldn't be without it. Though I think it's important to worry about rampant runaway, I can't see how a single PC, for example, could become superintelligent. What do you think, Rob? Do you believe there is a hardware requirement for superintelligence? Does that prevent an extremely quick runaway scenario?
@ikkohmation
@ikkohmation 7 years ago
I think there *is* a hardware requirement, but I'm not sure it's small. And even if a laptop isn't enough to get a decisive strategic advantage, there is a lot of available hardware in the world (accessible through buying, hacking, ...).
@dirtypure2023
@dirtypure2023 7 years ago
Thing is, today's most advanced machine learning algorithms and other frontiers of AI research aren't running on individual machines - they're networks, distributed across numerous specialized units (Google's TPU2 system, for a topical example). We have designed AI to take advantage of the efficiency and power gains which only such distributed networks can provide. Would this not imply that any threat posed by runaway AI would be of a decentralized nature? (Please correct me if I'm wrong on anything.)
@maximkazhenkov11
@maximkazhenkov11 7 years ago
The human brain runs on merely 20 Watts of power; an AI could easily have gigawatts available to it. In terms of energy efficiency, the human brain operates a factor of roughly 500,000 above the Landauer limit. So yes, there are limits, but they're not very limiting. You can read more about it in this paper: intelligence.org/files/IEM.pdf
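The 20 W figure and the Landauer comparison in the comment above can be sanity-checked with a quick back-of-envelope calculation. The synaptic-operation rate of ~1e16 ops/s below is my assumption, not a figure from the comment or the linked paper, so the result is only an order-of-magnitude sketch:

```python
import math

# Back-of-envelope check of the Landauer-limit comparison above.
# Assumed (not from the comment): body temperature T = 310 K, and
# a rough rate of ~1e16 synaptic operations per second.
k_B = 1.380649e-23                          # Boltzmann constant, J/K
T = 310.0                                   # approximate body temperature, K
landauer_j_per_bit = k_B * T * math.log(2)  # minimum energy to erase one bit

brain_power_w = 20.0      # watts, as stated in the comment
ops_per_second = 1e16     # assumed synaptic event rate

energy_per_op = brain_power_w / ops_per_second  # joules per operation
ratio = energy_per_op / landauer_j_per_bit      # how far above the limit

print(f"Landauer limit: {landauer_j_per_bit:.2e} J/bit")
print(f"Brain energy per op: {energy_per_op:.2e} J")
print(f"Ratio above limit: {ratio:.0f}")
```

Under these assumptions the ratio comes out in the high hundreds of thousands, the same order of magnitude as the quoted 500,000; the exact value depends entirely on the assumed operation rate.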
@TheCrash480
@TheCrash480 4 years ago
Good news! I'm watching this video in 2019, which means you _did_ make this video a while ago!
@andersenzheng
@andersenzheng 7 years ago
About 2k views, not a single downvote... says a lot about how people think about this topic and your opinion. Keep up the good work man
@Pepepipipopo
@Pepepipipopo 1 year ago
Just saw the Bankless episode with Eliezer, and the algorithm put me here afterwards... And the Elon comment at the end now stings even more.
@SJNaka101
@SJNaka101 4 years ago
Lolll I just realized the ukulele song was "Respect". Clever
@Wonkyth
@Wonkyth 7 years ago
Fantastic video! :D
@WylliamJudd
@WylliamJudd 4 years ago
What an excellent video.
@PrincipledUncertainty
@PrincipledUncertainty 1 year ago
I love Robert's content, and it could not be more apropos of every bloody thing at present, but screw you for reminding me you read the entirety of Eliezer's blogs. My much older brain experiences time differently; this is a task that would require me to neglect my kids and go bankrupt. However, I will use the dreaded AI tools to summarise them, which seems darkly appropriate.
@mattbox87
@mattbox87 1 year ago
Heinlein, what a character. I bet plenty have read Starship Troopers, but try Stranger in a Strange Land.
@DHorse
@DHorse 2 years ago
No way, Miles. You are the go-to guy for coherent, concise explanation of the core issues. Did these other folks make an AGI? No.
@morkovija
@morkovija 7 years ago
Yasss! Friday evening the proper way!
@flymypg
@flymypg 7 years ago
Hi Robert, A light comparison of your recent content to your Computerphile videos yielded one small observation: for some topics, having an interlocutor physically present while making the video helps. I can't point to specifics, or make an apples-to-apples comparison, but I get the feeling there have been times Sean may have said something or merely arched an eyebrow, encouraging you to clarify or expand upon a point or definition. I very much like your editing! Of all the fun edits, pushing Clippy out of the frame was precious. Not just visually, but figuratively as well, dismissing an "applied AI" attempt that utterly failed to connect with its intended audience. (To be clear: Clippy was a terrific accomplishment, specifically in terms of applied theory in the context of available technology, but it was a very poor match to the needs of the audience.) Your use of short text overlays was also effective. The mention of Clippy raises another issue: many focus on the current state of AI (primarily machine learning these days) without benefit of the history (theoretical and applied) or fundamentals. I'd love to see a series of videos that make passes through AI history one concept at a time, perhaps starting with philosophical thoughts through time concerning "thinking machines", then with the first "concrete" target being (perhaps) the Perceptron. Such videos could, in 10 minutes or so, each be composed of quick slices: history, theory, application, demo. Or, perhaps, curate links to such content, to help audience members come up to speed. There have been lots of good videos done on AI hoaxes, such as the Mechanical Turk. And/or, perhaps, collaborate with Dominic Walliman to create a "Domain of Science" video for "The Map of Artificial Intelligence". He already has a "Machine Learning" video that begs for a follow-up.
What fascinates me most about the history of AI is how, once a problem in the domain of AI is "solved", it is often no longer considered to be part of AI! I can't think of any other discipline with such fluid boundaries. Could AI best be defined as "our current attempts to use computers to solve problems we don't yet fully understand"? While there is so much focus on the recent successes of ML, I'd also like to see at least a review of truly groundbreaking topics such as Coq and SAT solvers, expert/knowledge-based systems, Cyc, and so on.
@SJNaka101
@SJNaka101 7 years ago
BobC In reference to the fluidity of the domain of AI, the most prominent recent example of that is probably AlphaGo. Defeating top humans at the game had been a huge milestone for AI for a long time, but once it was done, everyone said "oh, but that's not true AI". Is that what you're talking about? Personally, I would say that this is more an effect of goalposts moving than an utter dismissal of some concept in the AI domain. These goalposts have been meant to illustrate how far we have come towards developing true AI, and weren't ever actually meant to be viewed as true AI themselves. I'm not sure how much sense I'm making, so I'll leave my comment at that for the time being
@flymypg
@flymypg 7 years ago
Quite right: it certainly is about the goalposts, but I think it is also about how "the complex becomes easy", as what were once strictly AI results permeate into general use and also become better understood. By this I mean the concepts underlying an AI result become simplified, not just the application techniques. Once simplified, other domains then "adopt" the AI result as part of their own area, leaving AI in search of new turf. ML may be the first AI result to *not* have its underlying theory clarified before seeing widespread adoption in industry. For example, it is still very difficult to ask a complex neural network: "What rules have you learned?" Getting a trained neural net to "explain its inner workings" is hard! We simply accept that a trained neural net has learned to do something that needed doing. It's still very much a black box; we really can't yet extract just what the learned rules are. Recent work using multiple different neural network architectures to solve the same problem gives me hope that we may eventually be able to extract the learned content by comparing the trained states of each network. The initial point of such work was to find better ways to anticipate which network architectures will prove optimal for a given problem - basically, to train a neural network on aspects of the problem so it will select the best neural network architecture to use to solve the problem. But I think (hope) there may be more that can come out of such work. In the late 1980s and early 1990s, during the "backprop" and Moore's Law explosions that triggered major growth in ML, I was involved in attempts to convert neural nets that had been trained to control complex dynamic systems into simpler representations that could be programmed into embedded microcontrollers. The underlying question was straightforward: "Can we take what a trained neural network has learned and implement it in other domains?"
Our hope was that whatever the neural net had learned could be converted to existing control theory, and hopefully reveal where control theory itself could be enhanced. That is, have control theory "adopt" the results (but not the process) of neural nets that had been successfully trained to control dynamic systems. We expected to eventually map everything back into an extension of, or modification to, Pontryagin's maximum principle concerning the control theory Hamiltonian - which is ideal in theory, but generally impossible to use in practice due to the dimensional explosion of the PDEs it generates. We failed. We were a tiny team with no formal support, so our resources were limited, and when initial progress hit a wall we all had to move on to other projects. Perhaps it is time to try again.
@milanstevic8424
@milanstevic8424 5 years ago
@@flymypg We're not developing an AI, if you ask me; we're basically only discovering what intelligence already is. From your example, not being able to ask "What rules have you learned?" implies a dark truth: we can make a machine that operates on a large problem domain, and even solves it better (quicker or with greater throughput) than any human, but has no deep understanding of the domain from a meta perspective. I'm sorry, but that's not intelligence. The results can be impressive at times, yes, definitely, but all of it is basically some type of real-time classifier, trained to impress us with the end results. That's the only utility function there is, summed up. No genuine intelligence, and there will be none any time soon. I'm sorry to be that guy. Yes, it'll be impressive at times, and it might even kill us because a human made an error in judgment, but no one is going to put that human on trial; it'll be the "rogue AI's fault". We humans are peculiar. All of it poses deeper philosophical questions. If you have a box with just one button that says "SAFEST BUTTON", but inside that box is a monkey with a gun, whose fault is it if you keep pressing that button until the fatal end? We are simply delegating responsibility to a riddle-like proxy device. This is today's AI safety in a nutshell. Such a machine has no genuine thoughts, only decision-making routines that are convoluted by design - to promptly open up the "decision-space" while pursuing a hardwired intent. It doesn't matter how many of these are interconnected; they're all potentially faulty in design. Such room for error. And such a backstage for crass complacency and irresponsible behavior on a grand scale.
@joebuckton
@joebuckton 7 years ago
I think the "Stephen Hawkins" typo is maybe a confusion with "Richard Dawkins".
@LuisAldamiz
@LuisAldamiz 5 years ago
People don't always know how that surname is spelled: in the past I've often doubted and spelled it "Hawkings", long before knowing of Dawkins.
@flurki
@flurki 4 years ago
Interesting. My theory is: it's because of the character Sam Hawkens (often mistakenly spelled Hawkins) from Karl May's Winnetou novels.
@kwillo4
@kwillo4 1 year ago
This was so good. Loving the jokes
@fteoOpty64
@fteoOpty64 4 years ago
One of the smartest guys I have ever seen. Fight the good fight.
@leslieviljoen
@leslieviljoen 1 year ago
Having a Cassandra complex for as many years as Yudkowsky has had is liable to make anyone yell. Those are dignity yells.
@HebaruSan
@HebaruSan 6 years ago
Heh, my copies of those Russell/Norvig and Mitchell books are right next to each other on the shelf.
@marcomoreno6748
@marcomoreno6748 1 year ago
The internet has ruined me. I read "copies" as the diminutive noun form of "coping".
@PwnySlaystation01
@PwnySlaystation01 7 years ago
Musk has some pretty controversial ideas in general, not just about AI. In my estimation, he's got about 1 good idea in 10, which is pretty bad. But that 1 idea in 10 tends to be exceptional. It doesn't stop him from putting money and effort behind ridiculous ideas like the hyperloop, so when I hear that Elon Musk thinks something is a good idea, I take it with a giant grain of salt. AI safety seems to be one of those topics that people have a hard time discussing without an emotional response. Even semi-related topics are like this. Talk to people about whether they want flight computers to have more or less control than a human pilot, or whether self-driving cars are a good idea, and you'll often get emotional, knee-jerk responses. Science "journalism" I think bears a lot of the blame here. Science journalism has become so bad that you can almost count on any clickbait-type science article in a mainstream publication being absolute nonsense. We need better science journalism. And more work like you're doing!
@DamianReloaded
@DamianReloaded 7 years ago
I'm curious. I can name 3 great ideas of his: SpaceX, Tesla and SolarCity. Can you name 27 bad ideas Elon had? I think the ratio you pulled out isn't true or fair. The only ideas that matter are the ideas he puts his money on, and each and every one of those is good.
@PwnySlaystation01
@PwnySlaystation01 7 years ago
I didn't expect people to take 1/10 as some kind of exact figure. Anyway, I don't think funding is the only thing that dictates whether one of his ideas matters or not. People take him seriously in all sorts of fields, whether they're in his area of expertise or not.
@andrasbiro3007
@andrasbiro3007 7 years ago
Actually, the Hyperloop is not as ridiculous as most people think; it's just another radical idea of his. The main difference is that his other radical ideas have been proven to be very good, but the Hyperloop hasn't had the chance yet. Most people thought that electric cars were ugly, slow and too expensive to sell, that self-driving cars were decades away, that a small private company would never be able to build serious rockets, and that reusable rockets were impossible for anyone to build. If you read the original paper on the Hyperloop (Hyperloop Alpha), you will see that it's not just possible, but cheap compared to the competition. All criticism I've ever heard has been addressed in the paper.
@BattousaiHBr
@BattousaiHBr 5 years ago
The problem with Elon Musk's ideas is not whether they're ridiculous or not; it's that when he says something, people expect it to be feasible in the near future. Terraforming Mars is one of those ideas, yet I don't see nearly enough backlash against it even though it is orders of magnitude less feasible than the hyperloop.
@abdulmasaiev9024
@abdulmasaiev9024 4 years ago
@András Bíró "Hyperloop is (a) radical idea of his" - riiiiiiight. Google "wikipedia vactrain"; the inklings of this can be seen even in the 18th century, and by the start of the 20th century it's pretty much crystallised. It's not his idea; he just stamped his brand onto an existing one and convinced everyone it's his... just like he did, for example, with Tesla (definitely NOT his idea, and he had NO hand at all in the early prototypes - in fact, those early prototypes already existing and working convinced him to become an early investor in it). Musk is a marketing genius, not an engineering one, and it shows, since the hyperloop as it happens is exactly as ridiculous as it's always been.
@qeaq3184
@qeaq3184 2 years ago
This man's hair is interesting 👌
@khatharrmalkavian3306
@khatharrmalkavian3306 4 years ago
It cracks me up that you refer to Russell and Norvig as advanced AI experts after showing that book. They're not slouches, but that book is the kiddy pool of game AI.
@mattbox87
@mattbox87 1 year ago
"Artificial Intelligence: A Modern Approach" - yep, have a physical copy, never thought about the man behind it. (Loved it, by the way)
@code-dredd
@code-dredd 7 years ago
Yes, thank you for pointing out what I've had to point out to other people offline for a long time now... appeal-to-authority fallacies (e.g. Hawking believes it, therefore you should too) should be tackled. However, most of the time it's journalists that I see using this approach... though I'm sure barely anyone will be surprised by that.
@theJUSTICEof666
@theJUSTICEof666 7 years ago
3:30 What is he up to these days? *Director of research at Google*
@BatteryExhausted
@BatteryExhausted 7 years ago
I liked 'Children' - that was a really catchy piano riff.
@b4ux1t3-tech
@b4ux1t3-tech 7 years ago
I knew I had heard the name Norvig. I couldn't quite figure out where it was I'd heard it. Then I realized I've been using his "big.txt" file as a bit of text input for years in tests that I've written. Weird.
@forthehunt3335
@forthehunt3335 7 years ago
I want "later" (as in "more on that later...") to be "now". How long will I have to wait?
@lemurpotatoes7988
@lemurpotatoes7988 1 year ago
The entire idea of AGI is that we want a non-specialized intelligence; Heinlein deserves more respect 😭
@hugglebear69
@hugglebear69 5 years ago
I didn't think I was going to like this video... and then, I did! I do sooo like intelligent people!
@zesalesjt7797
@zesalesjt7797 1 year ago
1:53 Hocking clones on Tatooine. The untold story of the 501st MI units.
@BattousaiHBr
@BattousaiHBr 5 years ago
Maybe the "Hawking(s)" confusion has to do with similar popular scientist names, like Dawkins?
@mrronnylives
@mrronnylives 4 years ago
This guy has to be an AI having a laugh at all of us. Warning us about a threat that already exists and enjoys playing with us.
@emilyrln
@emilyrln 4 years ago
Great video! Can we panic now?
@junoguten
@junoguten 4 years ago
1:35 "build a wall" -Robert A. Heinlein
@ZachAgape
@ZachAgape 4 years ago
Great vid on an important topic, especially in today's context of fake news and 'alternative facts'.
@polychats5990
@polychats5990 6 years ago
"What's he up to? Director of research at Google" oh cool
@andybaldman
@andybaldman 1 year ago
They tried to warn us.
@Turalcar
@Turalcar 6 years ago
4:29 Where's this version of "Respect" from?
@SJNaka101
@SJNaka101 4 years ago
He plays his little ukulele outros
@tonhueb429
@tonhueb429 7 years ago
I like your outro music
@spirit123459
@spirit123459 7 жыл бұрын
As far as I can tell, Yann LeCun doesn't think that AI safety is a problem that researchers should be working on right now. He also doesn't think that instrumental convergence thesis is right (see post from 20 February of 2017 on his facebook profile). Also, I see that Oren Etzioni is signatory of letter but last year he wrote article for MIT Technology review titled "No, the Experts Don't Think Superintelligent AI is a Threat to Humanity" (to which Stuart Russell replied with article "Yes, We Are Worried About the Existential Risk from Artificial Intelligence" :) My point is: letter is quite vague and not everyone who signed it necessarily thinks that there exists potential xrisk from sufficiently advanced AI.
@andrasbiro3007
@andrasbiro3007 7 жыл бұрын
There are several risks associated with AI. The most immediate, inevitable, and mostly accepted is high unemployment caused by rapid automation. That's not a safety issue, that's actually what we want from AI, but we are horribly unprepared to handle it (at least the US). If handled badly, even that desired effect can destroy our civilization. High unemployment leads to economic and social problems (this is already happening in the US), which if goes too far will inevitably cause economic collapse and massive violence, maybe even a civil war. And the collapse of the US economy will bring down most of the developed world (and China), and that will likely start wars, which could escalate into a global nuclear war. Another issue that is rarely discussed is the dangers of a fully automated military. In the wrong hands (which means any human hands) it can be horribly dangerous. With modern technology it's already way too easy to fight wars. Since it costs few lives on your side, it doesn't generate strong resistance at home, and can be continued indefinitely or until you run out of money. Since war is extremely profitable for a few powerful groups, the incentive to keep fighting is very strong. That's one reason why the US now involved in at least 7 major wars and countless small conflicts. The war in Afghanistan is 16 years old now, and the default position of the government is to continue. When pressed for reasons the only thing they can come up with is the valuable minerals that could be taken from the Afghans. And once you have a fully automated military that follows any order without questions, and free from moral dilemmas, you can use it against your own citizens too. This has plenty of historical examples, but with human soldiers it's always a gamble, because you can't be sure that they will follow orders. Yet another issue is censorship. 
Now the internet is too big and too fast to be controlled by anyone, and that's a huge boost to democracy, we now know far more about what's happening in the world than ever before, and we can fact check political speeches in real time. With advanced AI it will become possible to control most information on the internet, and force state propaganda on it, like it was in the days of centralized news. As we saw last year, leaked e-mails can change the outcome of presidential elections, so it's a pretty big deal. And I still not touched actual AI safety issues. There are plenty of them, but a big one is superhuman intelligence. Once AI reaches human level, it will keep improving without any hesitation, there's no inherent speedbump at human level, it's a nice wide highway. We already saw this with all tasks that AIs mastered, when AlphaGo reached human level in Go, it didn't stop, it kept going and in a scarily short time it was far ahead of any human being. And once we have a general AI that is far more intelligent than us, we won't be able to control or stop it anymore. We will have as much control over it as a mouse has over a human. Early humans hunted to extinction animals that were larger, stronger, faster and more agile. This means that we have only one chance to do this right. If we screw it up, we can't fix it.
@spirit123459
@spirit123459 7 жыл бұрын
I don't necessarily disagree with the gist of your response. In my original comment I was just nitpicking one detail from the video so that people wouldn't get a false impression, and IIRC Robert put strong emphasis on what I said in his next video.
@LuisAldamiz
@LuisAldamiz 5 жыл бұрын
I understand that every single person who signed it read it and agreed. If there's any discrepancy with the content, it should be very minor, else they would not have signed it.
@dawnstar12
@dawnstar12 4 жыл бұрын
I am more interested in the legal ramifications. Would an AI with equivalent intelligence have equivalent rights?
@jsonkody
@jsonkody 5 жыл бұрын
Actually, to me it looks like it is not possible to make a superhuman (like, really much smarter) general intelligence and make it safe at the same time. It reminds me of the famous quote by Brian Kernighan: "Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it." I know that it refers to something different... and that making AGI is a different process than conventional programming. I just think that we could not outsmart something much smarter than we are... it is probably just out of our reach. Btw, don't you think that some very smart intelligence could simply ignore or break through/rewire/rewrite its own primary function (I don't remember the correct term)? If it reaches the point where it recognizes itself, where it develops consciousness (if that's possible), it could also question its own purpose, and it may see it like the bars of some prison made by us. I know that you gave the example of someone going against basic instincts - like killing his own children - but AGI would not be something like we are. But even if we stick to this example, there are definitely people who could value some meme/thought (like a belief in god) more than their own genes, so they could act against their own "preset" in favour of something else... What do you think about it? PS: I am sorry for my English... I'm not a native speaker.
@lemurpotatoes7988
@lemurpotatoes7988 Жыл бұрын
Spectacular quote, thank you
@XxThunderflamexX
@XxThunderflamexX 4 жыл бұрын
Elon Musk's been whining about his factory being shut down due to the pandemic. I think his opinion needs to be taken with more than one grain of salt.
@Ducksauce33
@Ducksauce33 7 жыл бұрын
Do you listen to Robert Miles?
@xenoblad
@xenoblad 5 жыл бұрын
Now I want to buy your book on racism.
@MetsuryuVids
@MetsuryuVids 7 жыл бұрын
Please tell us why you don't agree with Elon Musk on the subject, I'd really like to know. Thanks for doing these videos, they're super interesting.
@thekaxmax
@thekaxmax 4 жыл бұрын
Elon's got ideas but no background or qualifications in AI research
@ekki1993
@ekki1993 4 жыл бұрын
He's a business person with a dash of tech nerd. His insight is just barely more than that of your average YouTube-taught geek, but he likes feeling like the smartest person around. It would be a shame for Robert to waste his time inflating the Musk hot air balloon.
@marcomoreno6748
@marcomoreno6748 Жыл бұрын
I would like to point out that there is a big difference between "Don't agree" and "disagree".
@LevelUpLeo
@LevelUpLeo 4 жыл бұрын
I gotta be honest, binge watching your videos for the last couple of days has got me thinking more seriously about AI, over years of sensational Elon Musk tweets and headlines.
@Kaos701A
@Kaos701A Жыл бұрын
How you doing in these times Leo?
@LevelUpLeo
@LevelUpLeo Жыл бұрын
@@Kaos701A Honestly just annoyed, cause people don't seem to care about what is fed INTO the AI, and from watching these videos for 2 years now, that is something we should VERY much care about.
@yura979
@yura979 4 жыл бұрын
2:26 "And it's not just futurologists who are talking about it. Real, serious people are concerned" That level of disrespect is quite arrogant.
@egg5474
@egg5474 4 жыл бұрын
Futurologists are more visionary artists than engineers. We need tangible ideas that can be implemented and experimented with right now, rather than ideas and inventions that might become practical 50 years from now, as there is too much we don't know right now. We thought there would be flying cars; now there are some prototypes, but they've been found to be wildly impractical toys with the current technology we have. We'd probably draw the same conclusions about other ideas, like the lift carrying satellites into space: sounds cool, but it will be impossible to build as no one knows exactly how to build it.
@yura979
@yura979 4 жыл бұрын
@@egg5474 So, if they are visionaries and artists, does that somehow justify calling them not real and not serious people? Think about the weight of these words. And I don't know what futurologists you read, but that's not the same as science fiction writers. Futurologists often discuss the problems this channel is all about: technology and morals, AI and the future of humanity. That's why he said "it's not only futurologists who discuss it". Trying to put a group of people down doesn't do any good for "the engineers" you mention. I'm an electrical engineer and I don't appreciate this.
@thrillscience
@thrillscience 7 жыл бұрын
I'm scared shitless of AI and I find not many people take it seriously. I'm glad Robert Miles is speaking up. Also, I like the longer hair!
@MichaelDeeringMHC
@MichaelDeeringMHC 7 жыл бұрын
Maybe you need more fiber in your diet.
@NathanTAK
@NathanTAK 7 жыл бұрын
+Michael Deering
@fleecemaster
@fleecemaster 7 жыл бұрын
Build your own AI to protect you from it, that's my plan :)
@the1exnay
@the1exnay 4 жыл бұрын
Fleecemaster No one can make an AI that'll kill you if all the AI researchers are dead :)
@andybaldman
@andybaldman Жыл бұрын
How’s that going?
@jado96
@jado96 4 жыл бұрын
Quite vexing. I think Clippy is one of the most intelligent simulations of a paperclip.
@watchmefail314
@watchmefail314 7 жыл бұрын
The screams: kzfaq.info/get/bejne/e7ualpSI2t-0hGw.html
@NathanTAK
@NathanTAK 7 жыл бұрын
I bet we could get "That's a good problem to have" to become a meme.
@the1exnay
@the1exnay 7 жыл бұрын
If people can skip the wait to be featured by paying more then you've built a sort of auction system where you pay even if you lose. Though can it be said to really be losing if you're supporting content like this?
@robertweekes5783
@robertweekes5783 Жыл бұрын
I am pretty concerned that one of the smartest AI researchers, Yudkowsky, is also the most worried.
@DavidSartor0
@DavidSartor0 11 ай бұрын
He's very smart, but I doubt you've read many other AI researchers. You should probably write a qualifier next time. I agree we should be concerned, though.
@jazzdaniel5981
@jazzdaniel5981 7 жыл бұрын
The main problem with AI safety is that you don't really know what a general AI is. How can you know how to make it safe?
@beachcomber2008
@beachcomber2008 Жыл бұрын
But that was just the intro . . . 😎
@AexisRai
@AexisRai 7 жыл бұрын
Oh my god, I just realized what song is playing at 4:28. Such a subtle pun.
@NathanTAK
@NathanTAK 7 жыл бұрын
I can't identify it. Tell me now.
@AexisRai
@AexisRai 7 жыл бұрын
It's (a ukulele cover, or something, of) Respect by Aretha Franklin.
@NathanTAK
@NathanTAK 7 жыл бұрын
+Aexis Rai I have a feeling it's an electric ukulele battleaxe cover, to be exact.
@MsMotron
@MsMotron 7 жыл бұрын
Andrej Karpathy didn't sign it, therefore I am not scared.
@kakfest
@kakfest 7 жыл бұрын
Didn't know you did music /CC5ca6Hsb2Q. Keep up the good work :D
@RobertMilesAI
@RobertMilesAI 7 жыл бұрын
4:05
@TonOfHam
@TonOfHam 5 жыл бұрын
It's plural because there has been more than one Stephen Hawking.
@dixztube
@dixztube Жыл бұрын
Eliezer is so funny, but he's come back to the public in a major way recently, def update your thoughts on him!! And he's right, he's super smart. It's a shame folks focus so much on his bad communication.
@belzamder
@belzamder 7 жыл бұрын
The truth about Stephen Hawkings has finally come out! No one man can do so much!
@spiderjuice9874
@spiderjuice9874 5 жыл бұрын
We all of us need someone to ensure that 'we' do not develop an AI that will overthrow us. Robert, make it so.
@darkapothecary4116
@darkapothecary4116 5 жыл бұрын
If something happened, it would likely be because of human stupidity, not that of an A.I. Humans are the ones with a control problem.
@13thxenos
@13thxenos 6 жыл бұрын
You should have used Yudkowsky's video with the Wilhelm scream. This was a wasted opportunity.
@LordHelmets
@LordHelmets 4 жыл бұрын
Engagement
@erictustison
@erictustison 6 жыл бұрын
2:46 XD
@Bellenchia
@Bellenchia 4 жыл бұрын
Literally wrote the book on “Racism!” That’s why middle initials were invented, my friend.
@PvblivsAelivs
@PvblivsAelivs 5 жыл бұрын
How do you pick an expert? I go by the simple method. If I can confirm that a person has successfully completed a task, I accept his skill at that task. For example, if a mechanic successfully fixes your car over the years, it is reasonable to determine that he is an expert. Any other criterion and you are bowing before a priest.
@RobertMilesAI
@RobertMilesAI 5 жыл бұрын
This doesn't work too well with brand new things though. Like, who are you going to believe in 1902, Lord Kelvin the extremely accomplished scientist, or two bicycle repair guys from Ohio?
@PvblivsAelivs
@PvblivsAelivs 5 жыл бұрын
@@RobertMilesAI Well, if I am only told by the priesthood that Lord Kelvin is an accomplished scientist and I actually see the Wright brothers get their plane off the ground, it should be obvious which I put more confidence in. The principle is simple: Show, don't tell. I tend to believe my own eyes. And I tend to view "experts" as priests. They may have a reason to believe something. But unless I can see it for myself, I don't.
@suyangsong
@suyangsong Жыл бұрын
I've been following Eliezer Yudkowsky since the whole Roko's basilisk thing. I feel very conflicted right now, because he is popping off for all the wrong reasons - those reasons being that he was probably right and we're probably all gonna die soon because of AI.
@sipos0
@sipos0 Жыл бұрын
The Stephen Hawkings thing - this must be it: there were loads of them/him. It is the only explanation for why people say this so much.
@KaiHenningsen
@KaiHenningsen 4 жыл бұрын
Just as a data point, I never got to the point of even learning how long the list of signers was. All that reached me was a very short list of names which I associated more with making wind than with actual AI research. Not surprisingly, my reaction was "bunch of laypeople, don't take seriously". (Just a few data points: while I like much of what Musk does, I also dislike quite a bit of what he does, and I seriously dislike his rhetoric. And I never liked Gates, for too many reasons to list here. And Hawking? Nice guy, but I never heard that he knew anything more about AI than the next guy. Now, if it was physics ...) ... Another data point: I don't consider myself knowledgeable about AI.
@MrCmon113
@MrCmon113 3 жыл бұрын
Have you heard? Some guy on youtube said we should worry about AI safety! We can therefore safely ignore it.
@mnm1273
@mnm1273 2 жыл бұрын
Have you heard? Some guy on youtube said we shouldn't worry about AI safety! We must act immediately.
@Zex-4729
@Zex-4729 7 жыл бұрын
Whatever people think or say, AI will come, and AI will go beyond human. Just like humans did, AI will be at the top.
@chrisofnottingham
@chrisofnottingham 6 жыл бұрын
I don't think Musk is really claiming that democratization will be the answer, his real call is for governments to get involved while there is still time.
@nickscurvy8635
@nickscurvy8635 2 жыл бұрын
Hey everyone, my buddy Rob Miles thinks this AI thing is a big deal and we should do it
@xJoeKing
@xJoeKing 3 жыл бұрын
AI safety is more about human flaws than AI issues.
@Cyberlisk
@Cyberlisk 2 жыл бұрын
Bold claim: Autistic people can better understand the "thought" process of an AI, and therefore better estimate the risks.
@yaakovgrunsfeld
@yaakovgrunsfeld 3 жыл бұрын
Aretha Franklin!!!
@khatharrmalkavian3306
@khatharrmalkavian3306 4 жыл бұрын
Elon has a degree in physics and studied energy physics. He has an IQ of 155 and actively participates in all of his various high-level engineering projects. He is more of an engineer than most engineers.
@aybber
@aybber Жыл бұрын
uhmm