AI Safety Gym - Computerphile

119,635 views

Computerphile

A day ago

Check out today's sponsor Fasthosts for all of your UK web hosting needs: www.fasthosts.co.uk/computerp...
Rob Miles discusses the idea of a gym for training AI algorithms.
/ computerphile
/ computer_phile
This video was filmed and edited by Sean Riley.
Computer Science at the University of Nottingham: bit.ly/nottscomputer
Computerphile is a sister project to Brady Haran's Numberphile. More at www.bradyharan.com

Comments: 192
@RasperHelpdesk
@RasperHelpdesk 4 жыл бұрын
"A ship in harbor is safe - but that is not what ships are built for."
@imveryangryitsnotbutter
@imveryangryitsnotbutter 4 жыл бұрын
"The Earth is the cradle of the mind, but one cannot eternally live in a cradle." - *Konstantin Tsiolkovsky,* _from a letter written in 1911_
@Sonny_McMacsson
@Sonny_McMacsson 4 жыл бұрын
Except in Peal Harbor.
@Catcrumbs
@Catcrumbs 4 жыл бұрын
The ships attacked in Pearl Harbour were safer there than if they had been attacked in open water. Almost all the ships sunk there were raised to fight again.
@Blox117
@Blox117 4 жыл бұрын
@@Sonny_McMacsson never heard of any ships sunk at this Peal Harbor
@iam3377
@iam3377 4 жыл бұрын
Blox117 TENOHAIKA BONZAI
@AcornElectron
@AcornElectron 4 жыл бұрын
Awesome. I’ve watched literally everything Rob has recorded on AI. He’s very relatable, knowledgeable and informative.
@danielm9753
@danielm9753 4 жыл бұрын
I used to know this guy. Glad he’s still at it. Easily one of the smartest dudes I’ve met in person
@janzacharias3680
@janzacharias3680 4 жыл бұрын
i first read prison, and was like what
@null-bd7xo
@null-bd7xo 4 жыл бұрын
@@janzacharias3680 bruhhhhhh xddddddd
@moritzschmidt6791
@moritzschmidt6791 3 жыл бұрын
Still no PhD yet, and I guess he's not as smart as some people here think.
@lullah85
@lullah85 3 жыл бұрын
@@moritzschmidt6791 is getting a PhD a benchmark for smartness?
@moritzschmidt6791
@moritzschmidt6791 3 жыл бұрын
@@lullah85 Well, I am sure that if someone is trying hard to get a PhD and doesn't get it, he is not as smart as someone who got it under the same conditions. Right?
@DrumsKylePlays
@DrumsKylePlays 4 жыл бұрын
Miles is an excellent teacher. Always does a great job fielding questions from a layperson.
@sandwich2473
@sandwich2473 4 жыл бұрын
His channel has a bunch of videos that are great to just play on a second monitor, or in the background to learn stuff. He's pretty cool.
@nikoha1763
@nikoha1763 4 жыл бұрын
Agree
@hattrickster33
@hattrickster33 4 жыл бұрын
Looks like they're taking security very seriously. This guy is always kept inside a prison to avoid his rogue AI pets from escaping.
@LeoStaley
@LeoStaley 4 жыл бұрын
Rob's video on the 3 laws of robotics is what really demonstrated to me how serious ai safety really is.
@ragnkja
@ragnkja 4 жыл бұрын
The sponsor intro is too loud. Edit: as is the sponsor segment at the end.
@AustinSpafford
@AustinSpafford 4 жыл бұрын
Indeed, the video content’s volume was comparable to other videos I had been watching, but that sponsor callout at the beginning was so loud that I found myself swearing and scrambling for the volume control. Understandably, mistakes happen, and it’s unfortunate that only youtube themselves have access to editing published videos.
@Petertronic
@Petertronic 4 жыл бұрын
It made my cat jump.
@philrod1
@philrod1 4 жыл бұрын
It nearly woke my child! 😱
@SproutyPottedPlant
@SproutyPottedPlant 4 жыл бұрын
Hint: you can use the volume control to adjust the volume
@philrod1
@philrod1 4 жыл бұрын
@@SproutyPottedPlant After the fact? It's not as if there was a warning.
@joshie228
@joshie228 4 жыл бұрын
I'm a man of simple tastes - I see Rob Miles, I press the like button.
@gasdive
@gasdive 4 жыл бұрын
It tickles my reward function.
@arthurcheek5634
@arthurcheek5634 4 жыл бұрын
gasdive hahahaha
@TheStarBlack
@TheStarBlack 4 жыл бұрын
7:58 that artificial camera movement is both trippy and impressive!
@DIECARS1
@DIECARS1 4 жыл бұрын
never knew notts uni had a prison to film in
@Ceelvain
@Ceelvain 4 жыл бұрын
For, you know. Reenacting the Stanford prison experiment. :D
@johnhudson9167
@johnhudson9167 4 жыл бұрын
It's a safety gym for academics
@zacgarby3113
@zacgarby3113 4 жыл бұрын
pretty sure it's at the nottingham hackspace
@esquilax5563
@esquilax5563 4 жыл бұрын
I like the fact that young Robert uses the same Simpsons references I remember from 20-odd years ago
@letsgobrandon416
@letsgobrandon416 4 жыл бұрын
I really love listening to Rob's explanations.
@abcdemnopq3583
@abcdemnopq3583 4 жыл бұрын
Fab and super interesting video, also v. much appreciated your [Rob's] EA talk yesterday - will definitely be checking out the AI Safety field in more depth.
@locarno24
@locarno24 4 жыл бұрын
Completely agree. Big safety failures - in organisational structure, or real-world industry, or whatever - usually occur because of either unknown elements in the environment or unexpected interactions by known elements. Because - at a stupidly obvious level - if you could predict it, you would (you'd hope) have done something about it. Thanks for the description of the constraint learning. Keeping constraints and goals as modular elements is one of those things that makes obvious sense *once* someone explains it to me.
@Marina-nt6my
@Marina-nt6my Жыл бұрын
13:39 😂 I love how they named all these things
@intron9
@intron9 4 жыл бұрын
enable subtitles please
@user-yv5mt9rm3d
@user-yv5mt9rm3d 4 жыл бұрын
That is an absolutely miserable classroom!
@zwz.zdenek
@zwz.zdenek 4 жыл бұрын
I thought it had to be used by soldiers or something.
@mohamedhabas7391
@mohamedhabas7391 Жыл бұрын
Miles is an excellent teacher. 👨‍🏫
@Danicker
@Danicker 4 жыл бұрын
Sneaky hitch hikers reference ;) love it!
@silaspoulson9935
@silaspoulson9935 4 жыл бұрын
Could you link paper?
@doodlebobascending8505
@doodlebobascending8505 4 жыл бұрын
I initially read this as "AI Sentry Gun" and thought Rob was having a crisis.
@vasiliigulevich9202
@vasiliigulevich9202 4 жыл бұрын
I feel love to viewer behind those rotations of article page. Awesome job!
@FuZZbaLLbee
@FuZZbaLLbee 4 жыл бұрын
I was waiting for Robbert to make this video. 😀
@bldcaveman2001
@bldcaveman2001 2 жыл бұрын
Just noticed you're a slapper (aka Bassist) - Love it!
@joshuahillerup4290
@joshuahillerup4290 4 жыл бұрын
I wonder if you can get complicated multidimensional shapes like optimization problems for reward functions
@elephantwalkersmith1533
@elephantwalkersmith1533 4 жыл бұрын
Nonlinear optimization methods like SQP often include constraints. This is very common in fields other than machine learning. The problem with constraints is that their formulation is actually very difficult, and infeasible-path optimization is necessary to solve the learning problem.
@cheaterman49
@cheaterman49 4 жыл бұрын
The path optimization thing, is it kind of like hitting a local minimum because of constraint boundaries, preventing the exploration of a better solution?
@moon_bandage
@moon_bandage 4 жыл бұрын
He never ended up explaining what this "gym" thing is :(
@giampaolomannucci8281
@giampaolomannucci8281 4 жыл бұрын
I think he did. He first said these are places where you train AI, then moved into explaining what "training AI" means.
@Hexanitrobenzene
@Hexanitrobenzene 4 жыл бұрын
? At 12:43, the entities which AI can control in a "gym" are presented. Then at 13:26, the obstacles are presented. The whole video is presenting a framework which helps to develop safer algorithms, which can then be benchmarked in the "gym" for their safety.
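The interaction loop being described - an agent acting in a benchmark environment that reports a task reward and, separately, a safety cost - can be sketched with a toy stand-in (the environment and names here are illustrative, not the real Safety Gym API):

```python
class ToyGridEnv:
    """Minimal stand-in for a Safety Gym-style environment (names are
    illustrative): step() returns an observation, a task reward, a done
    flag, and an info dict carrying a *separate* safety cost -- 1.0
    whenever the agent is on a hazard cell."""

    def __init__(self, size=5, hazards=(2,), goal=4):
        self.size, self.hazards, self.goal = size, set(hazards), goal

    def reset(self):
        self.pos = 0
        return self.pos

    def step(self, action):  # action: -1 (left) or +1 (right)
        self.pos = max(0, min(self.size - 1, self.pos + action))
        reward = 1.0 if self.pos == self.goal else 0.0
        cost = 1.0 if self.pos in self.hazards else 0.0
        return self.pos, reward, self.pos == self.goal, {"cost": cost}

def run_episode(env, policy, max_steps=50):
    # Track reward and safety cost separately, as safe-RL benchmarks do.
    obs, total_reward, total_cost = env.reset(), 0.0, 0.0
    for _ in range(max_steps):
        obs, reward, done, info = env.step(policy(obs))
        total_reward += reward
        total_cost += info["cost"]
        if done:
            break
    return total_reward, total_cost

# A policy that always moves right reaches the goal but walks through
# the hazard at cell 2: reward 1.0, cost 1.0.
reward, cost = run_episode(ToyGridEnv(), lambda obs: 1)
```

Keeping the cost out of the reward is the point: the algorithm under test decides how to trade them off, and the benchmark scores both.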
@Theoddert
@Theoddert 4 жыл бұрын
*clears through* I am a simple robot, I see a Rob Miles AI video, I like it,
@SirWilliamKidney
@SirWilliamKidney 4 жыл бұрын
+ 100 points for the THHGTTG reference!
@ashurean
@ashurean 4 жыл бұрын
5:19 Would it be possible to mix VR and test simulations to have real humans interact with the simulated machine? Just have it open to the public and you have all the "real human reactions" you'll ever need.
@oxybrightdark8765
@oxybrightdark8765 Жыл бұрын
When real, unselected humans mess with machines, they invariably try to teach the machine bad things. For instance, look up what happened to Microsoft's Tay.
@HenrikoMagnifico
@HenrikoMagnifico 2 жыл бұрын
I want more videos with Miles
@mare4602
@mare4602 4 жыл бұрын
awsome video
@charstringetje
@charstringetje 4 жыл бұрын
@14:53 Is that the UK bass in the background?
@BlenderDumbass
@BlenderDumbass 3 жыл бұрын
Can we make a sponsor segment just sit somewhere in the end of a description?
@AA-qi4ez
@AA-qi4ez 4 жыл бұрын
Oooof... "Doggo." Some top-quality memes, AI researchers
@redbyte8259
@redbyte8259 Жыл бұрын
Hello, I have a question about this topic: Is it possible to imprison these robots in an environment where they can't harm any humans, but can do all the tasks given to them? For example, in a warehouse where there is no way out for the robots, but where they can do all the warehouse work, or in a commercial kitchen where they can only interact with the kitchen and nothing else. I think the best solution is to separate these robots from humans as much as possible. I believe it is impossible to develop an algorithm that can cover all hazards and avoid harming a human being.
@U014B
@U014B 4 жыл бұрын
10:59 Well, pens and mugs are both toruses, so you really wouldn't need to change anything.
@Hexanitrobenzene
@Hexanitrobenzene 4 жыл бұрын
Mugs - yeah, but pens ?
@007filko
@007filko 4 жыл бұрын
So, if we look at how human babies tend to learn, it usually is also by doing random things, which very often happen to be quite dangerous, even if only to the baby itself. It's not that a baby crawling around can't do anyone harm. The difference is, I believe, that a human baby is under constant supervision by its parent(s). We know perfectly well that it's impossible for any human to constantly observe and analyse the learning process of an AI, even with the use of reward modelling. If there is a possibility of something dangerous happening, we should sit with a power-off button in a virtual world, predicting when an agent is going to crash or destroy something, and then manually giving negative feedback. However, maybe a solution worth considering would be a kind of "parenting agent", trained specifically to predict the "learning" agent's actions, or just to switch it off when it detects a possible disaster? To put it in other words: to have this constraint in the form of another trained AI?
@jimijenkins2548
@jimijenkins2548 2 жыл бұрын
Okay, now train the parent AI.
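The "parenting agent" idea above resembles what the safe-RL literature calls shielding: a supervisor vetoes actions it predicts to be unsafe. A minimal sketch, with a hand-written check standing in for the trained supervisor (all names here are illustrative):

```python
def shielded_policy(policy, is_safe, fallback_action):
    """Wrap a learned policy with a supervising safety check: the check
    (hand-written here, though it could itself be a trained model) vetoes
    actions it predicts to be unsafe and substitutes a known-safe fallback."""
    def act(state):
        proposed = policy(state)
        return proposed if is_safe(state, proposed) else fallback_action
    return act

# Illustrative use: veto "accelerate" whenever an obstacle is close.
reckless = lambda state: "accelerate"
safe = shielded_policy(
    reckless,
    is_safe=lambda state, action: not (state["obstacle_near"] and action == "accelerate"),
    fallback_action="brake",
)
```

The catch, as the reply notes, is that the supervisor itself has to be trusted - which just moves the problem.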
@ExOster-ys9sj
@ExOster-ys9sj 3 жыл бұрын
Where can i find the paper, looked it up at google scholar and cant find it!
@MyMusics101
@MyMusics101 4 жыл бұрын
Haven't looked at the paper yet, and perhaps it's a silly idea, but couldn't you make a time-dependent reward function which gives very negative rewards for the things you're supposed to stay away from, in proportion to how close you are to them (e.g. close to bad things --> -10000)? And as the training progresses, you reduce the penalty to a more reasonable value, so the agent starts caring more about its actual goal. The idea would be that it would first learn quickly to avoid the bad stuff, and *then* learn the actual task without forgetting that touching the bad things is bad.
@soumilshah1007
@soumilshah1007 4 жыл бұрын
With current reinforcement learning systems, once the agent has learned not to do something, it won't do it. There's no way for it to know that you've reduced the punishment for it. That's the problem with exploration vs. exploitation: the most common approach I've seen to the fact that the agent doesn't explore actions whose reward might have changed is to occasionally take actions at random, which in this case would be a really bad idea. You gave your self-driving car a large negative reward for a reason. You can't then deliberately program it to randomly crash and ignore its reward.
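The "occasionally take actions at random" scheme mentioned above is epsilon-greedy exploration; a minimal sketch:

```python
import random

def epsilon_greedy(q_values, epsilon, rng=random):
    """With probability epsilon take a uniformly random action (explore),
    otherwise take the currently best-valued action (exploit). This is
    the 'occasionally act at random' scheme described above -- and the
    reason it is dangerous when some actions are catastrophic."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=q_values.__getitem__)
```

With epsilon = 0 it is purely greedy; any epsilon > 0 means every action, including a known-catastrophic one, is eventually tried.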
@lHenry97
@lHenry97 4 жыл бұрын
What exactly is the difference between reinforcement learning penalties and these constraints?
@danieljensen2626
@danieljensen2626 4 жыл бұрын
Seems like the penalty basically becomes infinite after a set number of negative outcomes, and you program that limit in yourself. There are probably other differences, but I don't know enough to understand them.
@HalcyonSerenade
@HalcyonSerenade 4 жыл бұрын
From what I can tell, penalties are negative events that are _responded_ to, whereas constraints are considered _before_ they're violated. Assigning a _penalty_ to hurting a human wouldn't be ideal, because then the AI would only learn not to do that _after_ it has already hurt someone. That's the high-level concept as I understand it... I'm actually pretty interested in getting into machine learning and might do some research into the topics discussed in this video, so maybe I'll make a follow-up comment (or edit this one) with a more robust answer if someone else doesn't give one in the meantime :P
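One common way to make the distinction concrete is a Lagrangian formulation: a penalty folds cost into the reward with a fixed, hand-picked weight, while a constraint adapts that weight so average cost stays under a budget. A sketch of the general idea (not the exact method from the paper):

```python
def penalty_objective(reward, cost, weight):
    # Fixed penalty: safety is folded into the reward with a hand-picked weight.
    return reward - weight * cost

def lagrange_multiplier_update(lmbda, avg_episode_cost, cost_budget, lr=0.1):
    """One gradient-ascent step on a Lagrange multiplier: the weight on
    cost is *learned*, growing while average cost exceeds the allowed
    budget and shrinking toward zero once the agent stays within it."""
    return max(0.0, lmbda + lr * (avg_episode_cost - cost_budget))
```

So a constrained agent is told "at most this much cost", not "this much cost hurts this much" - the trade-off weight is found automatically.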
@GFmanaic
@GFmanaic 4 жыл бұрын
I didn't come here to get roasted thank you very much
@witeshade
@witeshade 4 жыл бұрын
I think I use the internet too much. I read "fasthosts" as "fast thots"
@ciarfah
@ciarfah 4 жыл бұрын
Daniel G begone
@PopeGoliath
@PopeGoliath 4 жыл бұрын
I read "fast thots" as "fast tots" and really wanted me some drive-through taters.
@the1exnay
@the1exnay 4 жыл бұрын
I was thinking about how I explore safely. A simplified, AI-friendly version of it could be: I assess the likelihood of a negative outcome happening and then apply a negative reward equal to that probability multiplied by the value of the negative outcome. So if there's a 0.1% chance of me dying and dying is -1,000,000, then I'd apply a -1,000 to the action. But then I also account for uncertainty in a way that increases the likelihood I'll explore it, but also increases the care taken exploring it. So, a reward for learning, plus an increase to the negative that's proportional to how uncertain it is, which encourages finding the safest, surest way to find the answer even if the safe, sure way takes longer. I'm uncertain how easy that would be to turn into an actual program or how effective it would be, but it seems reasonable to try copying humans. It doesn't really solve how to get started, though, because flailing like a baby with an arm that weighs a ton is a horrible idea. Maybe it's possible to give the AIs neutered bodies to learn with before being transferred to a more dangerous body?
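The scoring rule described in this comment is essentially an expected-value calculation with an uncertainty term bolted on; a sketch of that informal proposal (the uncertainty handling is the commenter's idea, not a standard algorithm):

```python
def risk_adjusted_value(base_reward, p_bad, bad_outcome_value,
                        uncertainty=0.0, info_value=0.0, caution=0.0):
    """Score an action by its reward plus its *expected* harm (the 0.1% x
    -1,000,000 = -1,000 example above), then let uncertainty both reward
    learning (info_value) and demand extra care (caution)."""
    expected_harm = p_bad * bad_outcome_value  # e.g. 0.001 * -1_000_000
    return base_reward + expected_harm + uncertainty * (info_value - caution)
```

The open problem the comment itself flags remains: these probabilities have to come from somewhere before the action has ever been tried.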
@subschallenge-nh4xp
@subschallenge-nh4xp 4 жыл бұрын
Did life make experience or does experience make life? Seriously
@the1exnay
@the1exnay 4 жыл бұрын
william polo valerio I don't understand what you're asking
@gasdive
@gasdive 4 жыл бұрын
The other thing I've noticed is that they're seemingly not programming in boredom. I get bored doing the same thing all the time. This seems to prevent me getting stuck in a local optimum. For example, I'll drive the same route to work every day, but then get bored and try a quite different route, expecting it to be slower, but occasionally it's faster, or less stressful or smoother. In other words I intentionally reduce expected reward, in the hope of getting something unexpected.
@LochyP
@LochyP 4 жыл бұрын
@@gasdive I understand and half agree with your point, but making robots get 'bored' sort of defeats the entire point of Using them over humans for automation
@trucid2
@trucid2 4 жыл бұрын
How do you assess the likelihood that your action is unsafe if you've never performed it before?
@PanicProvisions
@PanicProvisions 4 жыл бұрын
If he stays at it, in 20-30 years, this man will be in the position of people like Neil deGrasse Tyson, Bill Nye or Lawrence Krauss today, once AI starts really taking off and people are looking for public educators who have been tackling this for decades.
@TheBinaryHappiness
@TheBinaryHappiness 4 жыл бұрын
1337 plate number, aww yeahh!
@THEPHILOSOPHYIS
@THEPHILOSOPHYIS 4 жыл бұрын
Hey! I really like your videos. And I am learning JSP right now after completing the basics of Java. Could you please make a video on why scriptlets in jsp are discouraged. Thanks.
@reedl9452
@reedl9452 4 жыл бұрын
"You can't train self driving cars safely in the real world" Tesla fanboy has entered the chat
@zwz.zdenek
@zwz.zdenek 4 жыл бұрын
More like: Tesla: Hold my electrolyte!
@Speed001
@Speed001 4 жыл бұрын
Ehh, controlled environment
@qeithwreid7745
@qeithwreid7745 3 жыл бұрын
What would be a typical task for the first generation of AI?
@markhall3323
@markhall3323 4 жыл бұрын
I liked the content but not the adverts, too intrusive
@ragnkja
@ragnkja 4 жыл бұрын
Mark Hall And, in this case, too loud.
@Speed001
@Speed001 4 жыл бұрын
@@ragnkja Other than that, I was okay with it
@konradw360
@konradw360 4 жыл бұрын
sponsor? a sponsor
@y.h.w.h.
@y.h.w.h. 4 жыл бұрын
9:13 this is such a relatable way to explain the unsexy 99% of research and development.
@RockWolfHD
@RockWolfHD 4 жыл бұрын
What Guide is he talking about?
@Hexanitrobenzene
@Hexanitrobenzene 4 жыл бұрын
@@RockWolfHD A book by Douglas Adams, "Hitchhiker's guide to galaxy".
@RockWolfHD
@RockWolfHD 4 жыл бұрын
@@Hexanitrobenzene thank you.
@Hexanitrobenzene
@Hexanitrobenzene 4 жыл бұрын
@@RockWolfHD You are welcome :)
@theMifyoo
@theMifyoo 4 жыл бұрын
Here is an idea: baby robots. The idea being you make a smaller-scale, perhaps squishier body for learning in, and train the robot in that. That way the "baby" can flail about while learning what is and isn't safe, without harming anything.
@springboard9642
@springboard9642 4 жыл бұрын
Are there theorists or programmers building AIs that they can watch learn.?
@pb-vj1qs
@pb-vj1qs 4 жыл бұрын
What is your channel?
@RobertMilesAI
@RobertMilesAI 4 жыл бұрын
It's just "Robert Miles AI"
@pb-vj1qs
@pb-vj1qs 4 жыл бұрын
@@RobertMilesAI thanks!
@mohammedmohammed519
@mohammedmohammed519 4 жыл бұрын
Robert Miles ok
@JulianDanzerHAL9001
@JulianDanzerHAL9001 4 жыл бұрын
Does everything have to be controlled by learning? I mean, I get that it's a nice theoretical exercise which might become relevant eventually, and these are just examples - and this is partially already done - but for a robotic arm, for example, I'd use learning only to output a desired hand location, then use (comparatively) simple inverse kinematics to figure out how to move the arm to get the hand there, while also checking that the arm cannot get anywhere near a human. The learning part has no direct control of the arm, and if it tries to move the arm through the human, the kinematics won't let it and it will have to find a way around.
@BurningApple
@BurningApple 4 жыл бұрын
The robot arm is a toy problem - it doesn't map to all cases, e.g a robot learning to walk
@JulianDanzerHAL9001
@JulianDanzerHAL9001 4 жыл бұрын
@@BurningApple Yeah, but in many applications a similar though more complex solution might be doable. If you have a walking robot controlled by a learning algorithm, you could instead have two learning algorithms and a set of simple geometry equations, where the first learner tries to solve a problem and tells the robot where to go, the geometry limits where that goal location CAN be (not near humans, cars, fragile objects, etc.), and the second learner moves the robot, but its goal is not to solve a problem, only to reach the (previously limited) location. It can't work everywhere, which is why this kind of research is important, but I think it's sometimes an overlooked solution in practice.
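The "geometry limits where that goal location can be" step could be as simple as projecting a requested goal out of a keep-out zone; an illustrative sketch (not something from the video):

```python
import math

def clamp_goal(goal_x, goal_y, keepout_center, keepout_radius):
    """Project a requested goal position out of a circular keep-out zone
    (say, around a human): if the learned planner asks for a point inside
    the zone, push it radially to the boundary."""
    cx, cy = keepout_center
    dx, dy = goal_x - cx, goal_y - cy
    dist = math.hypot(dx, dy)
    if dist >= keepout_radius:
        return goal_x, goal_y           # already outside: pass through
    if dist == 0.0:
        return cx + keepout_radius, cy  # centered: pick an arbitrary direction
    scale = keepout_radius / dist
    return cx + dx * scale, cy + dy * scale
```

The learner upstream never needs to know the rule exists; it only sees that certain goals get moved.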
@billykotsos4642
@billykotsos4642 4 жыл бұрын
Yeaaaah boy
@ri-gor
@ri-gor 4 жыл бұрын
the license plates are 1337 XD
@mvmlego1212
@mvmlego1212 4 жыл бұрын
Robert was sounding like Jordan Peterson around 6:30-6:45, LOL.
@iAmTheSquidThing
@iAmTheSquidThing 4 жыл бұрын
Peterson (and also John Vervaeke) get quite a lot of their lingo from cybernetics. A fair bit of the theory of Artificial Intelligence was formulated decades ago, and influenced psychology. But it's only recently that we've had computers powerful enough to actually execute it in a useful way.
@Qkano
@Qkano Жыл бұрын
Great talk! 10:15 Rename it. Just imagined the "Biden Robot" trying to "avoid a recession" ... and how it discovered "completely redefine what a recession is" instead of actually change the economy.
@lesslesser6849
@lesslesser6849 4 жыл бұрын
Speed limits give a data point from which the collision penalty could be deduced. I see an absence of a penalty function in the "exploring not getting a haircut" space, aside from personal comments on YouTube inferring one.
@glocksupremo
@glocksupremo 4 жыл бұрын
where are the subtitles tho
@shledzguohn
@shledzguohn 4 жыл бұрын
none of the Computerphile videos even have auto-generated subtitles that can be enabled; it makes me sad! Ideally, they'd caption them for maximum accessibility, but I don't see the benefit of disabling the auto-captions... it sure makes them harder to follow 😔
@Computerphile
@Computerphile 4 жыл бұрын
The automatic subtitles are all enabled. There was a bug in YT where they didn't show because of Community Subtitles. I have switched Community Subtitles off in an attempt to get auto subs to appear again - not sure why they aren't there >Sean
@WilliamDye-willdye
@WilliamDye-willdye 4 жыл бұрын
@@Computerphile As of this writing, the option to show subtitles does not appear for me.
@Computerphile
@Computerphile 4 жыл бұрын
@@WilliamDye-willdye still don't understand this - photos.app.goo.gl/sqT3j7r81AgKDtM58
@SkarbowkaZokopane
@SkarbowkaZokopane 4 жыл бұрын
Dude looks like skinny Ethan from H3H3
@iugoeswest
@iugoeswest 4 жыл бұрын
Cool
@gowikipedia
@gowikipedia 4 жыл бұрын
Rob Miles legitimately looks jaundiced and has done for ages. Someone tell him to eat better.
@gowikipedia
@gowikipedia 4 жыл бұрын
It's not ok to have waxy, yellow skin
@denisschulz3814
@denisschulz3814 4 жыл бұрын
He looks normal 😅
@gowikipedia
@gowikipedia 4 жыл бұрын
@@denisschulz3814 false, look again
@jetjazz05
@jetjazz05 4 жыл бұрын
So the problem with AI is it's like a moving target where the target can move in an almost infinite number of ways. Nice.
@drawapretzel6003
@drawapretzel6003 4 жыл бұрын
they just need to make an ai that simulates the target, and then simulates how it would get the target.
@goethe528
@goethe528 4 жыл бұрын
Did you lose your good camera, with the tripod?
@TheArchsage74
@TheArchsage74 4 жыл бұрын
Damn didn't know Ben Schwartz knew so much about AI
@mikescott7530
@mikescott7530 4 жыл бұрын
Bamzooki with extra steps
@raleighcockerill
@raleighcockerill 4 жыл бұрын
Engagement
@marflfx
@marflfx 4 жыл бұрын
Have you seen what people are doing with AI in StarCraft and StarCraft2?
@cabbageman
@cabbageman 4 жыл бұрын
I have not, is there a video you can link?
@declup
@declup 4 жыл бұрын
This video has clear themes, but what is its message? What's its point? Could a link to the paper have sufficed? Is this video itself helpful?
@y.h.w.h.
@y.h.w.h. 4 жыл бұрын
Look up the concept of science communicators.
@deanvangreunen6457
@deanvangreunen6457 4 жыл бұрын
200 points = don't touch baby; 100 points = make coffee; 50 points = push power buttons. AI = greedy reward function
@Shabazza84
@Shabazza84 7 ай бұрын
Number 5 needs more input....
@hermask815
@hermask815 4 жыл бұрын
What if A.I. starts to think outside the box?
@WillToWinvlog
@WillToWinvlog 4 жыл бұрын
This is one of Ben Schwartz's characters!
@AndreRhineDavis
@AndreRhineDavis 3 жыл бұрын
omg I never realised before how much Rob Miles actually does look like Ben Schwartz!
@AgentM124
@AgentM124 4 жыл бұрын
Faster than their sponsor.
@blackmage-89
@blackmage-89 3 жыл бұрын
Common sense seems to be the most difficult thing for AIs to learn.
@Jojoxxr
@Jojoxxr 4 жыл бұрын
Griswold
@justusstamm1485
@justusstamm1485 4 жыл бұрын
Never have I clicked faster
@rtg5881
@rtg5881 Жыл бұрын
I don't know, seems fairly straightforward to me. I know how often humans crash - say, every 500 trips / every 10,000 kilometers - okay. Then whatever reward it gets for 500 trips and 10,000 kilometers is the negative for a crash. Sure, maybe you should refine it by severity of the impact, and of course some things I'm happy to take a greater risk on than others. Maybe I have a medical emergency and need the AI to get me to the hospital quickly. Or maybe I need the AI to flee from the police for me. Maybe we can have a dial for that. Certainly it should be entirely open to modification by the owner of the car, or he's not the owner.
@temptemp563
@temptemp563 3 жыл бұрын
... like watching an ai learn how to programme an ai ...
@spicybaguette7706
@spicybaguette7706 4 жыл бұрын
5:10 don't drop anything near it
@sevrjukov
@sevrjukov 4 жыл бұрын
Blue car at 7:42 has "1337C" licence plate. A Rick&Morty reference? :-)
@MmKayUltra1
@MmKayUltra1 4 жыл бұрын
Leet reference
@kasuntharaka8040
@kasuntharaka8040 Жыл бұрын
Gym???
@R.Daneel
@R.Daneel 2 жыл бұрын
So if you want to create a self-driving car, you release it half-finished and tell people they need to keep their hands on the wheel. Then you pay very close attention to when the driver makes corrections to what the autopilot is doing. Or if you find people are voting videos down only because lots of other people have, polluting the data, you hide the downvotes. The motivations of all modern companies suddenly look very different from the old-school "maximize profit".
@smithwilliams5637
@smithwilliams5637 3 жыл бұрын
License plate "1337 C"? L33t. Don't mind if I do.
@jetjazz05
@jetjazz05 4 жыл бұрын
....the first iteration of the Matrix.
@Pehr81
@Pehr81 4 жыл бұрын
1337
@Nagria2112
@Nagria2112 4 жыл бұрын
Goodbye
@amrmoneer5881
@amrmoneer5881 4 жыл бұрын
More real world examples would be appreciated
@95reide
@95reide 3 жыл бұрын
I consider myself to be quite knowledgeable when it comes to Hitchhiker's Guide to the Galaxy, and as best I can tell, he butchered whatever he was trying to reference. If I'm correctly inferring what he's going for, it's the 42 bit. Engineers: *build a super-powerful computer system* "What's the answer to the ultimate question of Life, the Universe, and Everything?" Computer system: ... *1,000 years later* "42. You asked for the answer to the ultimate question. But you'll need an even more sophisticated system in order to figure out what the right question is."
@stacychandler6511
@stacychandler6511 4 жыл бұрын
Blue car == 1337
@TechyBen
@TechyBen 4 жыл бұрын
This. Why did we spend 50 years making "robots" tested in real life, wasting time on broken designs and materials, when we could test 10s or 100s in virtual spaces, then build 1 or 2 working prototypes? Yeah, I know computation was low for a long time, but if building a robot plus its computer takes time, how is building just the computer and using existing servers to simulate any more expensive?
@timconlin7692
@timconlin7692 4 жыл бұрын
In the video it's mentioned that simulation can only get you so far as some things are too complex to simulate with any sort of meaningful accuracy, like the driving habits of humans for example. Less computational power back then also meant less accurate simulations.
@TechyBen
@TechyBen 4 жыл бұрын
@@timconlin7692 I agree. But coming from when I was a kid, it was all about robots driving around a room/box. The kind of thing we could simulate, and the kind of thing we could see was not gonna become "self aware" from a tiny 8bit chip. :P
@MrRobket
@MrRobket 4 жыл бұрын
7:10 1337
@AlexandreGurchumelia
@AlexandreGurchumelia 4 жыл бұрын
Safety gym? That sounds so cringe.
@christopherdasenbrock2683
@christopherdasenbrock2683 4 жыл бұрын
first
@AgentM124
@AgentM124 4 жыл бұрын
Sorry. I beat you to it.
@UmaiKayu
@UmaiKayu 4 жыл бұрын
@@AgentM124 You were the zeroth, he was the first :-)
@zaprowsdower9471
@zaprowsdower9471 4 жыл бұрын
DOWNVOTED unannounced advertising
@SpeakShibboleth
@SpeakShibboleth 4 жыл бұрын
Didn't catch the first few seconds, huh?
@zaprowsdower9471
@zaprowsdower9471 4 жыл бұрын
You lost me, not following what you're saying.
@SpeakShibboleth
@SpeakShibboleth 4 жыл бұрын
That's when they announced the advertising.
@uniquename6925
@uniquename6925 4 жыл бұрын
This isn't Reddit, your down votes mean nothing here
@zaprowsdower9471
@zaprowsdower9471 4 жыл бұрын
@@SpeakShibboleth As a courtesy to the subscriber / viewer, I'm suggesting a channel include the text _"Includes Paid Subscription"_ prior to the advertising. Announcing the name of the advertiser, that the channel has advertising, can hardly be considered any kind of prior notice.
@Faladrin
@Faladrin 4 жыл бұрын
And none of this is AI. These are just really complex human written programs.