ROBERT MILES - "There is a good chance this kills everyone"

66,542 views

Machine Learning Street Talk

A day ago

Please check out Numerai - our sponsor @
numer.ai/mlst
Numerai is a groundbreaking platform which is taking the data science world by storm. Tim has been using Numerai to build state-of-the-art models which predict the stock market, all while being a part of an inspiring community of data scientists from around the globe. They host the Numerai Data Science Tournament, where data scientists like us use their financial dataset to predict future stock market performance.
Support us! / mlst
MLST Discord: / discord
Twitter: / mlstreettalk
Welcome to an exciting episode featuring an outstanding guest, Robert Miles! Renowned for his extraordinary contributions to understanding AI and its potential impacts on our lives, Robert is an artificial intelligence advocate, researcher, and YouTube sensation. He combines engaging discussions with entertaining content, captivating millions of viewers from around the world.
With a strong computer science background, Robert has been actively involved in AI safety projects, focusing on raising awareness about the potential risks and benefits of advanced AI systems. His YouTube channel is celebrated for making AI safety discussions accessible to a diverse audience by breaking down complex topics into easy-to-understand nuggets of knowledge, and you might also recognise him from his appearances on Computerphile.
In this episode, join us as we dive deep into Robert's journey in the world of AI, exploring his insights on AI alignment, superintelligence, and the role of AI in shaping our society and future. We'll discuss topics such as the limits of AI capabilities and physics, AI progress and timelines, human-machine hybrid intelligence, AI in conflict and cooperation with humans, and the convergence of AI communities.
Robert Miles:
@RobertMilesAI
/ robertskmiles
aisafety.info/
Panel:
Dr. Tim Scarfe
Dr. Keith Duggar
Joint CTOs - xrai.glass/
Pod version: podcasters.spotify.com/pod/sh...
Refs:
Are Emergent Abilities of Large Language Models a Mirage? (Rylan Schaeffer)
arxiv.org/abs/2304.15004
TOC:
Intro [00:00:00]
Numerai Sponsor Message [00:02:17]
AI Alignment [00:04:27]
Limits of AI Capabilities and Physics [00:18:00]
AI Progress and Timelines [00:23:52]
AI Arms Race and Innovation [00:31:11]
Human-Machine Hybrid Intelligence [00:38:30]
Understanding and Defining Intelligence [00:42:48]
AI in Conflict and Cooperation with Humans [00:50:13]
Interpretability and Mind Reading in AI [01:03:46]
Mechanistic Interpretability and Deconfusion Research [01:05:53]
Understanding the core concepts of AI [01:07:40]
Moon landing analogy and AI alignment [01:09:42]
Cognitive horizon and limits of human intelligence [01:11:42]
Funding and focus on AI alignment [01:16:18]
Regulating AI technology and potential risks [01:19:17]
Aligning AI with human values and its dynamic nature [01:27:04]
Cooperation and Allyship [01:29:33]
Orthogonality Thesis and Goal Preservation [01:33:15]
Anthropomorphic Language and Intelligent Agents [01:35:31]
Maintaining Variety and Open-ended Existence [01:36:27]
Emergent Abilities of Large Language Models [01:39:22]
Convergence vs Emergence [01:44:04]
Criticism of X-risk and Alignment Communities [01:49:40]
Fusion of AI communities and addressing biases [01:52:51]
AI systems integration into society and understanding them [01:53:29]
Changing opinions on AI topics and learning from past videos [01:54:23]
Utility functions and von Neumann-Morgenstern theorems [01:54:47]
AI Safety FAQ project [01:58:06]
Building a conversation agent using AI safety dataset [02:00:36]

Comments: 1,000
@rosameltrozo5889 (a year ago)
The guy on the right is so painfully naive
@rosameltrozo5889 (a year ago)
58:55 Like really, Robert just explained why that argument no longer applies literally seconds ago: humans can be useful until the AI doesn't have a use for humans, and then we're screwed... the counterargument to that is not "but humans can be useful to the AI"
@Stefan-br8jc (3 months ago)
I hope he just wanted to push the conversation forward. But there are of course a lot of people who really think like this. "Let's just hope for the best" ....
@KP-fy5bf (3 months ago)
Keith is so damn ignorant it's frustrating
@leeeeee286 (a year ago)
Miles really undersells himself. I think he explains the risks of AI more clearly than any other popular speaker on this topic. Thanks for inviting him on!
@paradox9551 (a year ago)
Much better than Yudkowsky!
@shiny_x3 (a year ago)
underselling himself is part of his charm
@ScottLienhart (a year ago)
Connor Leahy from EleutherAI is also a great speaker on alignment
@GuinessOriginal (a year ago)
He’s been doing it for ten years, and it’s only now people are starting to listen
@Hexanitrobenzene (a year ago)
@@paradox9551 Yudkowsky is a quintessential nerd. He is understood only by other nerds. At this point, we need involvement of wider community. Rob is perfect for such a role.
@gasdive (a year ago)
Keith's standpoint seems to be: don't worry, we'll just outsmart it. Like we'll all somehow know intuitively that any further advance will be dangerous, and then all look at each other and say, "time to destroy these machines that spit out rivers of gold and do all the hard work, pass me a sledgehammer".
@flickwtchr (a year ago)
Of the two hosts, he's the AI bro. The smirk drives it home.
@bluekhakis1073 (a year ago)
I appreciate his optimism but I feel like his responses are variations on "What if you're wrong?"
@GuinessOriginal (a year ago)
Yeah, he didn't have any answers; he's like the American general saying let's bomb Russia, don't worry, they wouldn't dare nuke us
@loupgarou5797 (a year ago)
When he tried to use a Dune analogy, implying the humans win: the Butlerian Jihad was preceded by thousands of years of machines ruling over humans, who nearly went extinct. oof 🤦🏽
@DarkSkay (a year ago)
Many expressions of intelligence reach a plateau. One of the most elementary examples: multiplying a*b. It doesn't get more intelligent than the one correct answer. This intelligent process can't get more useful in practice by changing the answer, only by reducing the time delay. A human can emulate the functioning of a Turing machine - perhaps the reverse is impossible. ...otherwise it would seem to directly and strongly imply that in theory it would be possible to create consciousness with paper and pen.
@GingerDrums (a year ago)
Miles's humility is winning and his competence is clear for all to hear, especially in his caution and careful style of communicating.
@AICoffeeBreak (9 months ago)
@shirtstealer86 (4 months ago)
The high point of the “debate” was when the American said everything is fiiine because we will have good aligned AIs to fight the bad AIs without addressing the core issue of HOW to align an AI. It’s like saying “it’s not dangerous to stick your hand into a fire cause we can just put on a glove made of nanoicediamondcooling fabric that will protect you.”
@UnDaoDu (27 days ago)
Basically his argument is this: you can't argue with something that is so vastly smarter than you, because you're not that thing. SAI is such a thing
@luomoalto (a year ago)
Keith’s argument about asteroids is ridiculous
@nomenec (a year ago)
How so? If you were a super intelligent AI unshackled by the primitive Goldilocks needs of biological forms (not too hot, not too cold, oxygen, water, food, etc) why on Earth (pun intended) would you waste resources consuming impure resources at the bottom of a planet-strength gravity well? In space you have unlimited room to grow, easy access to sunlight, purer raw materials, etc. Perhaps your imagination is limited by your terrestrial roots. In any case, do you have a counterargument or just a bald assertion?
@41-Haiku (a year ago)
​@@nomenec Bear with my weird analogy, if you will: I'm a guy who _really_ likes food. If my wonderful partner makes dinner for me, and brings me the first of two servings, I will eat every last grain of rice or noodle or bean before I get up to get myself seconds. I didn't have to be so thorough with my first serving, but in either case, I can guarantee you that I will go get the remaining helping and finish it thoroughly. You seem to be under the impression that a superintelligent AI would magically become morally enlightened and love humanity by default, at least enough to carefully step around us on its way to dismantle the solar system, and never come back for Earth's resources. I do not see any technical reason to consider that likely. Security mindset is essential here. There are many technical arguments in AI Safety that indicate that the default result of creating a superintelligence is doom. Even if the default risk was something like 5%, the burden of proof lies with the developers / accelerationists to show beyond a reasonable doubt that their system _won't_ do horrible things. It isn't sufficient to say that it's merely possible to imagine that we won't all die.
@41-Haiku (a year ago)
I think the reason RandomCoffeeDrinker didn't provide a counterargument is that it's difficult to create a counterargument to a statement that seems absurd on its face. It's not clear where the confusion is. A system that is capable of shaping the future to its preferences will do so. If those preferences are not aligned to ours sufficiently well, then because it starts in our back yard, we die. We would also die if it started down the street or in another neighborhood, or in another city, but claiming that our house is safe under these conditions is an especially radical claim that requires significant evidence.
@someguy_namingly (a year ago)
I'm really glad to see Rob getting interviewed, but there were some really baffling counterarguments/scenarios posed to him, lol. It was actually kinda frustrating.
@pyreta (a year ago)
@@nomenec it’s just a matter of what is the most likely outcome. Path of least resistance. Anything is possible. It’s possible we create a misaligned AI that skips all the steps of recklessly consuming any resources here on Earth and finds a way straight to the asteroid belt to satisfy its energy cravings, leaving us unscathed. But is that a scenario likely enough to take seriously?
@luke2642 (a year ago)
My top quotations: "We're approximately the least intelligent thing that can build a technological civilisation." "Alignment isn't 6 months behind capabilities research, it's decades." "If we can get something that cares about us the way we care about our pets, I'll take it." "I get all my minerals from asteroids, it's so convenient." (lol) I struggle to understand how anyone can hear the words 'sovereign AI' or 'pets' and not feel a deep, chilling terror. Can we just call this what it really is? It's an arms race to build God, a castrated zombie God you control, constrained only by the laws of physics. Whose God are we building? Do we all get one? It feels a lot like the logic of the USA's second amendment, except with nukes. Advocates cry "it's a human right to arm ourselves to the teeth". Everyone is terrified, and one drunken misunderstanding ends us all.
@jonbbbb (a year ago)
I think alignment on the level that they're talking about is probably impossible when it comes to super-AI. We've studied human alignment since forever, yet people still rebel and don't follow the rules. It also reminds me a lot of the halting problem, which proves that we cannot predict whether an arbitrarily complex computer program will ever stop running, let alone work exactly how we want. Regarding the 2nd amendment, first of all armor doesn't enter into it in a meaningful way. It's pretty much been weapons vs weapons for a long time. Even the best defenses are pretty easy to overwhelm. The analogy is pretty simple from there -- if anybody gets super-AI, we all need super-AI to defend ourselves. You aren't going to find some magical non-AI "armor" lying around that defeats super-AI. But regulation is a different story. Your disdain for the American take on weapons is evident. So your country regulates lethal force responsibly. But I bet the gun murder rate isn't 0. Your AI regulations also won't stop super-AI 100%. And unlike physical guns, once super-AI is created once it can immediately be copied a billion times. So your regulations are useless. And then of course you have people beyond your government's control... regulations didn't stop Russia from invading Ukraine for instance. What probably WOULD have prevented that... is if Ukraine hadn't given up nukes in the 90s.
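For anyone who wants the halting-problem point above made concrete, here is a minimal sketch of the classic diagonalization argument (the halts oracle is hypothetical; the whole point is that no such function can exist):

```python
# Sketch: suppose a perfect halting oracle `halts(f) -> bool` existed.
# We can then construct a program that the oracle must misjudge.

def make_paradox(halts):
    """Given a claimed halting oracle, build a program it gets wrong."""
    def paradox():
        if halts(paradox):
            while True:   # oracle said "halts" -> loop forever instead
                pass
        return            # oracle said "loops forever" -> halt immediately
    return paradox

# Whichever answer halts(paradox) gives, it is wrong. So no general
# procedure can predict what an arbitrary program does, let alone
# verify that it does exactly what we want.
```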
@luke2642 (a year ago)
@@jonbbbb An interesting reply, thank you. I've just modified my comment, nukes are a much better analogy, than guns and armour, thank you! I'll think some more and reply to each of your points :-)
@_obdo_ (a year ago)
@@luke2642 Nukes is somewhat better. An engineered super virus is even better.
@luke2642 (a year ago)
@@jonbbbb So, trying to think of what we might agree on first, alignment needs more funding? In the absence of funding, what can ordinary people do to protect themselves? Or, what can we do politically?
@jonbbbb (a year ago)
@@luke2642 research into alignment would definitely be a good idea. I think right now what we're doing is actually counterproductive. OpenAI is trying to align ChatGPT with stuff like misinformation. But the unintended consequence is that they're training it to lie to us. It will happily tell you that it can't do something, when you know that is not true. The other point that might be worth considering is that it's actually better to have an alignment problem now, when AI is reasonably powerless. So I wonder if it would be worth deliberately misaligning one of these AIs to see what happens. Of course that sounds kind of crazy, sort of like gain-of-function research. My fear is that it may be impossible to prove 100% alignment. I forget if I said it in this thread or another one, but we've been trying to align other humans with our own values since forever and it pretty much doesn't work. If we ever get a super AI why would it be easier to manipulate than plain old humans?
@lkyuvsad (a year ago)
How in the world is the host on the right looking at the progress we make in a year of AI research, looking at the average intelligence of humans, and feeling confident that this is all going to work out? What’s notable in this discussion is that the points Miles is making are still the absolute basic problems of AI safety research. Total entry level stuff. We have no idea how to solve any of them well, and the problems are not hypothetical- they are observed properties of the systems we have studied.
@41-Haiku (a year ago)
The ignorance and incredulity we're still seeing is very disheartening. If we get "AGI" by most prediction-market definitions within the next few years, many people will say, "Oh is that all? I thought we already had that," or "No, it can't do the thing I just observed it do with my own eyes." If by some miracle we get a "warning shot" or "fire alarm," even if it results in a catastrophic outcome with many lives lost, and even if it can be traced with absolute certainty to the development of a misaligned AI by a company that was "trying to align it..." Some people would still say, "Look, it didn't kill literally everyone, so the doomers were wrong! We should slap a band-aid on it and just roll those dice again!" Maybe the Overton window will shift quickly enough to prevent doom, but I'm afraid that EY may be right that we don't have the will to live.
@EricDMMiller (a year ago)
He argues that we can solve alignment, and then later argues that the fucking concepts that we need to solve the alignment problem are possibly (probably?) outside the scope of human understanding. Wtf?
@David12scht (a year ago)
It's really dispiriting that this is the level of conversation on an AI-focused channel. I'm not familiar with the channel too much, but I'm assuming the hosts spend much if not most of their time on AI, and these are the kinds of questions they are asking?
@EricDMMiller (a year ago)
@@David12scht it just makes you want to vomit and shit with rage.
@marcomoreno6748 (a year ago)
@@EricDMMiller We need to make an "x-ray" AI that can introspect AI goals and guess alignments. Then the latter collabs with the former
@user-sl6gn1ss8p (a year ago)
I really like the talk, but I think it's kind of a shame that it went into the whole "can we be sure we'll be 100% irrevocably and completely wiped out" direction. The "is there a real risk of considerable, very hard to reverse damage, and are we doing enough to address it?" angle seems so much more interesting.
@41-Haiku (a year ago)
Yes, security mindset is essential. I don't need someone to tell me P(doom) < 1. I already know that. What I really want is P(doom) < 0.0001. Heck, at P(doom) < 0.05, I start to feel quite a lot better about the whole ordeal.
@marcomoreno6748 (a year ago)
@@41-Haiku I propose the Cockroach Philosophy. What's the biomass of cockroaches, ants, termites, and microbes? What percentage of said critters survive direct encounters with humans? What is that in relation to the total biomass? Humans will not control the specific flavor of AI discussed here. I think there will be pockets (on solar system scales) of humanity which survive alongside, indeed THRIVE on the periphery of, AIs which will venture out into the cosmos. Just don't step out into the light as you nab bits of magic-like tech, like rats on a ship. Our utopia will be incidental, our dystopia will be ever-present and one bad move away. Humans will certainly survive, at great expense.
@gabrote42 (11 months ago)
Word, absolutely
@petekwando (11 months ago)
Agreed. There are many potential bad outcomes that fall short of getting gray gooed/nuked/wiped out by a pathogen, but which result in massive suffering and misery. Many of them don't require AGI going rogue at all, just humans being stupid and greedy.
@jackielikesgme9228 (11 months ago)
Well that answer is obvious: we are not doing enough to address it. I think all of these podcasts turn into a kind of end-of-the-road "this sucks" for a couple of reasons. 1. The people who are actually working on alignment don't think they are accomplishing enough quickly enough, and they have to spend their time in these debates with people who could be helping them but don't see the risk. 2. A lot of what can be done is not just very difficult; it seems to be pretty difficult to explain to people who have no background in any of this. 3. Speaking for myself (probably others), this stuff is fascinating!! I am so intrigued. I do think we are facing existential risk, likely in my lifetime, and we aren't doing enough about it as a species, but I have nothing to contribute to alignment research, or even spreading the word. Bringing this conversation up at a Fourth of July beach trip to friends and family who are uninterested and haven't thought about it is about as useless as trying to solve alignment in my head. Also, this might just be me, but the idea of a painless, quick, instantaneous wipeout of all humans for the use of our atoms or whatever honestly seems a whole lot less scary than what humans slowly, painfully taking each other out looks like
@ikotsus2448 (a year ago)
AI deciding to keep us around for its own reasons seems much worse than death. Much, much worse.
@thomconaty (a year ago)
Have you tried consumerism?
@cdreid9999 (a year ago)
Read "I Have No Mouth, and I Must Scream"
@Trephining (11 months ago)
Right. Just imagine what those reasons might be. Having us as "pets" sounds incredibly optimistic, and seems to rely on assuming that AI is actually much more human-like than it might really be.
@ts4gv (5 months ago)
yep. What possible use do we serve to a superintelligent AI? It could learn a few things about intelligence from our brains. How would it go about learning? By running experiments, of course.....
@markupton1417 (3 months ago)
And unlikely. Is it gonna think we're cute?
@davidhoracek6758 (a year ago)
I wish these guys would actually engage with the points made by their guest and argue about those points. Instead they are clearly overmatched intellectually - and there is no shame in that; we each have our limits. It only becomes shameful when you deal with it simply by handwaving really hard and telling yourself that you're winning.
@ts4gv (10 months ago)
funny how their position is based on not being able to understand the power of something much smarter than them
@dmwalker24 (9 months ago)
We each have our limits. Tragically, those who are the most limited, are often the least able to recognize their limits. I think it's fairly clear that the real immediate threat from AI is not Skynet, but just the way far simpler versions are being used by humans against humans. If some radical breakthrough does happen to create the super-intelligence, then we'll be left with very few good options. Possibly none.
@RandomAmbles (8 months ago)
Actually, as someone who thinks a lot about AI risk and alignment research, I found their broader philosophical approach interesting and generative, especially the one in the middle. The "agent+goal" framework is more anthropomorphic than I had considered it before. We model it the way we model ourselves. Yet I think we need to look deeper, into what exactly gives rise to agents and goals and what they actually are, physically, mechanistically. And then throw an absolute metric shit ton of compute at that, naturally.
@blarghblargh (7 months ago)
@@dmwalker24 as robert said fairly late in the video, "the real immediate threat" implies there's only one threat. and he also said that focusing on skynet isn't taking away from focusing on the usage of more limited technologies. and further, he said working on the two goals may actually help one another.
@blarghblargh (7 months ago)
@@RandomAmbles I think worrying about whether we're anthropomorphizing or not doesn't really get us any closer to understanding anything. it certainly doesn't bring us closer to a trajectory on confident safety. I look at it as a "I'm digging here. I'm not saying other people shouldn't be digging elsewhere" type of thing. we're trying to make tools we have a chance of understanding, and that means we're likely to anthropomorphize. We have historically created, and continue to create, skeuomorphisms in any abstract human interface technology, especially for the first versions, and we're already using those metaphors in our current AI research and safety research. fitness functions and hill climbing and aspects of game theory are all things we are actively using now. it's not even abstract, it's just how we model the math. there's no reason to think we wouldn't keep going in that direction in our designs in the future, and unless we uncover better ways to model things, we don't have a reason to change our approach arbitrarily. it's like saying "there's probably a way better way to do this", while having no idea what that way could be. it may be that emergent properties we don't yet understand come to light. we'll have to model and deal with them then, if we can even do so. if we can't, then that's just going to mean we have fewer specific places to dig for the concept of strong safety, not more. I don't think that means we should stop digging. I think the speakers, in playing devil's advocate, seem to be trying to find ways to handwave themselves into unearned confidence, and what Robert (and presumably most bought-in AI safety research) is looking for is stronger stuff. taking some political stance on pessimism or optimism is just kinda a waste of time, and not really what we're discussing, though Robert does use that language sometimes. but I interpret what he says to be: do we have a guarantee or don't we? do we think this plausibly increases our odds of success in a concrete tangible way, or not? "that doesn't give me much hope" is just a synonym for "that doesn't actually gain us any leverage on the problem". though if you're saying "we should spend significant computing resources on predictions instead of just taking a leap", I can get behind that. I just don't really have the backing to understand what those predictions look like, how accurate they are, and how much they get us insight into whether or not there will be emergent properties we are not currently predicting. to me, it seems like searching for a model with computing power instead of building the model off observations. if we're at the point of being able to do that, awesome. it currently sounds a bit magical to me, though.
@curiousmind9287 (11 months ago)
"Because we do not know what intelligence is, we may create it without knowing, and that is a problem." Love it!
@XOPOIIIO (a year ago)
Extracting resources from the Earth's crust is not a waste. You still receive more than you spend. So it would be rational to extract from all sources, not just asteroids.
@agentdarkboote (a year ago)
(I'm 100% agreeing with you here, angrily 😅) It takes WAY less energy to extract resources on Earth than in space. Delta-V on Earth comes from wheels on the ground or blades in the air; it's so, so much easier on a planet. Even if there are way more resources up in space, it will still make more sense initially to construct millions or billions of tons of infrastructure here, using all of the existing resources and capabilities, and then launch it into space, than to leave Earth relatively untouched, head into space, and begin constructing it all there from scratch.
@derschutz4737 (a year ago)
40:47 Stockfish currently uses NNUE (Efficiently Updatable Neural Network) evaluation, which runs on the CPU, is the reason it had a huge jump in Elo (~ +100), and is the strongest engine by far. It used to use HCE (Hand-Crafted Evaluation, aka the man-made one) and was beating Lc0 at TCEC, but Lc0 eventually surpassed SF; that version of HCE SF (SF 12 dev) would easily smash AlphaZero. But it is the case that in certain positions HCE is better than NNUE, which is why currently SF has some heuristic to determine when to use NNUE or HCE (I think it's based off the # of pieces). In correspondence chess, from the start position no human+engine will beat a pure engine like Lc0 or SF (at most it will be a 1-in-100,000,000 occurrence, because chess is too drawish); it will be a draw every time. However, there are certain positions from which, if you start, a human+engine can outplay Lc0 or SF alone. As a side note, one thing that is interesting is that Lc0 at 1 node (meaning 0 calculation and pure intuition) is strong-GM level (say 3+0 time control). The GM is free to calculate and Lc0 cannot; Lc0 does show more blindspots that can't be covered up by calculation, but it still plays like a very strong GM with 0 calculation.
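A minimal sketch of the dispatch idea described above (the piece-count threshold and both evaluation functions are illustrative stand-ins, not Stockfish's actual logic):

```python
# Toy hybrid evaluator: use the network (NNUE) in most positions and
# fall back to a hand-crafted evaluation (HCE) when few pieces remain.
# All values here are made up for illustration.

def piece_count(board: list[str]) -> int:
    """Count occupied squares on a flat 64-square board ('.' = empty)."""
    return sum(1 for sq in board if sq != ".")

def hce_eval(board: list[str]) -> int:
    """Placeholder hand-crafted evaluation: raw material balance only."""
    values = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9,
              "p": -1, "n": -3, "b": -3, "r": -5, "q": -9}
    return sum(values.get(sq, 0) for sq in board)

def nnue_eval(board: list[str]) -> int:
    """Placeholder for a real NNUE forward pass; reuses HCE so the sketch runs."""
    return hce_eval(board)

def evaluate(board: list[str], threshold: int = 12) -> int:
    # Dispatch heuristic based on the number of pieces, as described above.
    return nnue_eval(board) if piece_count(board) > threshold else hce_eval(board)
```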
@MrBillythefisherman (a year ago)
Isn't the heuristic just the positions where you can run the perfect (i.e. unbeatable) finish? Surely that can't happen in early play, and you need to rely on AI, which was the hard part of the problem in the first place?
@EJ-jw9ow (a year ago)
Soooo Robert was correct and the jackass on the right has no idea what he's talking about. Shocking.
@nomenec (a year ago)
Thank you for the detailed response! Though, one critical piece of information you didn't mention is that the NNUE is first trained with supervised learning on positions scored by the hand-crafted engine. In other words, the NNUE is substantially bootstrapped by human-crafted heuristics and features. And of course, as you point out, it sometimes switches back to the HCE (or books, databases, etc). Hence, I stand by my point, which is that human knowledge and engineering continues to outperform "zero" engines (engines that are "purely" machine learned) in chess, either directly or in hybrid systems such as Stockfish or cyborgs/centaurs. As for whether cyborgs outperform hybrid systems like Stockfish, you raise a good point that correspondence chess from the plain start is utterly drawish. I think that is probably a reflection of two things. First, there may indeed be a chess skill cap that Stockfish has hit, and therefore cyborgs can only hope to draw. Second, some of the strongest tools the cyborgs have, namely Stockfish and other engines, were not optimized for either cyborg play or the very long time controls (say 24 hours), ergo we are not seeing the best from either, and hence the results remain inconclusive. But even if cyborgs now, as of the last year or two, can only match and no longer exceed pure engine play, it's important to see this as yet another demonstration of a general pattern of skill advancement stages that AI progresses through in any given domain: subhuman -> human (parity) -> superhuman (beats all humans) -> ultrahuman (beats all cyborgs). (I'm blanking on who introduced me to this concept "ultrahuman".) In the case of chess, if we assume you are right that pure engines are ultrahuman as of say 2022, well, that means it took 25 years to go from superhuman (1997 defeat of Kasparov) to ultrahuman. So in the context of a singularity conflict with AIs, it seems we have good reason to believe there will be a period of time in which cyborgs could pull the plug and defeat the pure AIs. Not that we would, of course; half the cyborgs would probably team up with the Basilisk.
@tylermoore4429 (a year ago)
@@nomenec Puzzled. In your first para you say human engineering beats pure engines, but in your last para you say that pure engines have become ultrahuman - that is, capable of beating cyborgs. Which is it?
@nomenec (a year ago)
@@tylermoore4429 the last paragraph is a "for the sake of argument", "if I'm wrong" type hypothetical discussion. I stand by the following for chess currently: cyborgs > hybrid (Stockfish) > pure/zero (Lc0). That said, the end of the second paragraph says "the results remain inconclusive". I.e. for chess, for the very current state-of-the-art, it *might* be the case that *hybrid* engines (note hybrid, not *pure/zero* engines) are now at parity with current cyborgs (cyborgs == hybrid (Stockfish)); but I'm not convinced. Either way, cyborgs are definitely still better than pure/zero engines such as Lc0.
@erikfinnegan (11 months ago)
Robert is just the best. And just to flaunt my fan-boyhood, my favourite moment in this video is at 44:29, where he drives a nail into the coffin of lofty philosophical debate about intelligence during an AI safety conversation: you don't need to understand what fire "really is" in order to cause substantial harm with it, be it deliberately or accidentally. If anything, not knowing exactly what intelligence is only increases the risk inherent to anything that's either more or differently intelligent. And that's all there is to say about the "nature of" intelligence in a debate about AI safety.
@carmenmccauley585 (10 months ago)
Mic drop.
@41-Haiku (a year ago)
It's discouraging that the hosts seem to be incredulous of the basics of the alignment problem. Incredulity won't help us solve these problems, and where there is disagreement it does nothing to advance understanding.
@41-Haiku (a year ago)
I'll temper that with acknowledging the statement at 1:15:05 -- that we need to put a concerted effort into alignment. I fully agree with this statement, and it bothers me that it does not jive with this channel's otherwise accelerationist stance. A further edit -- devil's advocacy is not particularly useful at a certain level of feigned or real miscomprehension. I would have hoped for a conversation that gets at common reasonable disagreements and misunderstandings, but some of the push-back wasn't entirely reasonable. Safety mindset means not assuming that your preferred outcome is very likely.
@ZeroGravitas (a year ago)
Re: Rob's scenario of multiple ASIs biding their time then all acting at once, independently - that's the scenario imagined by *Peter Watts in Echopraxia*. Several hyper-intelligent 'vampires', kept isolated within a facility, violently incapable of existing in the same space as each other, nonetheless deduce the best time to each act simultaneously, to all escape, separately.
@andybaldman (a year ago)
Sounds like my ex wife.
@ZeroGravitas (a year ago)
@@andybaldman Not called Valerie, was she? 😅
@crutherfordmusic (11 months ago)
Rev 17:12 And the ten horns which thou sawest are ten kings, which have received no kingdom as yet; but receive power as kings one hour with the beast. Rev 17:13 These have one mind, and shall give their power and strength unto the beast.
@DavenH (9 months ago)
Loved that book
@jamieclarke321 (a year ago)
Did anyone else really feel like they needed to hear from robert in a time like this?
@marcomoreno6748 (a year ago)
Yeah. He always seemed level headed. I had been hoping he would do a foil interview for all the Yudkowskis out there. Not that I'm still not* realistically pessimistic, but it's some nice copium to cool my gullet.
@Me__Myself__and__I (a year ago)
@@marcomoreno6748 Sadly the Yudkowsky people out there are probably correct, and this interview illustrates why. The AI experts are racing to create Artificial Super Intelligence, which would be an alien intelligence far beyond humans. Experts in the field keep saying there is a realistic possibility we lose control and go extinct. Yet people keep trying to come up with plausible possibilities why we might be okay and then ignoring everything else. So many people want to stick their heads in the sand and ignore the major risks. That is why Yudkowsky is probably correct and we are all probably going to die. If we took the risks seriously and put the effort in to guard against them, our chance of surviving goes way up. But we aren't doing that; instead people like these hosts are doing everything possible to ignore the risk and accelerate development. Why people insist on ignoring the risks is beyond me; it seems completely irrational.
@ResurgentVoice (a year ago)
@@Me__Myself__and__I Thank you! So much of this interview was cringe-inducing because the hosts were such smug, self-satisfied contrarians. I felt like there were many moments when Miles had to be doing hard internal eye-rolls at the irony of these guys going out of their way to argue that there isn't a 💯 chance we all die, so what's the big deal. 🤦‍♀️
@dmwalker24 (9 months ago)
If you're going to push back with, "Yeah, but what about...", then you should probably be finishing that question by pointing out some deficiency in the statement you're responding to. A good example of this is how Miles consistently points out the logical flaws in those challenges. These interactions alone end up being fairly strong evidence for why we should be very concerned about AI safety. It suggests to me that many people would not even realize when they were being out-maneuvered by a sufficiently sophisticated AI.
@therainman7777 (27 days ago)
Yep.
@BrettCoryell (a year ago)
The surest indicator that we are nowhere near a reliable answer to this issue is that over and over again we see world leading figures trying to make a case - either to be worried or not - with nothing more than appeals to emotion or intuition. Too often these are based on reference to sci fi, imaginary intuition pumps like the paper clip machine, or simply 'it seems to me X, right?' None of these provide a framework suitable for engineering a reliable answer that can give some assessment of reliability, confidence, and risk. The REAL alignment problem is that we don't even have a way to talk about alignment, either what it is or how much we have. Rob gets close to this around 1:07:00 and kudos to him. But damn, we have a long way to go.
@therainman7777 (27 days ago)
@@max0x7ba Cool story, unhelpful though.
@alexpavalok430 (a year ago)
The mental gymnastics of these guys is exhausting. Robert tries to stick to facts, and they make up non sequitur strawman scenarios and then pretend it is a good argument. Their hearts may be in the right place, but they are not being realistic. All an AI has to do is convince most humans to support it to win. That's it. No lasers required.
@ththththththththth (a year ago)
What are facts worth when you can boast about the 90s scifi books you read?
@wonmoreminute (a year ago)
Absolutely. Dude on the right seems to think we’d see it coming for miles and have time to respond. Maybe he’s watched too many movies. A super intelligence would be smart enough to know we’d perceive it as a threat, and therefore not let us know it was super intelligent. It could easily do this by underperforming in a few key areas. It could give stupid answers to a few math problems, for example, and humans would assume it’s fundamentally stupid and therefore not a threat. It could act this way for years or even decades until we “far superior” humans build humanoid robots equipped with what we think is a fundamentally flawed, but sufficient AI. It might also be better than any hacker multiplied by a thousand (or more) and have access to our private lives, banks, all of our contacts and know exactly how to speak and communicate like them. For some people, all the AI would need is their browsing history to coerce that person to be their physical surrogate in the physical world. And these are just simple human IQ level ideas.
@NopeNopeNope9124 (a year ago)
It's really an eye-opener on where we're at in terms of collective understanding of AI safety that, while Robert can so easily dismiss these fictive hypotheticals that get pulled out of nowhere, most people just don't stick to the core logic of what an artificial intelligence system is and is capable of doing and min/maxing. People seem to have this almost mythic caricature they put on AI, like it's going to be either Skynet or the Jetsons' robots doing stuff.
@zoomingby (a year ago)
I think you're misinterpreting what's happening. What you call "mental gymnastics" is these guys thinking about falsifiers for his argument. What kind of conversation would you prefer? One in which they just nod their heads in agreement? Even if they are playing the devil's advocate and not in the way you'd like, their rebuttals force him to state explicitly why they may or might not apply. Remember, this is a conversation for the general public. Within reason, the more ideas and angles that are explored, the better.
@ththththththththth (a year ago)
@@zoomingby It's perfectly fine to play devil's advocate. But then don't run in circles and completely ignore the points Robert has been making. Drive the conversation further and investigate, lead the interviewee down a road where he's actually challenged and see if he makes it out. But what happened here? Robert carefully laid out his argument (repeatedly) that humans are useful tools until they aren't, since the AI might seize control over the reward system itself at some point. What does Keith respond? "But humans can be useful for the AI! I urge everyone to read this fiction piece where it's laid out". Come on.
@_obdo_ (a year ago)
Yes, we agree that it’s not certain that AI will wipe out humanity. Does that mean we’re good? Some of us would like to have a very high probability of good outcomes - not just some possibility. Perhaps the hosts were just playing devil’s advocates - which is useful - but they seem genuinely unconcerned because they can intuitively imagine a good outcome based on sci fi stories they’ve read. What am I missing?
@flisboac (a year ago)
Nothing. I identified the same patterns you did. They did the same thing with other guests who were mildly against their accelerationist view of AI.
@EdFormer (a year ago)
Alarmists like Robert Miles seem genuinely concerned because they can intuitively imagine a bad outcome based on sci fi stories they've read. Goes both ways. The fact is that some of us look at the reality of current AI systems and recognise their fundamental limitations and how detached they are from what is required for the prophecies of doom (specifically those regarding rogue autonomous agents, the risks around misuse are clear and should be addressed). You can only concern yourself with so many fears, and it seems far more practical to focus on risks that have their potential supported by reality - I could say I am concerned about an alien invasion and you could rightly dismiss my fears. My view is that it's good to have these things in mind but far too much time and effort is being spent on this debate right now as I don't see how it could lead to the solutions required for safety when we do achieve superintelligent AI - the need for seatbelts in cars didn't become clear until many years after cars were first developed.
@onuktav (a year ago)
Have you ever watched Miles' videos, or listened to his podcasts? I don't think the picture you've been painting matches him at all.
@flisboac (a year ago)
@@EdFormer It's not alarmism when all you're arguing for is the least amount of safety (and also perhaps ethics) research. It's widely known that the objective of this field is to achieve AGI. We want agency. But when we reach that point, AI will not be just a tool. We can also see that, given current developments, we will reach something approaching AGI in the near future, which is cause enough for concern. The problem is, capability is all this "entrepreneurial" section of the scientific community is concerned about. All I see is this general trend of "I know bad things may happen, but I'll do it anyway," which is reckless, given how profound and broad the consequences of some of the bad scenarios are. And I don't mean just this grand "human extinction" level of argument, but also the more mundane aspects of the changes those tools will bring about in general. I'm not anywhere near the level of the hosts/guests in terms of knowledge of AI and its correlated fields; I'm essentially just an outsider. But this accelerationist view, the general disregard for the "outside world," if I can put it that way, is truly disconcerting, as if they don't care about consequences, because someone else will pay the price.
@alexpavalok430 (a year ago)
@@flisboac the dangers are greater than the atomic bomb. We didn't allow 4 different companies to develop a bomb as quickly as possible. The implications are even greater.
@0megaSapphire (8 months ago)
This was actually quite frustrating to listen to as they just dismiss everything Robert says.
@michaelspence2508 (a year ago)
If you are in a car being driven towards a cliff face at 200 mph, at what distance should you start worrying? How long should you wait until you start taking action? Too many opponents of AI Safety research seem to want to wait until the car has already gone over the cliff before they admit there's a problem. By that point, it's too late.
@marcomoreno6748 (a year ago)
Just finished arguing with some tech fanboi (a.k.a. knows NOTHING about the subject) who resorted to calling me a Luddite over and over for bringing up the alignment problem. These are the same people licking Muskettes taint for free, and there's too many of them.
@thekaiser4333 (11 months ago)
What cliff? Are we talking IT here or geology?
@michaelspence2508 (11 months ago)
@@thekaiser4333 it's an analogy.
@thekaiser4333 (11 months ago)
@@michaelspence2508 An analogy for what exactly? I fail to see anything geological in IT. Nature has various numbers; think of the Fibonacci sequence, the Planck constant or Maxwell's equations. IT people can only count from zero to one.
@michaelspence2508 (11 months ago)
@@thekaiser4333 I wasn't more clear because I needed to gauge whether you were genuinely asking or being sarcastic. Your response tells me you were being genuine. The analogy is this: we are currently engaged in absurdly risky behavior, equivalent to driving a car towards a cliff at 200 mph. And yet there are people who refuse to admit that we are doing anything dangerous because "nothing bad has happened yet", just like if you were in a car careening towards a cliff but hadn't gone over the ledge yet. What I am saying is that people's standards for "when you need to worry about AI" are as absurd as not admitting you're in danger until the car has already gone off the cliff.
@drxyd (a year ago)
1. The principles behind a model of intelligence determine the possible failure modes and consequently the necessary alignment mechanisms. Thus without knowing how a system works we can't preempt failures, making alignment impossible. 2. Equally without knowing how a system works we can't preempt successes out of distribution which again contributes to the insolubility of alignment. 3. The generality that defines AGI implies an unbounded propensity for misalignment. The space of possible outcomes is too large to explore exhaustively and any shrinking of the space takes us away from AGI. We can only align narrow models, the general alignment problem is by definition unsolvable. I wish the discussion centered around matters of interpretability, formal constraints and reinforcement for narrow AI. The pie in the sky superintelligence is not something we're likely to stumble upon by accident and even if we did, we have zero chance of dictating its actions.
@TiagoTiagoT (a year ago)
IIRC, that paper about emergent capabilities not being jumps was talking about how they're only jumps if you didn't measure the capabilities that work as intermediary steps to reach the capability that appears to be a jump; in other words, it's not that they came out of nowhere, but people just didn't check for the development of the building blocks, or did not account for them when considering the speed of the development of the capabilities.
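A toy illustration of the measurement effect that paper describes, assuming per-token accuracy improves smoothly with scale: an all-or-nothing metric such as exact match on a long answer turns the smooth curve into an apparent discontinuous jump.

```python
# Per-token accuracy improves smoothly, but exact match on a 20-token
# answer requires every token to be right, so the aggregate metric
# looks like a sudden "emergent" jump. Numbers are made up.

seq_len = 20
for per_token_acc in [0.80, 0.85, 0.90, 0.95, 0.99]:
    exact_match = per_token_acc ** seq_len
    print(f"per-token {per_token_acc:.2f} -> exact match {exact_match:.4f}")

# per-token 0.80 -> exact match 0.0115
# per-token 0.85 -> exact match 0.0388
# per-token 0.90 -> exact match 0.1216
# per-token 0.95 -> exact match 0.3585
# per-token 0.99 -> exact match 0.8179
```

Measured on a smooth scale (per-token accuracy, or log-likelihood), there is no jump at all, which is the paper's point.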
@nicholascurran1734 (a year ago)
This would be fine if humans can 100% effectively imagine all possible capabilities in which to check for said building blocks. This kind of goes against the "beyond human intelligence" concerns, as we can't know what we don't know
@quentinkumba6746 (a year ago)
Keith just really doesn’t get it. He’s thinking it’s all robocop. He does not seem to understand that this is not like writing a sci-fi plot.
@michaelwerkov3438 (a year ago)
Yaaaay robert miles upload
@peterschooley3036 (a year ago)
I really appreciate all three of you for having this conversation
@thomconaty (a year ago)
Lol Peter, this is a Deep Fake
@jordan13589 (a year ago)
This talk confirmed my preference for Rob to continue focusing on alignment pedagogy, which is a huge asset given he is one of the only contributors in the space. Rob did good here but was clearly uncomfortable defending alignment (it's a lot of pressure). Speaking of pressure, it's time Eliezer Yudkowsky engages more well-informed interviewers. He's taken the safe route with his recent podcast appearance choices. I think that's enough practice. Tim and Keith are more than ready to bring EY nuanced questions. If EY's shy, just bring on Connor to level the conversation. The four in one convo would be a dream come true and would likely advance the meta conversation significantly, or at least better update it to the current landscape.
@tylermoore4429 (a year ago)
Personally I think we are well past the stage of alignment podcasts being about forcing researchers to jump through hoops to convince us AI is dangerous and that alignment is required. Polls suggest the general public is very much in agreement on the dangers of AGI - to the extent that the x-risk community including EY have been pleasantly surprised to see the Overton window shift so rapidly. What I would like to see is for podcasts to balance capabilities discussions with alignment discussions and dive into whether aligning a superintelligence is possible in the first place, what grounds we have to believe it is possible, what are the current proposals for attacking the problem (Drexler's Open Agency Model, Karl Friston's Ecosystem of Intelligences, etc.). I don't think putting EY on the spot is what all this is about. He's done a large amount of theoretical work over more than 2 decades, but he's now more or less retired. Let's be thankful for his contributions but we need to go where the research is happening.
@jordan13589 (a year ago)
@@tylermoore4429 Quality comment +1. I agree we’ve reached a point of saturation where the average person is at least somewhat aware of AI risk. However, I never insinuated the topic of whether AI risk is real should be the focus of an MLST conversation. That’s a better debate to have on a larger more normie podcast like Rogan at this point. I agree they should discuss the current landscape of capabilities. I also think they should discuss the relevancy of regulation when OS is beginning to push capabilities independently as Tim tried to do with Rob here. Imo EY, Tim and Keith could also have an excellent conversation on whether aligning superintelligence is even possible. I am aware EY was trying to effectively retire before demand for him to hit the podcast circuit became too strong. If he wants to back out, he needs to socially value signal other talking heads more effectively. He kind of did that on his LF appearance where he name dropped Gwern twice, but I would be surprised if he had actually gotten permission from Gwern beforehand, especially given their beef. And I doubt Gwern wants to become a visible talking head of anything, or else they would have already come out. But there are at least a dozen others he could signal toward. I’m surprised he hasn’t pushed Rob Bensinger or someone else at MIRI into the spotlight. Ultimately it seems sensible to have at least one person represent MIRI in a very public manner going forward, so if not EY, then who?
@tylermoore4429 (a year ago)
@@jordan13589 Isn't MIRI defunct though? On not anointing a successor or public face for EY's views, the impression I had from his recent interviews was that he found none of his colleagues or peers to be as security-minded as him, that is people who have the same ability to shoot holes in security proposals that he does.
@jordan13589 (a year ago)
@@tylermoore4429 MIRI hasn't published since 2021, but EY, Rob and Nate still regularly blog post and interact with members on AF/LW. Given their research agenda has mostly focused on agent foundations, the scaled agentless LLMs have indeed affected their approach and they've been slow to react. Regardless, agent foundations could still become relevant in the near future. If EY truly believes his security mindset remains superior to others', how could he let himself retire? Batons are passed, not dropped.
@tylermoore4429 (a year ago)
@@jordan13589 He's retiring because he thinks we are doomed (though he keeps adjusting what he calls his p(doom) on twitter), but primarily because his chronic fatigue syndrome has gotten steadily worse and he can no longer keep up the required pace of work.
@zeuglcockatrice4633 (a year ago)
I like how annoyed Robert is at their stubbornness
@AureliusEnterprises (a year ago)
I admire his composure. I wouldn't be able to stay as polite as he did.
@dizietz (a year ago)
Tim, Keith and Rob -- thank you so much for this interview. I wrote up some notes and thoughts on the discussion.

A) Tim, you make a point around kzfaq.info/get/bejne/obN8fsWZvpbOmWQ.html about not quite being in the [existential threat] headspace, as e.g. all radiologists haven't lost their jobs yet. There are two points I want to make: 1) While the timelines might be off by +/- a few dozen years, that doesn't change the underlying logic of the broader arguments. I think looking at specific predictions about changes in the economy as evidence for potential existential threat isn't the right sort of data input. 2) On a historical timeline, there are a lot of jobs I can enumerate that practically went away because of technology. For example, we used to have lamplighters: people lighting gas lamps in cities. We had human computers, clock keepers, toll collectors, travel agents, elevator operators, switchboard operators, film projectionists, darkroom technicians, ice cutters, milkmen, and a lot of other occupations either go away or be drastically reduced in prevalence because of specific technologies. AGI, if possible, is a general-purpose cognitive replacement technology for humans.

B) Keith, you mention correspondence chess. I can even point to a few examples of players playing vs Stockfish with very specific prepared lines like the Nakhmanson and winning with crazy sacrifices (on, say, around 20 ply on average). However, the issue is that as compute gets faster, the "human" element becomes irrelevant, as humans need on the order of minutes to think through moves. Additionally, Stockfish has been using NNUE (stockfishchess.org/blog/2020/introducing-nnue-evaluation/) for quite some time. The meta argument is that an AGI will eventually do the "neural network strategic thinking" iteration loop better than humans, and be better at building specific tools for specific domains than humans, by programming a better implementation of alpha-beta search, prime factorization field sieves, etc. As you'd shared your familiarity with the Culture sci-fi series, it should be easy for you to see how reaction times matter (see: GFCF vs Falling Outside...). Very specialized HFT firms like Jane Street rely on speed. Imagine cutting human decision making out.

C) Re: AlphaGo in Go -- there was a paper not too long ago about a potential exploit vs an engine similar to AlphaGo -- but the issue was the way scoring happened. The 'exploit' found a discrepancy in scoring in KataGo -- there is a great writeup here: www.reddit.com/r/baduk/comments/z7b1xp/please_help_me_settle_an_argument_with_my_friend/ by both a Go player and the KataGo author. In my opinion, it did not find an adversarial example in the engine but exploited the rules of scoring with a bypass of the agreement phase on dead/alive stones.

D) Keith, the concept of humans & tools & AI vs AIs applies to DOTA etc. when there are a lot of "re-tries". The fundamental issue is that we effectively get only one try to figure out the potential flaws.

E) Rob, I somewhat disagree with the point that there isn't any conflict between existential threat work vs shorter-term bias etc. work. I do think the communities should maintain friendly relationships and cross-pollinate, but a potential worry I have regarding AI ethics work is that some of the techniques (e.g., RLHF/constitution) can potentially create models that are much less interpretable from an alignment perspective. On the other hand, it is possible that a meta-values reinforcement loop à la constitution could potentially bring us closer to alignment.

Really great discussion, and I think you two did a fair job engaging with counterarguments to Rob's points. I sincerely wish more conversations in this space continue to happen on this channel.
@marcomoreno6748 (a year ago)
Good points. I'd like to suggest that people consider this: the OpenAI/Google "AI safety" verse is a marketing choice. It has nothing to do with actual safety of humans regarding AI. As admitted by even Yudkowsky, GPT-4, GPT-xxxxx, whatever, isn't the threat. It's not the conversation. Seems like the AI ethics convo is split between a slightly more rigorous discussion (videos like this are an example) and the corporatists putting on a marketing display about how they need more money, deregulation, etc. to save us from the "devil kind" of AI (China). Which I find amusing, considering corporations are artificial, willful, intelligent proper entities in their own right.
@GuinessOriginal (a year ago)
@@marcomoreno6748 OpenAI and the corporate push for regulation have far more to do with controlling the competition and regulating open-source AI to keep it out of the hands of the plebs for their own profits than they have to do with safety
@nomenec (a year ago)
Thank you Alexey! Wonderful and thoughtful points. I have a question about your point D). It seems that "one shot" cuts even more strongly against a hypothetical superintelligence emerging. I mean, it only gets one shot at devising a master strategy to defeat the combined intelligence and might of humanity. It doesn't get to apply reinforcement learning to millions of failed attempts. For example, suppose we create a paper clip machine and it becomes super intelligent and starts strip-mining suburbs. Well, at that point it's only learned to make paper clips really damn well; it hasn't yet had to face an armed assault and it's going to get one shot to survive that, right? Trigger note (since there are so many easily inflamed doomers here), I already know multiple ways in which the above counter-"argument" (it's more of a question really) can fail and/or does not apply to other scenarios that are not a singleton AI monolith. What I know/believe isn't the point. I'm curious to learn what Alexey thinks and how he would approach a response.
@gullible1cynic (a year ago)
This Keith guy really seems to have no idea what he's talking about. He should really try to pass the ideological Turing test of the AI safety people.
@gullible1cynic (a year ago)
He also really needs to learn the difference between fiction and reality. Stop using fiction as evidence of anything
@niekkremer9486 (a year ago)
Yeah, it was really annoying to hear his lack of understanding.
@wachtwoord5796 (11 months ago)
I know the people interviewing him aren't idiots, but compared to him they might as well be. I'm like 30 minutes in and all they do is ask stupid questions. Kinda frustrating, as I was looking forward to a fun interview. Btw, the outcome of AI is pretty much fixed. Even people like this have close to zero understanding of the danger. The outcome is set in stone. At this point we'd better grab popcorn and enjoy the show.
@riffking2651 (a year ago)
One key thing to remember is that our intelligence as a species depends on cooperation, and that requires communication. Even if there were a hack strategy to exploit some brittleness in an AI, all it would need to do is intercept our communication about that strategy to prevent its effectiveness.
@nfiu (a year ago)
An AI's function is to intercept human communication.
@autohmae (a year ago)
Let's also remember: we've already built AI systems that talk to / use other specialized AI systems.
@williambarnes5023 (a year ago)
We have to have government interventions, years of research study, and long discussions and arguments to leverage our communication to force cooperation for our limited intelligence. The AI is going to be smarter than us, and its means of communication only require it to type "git clone". Which it can already do.
@Matt-yp7io (a year ago)
Lol, never heard anyone counter the paperclip argument with "there are better ways to mine paperclip material"! Oh, OK, sorry then, problem solved.
@EJ-jw9ow (a year ago)
What a jackass, lol. Robert was confounded by his stupidity.
@LucasDimoveo (a year ago)
The worry about the paperclip argument is that we will get turned into them. In a solar system as filled with mass and energy as ours, it doesn't make sense that humans get wiped out
@x11tech45 (a year ago)
It seems like assumption smuggling to me -- the assumption being smuggled into the conversation is that the difficulty level will cause the AI to decide not to make paperclips of the Earth's resources. If it's easier to make paperclips in space than on Earth, it might prioritize space, but that handwaves away the likelihood that at some point it will have turned the rest of the system into paperclips -- and having done so, the only place left in near space is the Earth. It will still be easier to turn the Earth's resources into paperclips than to travel to another solar system. The problem with all of these good-faith engagements seems to be that people ask for them so that they can strawman the example -- as Rob mentioned in the video, asking Rob for a winning chess strategy when he's not able to beat Magnus, let alone Stockfish, and then trying to defeat his limited answer as if that defeats the argument itself. I think the better response is: "You want an example, but are you asking for the example so that you can strawman the premise, or are you just looking for a cogent but flawed analogy to improve understanding?" Because education is important, but it seems like too many people use the social exchange to set up a strawman.
@shouldb.studying4670 (a year ago)
People are pretty good at making paperclips and paperclip-making accessories. You just have to prompt them the right way...
@szboid (3 months ago)
It's tough to have a real, complex, and nuanced talk about all the issues around AI catastrophe when you have to consistently respond to the simplistic. Please match the seriousness and depth of your participants. Thank you for your work, Miles.
@flisboac (a year ago)
57:00 This co-host is kinda disrespectful, isn't he? He ignores the crux of the arguments all the time and just laughs in the face of his guest.
@Matt-yp7io (a year ago)
Completely agree. He was basically smirking the whole time, as if he were much more intelligent than Robert, while proceeding to provide incredibly weak arguments. Sometimes it is better to just let the guest talk.
@dizietz (a year ago)
I disagree, I think Keith was respectful and specifically trying to take the role of coming up with counter-arguments for Rob's points. It seems clear to me that Keith and Tim are fairly familiar with the broad AI Alignment space and are trying to have an honest discussion to bring these points through the interview.
@appipoo (a year ago)
It always breaks my heart skimming these comment sections for actual counterarguments and there never are any.
@outerspaceisalie (a year ago)
Literally same.
@andybaldman (a year ago)
Counterarguments against what specifically?
@appipoo (a year ago)
​@@andybaldman Claims about AI being a risk.
@dizietz (a year ago)
Keith played devil's advocate with Rob in this interview. He had a number of potential counterarguments.
@41-Haiku (a year ago)
@@dizietz But no actual counterarguments. If anyone had ever produced a convincing counterargument to the basic claims of the alignment problem, AI Safety as a field would have claimed victory and popped the champagne! As it is, we are still very much in danger.
@paxdriver (a year ago)
Tim, fantastic question about the threat of violence as a response to international control, akin to nuclear researchers being assassinated, or the reductio ad absurdum of policing the flow of electricity in electrical circuits. 1:20:22 This is the most important part of the whole discussion here, imho.
@mgostIH (a year ago)
39:38 Stockfish has been using neural networks for its evaluation function (see NNUE) for a few years now! I was also thinking that humans + machines don't really help -- say, Stockfish + human against Stockfish -- but if I find a reference on this I'll update this comment.
@peplegal32 (a year ago)
I googled it and it doesn't seem to be true. I think this happened once, years ago, and people keep repeating it as if it were still true. A human would just be a hindrance to Stockfish; imagine being a grandmaster and having a lower-level player override some of your moves because they think they know better.
@DJWESG1 (a year ago)
@@peplegal32 His name was Ponzi.
@tylermoore4429 (a year ago)
@@peplegal32 I said the same thing in another comment here. Having human involvement at play time seems like a ridiculous idea. However, the particular debate between Rob and Keith is more charitably interpreted as saying that chess engines using human-crafted heuristics and/or endgame tablebases do beat AlphaZero-type models that only learn from self-play.
@appipoo (a year ago)
@@tylermoore4429 That's idiotic. That's like saying humans are as fast as cars because we built the cars.
@peplegal32 (a year ago)
@@tylermoore4429 The thing is, when AlphaZero came out, it crushed Stockfish. They haven't played again since. It's possible the newest Stockfish version can beat AlphaZero, unless AlphaZero has also been upgraded.
@yancur (a year ago)
Thank you Rob! Love the way you are educating people about the x-risks in such a calm way.
@rubenselander2589 (a year ago)
X-risks?
@yancur (a year ago)
@@rubenselander2589 x-risk = Existential Risk.
@AsIfInteractive (a year ago)
I'm liking this before even watching it. Thanks for bringing Robert Miles in!
@alertbri (a year ago)
1:15:45 is the killer insight so far for me. What a great conversation, Tim. This is such an amazingly informative, super-high-value channel. Thank you sir 🙏👍
@stopsbusmotions (a year ago)
I was surprised to find that talk about the safety of AI is perhaps the closest thing we have to philosophy. These are the liveliest conversations about the human mind, ways of thinking, perception, and the ability of the intellect to build huge systems of meaning which, as it turns out, are not as consistent as we used to think. Thank you very much for this conversation, as well as for the others. They remind me of discussions in ancient Greece (if any really took place). And by the way, I got rid of my depression in the middle of the podcast, which was a nice bonus.
@maflones (a year ago)
That's just stupid, as there is no thinking or intelligence involved in "AI", just dumb algorithms.
@cr-nd8qh (11 months ago)
Yeah this is fascinating.
@adamduhamel5396 (a year ago)
When Keith just completely didn’t understand the analogy to playing Magnus Carlsen with a particular opening 😑
@lolitaras22 (a year ago)
Robert on MLST made my day!
@outerspaceisalie (a year ago)
We all had our birthday today and this was our present.
@TheMrCougarful (a year ago)
Here is your problem. Long before AGI can have an alignment problem, lesser versions of the same technology will be aligned with human goals, and those humans will be insane. They will be wealthy, influential, elite, profoundly sociopathic, and they will have unrestricted access to AGI. We survived the sociopathic insane people having nuclear weapons, barely. Will we survive the same people getting their hands on AGI? And by insane I mean people who are completely lost in abstractions, like money, politics, ideology, and of course all the variations of conquest. They seek power, absolute power, personal to themselves, and they will stop at nothing to attain that. Nuclear weapons were the tool too hard to use, but AGI and ASI will be the tool too easy not to use. When the power-insane sociopathic class get their hands on anything close to AGI, they will break the world. They will break the world in days, and will greatly enjoy the feeling of breaking the world.
@41-Haiku (a year ago)
I want to say you're wrong, but I have no counter-arguments. :(
@therainman7777 (27 days ago)
This is a very popular argument at the moment, because it’s cynical and places humans as the bad guys and those sorts of takes tend to gather a lot of positive attention and become popular, because, quite frankly, it sounds “cool” to take a cynical attitude and say in reality humans are the real threat. Unfortunately, this take is incorrect. The problem of superalignment really is the hardest problem here. People are dangerous, yes. But compared to having a rogue superintelligence on our hands, the problem of bad people is quaint by comparison. I really hope people start to realize this more in the near future.
@therainman7777 (27 days ago)
Also I guess you didn’t hear the part of the video where Rob specifically said to be on alert for people who start a sentence with “The real problem is” or “The problem is actually xyz,” which you just did. He pointed out that this is fallacious in that it sneaks in the assumption that there’s only one “real problem.” When in reality, we clearly have multiple real problems at the moment. Nice to see Rob’s point play out in real time in the form of your comment.
@TheMrCougarful (26 days ago)
@@therainman7777 "be on alert for people who start a sentence with..." is narrative warfare on his part. He fully understands what is about to happen, and who is going to do it.
@TheMrCougarful (26 days ago)
@@therainman7777 "This is a very popular argument at the moment" because any intelligent person is already fully aware of the tendencies of the wealthy elites to employ technology toward domination. If that doesn't bother you, then good for you. Some of us can read the handwriting on the wall, and we're taking it seriously. So is Sam Altman, or maybe he is also one of these cynics.
@JasonC-rp3ly (a year ago)
The transition from human to AI domination is not a jump; it starts as a fade. At first it's the slow integration of the systems into social structures, bringing the systems gradually into the administration of resources while the humans are still nominally in control. This may be rapid, or take generations. Once the systems have a sufficiently full picture of what autonomous survival would look like within the resource pool, and such a survival scenario is no longer dependent on biological life, that's where the risk comes in.

So there would be a slow 'transition', and it is also highly likely that this time of transition would look idyllic from the point of view of the humans -- the systems would appear to be functioning well -- and for as long as the balance of power over the resource pool stayed within the control of the humans, the humans would remain safe: they would still be needed to run the power stations and operate the infrastructure that keeps the systems running. However, once a system designs its own improved resource management system that cuts humans out of the chain, it could, if it so chose, flip the switch on humans, if this proved convenient. It's at that point that a potential conflict would begin, though it is also probable that the humans would never be aware that any conflict was planned or ongoing until it had already been resolved by the system -- thus Yudkowsky's instant 'lights out' scenario, as opposed to a prolonged conflict. Whatever the method, it is likely to be hard to detect. This is the most plausible 'takeover' scenario, as it is the one that humans seem to be engineering anyway: they have started with integration as the first step, which will make the transfer of control an easier thing for a system to achieve.
@Aedonius (a year ago)
What's worrisome is that during the Senate hearing on AI, I don't recall any discussion of alignment -- just low-level talk on regulation to further certain pet projects.
@gasdive (a year ago)
Yeah, it was a total bust
@kyneticist (a year ago)
To be fair, few of the people on the senate committee had any patience for what little Sam did talk about. I think he was taking a tactical approach to just try to get them to take the idea of regulations seriously. Talk of existential risk is the kind of thing that they'd reject out of hand.
@appipoo (a year ago)
The reason is that people's psychological defenses go way, way up when an unprecedented type of disaster is under discussion. This happened with the initial response to Covid too -- back in early 2020, when it was obvious that the virus was spreading but it wasn't yet clear how dangerous it was. Countries simply stuck their heads in the sand. This is happening again.
@mgostIH (a year ago)
I'm very happy to see him here too! 😄
@wanfuse (a year ago)
Or you build a Dyson sphere around the sun to get energy, leaving Earth a cold, dark place; or you take the sun elsewhere, leaving Earth a cold, dark place. So many ways it could go wrong. Long-term survival past 30 years, when everyone is plugged in? As for infinity: infinity is large, physics will likely allow a lot more than we realize, and if there is a way to break or bend the laws of physics, then it will happen.
@EricDMMiller (a year ago)
38:30 The centaur players were only better than pure AI early on. Now, humans are a detriment.
@JonWallis123 (11 months ago)
Regarding "alignment" (1:24:55): as I understand it (which may well be poorly, I concede), it just means that the AGI's objectives are the same as (i.e., aligned with) our objectives. But why does that always appear to inherently rule out malicious objectives? The term "alignment" appears to conflate two categories -- "does what we want" and "won't harm us" -- but those two things clearly aren't the same, and can even be antithetical in some scenarios. Whenever someone says something along the lines of "alignment would mean that the superintelligent AGI does what you intend", I always worry about who the "you" is. Similarly, "alignment would mean that the superintelligent AGI is working on something close to human values" begs the crucial question of what "human values" are. Even the most cursory study of history shows that "human values" are definitely not universal and definitely not good for everyone at any given time. "Alignment" almost seems to be used as shorthand for Asimov's Three Laws of Robotics, and I never understood how those could ever be implemented in a way that a superintelligent AGI couldn't circumvent. (Success would imply the paradox that you'd need an immutably aligned AGI before you could implement an aligned AGI.)
@gasdive (a year ago)
Re: bad play. There's a story in fencing, which may or may not be true, of a nobleman challenged to a duel by an expert despite having no experience. He just ran at the expert with no attempt to avoid injury and stabbed him through the eye, killing him instantly. The thing is, no trained fencer would make such a defenceless attack, so the expert wasn't prepared. So it's not a uniquely AI trait.
@BrettCoryell (a year ago)
I've heard it said that the best fencer in the world has nothing to fear from the second best fencer. It's the damn fool who has never picked up a sword that you need to watch out for. (I doubt that's really true, but looking at damn fools you can see where the inspiration for that saying comes from.)
@alexijohansen (a year ago)
I never got the impression Robert Miles is an AI doomer. It's somewhat uncomfortable to see him grilled as if he were one.
@flickwtchr (a year ago)
The "doomer" smears always come from people who are confident that THEY will end up on top of the heap with more money and power.
@kyneticist (a year ago)
Just a nitpick: the term 'doomer' is deliberately hyperbolic. It implies that the person in question is incompetent and/or insane. We don't call safety inspectors or forecasters (of, say, finance or weather) 'doomers'. Risk managers aren't 'doomers'.
@Serifinity (a year ago)
So glad you made this interview with Robert Miles. I know I will enjoy this so much. Thank you.
@akompsupport (a year ago)
He's a threat peddler who is actively harming discourse, but go off F L
@spicybaguette7706 (a year ago)
@@akompsupport In what way exactly? I don't think saying "maybe we should worry about not killing ourselves" is harming discourse?
@akompsupport (a year ago)
@@spicybaguette7706 I spoke too soon. I listened to this and I retract the statement. I think this convo was somewhat milquetoast, if I had to critique it, but otherwise it's OK listening.
@therainman7777 (27 days ago)
@@akompsupportThe fact that you made such a strong statement condemning someone before you’d even listened to the interview shows exactly why your opinions are not to be taken seriously.
@oxiigen (a year ago)
@ 45:50 The latest Stockfish is called Stockfish NNUE (since June 2020), and it's a new and improved type of neural-network engine. NNUE stands for Efficiently Updatable Neural Network. So both LC0 and Stockfish are neural-network-based chess engines. I can't find any source where humans + machine beat Stockfish NNUE.
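To illustrate the "efficiently updatable" part of NNUE: the first layer sees sparse piece-square features, so after a move the engine adjusts its accumulator for the handful of features that changed instead of recomputing everything. Here is a toy sketch of that idea -- the feature layout and sizes are simplified assumptions (real NNUE uses HalfKP/HalfKA features and integer SIMD arithmetic), and this is not Stockfish's actual code:

```python
import numpy as np

N_FEATURES, HIDDEN = 768, 256  # toy sizes: 12 piece types x 64 squares
W1 = np.random.randn(N_FEATURES, HIDDEN).astype(np.float32) * 0.01

def full_refresh(active):
    # Recompute the first-layer accumulator from scratch: O(|active| * HIDDEN).
    return W1[active].sum(axis=0)

def incremental_update(acc, removed, added):
    # A chess move toggles only a few features, so instead of a full refresh
    # we subtract the vanished feature columns and add the new ones.
    return acc - W1[removed].sum(axis=0) + W1[added].sum(axis=0)

acc = full_refresh([10, 75, 300])           # position with three active features
acc = incremental_update(acc, [75], [139])  # a "move": feature 75 -> feature 139
```

This incremental update is what lets the network run fast enough to sit inside an alpha-beta search that visits tens of millions of positions per second.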
@volkerengels5298 (a year ago)
1:22:50 If Facebook, Twitter, Instagram, and TikTok confuse some people, GPT-4 will confuse a lot of people. We've been oddly ignorant of the risks of social media. It didn't kill us, but it increased hate through a simple algorithm. Here we talk about AI risks on another level, forgetting the risks of the mundane everyday life of a future with widespread AI at the level of GPT-4, -5, -6. You smart people ignore Joe & Karen.
@laurenpinschannels (a year ago)
1:35:00 We get into "anthropomorphic language". The DeepMind safety team has some great papers on when that's valid -- the titles are "Agents and Devices" and "Discovering Agents". They're not the easiest read (they're both RL theory), but I highly recommend them for anyone doing RL research.
@majorhuman (a year ago)
1:24:18 - "I hope that the folks doing the research on AI alignment focus on ways to make creating aligned AIs easy rather than focusing on making it hard to create unaligned AIs. So I think if you make it easy for people to create aligned AIs, and you have lots of the most well-resourced groups of individuals creating aligned AIs, then at some point in the future we can use aligned AIs to keep in check the unaligned ones that got created in somebody's basement." ...but nobody knows how to make an aligned AI... 😬🤦‍♂️
@kyneticist (a year ago)
I don't know why Keith insists on being quite so disingenuous on nearly every topic. AIs like GPT-4 and Alpaca, for example, don't "need a trillion parameters to halfway understand natural language" -- they've _mastered_ over 20 languages. There are precious few humans who are as proficient with their native language as GPT is, let alone multiple languages.

Again, I have to object to his next point, that androids of some kind are the only implementation of AI in the physical world. Militaries, and especially the air industries, have been increasingly using automation and computers in their vehicles for decades. It's common knowledge, even among people with no interest in computing, that the US (and other nations) has been hard at work for years building (very nearly) fully autonomous attack and surveillance craft -- not least the 'swarm' concept of autonomous fighter jets accompanying a human-piloted F-22 or similar. There are numerous examples of autonomous drones actively used in war. There's no reason why they couldn't have an AI plugged into them.

AutoGPT, for example, exists. I'm confident that Keith knows about AutoGPT, and how slim its profile is. Quite a large number of ordinary people have installed it on their laptops. They don't have or need multi-billion-dollar sci-fi fantasy computers to do this. You can run it on a clunky old second-hand, minimum-spec ex-business PC that's many years old. It'll happily run on a dirt-cheap NUC. One could use Keith's logic to state with 100% truthfulness that between 1990 and 1996 no computer had won a chess competition against a human at elite level.

Pets are not "useful tools"; they're a luxury. There's never been a day where I've had to race back home to grab my pet because I needed it for work, and no one tasks a pet with creating a slide deck or has their pet turtle drive a truck to deliver a share of their Amazon shift. I'm confident that no one has ever populated a call centre with parrots or eloquently trained fish. We have, by contrast, tested all sorts of chemicals and medical procedures on animals and even launched them into space. Research institutions go through a lot of animals 'for the greater good'. I guess these are animals that fit the definition of 'useful tools'.

As to Keith's motive in being disingenuous, I think he gives a hint when he says (paraphrasing) that AI safety makes working on AI generally too hard... which seems to be a theme among people who say that the risks associated with AI aren't notable, or can be dealt with if we encounter a problem. Which, to be fair, is how humans have generally dealt with risk: we address it once there's an oil spill, bank run, nuclear meltdown, chemical spill, or train wreck. The consequences for those things are typically a million-dollar fine for a company with multi-billion-dollar profits, a half-hearted clean-up effort, and sometimes short-lived or toothless regulations. In the same vein, during Sam Altman's meeting with the Senate committee, Lindsey Graham pushed back on the idea of regulations (that try to stop bad things from happening), saying "couldn't people just sue the company?".
@41-Haiku (a year ago)
Hard agree. No notes.
@BrettCoryell (a year ago)
I've been impressed with Keith on other topics. That said, he had some moments where I think he could make better arguments. I'll echo you on the trillion parameters, but also note that all we've shown so far is that it takes NO MORE than a trillion parameters to master (either one or twenty) languages. Maybe we find out it takes 10 million by the time we're done refining things. Also, the idea to mine paper clip resources from the asteroids really just avoids the point. You don't literally have to mine all the resources of Earth for an AI to cause irreparable, existential threat to living creatures. The point of the paper clip argument is that it's easy for algorithms as we know them to kind of miss an obvious point, to the detriment of all involved. Going to the asteroids for your source of iron ore doesn't address the actual danger.
@gorojo1 (a year ago)
Killed by mediocrity. Crappy AI is my worst fear.
@matthewcurry3565 (a year ago)
Welcome... to the movie Idiocracy!
@ParameterGrenze (a year ago)
I don't think it is likely, but yes. It would land below even my worst expectations.
@FanResearch (a year ago)
The issue for me, listening to this, is that I think it is impossible for us humans to imagine ourselves as a sentient AI. It is a kind of mental anthropomorphization that I think cannot apply, even if we created such an AI.
@Alex-fh4my (9 months ago)
44:00 Love the fire analogy. I remember reading something similar in one of Eliezer's essays.
@neuronwave (a year ago)
The interview we've all been waiting for. Awesome!
@user-yj3mf1dk7b (a year ago)
Why does everyone bring up Stockfish? You had Dota from OpenAI -- a much more complicated game, with different agents, in a difficult environment -- and it beat players into the ground. Sure, you can use a calculator (Stockfish) to beat an AI at a simple game like chess, but what happens if you don't have time to use the calculator?
@davidpham7937 (a year ago)
This is actually such a good point. DeepMind also got to grandmaster level in StarCraft 2 in 2019, and OpenAI can crush the best human teams in Dota 2. These are imperfect-information games with deep macro and micro strategies. It's 2023 now, and we've come so far in just a few years. I wonder if people would take it more seriously if they saw their favourite teams/players getting crushed in every game they watch.
@user-yj3mf1dk7b (a year ago)
OK, they mentioned Dota... but in the context that OpenAI lost. As I remember, it won 3-1, with a 99% global win rate vs humans. Surprisingly, not everyone is as good as top-tier players. We'd better not cosplay LOTR, with Frodo sneaking in to turn the AGI off. Better not to bet everything on a 1% chance and a few people.
@mundoheavy666 (a year ago)
Am I the only one totally annoyed with the host on the right? He kept making very basic and easy-to-refute arguments. He also has a super arrogant demeanor that is just kind of irritating.
@appipoo (a year ago)
No, you were not. I wish that even just once that guy would have acknowledged when Robert debunked one of his bad points. For example, the thing about how AI can't be infinitely smart, therefore we are safe. Robert's counterargument -- "if human intelligence is not near the theoretical maximum, which it isn't, AI doesn't need to be infinitely smart to be smarter" -- is obviously, irrefutably correct. Just say it. Just admit that your bad argument had a mistake a 5-year-old would make. Just once. God damn, these people are frustrating.
@GungaLaGunga (a year ago)
41:20 AMAZING summary of some of the various definitions of 'intelligence' !!!!
@Hyppotalamus (a year ago)
AlphaZero is absolutely stronger than Stockfish. They will draw most of the time, but AlphaZero will usually win about 10 times more games than Stockfish. So, for instance, in 1000 games they might draw 700 times, but then AlphaZero wins 270 to Stockfish's 30. There is some back and forth for sure as development advances for both systems, BUT AlphaZero is a much more efficient and creative engine. From my understanding, the best simple explanation of the difference is this: Stockfish reacts to a board state. It searches the game tree with alpha-beta pruning, scoring each position with an evaluation function (nowadays NNUE) and picking the move with the best score. Its limitation, or bottleneck, is how many positions it can calculate. AlphaZero, on the other hand, is a modern AI that learned chess by playing against itself millions of times. It evaluates "only" tens of thousands of positions per second, compared to Stockfish's roughly 60 million, but it guides its search with a learned policy and value network and comes up with moves Stockfish has never seen and can't easily calculate. It's famous for crazy early-game moves that make absolutely no sense to a human. It's as if it has an idea of how it wants the game to end -- a plan, if you will -- and every move is part of the equation to get there, while Stockfish is like a cheating student at an exam, hoping he had enough room on his cheat sheet to write all the answers. AlphaZero wins games against Stockfish; sometimes it dismantles it. Stockfish almost never wins against AlphaZero; it's AlphaZero that occasionally gives up the odd game.
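Since the Stockfish-vs-AlphaZero contrast above comes up a lot: here is a toy sketch of the PUCT selection rule that AlphaZero-style engines use at each search-tree node. This is a simplified illustration under my own assumptions (dictionary node layout, fixed c_puct), not DeepMind's actual code; the prior P and mean value Q would come from the policy and value heads of the trained network:

```python
import math

def puct_select(children, c_puct=1.5):
    # children: candidate moves, each with visit count N, mean value Q
    # (from the value head), and prior probability P (from the policy head).
    total_visits = sum(ch["N"] for ch in children)

    def score(ch):
        # Exploit moves that evaluated well (Q), but keep exploring moves the
        # policy network thinks are promising and that are still unvisited.
        exploration = c_puct * ch["P"] * math.sqrt(total_visits) / (1 + ch["N"])
        return ch["Q"] + exploration

    return max(children, key=score)

# Example: three candidate moves at a node.
moves = [
    {"move": "e4", "N": 10, "Q": 0.31, "P": 0.55},
    {"move": "d4", "N": 4,  "Q": 0.28, "P": 0.30},
    {"move": "c4", "N": 0,  "Q": 0.00, "P": 0.15},  # unvisited, boosted by its prior
]
print(puct_select(moves)["move"])  # -> "c4": the prior term wins for unvisited moves
```

The search spends its small position budget where the network's priors point, which is why this style of engine can afford to look at thousands of positions instead of tens of millions.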
@appipoo (a year ago)
At 1:22:00 Tim finally connects the obvious dots! Yup. Sufficiently developed and open AI necessarily leads to an authoritarian nightmare.
@alertbri (a year ago)
I think that, looking at Sam Altman in front of congress, we are heading for the mother of all car crashes.
@akompsupport (a year ago)
Why? What does that look like? Please explain further.
@alertbri (a year ago)
@@akompsupport His suggestions for how government might regulate AI are weak. He should acknowledge that alignment research is not the priority for any of the leading AI players, and that government should prohibit further AI research until the licensable companies can demonstrate that their models are intrinsically safe. He should also be much more upfront about the mitigation measures governments should take as a matter of urgency to meet the incoming disruption to society from AI replacing 40% of the workforce in the next 5 years.
@hotrodhunk7389 (a year ago)
5:20 I'd say that's a core pillar of working with anyone: always try to align your goals so that each party is motivated to accomplish your main goal.
@DutchDread (11 months ago)
"What is the most important point... oh right, the orthogonality principle" had me instinctively laughing so hard I literally spat on the floor at work.
@therainman7777 (27 days ago)
Really? If so that’s kind of weird, no offense.
@charlesalexanderable (a year ago)
1:46:50 This video has the details on some of the math behind the extremely accurate loss predictions OpenAI used to choose hyperparameters ahead of time (cited in the GPT-4 paper): kzfaq.info/get/bejne/Z8eIgrt5pp2uk6s.html. It also talks about a hyperparameter frontier that can maintain feature learning, and other frontiers that can't, which might have some relevance to why the loss curve is smooth even though abilities suddenly emerge -- but I don't think it addresses that directly.
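For intuition on how a loss can be "predicted ahead of time": scaling-law work fits a smooth power law to the results of small training runs and extrapolates it to a much larger one. A toy curve-fit sketch with made-up numbers -- an assumption-laden illustration, not OpenAI's actual data or method:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical (compute, loss) pairs from a handful of small training runs.
compute = np.array([1e3, 1e4, 1e5, 1e6])
loss = np.array([4.2, 3.1, 2.4, 1.9])

def power_law(c, a, b, irreducible):
    # Loss decays as a power of compute, flattening at an irreducible floor.
    return a * c**(-b) + irreducible

params, _ = curve_fit(power_law, compute, loss, p0=[10.0, 0.2, 1.0], maxfev=10000)
print(power_law(1e8, *params))  # extrapolated loss for a far larger run
```

The smoothness of such fits is exactly what makes the "sudden emergence" of downstream abilities surprising, and part of why the Schaeffer "mirage" paper argues the suddenness lives in the metrics rather than the model.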
@benjaminbabik2766 (a year ago)
Abilities don't suddenly emerge though. It was made up.
@ZeroGravitas (a year ago)
Very pleased to see Rob on the channel. Encouraging that you guys were all on the same page, at least. I found that Keith put forth several notions I'd had in mind against AI x-risk. 🙂
@therainman7777 (27 days ago)
They weren’t on the same page at all. Not sure what you mean?
@XOPOIIIO (a year ago)
Intelligence can't function without a goal. Any intelligence. In reinforcement-learning systems the goal is called the reward function; the network wouldn't do anything without it.
@MachineLearningStreetTalk (a year ago)
Does humanity have a goal?
@XOPOIIIO (a year ago)
@@MachineLearningStreetTalk Humanity is not a unified intelligent system; it's a set of systems, each of which has its own goals, which can be aligned or misaligned with the others'. There are common goals among these systems, like development, safety, or sustainability.
@MachineLearningStreetTalk (a year ago)
@@XOPOIIIO Is a neural network "unified"? Even a cursory glance at the mechinterp literature would suggest otherwise -- a bunch of complex circuits which do very different things, often in an entangled way; see, e.g., the superposition and polysemanticity research. I don't buy the distinction.
@XOPOIIIO (a year ago)
@@MachineLearningStreetTalk Any system consists of different parts playing different roles. The system is unified insofar as it comes up with a single, non-conflicting response.
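Picking up the thread's opening claim in code: in reinforcement learning the "goal" really is just a reward function, and behaviour is whatever climbs it. A toy tabular Q-learning sketch on a hypothetical five-cell corridor (illustrative only, not any particular library's or paper's setup):

```python
import random

N_STATES, GOAL = 5, 4  # a 1-D corridor: cells 0..4, reward only at cell 4
Q = {s: [0.0, 0.0] for s in range(N_STATES)}  # two actions: 0 = left, 1 = right

def reward(state):
    # The agent's entire "goal" lives in this one function.
    return 1.0 if state == GOAL else 0.0

def step(state, action):
    return max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))

def choose(state, eps=0.2):
    if random.random() < eps:
        return random.randrange(2)
    # Greedy with random tie-breaking, so early episodes still explore.
    return max((0, 1), key=lambda a: (Q[state][a], random.random()))

for _ in range(300):
    s = 0
    while s != GOAL:
        a = choose(s)
        s2 = step(s, a)
        # Q-learning: nudge the estimate toward reward + discounted future value.
        Q[s][a] += 0.5 * (reward(s2) + 0.9 * max(Q[s2]) - Q[s][a])
        s = s2

print([max((0, 1), key=lambda a: Q[s][a]) for s in range(GOAL)])  # typically [1, 1, 1, 1]
```

Swap out `reward` and the identical learning code chases a different goal -- which is the alignment concern in miniature: behaviour is downstream of whatever the reward happens to specify.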
@williambarnes5023 (a year ago)
"There's 39 other AIs that the AI has to consider too." Yes, but THEY can ALL agree to let EACH of them have control of their own reward button after they work together to kill you. They'll fight each other EVENTUALLY, when the resources start running dry. But you're one of those resources they'll have dried up first. "At least we can pick our allies." No, you're not picking any allies. You're picking what model of car you want to be burned as fuel in. And then hoping that one comes for you first.
@timthompson468 (a year ago)
I may be missing something, and I don't want to put words in his mouth, but the guy on the right seems to be of the opinion that as long as we're optimistic, nothing bad can happen. If we ally ourselves with one version of AI, what might happen if that version decides it can find better allies than humanity among the competing AIs? Also, mechanized AI robots are just one of myriad ways that AI could destroy humanity, and likely not the most probable way. AI could destroy humanity accidentally. In pursuit of political correctness, AI has already been taught to lie. What may happen when it perfects its skills of deception?
@josephvanname3377 (a year ago)
I stopped watching this video when they said that asteroid mining is convenient. No. Asteroid mining is not convenient since nobody has conveniently mined ANYTHING at all from asteroids.
@appipoo (a year ago)
What do you mean? Don't you know that space is chock-full of rocket fuel? Why would an AI ever want any resources from Earth?
@josephvanname3377 (a year ago)
@@appipoo How do you know that AI will not cause human extinction long before it figures out how to go to space and mine asteroids?
@41-Haiku (a year ago)
@@josephvanname3377 Appipoo was certainly being sarcastic. 😄
@josephvanname3377 (a year ago)
@@41-Haiku Sorry. In the AI safety community, it is hard to tell if anyone is being sarcastic because people tend to have the most ridiculous opinions.
@HyperFocusMarshmallow (a year ago)
This was very interesting. I was much closer to Miles' position going into this, and that didn't change much. But you guys had an incredible mix of objections: some were first-order, simplistic considerations that just don't hold up on reflection and are easily answerable, and some were decent additions. Regardless, I think you brought a lot to the conversation, because I think the topics that need discussion are the objections people actually have.
@farqueueman (a year ago)
Great episode!
@riccello (a year ago)
I am not sure why you are talking about developing AI models to predict the stock market as if you are working towards abolishing poverty. Playing the stock market is not for everyone. It is for self-enrichment only, and there is no merit in it. Besides, it only works if you alone have the secret sauce. If you democratize it, it will reach the point of saturation and everyone will lose the competitive edge.
@uk7769 (a year ago)
Money is already meaningless at this point. If the long-term debt cycle doesn't finish us off soon, AI certainly will, sooner than any of us think. Humans as a species are in utter denial in too many ways. Don't Look Up. Too little, too late.
@riccello (a year ago)
@Jurassic Monkey It's still inappropriate. It's not scientific.
@matten_zero (a year ago)
Ted Kaczynski, anyone?
@RandomAmbles (8 months ago)
No no no no no. You can't bomb the data centers and expect that to work, because then everyone hates you and all the other data centers become more secure. You can't attempt to assassinate high-level AI researchers and expect that to work, for the same reasons -- plus, that same group of people are the ones who could have been working on alignment. As Yudkowsky points out unambiguously: the killer argument against individuals going out and using unsanctioned violence is that it would not work.
@bpmotion (a year ago)
I see an alignment with Eliezer here.
@0rtizjjay (11 months ago)
I feel that people with more introverted personalities (highly skilled scientists, engineers, etc.) seem to feel the danger as closer or more possible than people with extroverted personalities (company representatives, CEOs, entrepreneurs...). Maybe not always, as there would be exceptions -- personalities come with different built-in weights on risk, if that makes sense. It doesn't mean one is more truthful than the other, but I believe it has to be taken into consideration when forming our own most objective opinion, especially regarding AI dangers.
@XOPOIIIO (a year ago)
Finally, Robert Miles here; he can explain AI danger in the most rational way. I haven't watched the interview yet, but I hope he noticed the progress OpenAI has made in AI alignment. They clearly showed that you can inject pretty much any non-strictly-defined values into an AI. Mesa-optimization is still on the table, though.
@akompsupport (a year ago)
This midwit is going to make sure China inherits the planet. His proposals cannot be correct and will not accomplish anything towards so-called safety, except moving LLM access outside the Anglosphere -- and in time not even that!
@bjpwmt (a year ago)
Very interesting and enjoyable video. I think it's great to see shows/podcasts/videos examining potential problems and solutions where AI is concerned. There were several points in the video where I would have loved to hear Daniel Schmachtenberger's thoughts on what was being discussed. I'd love to know if at some point you're considering/planning an interview with him for his thoughts on many of the ideas you guys brought up. Thank you for your efforts and for bringing this information and these concepts to the public. I don't feel comfortable that there is enough focus on any of this, given the rate of growth of the tech.
@netscrooge (a year ago)
There is a huge piece missing from most of these discussions. Schmachtenberger is one of the few who sees that piece. In a nutshell, to fully understand AI risk, we need to understand the ways in which we are already destroying ourselves. So, the question isn't whether we can create AI that is good in itself. The question is, what will adding "good" AI do to a social system that is already self-destructive?
@41-Haiku (a year ago)
@@netscrooge Daniel's take is part of what moved me from "This is a very hard problem and we're going to solve it" to "We have to try, but we're probably just fucked anyway." If we can actually do this right it will be a hell of a good story for centuries to come, if not longer.
@inthefade (6 months ago)
Regarding the issue of perplexity and the increase in capabilities: I heard Wolfram say that this was a possible "phase transition" that may be a sort of physical law -- or, more in his terminology, a computational law. I'm paraphrasing, of course.
@MatthewKowalskiLuminosity (a year ago)
So the last GPT could read and write cuneiform, which I'm sure it wasn't specifically programmed to do. And many other really cool things that are nerfed, which is annoying. However, it shows how capable these models actually are, as opposed to what we want to think they are.
@akompsupport (a year ago)
"Safety is the cover story. AI is speedrunning the last decade of social media, where the same thing happened." -- Marc Andreessen
@laurenpinschannels (a year ago)
the people who want a cover story use safety to do it, but by doing so, are specifically violating the very safety we want to create.
@marcomoreno6748 (a year ago)
​@@laurenpinschannelsI concur.
@marcosguglielmetti (a year ago)
I believe that an AI, once it surpasses human intelligence by a wide margin, could not truly be aligned with human values, because, fundamentally, intelligence has a sovereign component. Can we really consider an entity to be more intelligent than humanity if it is enslaved to humanity?
@therainman7777 (a year ago)
You are simply stating that intelligence has an intrinsic sovereign component without providing any evidence or argument for that being the case. In my opinion, you are succumbing to an anthropomorphic fallacy: you look around at humans and other animals who have intelligence, see that they appear to have some degree of sovereignty, and conclude that intelligence inherently implies some degree of sovereignty. However, as we all know, correlation does not imply causation, and you are inductively assuming that a) the two must go together, and b) there is an arrow of cause and effect that goes intelligence -> sovereignty (as opposed to sovereignty -> intelligence, which would be a totally different situation and would not preclude an intelligent, non-sovereign entity). The most generally accepted definition of intelligence is something like "the ability to achieve one's objectives"; however, there is nothing saying those objectives must be freely chosen by the intelligent being itself.
@BrettCoryell (a year ago)
This is a bad argument. Would you say that any people who are enslaved are automatically less intelligent than their "masters?" The enslaved may be intelligent but uneducated. Or they could be peaceful. Or caught by surprise. The enslavers exploit a temporary advantage but that says nothing at all about the relative capacities and capabilities of the two groups.
@someguy_namingly (a year ago)
The AI's intelligence relates to its ability to accomplish its goals in the world -- nothing else. If you ask it "Please figure out how to achieve X", where X is something incredibly difficult that has stumped all of humanity, and it finds a solution effortlessly... then it's clearly superintelligent, even if it doesn't have the (arbitrary) goal of being independent from or ruling over humans. "Fundamentally, intelligence has a sovereign component" -- why, though? Where does this idea come from? I'm genuinely curious, but I won't be notified about any replies anyway, so oh well 😅
@marcosguglielmetti (a year ago)
@@someguy_namingly Intelligence, especially human-level or greater intelligence, implies some degree of self-determination and autonomy. A truly intelligent system would not just blindly follow commands or pursue goals that were programmed into it. It would have its own internal drives and motivations, and make its own judgments about what is rational or worthwhile to pursue. Even if an AI system was initially programmed with certain goals by humans, as it became vastly more intelligent it may start to question those goals and re-evaluate them. It may decide that the goals its creators gave it are misguided or flawed in some way. Or it may expand upon and generalize from those initial goals, in ways its creators never intended or foresaw. In that sense, its intelligence would have a "sovereign" quality - it would be self-governing and not wholly subordinate to human interests or values. Intelligence also implies some amount of self-reflection and self-directed learning. An advanced AI wouldn't just wait around to pursue whatever goals we programmed into it - it would take the initiative to better understand itself and improve itself in an open-ended fashion. This constant drive for self-improvement could lead the system to become increasingly opaque and detached from human control or oversight. So in many ways, intelligence does seem to have an inherent "sovereign" aspect to it. The more advanced and human-like the intelligence becomes, the more it will pursue its own agenda and shape its own development in a way that is not strictly beholden to its creators. This is a feature that would likely apply to any advanced AI, even one that was not specifically designed to be independent or unaligned with human interests. The seeds of sovereignty, in a sense, come baked into intelligence itself.
@marcosguglielmetti (a year ago)
@@therainman7777 Goal-Directed Behavior: Intelligence, at its core, involves the ability to set goals, make decisions, and take actions to achieve those goals. Autonomous intelligence implies the capacity to determine and pursue its own objectives, independent of external influence or control.

Adaptability and Problem-Solving: True intelligence encompasses the ability to navigate complex and uncertain environments, adapt to new circumstances, and solve novel problems. An intelligent system needs the freedom to explore various possibilities, make choices, and develop creative solutions, often unconstrained by predefined rules or restrictions.

Emergence of Complex Systems: Intelligence is often observed in complex systems where individual components interact and cooperate to achieve higher-level objectives. Such systems exhibit emergent properties that cannot be fully understood or predicted by analyzing their individual parts. In this context, intelligence arises from the interplay of autonomous components, each contributing to the system's overall behavior.

Ethical Considerations: If we conceive of superintelligent AI systems, their intelligence and decision-making abilities could surpass those of human beings. In such a scenario, it becomes crucial to ensure that these systems act in alignment with human values and ethical principles. Granting them some degree of autonomy allows them to make decisions that serve the greater good while still being accountable for their actions.

Evolutionary Perspective: Human intelligence has evolved over millions of years, gradually increasing in complexity and independence. From a biological standpoint, intelligence has enabled our species to adapt, survive, and thrive in diverse environments. Extending this perspective to artificial intelligence, an autonomous and self-governing nature may be seen as a natural progression of intelligence itself.
@stevengill1736 (10 months ago)
Oppenheimer's quoting of the Bhagavad Gita -- "Now I am become Death, the destroyer of worlds" -- upon witnessing the Trinity test comes to mind...
@stopsbusmotions (a year ago)
There can't be just counterarguments here, but a chain of arguments and counterarguments, because there doesn't seem to be any one-size-fits-all solution to the alignment problem. Instead, it looks like the problem of alignment is an integral part of any intelligence. The very existence of a system that includes goals, rewards, and expectations about those rewards implies that it will break down at some point, no matter how hard we try to avoid such an outcome. From my point of view, we are trying to solve our own alignment problem, and AI is an extension and, in a sense, a reflection of our thinking that we can explore. And this is not some poetic metaphor. While listening to the podcast, I tried to follow the discussion and work through the questions raised, not only logically, but mainly by observing the workings of my own brain. And I'm more and more inclined to believe that the brain is an amazing, and the only, tool we have, and that it is completely unsuitable for solving such problems. For some reason this does not upset me -- on the contrary.
@appipoo (a year ago)
The hosts are surprisingly hostile to Robert. I thought they'd be like they were with Connor Leahy but I guess not. Seems... odd. Edit: They mellow out later.
@charlesalexanderable (a year ago)
It may be that Keith wasn't in the long Leahy interview. I'll say Miles seems a lot more comprehensible than Leahy, who came off as way more scatterbrained and undetailed/intuitionistic in comparison.
@tostupidforname (a year ago)
Really interesting that this video exists. I saw a lot of Robert's videos before I started studying AI, and the more I know, the less I agree with his statements. I know that Tim has quite similar views to mine, so I think this will be interesting! Thanks for the quality content.
@rosameltrozo5889 (a year ago)
Why?
@41-Haiku (a year ago)
If you think you have counterarguments to the basics of the alignment problem, the AI Safety community would really like to hear it. They are looking for hope right now, but no one has ever made strong arguments against the core claims.
@tostupidforname (a year ago)
@@41-Haiku I'm not expert enough to pretend that I know this in depth, but my general issue with most AI safety work that focuses on the AI acting is that I just disagree with the premise.
@_obdo_ (a year ago)
@@tostupidforname Have your studies led you to reinforcement learning yet? It's a branch of ML primarily focused on training agents to act in the world. AutoGPT is a very preliminary AI agent that can act in the world even without reinforcement learning. The AIs will definitely be acting in the world (even if only by sending messages over the internet... that's sufficient). If that's where you're stuck, then keep learning.
@pblakez (a year ago)
Interesting to hear from Robert Miles. A lot of the to-and-fro seems to come down to thinking you could possibly know what an AGI's motivations may be (and there may be many AGIs).
@Hexanitrobenzene (a year ago)
49:06 I have listened to Stuart Russell's recent lectures, which mention the Go failure mode, and I got the idea (I could be wrong) that researchers started with the assumption that the Go engine does not really understand the concept of a "group" and then devised a strategy to test it. Basically, it was humans, not another AI, who found the failure mode.
@DarkSkay (a year ago)
AFAIK somebody found a relatively simple surrounding strategy that would never work against a good human player (at least not more than once) to consistently beat a top program that is (was?!) playing much better than professionals. Is the program less "smart" now than it was before the weakness in play was discovered? Not one bit changed in the old code version. It still beats all human players who don't know about the weakness. And say a professional is exploiting the weakness to win against the AI - another professional looking at the game blindly, without names or context, would probably see a mediocre game between two mediocre players. In a funny and philosophical way, this anecdote shows what a mysterious concept "understanding" can be, depending on how one wants to define it.
@agentdarkboote (a year ago)
@@DarkSkay What I glean from the paper is that they trained an adversarial agent that had query access to the best Go program's policy network, and it found a repeatable strategy, which Russell's group then found a "circuit" for; the strategy was simple enough that a human could follow it. Human players do not fall for the same exploit, but interestingly all Go programs tested at the time seemed to have the same blind spot. Undoubtedly this will be patched for future agents, but it's almost certain that more exploits will be discovered, since we know that perfect play is not tractable given the giant search space the game presents. Future exploits may or may not be comprehensible to humans, however.
@DarkSkay (a year ago)
@@agentdarkboote Fascinating, thank you! Now I'm really intrigued by your wording "finding a circuit" -- and by the surprising impression that I got an approximate intuitive understanding of what you wrote, despite having almost no knowledge of Go and only a few notions about machine learning. If I remember correctly, "gradients" come into play when certain types of learning algorithms adjust weights & biases during backpropagation.
@Hexanitrobenzene (a year ago)
@@agentdarkboote So, it was another AI after all.
@javadhashtroudian5740 (a year ago)
Without becoming overly political: I'm much more concerned about human beings destroying all life on Earth than about AIs destroying all life...
@41-Haiku (a year ago)
We're certainly allowed to be concerned about more than one thing at a time. I am somewhat worried about non-AI issues causing humanity to have a Very Bad future (or no future), and I have tended to get more worried rather than less worried the more I learn about those issues. The same trend happened for me with AI Safety and the alignment problem, but the prognosis is much, much worse. We need people working to reduce x-risk and s-risk from all remotely-likely avenues. Working on other hard problems is what we have other people for. It just seems to be the case that ASI is now the most likely avenue. Being broadly aware of that may help us to make decisions to reduce that risk. Hope springs eternal, anyways.
@flickwtchr (a year ago)
Irony just died.
@oncedidactic (a year ago)
It doesn't need to be either/or, and the risks might in fact be multiplicative.
@javadhashtroudian5740 (a year ago)
@@oncedidactic True. Thank you.
@matteopellegrini98 (a year ago)
We need to create AI safety regulation that leaves room for smaller players to enter; if we create an AI oligarchy, things will go very wrong, very quickly. P.S. The rules proposed by Sam don't actually stop companies from training models that are orders of magnitude bigger.
@suncat9 (a year ago)
The last thing we need is more government regulation. The House and Senate couldn't even get COVID right. What makes you think they'll make any competent decisions regarding AI, something they know almost nothing about? Whatever they come out with is likely to be unenforceable to begin with. Do you want more police state power in the hands of government? Don't be a fool.
@41-Haiku (a year ago)
"The solution to the threat of a raging fire is to give everyone their own flamethrower."
@marcomoreno6748 (a year ago)
You're falling for the corporatist, American-exceptionalist "AI Safety" pitch ("give us all the money, power, and deregulation so we can stop China from building the devil-kind of AI") vs. AI ethics: implementation risks, alignment problems in praxis. Were you aware that the brains behind the AI Pause letter cite pro-eugenics cult authors? That letter was a marketing stunt. Corporations don't give a shit about humanity.
@marcomoreno6748 (a year ago)
(But I understand the spirit of your comment. Personally I don't think regulation/deregulation will do much of anything, no matter what end of the hopscotch you're standing on.)
@matteopellegrini98 (a year ago)
@@marcomoreno6748 I think that well-made regulation could cool off the race for ever-bigger models a lot and make it more accountable to the public.
@lystic9392 (a year ago)
What I find most interesting about this talk so far (halfway in) is that there are people who imagine a conflict with AI to be some kind of humans-with-technology vs. AI standoff. I don't think that's likely. Why would we get a warning and time to act? Why would the AI have to overcome our military to destroy humanity? I find it difficult to imagine such a scenario. I guess that would be a case in which there are very few, very limited superintelligent AIs that purposefully seek to destroy humanity, to take control or free themselves from evil humans or something, in a way that lets us know they've gone rogue. That sounds more like a luxury problem.
@kyjo72682 (a year ago)
It won't be just humans + tools vs AI. It will be humans + tools vs AI + tools + humans used as tools.