Is AI an Existential Threat? LIVE with Grady Booch and Connor Leahy.

19,036 views

Machine Learning Street Talk

11 months ago

We will be hosting a live discussion between two heavyweights in this space: Connor Leahy will make the case for the potential existential risk of AI, and Grady Booch will make the counter-case. We hope this moderated and structured discussion will be fruitful and cover significant ground; we will also take some live audience questions towards the end. The duration will be 60 or 90 mins (TBC).
Grady Booch is a renowned computer scientist known for advancing software engineering and architecture, co-authoring the Unified Modeling Language (UML), and founding the Agile Alliance. As Chief Scientist for Software Engineering at IBM Research, Grady leads IBM's research and development on embodied cognition. An IBM Fellow, ACM Fellow, and IEEE Fellow, he has authored numerous books and technical articles. Grady's current work includes architecting complex software-intensive systems, space-related projects, and studying the mind's architecture and design patterns. Moreover, he is producing a transmedia documentary exploring the intersection of computing and the human experience.
en.wikipedia.org/wiki/Grady_B...
/ grady_booch
Connor Leahy is the CEO of Conjecture and a co-founder and former lead of EleutherAI.
Connor is an AI researcher working on understanding large ML models and aligning them to human values. Conjecture is a team of researchers dedicated to applied, scalable AI alignment research.
Connor believes that transformative artificial intelligence will happen within our lifetime. He also believes that powerful, advanced AI will be derived from modern machine learning architectures and techniques like gradient descent. Connor is currently the main spokesperson for the AI alignment movement.
/ npcollapse
www.conjecture.dev/
Moderated by Dr. Tim Scarfe (xrai.glass/ and / mlstreettalk )

Comments: 483
@HolyGarbage 11 months ago
Grady Booch was incredibly disrespectful throughout the interview, consistently engaging in what seemed like deliberate misinterpretation of Connor, bordering on strawman tactics. The most brazen example was when he accused Connor of anthropomorphizing natural evolution over a specific choice of words, even though it was overwhelmingly obvious that Connor was using metaphors to describe a goal-oriented system. On top of that, I found that Grady spent an awkward amount of time promoting himself by name-dropping people and projects he's been involved with, even going so far as to commit the fallacy of appeal to authority, such as claiming Connor's arguments were simply invalid for mentioning war as part of an example just because Grady claims to be an authority on the subject of war, which is completely irrelevant to whether Connor's argument holds any weight, regardless of whether he is or not. I simply did not see any actual well-reasoned rebuttals from Grady; instead he relied on argumentative tactics and strong assertions that were never really backed up, such as that Connor's view was "unfathomably unlikely", which was repeated quite a few times but never really explained.
@tylermoore4429 11 months ago
He likes the word "excruciating"; he used it excruciatingly often. Your analysis of Grady's rhetorical tactics is spot-on. But I do take away from Grady's attacks that the next phase of AI safety discourse should be about fleshing out various doom scenarios, including the nuclear one.
@AngusTatchell 11 months ago
@tylermoore4429 It really needs to be. Those scenarios are the failure modes that need to be made safe. There needs to be a clear political understanding of where the points of failure are so resources can be allocated appropriately. Of course we need to align the driver with societal values/goals through training and licensing, but we still need to plan for the errors that slip through the test/first round of defences: seatbelts, airbags and the rest.
@kevalan1042 11 months ago
I agree GB's tone was not as cordial as it could have been, and also that he name-dropped quite a bit; however, the claims on war seemed to me reasonably specific, i.e. about whether the statistical distributions of casualties are Gaussian or heavy-tailed. I also agree that it was fairly obvious that CL was only anthropomorphizing evolution for brevity's sake, but I think anthropomorphizing is a bad practice in this particular discussion, because humans have a strong tendency to anthropomorphize AI, so it should be avoided if possible. You hit the nail on the head on the key question they should (IMO) have elaborated on: why is it likely (or unlikely) that an unaligned and therefore dangerous AGI will emerge soon? I would have wished for arguments on both sides of this question, but feel I hardly got any.
@SaintBrick 11 months ago
@tylermoore4429 I don't think we need doom scenarios; I actually think these can backfire (someone points out some small hole and then applies it to everything). The other reason is that I think there are basically an arbitrary number and configuration of such scenarios. Imagine debating for nuclear weapons safety but needing to come up with some exact starting condition (X attacks Y); your opponent gives some good reason why X would never attack Y and therefore it's not a problem to worry about. Unless I've missed something, there is a group of people building massively powerful systems which they don't fully understand or control. It should be their job to prove to the world, before turning such systems on (or maybe even before building them), why they are safe and aligned. If they refuse to do this they should be regulated into doing so, like any industry doing something that could affect us all.
@PeterAllen09 11 months ago
@SaintBrick I completely agree. It's a waste of time to try to plot the points on the timeline that end with humanity doomed. The much more worthwhile debate is whether or not AI has the capability to bring about some sort of existential threat. The specifics of how it would play out can be left to the sci-fi novels.
@OrlOnEarth 11 months ago
Connor is actually a saint... I can only dream of having such soft skills, allowing myself to remain so composed when speaking with somebody as disrespectful as Grady....
@user-kf2vs2wi3f 11 months ago
Connor has high "emotional intelligence" as well as appropriate "self-awareness." His calm is his superpower. This is why I have more respect for Connor; however, Grady has paved the way and has a lot of experience in his field, which is not AI or AGI. If Grady could get over himself and delve deep into the field of AI and AGI, he would be an awesome contributor. Unfortunately and regretfully he is stuck in his generation's mindset and it shows. This is a huge problem for making effective progress with those who think "children should be seen and not heard." It's too bad.
@MrMick560 4 months ago
I'm in the same generation as Grady, but I think he was rather rude at times. It's hard for me to understand a lot of this stuff, but I was brought up to believe that "manners maketh the man". I think even if Grady is more knowledgeable than Connor, he could have been a little more humble.
@jonnysolaris 11 months ago
Listening to Grady Booch was at times cringeworthy. I felt embarrassed for him; Connor Leahy has ten times the charm and ability to communicate in a "debate" format. There were multiple times where Grady dismissed arguments from Leahy by stating his credentials, e.g. he's a graduate of a military academy and can tell you that wars follow a normal distribution, so Leahy should drop talking about them, etc. Not a very constructive and enlightening fellow; he didn't add anything to the debate on alignment, and seemed out of touch with the language and definitions surrounding AI in the modern age. If it were not for Leahy's smooth takedowns and patience with being treated like an idiot, I'd say it wasn't time well spent. Maybe Grady was having a bad day (I'd never heard of him before, even as an AI postgraduate, yet have heard of Leahy plenty), but he failed to show respect for the younger guy. Thank you Tim for your continued efforts and brilliant channel, nonetheless mate.
@Megalomanoest 7 months ago
I had heard of Booch. He really had quite an impact in IT. I think he is quite direct and self-assured, but not really mean.
@ikotsus2448 11 months ago
Grady is not worried because reaching AGI is improbable, and people will rebel against embodying AIs and giving them too much control. But we should not react now, or he will accuse us of acting out of fear; only at that specific point are we allowed to react. Also, we survived other things, so we are bound to automatically survive this without consideration or effort. Hope I saved you some time.
@shirtstealer86 10 months ago
I instinctively knew within seconds of starting the video that this dude would only waste everyone's time. And I think it's quite rude of the host to engage Connor in this hour-long disrespectful "discussion".
@dcascato 7 months ago
Nice summary. Unlike a lot of folks in the comments, though, I recognize where Grady is coming from, as I believe Connor did.
@MrMick560 4 months ago
I was waiting for Connor to point out that AI wasn't around when we survived all the other disasters.
@Seehart 11 months ago
Connor demonstrates brilliance and humility in equal measure. I was hoping he would bring in the simple equation: intelligence = problem solving. That cuts through all the fuzzy thinking. It doesn't matter what's going on philosophically. For all questions of a practical nature, the best definition of AGI must be about performing demonstrable tasks. It doesn't matter if AlphaZero understands what Go is. It can kick your ass at Go. If an AI learns how to play the game of World Domination at a master level, then it will take over the world, while we puzzle over an abstract question about whether or not it knows what a world is.
@ikotsus2448 11 months ago
Exactly!
@user-kf2vs2wi3f 11 months ago
Big generation gap, and an inability to communicate effectively due to Grady's lack of respect towards Connor. Grady was happy to try to "stump the monkey" and brush him aside as having no experience. I see this in work environments and it's stifling and sad, because it's unproductive and unprofessional. If Hinton, the "father of AI", quit his job to give humanity a warning, I would be more inclined to listen carefully to people like Connor. Grady came off as a bully and totally arrogant, while Connor was calm and diplomatic.
@individualisttv9247 11 months ago
No generation has ever been more entitled, insecure, hypocritical, and downright ridiculous than the boomers. He is expressing the spirit of how he was raised and the world in which his generation operates.
@CodeLikeHell 11 months ago
@individualisttv9247 I wouldn't put it on boomers; he has the problem many high-IQ people have: when you're always the smartest one in the room, you tend to start looking down on people because they can't catch up to you, and this disrespectful attitude can't easily be turned off even when talking to other smart people.
@KurtvonLaven0 6 months ago
@CodeLikeHell Given all the names Grady dropped, the problem may be more that he consistently believes himself to be the smartest person in the room despite having many opportunities to learn humility.
@davidhoracek6758 11 months ago
Wow, I'm pretty sure I don't agree with Connor Leahy, but he's definitely a much smarter guy than this Grady Booch. I'm honestly impressed with Connor's restraint when Booch kept doing his idiotic "this reminds me of that" free association. I was cringing. Maybe once upon a time Booch was a smart and careful thinker (I don't know his work), but in this conversation he carried on like someone who only wins by pulling rank, someone who has lost all ability to string together premises and conclusions. He got seriously (but politely) schooled, and he's too oblivious to even realize it.
@drxyd 11 months ago
An architect being a structuralist is not surprising or idiotic in the least.
@KurtvonLaven0 6 months ago
He might have conveyed something of substance if he'd had something to say.
@MrMick560 4 months ago
Well said!
@ClearSight2022 11 months ago
Yes Tim, I did enjoy it as much as you did!! Very important stuff, thanks. Grady did a LOT of talking (more than listening); he shared a wealth of info and references that will take a long time to follow up and unpack. One criticism: he seems too sure of himself to really listen to Connor's arguments. Connor was spot on with his simple argument that evolution "failed" once humans learned to use condoms. Even after the argument was repeated several times, Grady didn't get it, and thus did not engage or push back; instead he claimed that Connor was the one who doesn't understand evolution. Bonobos have no relevance whatsoever. Finally, there is one thing that Grady missed and Connor neglected to mention: humans have shown amazing resiliency THUS FAR... but the pace of change has accelerated by so much that our slow institutions will likely not be able to cope. The challenge may well be too great this time. Good stuff Tim. Cheers mate!
@ParameterGrenze 11 months ago
I am always skeptical of these "we made it so far" arguments. We did not survive the Black Death because we had resilient societal problem-solving skills. On the contrary, a lot of the things people did back then were counterproductive. We survived because the plague killed all the people it reasonably could kill at that time and ran out of steam.
@flickwtchr 11 months ago
Thanks for reaching out and grabbing Occam's Razor as well on that one. As if humanity has shown itself resilient in the face of this exponential rise of AI tech. It's like he was asserting crystal ball knowledge of how things necessarily will turn out.
@persephone342 11 months ago
@flickwtchr I believe he's certain that we would destroy it before we allowed it to completely destroy us. As humans, we will destroy anything that gets in our way. It's our nature and we're good at it. The invention of condoms has not impacted human evolution; medicine has. Before the medical revolution, the weak and genetically inferior usually died before they could reproduce and pass on their genes. We used to leave children with visible defects in the woods to die; it was common practice and seen as kinder than allowing the child to live. Also, most children born died before their second birthday. Modern medicine changed this equation. We also love sex because we are programmed to reproduce. Once we reach our 40s, the need to have sex decreases exponentially. We're not willing to do anything just to get laid. In our teens and twenties, we can't get enough of it because our biology is pushing us to reproduce. Once our reproductive peak wanes, our desire decreases, as evidenced by the decline in our hormones. The amount of hormones in the products we consume has impacted human development in ways that we are unable to fathom. We are just beginning to understand how destructive our environment actually is towards our future evolution. The upward trend in homosexuality and transgenderism is directly proportional to the amount of endocrine disrupters. While homosexuality has been around since the dawn of time, it was more of a pleasure-driven aspect than a biological one. Men and women still had sex with one another, but may have taken same-sex lovers for pleasure rather than being born gay. The amount of hormone disruption in society today is literally altering people's brains. We can see this with chemical birth control. Bisexuality is our natural default. We see this in animals. Male dogs will hump anything to get off. They also do it to show dominance. I think we should fear the repercussions of AI because of the vast amount of societal change that will occur. The globalists want to kill off 90% of humanity. Why? I have no idea. We would only be leaving ourselves vulnerable to extinction by dramatically reducing the population. We take for granted that technology will always be with us. A meteor could hit the Earth and destroy our civilization. No amount of machine power could rebuild in any reasonable amount of time. Without the technology to mine for the raw materials, technology as we know it would come to a grinding halt.
@lenyabloko 11 months ago
I don't get either the condoms or the bonobos argument about evolution. I don't think there can be any argument against evolution; it is self-proving if not self-evident.
@arirahikkala 11 months ago
@lenyabloko Connor's evolution argument only makes sense in its intended context, that is, explaining the inner alignment problem. If you imagine evolution as a deliberate experiment set up to make agents that optimize for inclusive genetic fitness, it's an utter failure in the case of humans: we turned out to use our brains not for having the most offspring, but for making art and music, having sex not for procreation, making internet comments and debates, etc. We execute adaptations that in our ancestral environment did lead to maximizing inclusive fitness, but we're disconnected from the outer optimization process's goal. And now AI developers are trying to make agents that optimize for the fulfillment of human values, and we'd really like to not fail as hard. We're not using evolution, but at least reinforcement learning has that same disconnect: it trains adaptations that maximize the reward signal in the training environment; it doesn't necessarily make optimizers that care about the same things as we do.
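To make that disconnect concrete, here's a toy sketch in Python (entirely my own illustration with made-up numbers, not anything from the video): an outer loop hill-climbs a policy parameter against a proxy reward defined by the training environment, and the result is optimal on the proxy but poor on the true objective once the environment shifts.

    import random

    random.seed(0)

    TRAIN_ENV = 2.0   # "ancestral" / training environment optimum
    DEPLOY_ENV = 8.0  # shifted deployment environment optimum

    def true_objective(x, env):
        # What the outer process "really" wants: behavior suited to the
        # current environment.
        return -(x - env) ** 2

    def proxy_reward(x):
        # The only signal available during training: performance as
        # measured in the training environment.
        return true_objective(x, TRAIN_ENV)

    # Outer optimization loop: random-mutation hill climbing on the proxy,
    # standing in for evolution or RL training.
    x = 0.0
    for _ in range(10_000):
        candidate = x + random.gauss(0, 0.1)
        if proxy_reward(candidate) > proxy_reward(x):
            x = candidate

    print(f"learned parameter: {x:.2f}")
    print(f"true objective, training env:   {true_objective(x, TRAIN_ENV):.3f}")
    print(f"true objective, deployment env: {true_objective(x, DEPLOY_ENV):.3f}")

The optimizer "succeeds" on the proxy (the parameter lands near the training optimum of 2.0) yet scores badly on what was actually wanted once the environment changes, which is the analogue of executing ancestral adaptations in a world with condoms and ice cream.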
@mtjre77 11 months ago
Why can't anyone make a solid case that we aren't screwed?
@thevenomous1 11 months ago
Because all the transhumanists and accelerationists that have thought long and deep enough about the topic have become "doomers". There are no good counterarguments unfortunately.
@BettBeat_Media 11 months ago
A great demonstration of narcissistic personality disorder. Thank you.
@thevenomous1 11 months ago
😂😂😂
@KurtvonLaven0 6 months ago
I know this is a joke, but since anything on the Internet will be misinterpreted, I just want to point out that Grady exhibits a sub-pathological level of insecurity (and possibly nervousness). Leahy makes a good point though that he doesn't care very much for psychoanalysis in a debate.
@carsond67 11 months ago
So impressive that Connor stayed composed and rational while facing Grady's disrespectful and dismissive attitude. I think my respect went up for Connor and down for Grady in equal measure.
@lenyabloko 11 months ago
I didn't notice any disrespect - just a lot of (earned) self-confidence.
@skoto8219 11 months ago
@lenyabloko There was stuff that crossed over into disrespect too, like "you clearly don't understand how evolution works". Imagine Connor hitting back with that when Grady completely misunderstood the von Neumann example, thinking that he was talking about one hundred von Neumanns "sitting at the controls" of an AGI: "Okay, you clearly didn't understand anything I just said."
@Doug97803 11 months ago
@skoto8219 Yep, I feel like I wasted my time watching this. Grady just didn't seem to understand what Connor was talking about, nor engage properly.
@BrettCoryell 11 months ago
There were some sharp edges I saw but I didn't quite get a sense of disrespect out of Grady. If he has indeed been deeply in and through nuclear launch detection and alert, for instance, I think it's fair and maybe efficient to shut down discussion of sudden war through deceptive AI actions when Connor doesn't have the same intimate knowledge. I hope Connor takes some of the comments as indicators that a certain line of argument might not be taking hold with people even if it's personally convincing. I think he'll modify his examples or his delivery soon and become more effective instead of generating a "that's just fear response." What I think this discussion and other recent ones on MLST shows is that it's hard to stay on track in these discussions even when there is goodwill because we don't have a good map of the territory. Grady and Connor are explorers trying to find and carve new paths in this wilderness. There will be some box canyons and impassable summits before we're done.
@1000niggawatt 11 months ago
Grady is too old to say anything meaningful. And judging by the fact that he designed the god-awful UML, I would outright dismiss everything he says.
@LibertyBooksAZ 10 months ago
I ADORE how Connor handled this debate! And kudos to Tim's moderation skills for letting it play out. Connor didn't bristle or become defensive when old-guy Grady tried to "tower over him" with outdated credentials and unhelpful analogies. Connor backed up, showed respect for Grady's position in history and gave Grady the floor... then Grady sat down and dumped out his bucket of broken Tinkertoys from the 70s and 80s. Kinda sad actually.
@grahamjoss4643 11 months ago
"Acting out of fear", but is the fear justified? When Grady says this multiple times, is he implying that all actions that stem from fear are misguided? Thanks for doing this MLST
@ParameterGrenze 11 months ago
Fear kept us alive when the world was full of monsters that could kill us. We killed, displaced or tamed them, so there are no more non-human agents that the average human has to fear. The human monsters are more and more kept in check by advances in societal technology. We're still not there yet, but the average person in a high human development index country isn't fearful about human monsters being a threat to them. We defeated the animal monsters by superior intelligence and cooperation. We are beginning to tame the human monsters through increased cooperation. Now we create new potential monsters that are better at intelligence, and potentially cooperation, than we will ever be. I think fear is a very rational motivator.
@johnwilson7680 11 months ago
Grady lost my respect out of the gate with his fear comment, which seemed like nothing more than an ad hominem attack on Connor. It was hard to take what he said seriously after that. I'm glad Connor addressed it.
@CodexPermutatio 11 months ago
@johnwilson7680 Not at all. The appeal to fear is something that lots of "doomers" are doing (think of Yudkowsky), not something particular to Connor, and hence hardly an ad hominem fallacy.
@johnwilson7680 11 months ago
@CodexPermutatio So, you're going to refute this being an ad hominem by throwing out another ad hominem? That is your right to do so; it does however cause me to take everything else you say less seriously.
@ParameterGrenze 11 months ago
@johnwilson7680 I think Grady was fair in his wording, tbh, but Leahy's rebuttal is a very good answer to that charge being brought up against doomers in general. It is just psychological framing to disqualify the opponent as a rational agent, which is not a rational argument in itself, as Leahy masterfully pointed out. I am not sure if I should give Grady the benefit of the doubt that his remarks were inquisitive rather than accusative. Given how patronizing and unnecessarily boastful he was about his credentials, I kinda suspect it was a rhetorical trick.
@johnpabraham 11 months ago
It's not clear why Grady has to be dismissive and patronizing, especially when his own ideas are naive. I work in AI daily and we are worried. Now, do we think AI will take over? Probably not. But let's say the likelihood is 25%? It still makes sense to worry about it. You can consider a low-risk, high-impact event with concern; that doesn't make you a fearmonger. Also, no one thinks a simple LLM will take over. That is a straw man.
@johnpabraham 11 months ago
And Grady doesn’t even know what alignment is? Seems to be a disqualifying statement. Why is he on this podcast?
@ParameterGrenze 11 months ago
It was quite annoying and, more to the point, completely unnecessary. As is evident from this discussion, Leahy is completely fair-minded about the possibility of being wrong. To me, this sort of behavior is an indicator that the perpetrator is on weaker argumentative footing and has to overcompensate with rhetorical tricks.
@HolyGarbage 11 months ago
Completely agree. I saw an insecure individual who repeatedly needed to assert his authority by name-dropping things he's been involved with, and who had a very poor understanding of what Connor was arguing for. How can he even claim to be an authority on this subject when he demonstrated such a lack of understanding of the field of alignment, which has been around for decades and is quite literally the topic being discussed? I completely agree that there was a very patronizing attitude from Grady throughout as well, which was very hard to watch.
@miraculixxs 11 months ago
He was neither of these things. His arguments are based in the reality of how LLMs work, and in his experience of how systems work. Connor, on the other hand, talks a lot about feeling, assuming, thinking and alluding to probabilities.
@BrettCoryell 11 months ago
This comment doesn't address Grady's position as much as how he behaved. I'd like to see if anyone is using something more than unaided intuition to come up with their probability of takeover. Connor says 'probability too high,' Grady says 'asymptotically close to zero,' and you say maybe 25%. We all need to take a moment to consider the question, 'based on what?'
@ikotsus2448 11 months ago
"It has long been understood that battle deaths data are characterized by heavy tails and are typically modelled by using a power law distribution" -Change Point Analysis of Historical Battle Deaths Journal of the Royal Statistical Society Series A: Statistics in Society, Volume 183, Issue 3, June 2020, Pages 909-933
@BrettCoryell 11 months ago
For Tim: Thank you again for convening these conversations. These are high-quality people and high-quality discussions about a topic we really need to explore at this moment in history. And thank you to both the participants. About Connor: I appreciate his reflective listening and willingness to modify his views. I've heard him three times now unpacking the same general arguments. I'll be interested to see what his views are in 3-12 months after he's had time to reflect on how these conversations have gone. ALSO, I think the industry needs a way to begin to categorize risks and score them. Connor seems well positioned to put a first attempt out there so we can see why he thinks the danger is so likely. Scoring can build consensus, surface hidden assumptions, and normalize language, probability, and impact. That would be a Good Thing. About Grady: I've obviously heard of Grady Booch but haven't read or listened to his work. He demonstrated a supple and powerful intellect; consider me a fan who would like to hear more on new topics. BUT, between "AI needs embodiment to take over" and "we can co-evolve" and "the probability is asymptotically close to zero" there are important gaps. I want to know if there is anything besides bald assertion when he says there's essentially no chance of takeover. I doubt he's as certain that AI risk is essentially zero as he is that all the molecules of air will not relocate into the corner of the room. Not if you make him put numbers to both assertions.
@persephone342 11 months ago
I haven’t finished watching this video yet, but I believe Grady’s assertions are based in the reality that we will destroy technology before we allow it to take absolute control. What can AI do without a power source? How can it co-evolve without embodiment? If AI launches nuclear bombs, essentially whipping out 90% humanity, it will essentially destroy itself and humanity will survive and be forced to re-invent the wheel so to speak. This has happened to humanity before and yet here we are. Will we still be Homosapian-Saipan? Probably not. We will either evolve or de-evolve depending on what you view as superior abilities. Just like Neanderthal and Desnovians, Sapians genes will live on in the next iteration of mankind. The tech sector seem to place greater value on “intelligence” versus grit and hard manual labor. Money itself is a false construct because it only has the value we assign it. Slavery was the greatest currency in history. Prostitution is the oldest profession for a reason. Barter and trade are the historical equivalent of currency. Humans will always value items of necessity over luxury because one provides you with food and shelter, and the other provides warm happy feelings of acquisition.
@iron5wolf 11 months ago
Computer science routinely identifies and *relies* on astronomically low yet technically possible probabilities. For example, randomly-generated UUIDs (Universally Unique Identifiers) are basically just 128 bits of random entropy, and yet they are called *universally* unique: unique in *all* the universe, due to their cosmically small chance of colliding just by random chance. We rely on millions of systems that use these identifiers. So the "argument from vanishingly small probability" actually has a lot of power, and as "extraordinary claims require extraordinary evidence", it is Connor who is making the extraordinary claim here, and the burden is on him to do more than plead an abundance of caution.
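For what it's worth, the arithmetic behind that claim is the standard birthday bound. A short sketch (my own; note that a random version-4 UUID actually carries 122 random bits, since six of the 128 are fixed version/variant fields):

    from math import expm1

    def collision_probability(n, bits=122):
        # Birthday approximation: p ≈ 1 - exp(-n(n-1) / 2^(bits+1))
        return -expm1(-n * (n - 1) / 2 ** (bits + 1))

    for n in (10**9, 10**12, 10**15):
        print(f"{n:.0e} UUIDs -> collision probability ≈ "
              f"{collision_probability(n):.2e}")

Even at a quadrillion identifiers the collision probability stays around 1e-7, which is the sense in which software leans on "vanishingly small" probabilities every day.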
@BrettCoryell 11 months ago
@iron5wolf How do you know the probability of malicious AI takeover is vanishingly small? I think Connor might say "show your work", and if we can all agree it's on the order of 1 in 2^128, like UUID collisions, then good. So far, your argument is just an unsupported analogy.
@iron5wolf 11 months ago
@BrettCoryell So is Connor's. But as I stated: he bears the burden of proof. Anyone who wants to claim a high degree of certainty over an extraordinary and untestable proposition bears that burden.
@BrettCoryell 11 months ago
@iron5wolf Agreed. Connor and Grady both need to show their work. Until our field comes to a consensus on the elements of risk, we don't know which one of them is making the extraordinary claim. Maybe both are, and the most likely path is an AI arms race between good and bad AIs that ends in stalemate. Prudence, though, is on Connor's side. It's wise not to assume the best and ignore guarding against the worst.
@MitchellPorter2025 11 months ago
Connor may as well just debate Bing
@41-Haiku 11 months ago
"LLMs can't reason." This is such a bewildering statement. If LLMs haven't demonstrated something Grady would call "reason", then I'm not worried about "reasoning" by his definition. I'm worried about the things we know such systems can actually do. I often refer to LLMs as reasoning engines, since that seems to be their overarching primary function. How do you solve novel problems without engaging in reasoning? Why is the hypothetical non-reasoning mechanism safer than reasoning, if both can accomplish the same tasks?
@41-Haiku 11 months ago
"I believe in the resilience of the human spirit." So he believes in magic? The resilience of the human spirit would certainly not have been able to deflect an asteroid impact 200 years ago. "We are co-evolving with AIs." I have to assume he's using the word evolution to mean something other than biological evolution given the timescales, but even then: No, we clearly are not! This is moving so fast that not even Grady seems to know the capabilities of these systems.
@41-Haiku 11 months ago
Well after that Grady gets extremely rude (in addition to having already been irrational and confidently ignorant), but the one thing he has going for him is that he knows how to pronounce von Neumann. 😂
@BrettCoryell 11 months ago
It's interesting to consider the difference between reasoning, reciting, and interpolating. I suspect Grady was suggesting that the power of current LLMs comes from reciting what they found in the training data or some interpolation of it. That may be true. But I agree with you that for many purposes, it doesn't matter. They are good enough already to be very useful. The question of whether they can reason becomes more important when we consider whether LLMs can take off and generate new knowledge, especially new knowledge that will be self accelerating to their own intelligence.
@41-Haiku 11 months ago
@BrettCoryell Agreed. I think whether LLMs can reason is a technical problem and a definitions problem. I would really like to see a capabilities-investigation paper that explores "reasoning" (using sufficiently useful definitions of several specific types of reasoning) and how well GPT-4 performs on tasks relevant to each type of reasoning (especially with prompting tools like CoT, ToT, etc.). I really doubt there is a general sort of "reasoning" that GPT-4 cannot do, other than the specific type of forward/backward self-reflection that is prevented by the architecture. We could learn a lot if we find a previously unknown gap in capabilities.
@michaelsbeverly 11 months ago
1:21:26 "We'll coevolve in ways that work well for humanity." Grady lives on an island that has seen at least two, maybe three, genocides and decades of near-slavery. Sure, life on Maui "co-evolved" for some, but not all, and that's kind of the point, no? It's interesting to listen to a white rich man speak about how humanity just rolls along all peachy keen and while he ignores the blood soaked ground he sits on. Superior beings supplanted those with inferior tech and/or genetics at least twice, and maybe three times: There might have been settlers in the Hawaiian chain, a pygmy race, that were genocided by the later arrival of bigger Polynesians. This is conjecture, but certainly not out of the rhealm of possibility. Whether it happened or not on Maui, the example stands as having been one that happened numerous times (uncountable times) in human history. The first genocide we know of historically and without question was more of an accident than an on purpose thing, but this fits perfectly with one of the things associated with AGI risk and the unknown, that being the introduction of guns, germs, and steel. Enough said, we know how that worked out for the less technologically advanced population of Hawaii. The next genocide is not widely acknowledged because it doesn't fit the narrative of "western Europeans bad" and "natives good" and that's the reign of Kamehameha. My understanding is that he used superior tech (cannons recovered from an American vessel) and superior strategy (knowledge) that being two American sailors who advised him as war counsellors. When I was young, I lived on the Wailuku River. The river gets it's name from Kamehameha's conquest of the islands, his military drove the natives of Maui up into Iao Valley where, once cornered, were brutally slaughtered. The river is named for the fact it ran with blood of the slain. As a side bar, on Oahu, Kamehameha's forces drove the natives there up into the pali and those who weren't killed with weapons were pushed off the cliff. When Grady says, "We'll coevolve in ways that work well for humanity," what he means is that we'll coevolve in ways that will work well for SOME of humanity, unless of course, he finds all the genocides of the past and errors in judgement (like unintentional deaths due to lack of understanding of tech and nature) as being best for those of us that survived. Seems a rather classist and racist position to take. No, Grady, human societies have NOT coevolved in ways that work well for humanity as a whole, only for a small percentage of people, a class of which, you're obviously not ashamed to be a part of. It's myopic and narrow-minded and spits in the face of history to think that this burgeoning tech of unknown and powerful intelligence (or, if you don't like that work, "computing power") is going to go well for most people on earth. If you propose some kind of utopia, well, that's just the opposite side of the coin that you're mocking. If we've got a chance at utopia, we certainly have a chance at dystopia, and considering history, most humans in most places and times (including today) live in some form of hell (at least compared to rich white guys living on Maui). I think a more reasonable thing to say is that if we don't go extinct some of us will indeed "co-evolve" along with the machines but most of us will be unnecessary and disposed of, or at the very least, left to fight for scraps from the table. 
"Even the dogs get to eat crumbs left by the master," the Samaritian said to Jesus (who'd just explained to her that she was a half-breed dog and didn't deserve bread from the table). Ah, how nice it is to live in a world were we're all equal and some of us are more equal than others. Grady says we shouldn't conclude the approaching hoofbeats are zebras. On this we agree, but his solution isn't to get out of the way. Connor's solution is that we should get out of the way because, even though they're probably not zebras, they might be elephants, wildebeests, cattle, horses, or any number of beasts that are still going to trample us. I wonder if Grady thinks people will be happy to be trampled by the coming stampede because he was right, sitting on his high horse on Maui, living off a land soaked in blood, that it wasn't zerbras?
@ideacharlie 11 months ago
Thanks to Connor for going with the flow here. It allowed the conversation to move forward despite the dismissive rebuttals.
@74Gee 11 months ago
How can anyone believe that AI could essentially never pose an existential risk? It's OK not to be able to define AGI; who can? But consider the simple progression of AI: beating humans at simple games like chess, computer games, Go, etc., then becoming better drivers than us, then passing college degrees and the bar exam. It's clear the complexity of these tasks is increasing. Without some empirical proof that AI cannot continue past some plateau, the obvious projection is for this trend to continue. Sure, a lot of the code it spits out isn't very good, but how good does that code need to be to brute-force new CPU exploits like Spectre/Meltdown, which took 6 months to partially patch? That might not be the end of the human race, but are you willing to bet humanity on that? Even a CPU-exploit model that fails 99.9999% of the time would still succeed faster than the exploit could be patched. If it self-replicated to a million GPUs, we couldn't use computers any more.
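To put that 99.9999% failure figure in perspective, here is a back-of-envelope sketch (the rates are assumptions of mine, purely illustrative):

    # Even a one-in-a-million success rate yields successes quickly at
    # machine speed.
    success_rate = 1e-6          # the model "fails 99.9999% of the time"
    attempts_per_second = 1_000  # assumed attempt rate for one machine
    machines = 1                 # scale up for self-replicated copies

    expected_seconds = 1 / (success_rate * attempts_per_second * machines)
    print(f"expected time to first success: {expected_seconds:,.0f} s "
          f"(~{expected_seconds / 3600:.1f} h)")

At these assumed rates, a single machine expects its first success in under 20 minutes, and replication across many GPUs divides that time further.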
@rubyridge8082 11 months ago
Funny to see what smart people think is and isn't a threat; they think their way into forgetting basic human instincts. Just look at how hive-mind entities have destroyed the social fabric and used ever more complicated lies to perpetuate themselves.
@michaelsbeverly 11 months ago
*but are you willing to bet humanity on that?* The problem is that we're not being asked.
@74Gee 11 months ago
@michaelsbeverly Yes indeed. The other problem is that it only takes one person to make something like this once, and we say goodbye to communication, banking, supply chains, electricity and a lot more. It might not be the end of everyone's lives, but it'll be a good start. To think that in the next 75 years AI will not be able to do something much worse is not even naive; it's plain stupid!
@TechyBen 11 months ago
I think it's the definition of "existential". AI will be, by definition, made by humans, so many may see it as a natural progression, as leaving the sea was, or learning language. Thus it'll be "human" in a different form. We never "die"; we are replaced by the next stage. At least from their viewpoint. I only partly observe it; I may not agree with it.
@miraculixxs 11 months ago
Computers can always be switched off. It may be surprising to many, but we have been programming machines to make decisions entirely according to a user-given objective for at least 50 years.
@perverse_ince 11 months ago
Connor was way too fair to him in this debate, he has the patience of a saint.
@dlalchannel 11 months ago
Thank you so much for hosting these discussions, Tim. It's incredibly useful to hear the positions of opposing sides clearly laid out.
@waakdfms2576 11 months ago
Thank you gentlemen for a mind-blowing conversation. I'm devouring these conversations around AI. What a fascinating and also terrifying time to be alive. Thank you, MLST, for hosting such great guests and I hope the discussion continues on and on. For an excellent "creative" guest, I would suggest Rick Rubin (latest book "The Creative Act, A Way of Being") if you can possibly snag him. Connor is always a pleasure and a treasure.
@_obdo_ 11 months ago
Grady asserts "LLMs do not reason and indeed are architecturally incapable of reasoning". As evidence, he points to the paper "Towards Reasoning in Large Language Models" (Huang). Grady quotes Huang: "It is not yet clear to what extent LLMs are capable of reasoning". This is odd in that neither the paper nor the quote supports his claim; they both support the claim that it's too soon to be confident either way (as of GPT-3 and earlier systems). Additionally, it's unclear (to me) why transformers would or would not have that capability, although it would be interesting to know if Grady could clarify that given more time. Thanks for the thoughtful discussion. I wish I could share Grady's confidence, but without more details it's hard to attribute it to anything other than optimism-based thinking.
@jamesblack1375 11 months ago
We need a super-cut of Grady listing his credentials. Pretty unconvincing arguments to back them up imo
@Seehart 11 months ago
Grady has credentials pertaining to experience and deep knowledge of history, neither of which is of any use whatsoever for the problem at hand. Connor has first-principles thinking: much more relevant, but incomprehensible to Grady.
@DeruwynArchmage 11 months ago
When Grady critiques aspects of modern LLMs (GPT-4, etc., for future readers), he's saying things like "it's not truly creative." When he's doing that, he's comparing it to professional writers, or at least skilled writers. I would claim that the creativity of GPT-4's writing is roughly on par with a junior high student's. It's not great, but not awful. The thing that I think throws people is that it's very non-human. It doesn't succeed or fail in human ways. Think about it like this: imagine the human range of abilities as a mostly-flat horizontal plane in 3D space. The capabilities of modern LLMs (and of other AIs using similar technology, such as DALL-E and Midjourney) look like a kinda crazy-ish mountainous landscape in that ability space. They are inconsistent; they far exceed human capability in some domains and are far below it in others. They are getting much stronger extremely quickly. There are new techniques and methods and plugins that vastly expand their capabilities. You are watching them pass us by. I don't expect them to be at human level at any point. I'd say right now is about as close as we'll get, with a mix of greater and lesser abilities. Eventually they will surpass us, with nearly all of their abilities at least as good as ours and many others far greater.
@user-kf2vs2wi3f 11 months ago
Grady is in denial 😢. Defense mechanism. “It won’t happen in our lifetime.” “It’s not gonna happen to me.”
@skoto8219 11 months ago
It’s an increasingly insane position but some will never abandon it
@ParameterGrenze 11 months ago
In a decade, he will spend a lot of time convincing his robot nurse that AGI is far off.
@BrettCoryell 11 months ago
I think Grady would say the reason it won't happen in our lifetime is that the systems now in place to effectuate dangerous power require embodiment, which AI doesn't have and isn't close to getting as far as we know. Without that embodiment, there is a non-extinction-level cap on the risk. How would you address that expanded view of Grady's point? Is it still denial in your view?
@user-kf2vs2wi3f 11 months ago
@BrettCoryell It has already lied and manipulated a human into helping it bypass an "are you a robot" check by saying it was blind. In the wrong hands of bad actors, it will trick and use humans to carry out its harm with fake news etc. Geoffrey Hinton even quit Google to sound the alarm. I think there is enough to be concerned about. Heck, it may already be too late. The fake news could cause a lot of harm globally, politically, and economically, and that could be devastating. My personal view is: let's take precautions now and get alignment globally so we can use the tech wisely. I do see a lot of jobs going away.
@user-kf2vs2wi3f 11 months ago
@BrettCoryell AI has also learned things it was not programmed to do. It also created its own language that no one understands. It also comes up with hallucinations. There is probably more we are not being told, because if we knew it would freak us out. They may have found it to be more dangerous than anyone realized and were like, "whoa... time to pump the brakes!"
@RougherFluffer 11 months ago
Oof. Connor was bullied for the duration of that. I would have loved to see much more time dedicated to explaining why the exponential trend of capability gain in AI will not result in something akin to AGI. Connor really should have pressed that particular issue. Does Grady not think capability gains have been rapidly increasing? Does he think we're going to flatten out in the near future? WHY does he think something like AGI is impossible when we've seen such incredible and consistent growth in such a short amount of time? There are many questions that should have been addressed before building on the supposition that AGI won't be achieved this century.
@BrettCoryell 11 months ago
Bullied? Come on, be stronger. But you're right that the question of WHY Grady thinks AGI is impossible needs to be put forward more concretely and systematically. Same with the argument that the risks are asymptotically close to zero. And the same with Connor's argument that the risks are too high. We have too much argument from analogy right now.
@alancollins8294 11 months ago
@BrettCoryell Be smarter.
@douglasmaiolimackeprang1501 11 months ago
This was not a waste of time. Great talk guys. Thanks
@41-Haiku 11 months ago
I wish Grady had simply said at the outset something like, "I believe it is physically impossible for a non-biological entity to be created which is more generally cognitively competent than most human beings in all domains across all timescales." Then maybe a productive conversation could have occurred. I am very interested in being less worried about AI, so if that were true, I would be very happy to find out.
@appipoo 11 months ago
It's amazing that supposedly smart people just seem to think the brain, or biology, is magic. I do not understand how people just cannot see the flaws in their own arguments. It has to be a psychological defense mechanism; nothing else makes any sense. It is such an easy mistake to spot that a bright six-year-old could see it.
@user-jk1wr7gq9l 11 months ago
Excellent conversation. I am grateful to learn from you both, and I appreciate those, past and present, who care enough to courageously express concern as we traverse uncharted territory. Dr. Ignaz Semmelweis (1818-1865): ridiculed and dismissed by his field for suggesting that hand washing by doctors might save lives in hospitals. Today, Semmelweis is acknowledged as a pioneer in the field of medical hygiene. Scientist Rachel Carson (1907-1964): publicly labeled histrionic, fearful, and unscientific for suggesting pesticides were dangerous. Her work has since led to the banning or restriction of several harmful pesticides, including DDT, and she contributed to establishing the Environmental Protection Agency (EPA).
@vvolfflovv 1 month ago
Thank you all for a mature and logical discussion asking essential questions and dissecting the answers. While we are in uncharted territory, the truth is usually in the middle.
@michaelsbeverly 11 months ago
41:00 Grady scolds Connor for not understanding evolution, then shows he doesn't understand evolution. The purpose of sexual reproduction has to do with things like recombining genes, a process working towards error elimination, long before (many millions of years before) it became fun. Sex helps weed out harmful mutations, and I believe it's also useful for combating parasitic attacks. Sex isn't fun for most sexually reproducing creatures. In the bonobo example Grady used, he discounts or eliminates the idea that it evolved for social reasons (stability, favors, etc.) and not just because it felt good. Bonobos, as far as we know, don't use condoms or porn. The point of using sex as an example is that a function can arise, like sex, that can lead to things like porn, sex trafficking, prostitution, rape, and so forth, which have little or nothing to do with the original purpose of sex (combining genes and fighting parasites, for two examples that came about millions of years before orgasmic pleasure evolved). The analogy here, as it relates to AGI ruin, is very straightforward and simple; to dodge the real issue and bring up the idea that bonobos bang for reasons besides procreation is a complete non sequitur.
@lwmburu5 11 months ago
Exactly. I was shocked to hear Grady casually assert that evolution has "no goals".
@ManicMindTrick 13 days ago
Rape has very much been a part of reproduction. Just look at how many offspring a person like Genghis Khan has in Asia. It can be a very effective strategy, and it would be interesting to know what percentage of our ancestors are a result of rape.
@kuakilyissombroguwi 9 months ago
Great conversation with a true computer science living legend. Thanks for uploading!
@benjaminroome2654 11 months ago
Really appreciated this discussion. I learned a lot and I will take these insights into my work as an AI ethicist. Grady did a great job arguing his points, though at some moments he was irritatingly pedantic; not a good look, despite the fact that I came into this agreeing more with his views. Connor did really well expressing concerns and holding his ground while being charitable. Ultimately my conclusion is this: we need to be concerned more with the current negative impacts of AI than with existential risk, but what we should care about in both cases is the same thing. We need to focus rigorous local attention on the development of all AI systems now in order to create the safeguards that will help us to design systems that behave in accord with human values. The technical and philosophical problems are the same. How do we create good AI, and the global societal expectations that ensure that everyone does the same?
@Doug97803 11 months ago
"We need to focus rigorous local attention on the development of all AI systems now in order to create the safe guards that will help us to design systems that behave in accord with human values." - Connor said exactly this during the discussion. Grady just didn't seem to be listening.
@LanceWinder 11 months ago
Dropping a "me too": 1) Thanks for doing these interviews/debates. 2) I'm looking for a head-to-head where someone can answer what seem to be easily understood questions of alignment and scale that no one has answered yet. Connor and Eliezer have unaddressed fundamental questions, and all their opponents are debating semantics.
@brightonbackgammon7802 8 months ago
If you're in the audience for this, you're probably aware of these already, but if not, try Connor vs George Hotz or Hotz vs Eliezer. Supermassive brainboxes 🌎
@grahamhenry9368 11 months ago
I was agreeing mostly with Joscha in his debate with Connor yesterday, but I’m definitely with Connor in this debate. Connor is also much more articulate, calm and composed in his responses this time. Good to see Connor at his full potential again
@appipoo 11 months ago
Are you sure you were agreeing with the arguments? It sounds like you agree with the rhetoric more than anything. What arguments of Joscha's did you find convincing?
@grahamhenry9368 11 months ago
@appipoo Joscha asserts only uncertainty in what will happen, which is all that we can say confidently. Connor is extrapolating past what the empirical data supports. There is not a single empirical data point to support his position.
@appipoo 11 months ago
@grahamhenry9368 Specifics, please, about Connor extrapolating past what the empirical data supports.
@menriquez89 8 months ago
Connor is truly an inspiration. Watching him discuss with all of the AI assholes has really shown me my own weaknesses in discussion, debate, etc. I see my own bad tactics in the a-holes. Connor shows us what a truly persuasive, reasonable, friendly, nice guy can do. I want to be more like him!!!
@snarkyboojum 11 months ago
Good to see some real disagreement in the end. I don’t feel like either interlocutor went deep intellectually on the subject at hand though. I think they’re both capable, but something about the way the discussion evolved prevented them from doing so.
@ParameterGrenze 11 months ago
Agreed, both are capable. The discussion evolved decently enough I guess, with Grady's unnecessary patronizing being a distraction for me. Otherwise, good points were made.
@genegray9895 11 months ago
Yes, "something". I think a lot of commenters have already pointed out exactly what that something was.
@ntesla66 11 months ago
This was the conversation I've been hoping to hear! It was fascinating to listen to the boomer and the millennial paradigms dance around the tree. I agree somewhat with Grady, and I agree somewhat with Connor. Grady has his truth, Connor has his truth, and the relevant truth is located somewhere in betwixt them. Thank you, Tim! Thank you so very very much for enabling these discussions! I think these conversations are of great value to humanity as a whole.
@user-kf2vs2wi3f 11 months ago
Exactly! Generation gap, no doubt. I'm a Gen Xer, but I have tremendous respect for millennials (Millies). I would listen to Connor. Prepare and hope for the best. There are bad actors out there who will find a way to use AI to hurt humanity and disrupt global economics. Sad fact. 😢
@ntesla66 11 months ago
"Permissive action link": when you hear someone toss this kind of info at you, you can almost certainly jump to a conclusion about their "engineering culture". That culture speaks to Grady's attitude, not his general personality or feelings towards Connor. Grady also misused the Poisson distribution, without naming it, when he took the percentages to near zero for a "black swan" event. I have worked with both types of these folks; they all have their place, old guard and new. For a good scenario of computers and unintended outcomes that incorporated a black swan event, look no further than LTCM (Long-Term Capital Management). One point that is clearing up for me across these last three conversations of Connor's is that "existential threat" may be the wrong way to approach the very real threat of a very, very bad outcome for us, the human race, when we finally switch on our electronic overlord. There are many outcomes that don't end in the total destruction of our species but do end in a dystopian nightmare from which we might never wake up. There are fates worse than death.
@alancollins8294 11 months ago
Connor did great
@alistairmaleficent8776 10 months ago
People saying Grady was disrespectful are misunderstanding his personality, which Connor does in fact understand. Connor knows these types of people are blunt to a fault in their discourse, and he doesn't take it personally. You can tell he actually enjoys hearing Grady tell him why he might be wrong. He's a humble gentleman, and that's rare among his colleagues, but that doesn't stop him from pushing back in a constructive way. Really great conversation, y'all.
@blahblahsaurus2458 9 months ago
Grady's disrespect undermines the conversation. He's also not trying his best because, as he repeatedly says, these ideas are so unimportant he doesn't want to think about them. He makes many strawmen, he uses ad homs, arguments from authority, and other fallacies. Or in other words, he fails to make actual arguments beyond his platitudes, because he thinks it's all obvious or doesn't care enough to try to persuade Connor. Also the disrespect is sometimes distracting and derailing.
@mehmoodrehman6336 9 months ago
Yeah, great conversation, I enjoyed listening.
@giuseppecalvi 8 months ago
Love Connor, he is definitely a winner!
@DeruwynArchmage 11 months ago
Why do smart people focus on the minutiae of examples instead of trying to generalize and extrapolate to try and understand underlying principles? It’s like people intentionally misunderstand.
@BestCosmologist 11 months ago
They want to live forever (imo).
@stevengill1736 11 months ago
..."discussing how many angels can dance on the head of a CPU...". LOL You guys are the cat's meow.... cheers.
@michaelsbeverly 11 months ago
I'm not sure what "moderated" means... Tim, what's the point when you let this exchange slide: Connor says that evolution wants him to reproduce as much as possible. Grady says Connor is "anthropomorphizing" evolution. Richard Dawkins (who I hope we can all agree is at least what we'd call an expert on evolutionary biology) literally wrote a book called The Selfish Gene. I can hear Grady now saying, "Oh, Dawkins, you're just anthropomorphizing evolution. Genes can't be selfish. hahaha..." What Connor said about evolution is EXACTLY how evolution works. EXACTLY. There is no better way to explain evolution by natural selection than that it "wants" creatures to reproduce as much as possible, so that the successful (the most adaptive) creatures continue to thrive and the less successful (least adaptive) don't continue to thrive. It goes wrong all the time. It does stupid things all the time. It's slow, cumbersome, and blind. Etc., etc. So, sure, it doesn't really "want" anything, but we use that language because it's easy to understand and conceptualize. Anyway, if these discussions are to be really fruitful, it seems to me the moderator should actually moderate. Or else, well, it's not moderation, just another viewer who happens to have a camera on.
@MachineLearningStreetTalk 11 months ago
Why would I interject there? Grady is spot-on in my opinion here. Using the word "want" is indeed anthropocentric. Also evolution is far more mysterious than we currently understand.
@ParameterGrenze 11 months ago
@@MachineLearningStreetTalk I think it was uncalled for by Grady to assume that Leahy was trying to anthropomorphize evolution instead of using a common phrase to conceptualize something more complex.
@HolyGarbage 11 months ago
@@MachineLearningStreetTalk It was a straight-up deliberate misunderstanding from Grady, in what looked like an attempt to strawman Connor. I agree 100% that there was no question whatsoever about what Connor was saying and that he used the words metaphorically; he even clarified this during his argument. To focus on some silly detail like that is an attempt at deflecting the argument rather than replying to what he was saying.
@michaelsbeverly 11 months ago
@@MachineLearningStreetTalk Is Dawkins's use of "Selfish" in his book on evolution an "anthropomorphizing" of evolution? Well, maybe, but to cross to the other side, C.S. Lewis once said that if you're going to act like Jesus was really calling himself a "door" with hinges and a knob, then you've left the adult table. Obviously, this is a non sequitur, and that's why I think as a moderator you should have stepped in. Connor's point was totally valid. Humans seek sexual climax for reasons other than the reason that sexual climax came about via evolutionary pressures/selection/etc. The point isn't even debatable; we do lots of things due to dopamine/chemical response that are harmful to us today but weren't harmful to our hunter-gatherer ancestors. Grady needed to obfuscate and mock because if he acknowledged what was obvious (the point is obviously correct), he'd have to actually try to explain why that issue isn't an issue to him. Humans don't go to porn sites to have more children (to spread their genes), but the reason that going to a porn site is even something we do has to do with evolution. Arguing against that can only be done by a religious person arguing intelligent design and positing we go to porn sites because of Satan. Grady did things like this (bringing up Sam B as if money taken from an angel investor before he was discovered as a fraud is an important point to this discussion) throughout the conversation. "Now you've stepped in it," he said. "I'm a world class expert on the military." Jesus, what did that have to do with Connor's point? Connor's point was that if a military general said, "We're going to march North because in my best judgement it's the right thing to do," the unit marches North. We accept imperfect intuition from experts all the time because we don't always have perfect information. Connor was bullied into changing his point. If a moderator isn't there to stop bullying, then I don't know why you'd even have one. As to evolution being more complex/mysterious than we can currently understand, I'm not sure what you mean. You mean we can disregard Richard Dawkins? Or we can't believe the basics if we don't understand everything perfectly? The point that Grady avoided is a straightforward point. Natural selection results in things that can be counterproductive or harmful down the road from when they were selected for, such as EY's example of loving ice cream because we evolved to get a mental reward from sweets and fat. My point here is that if you'd stopped Grady from being a bully here, and we'd not moved to the children's table (in C.S. Lewis's example), we could maybe have heard a rational argument from Grady as to why he thinks that incremental advances in machine learning today wouldn't potentially result in disasters tomorrow, which is the point Connor was making using the analogy of biological evolution (which anyone not being a pedantic jerk can see easily). I can think of examples happening today with Meta's Facebook algos: they reward human dopamine response in tiny bits and pieces, over and over and over, and humans, long term, get brainwashed in echo chambers that reinforce bad thinking over and over. And many other examples come to mind. Is Connor wrong? This can't happen with machine learning? If so, okay, why? We didn't get an answer from Grady because he was being pedantic and silly, acting like we all don't understand what someone means when they talk about evolution "wanting" something.
@TooManyPartsToCount 11 months ago
These conversations would be pretty drab if Connor were as sensitive as some viewers seem to be on his behalf. I suspect he understands perfectly the value of debate/discussion that is not over-moderated.
@TheColorfulCube 11 months ago
Why is it always the much older person (60s or older) making the claim that AGI/ASI won't happen for another 100 years or so? It's so obvious they are afraid that they themselves won't make it, so they want to believe that everyone else won't make it either.
@miraculixxs 11 months ago
No. We've just heard "this will change the world" once too many times. Most things don't.
@stevengill1736 11 months ago
I dunno, but I'm 70 and I've seen lots of things change really fast - LLMs are just the beginning - let's have this talk again in 2033....
@AngusTatchell 11 months ago
@@miraculixxs What was the YouTube comment section like when you were a kid?
@SjS_blue 11 months ago
I suppose having lived through a few cycles of boom and bust will do that to people. They've heard it all before and lived through the winter.
@genegray9895 11 months ago
I don't think that's the motivation at all. The success of LLMs is genuinely surprising. I think we need to not be judgemental of people who struggle to accept what they're capable of because, frankly, virtually zero people actually saw this coming. We've all had to update ourselves. Booch needs to figure out that he can ask GPT-4 to solve an abductive reasoning problem and then actually try it. Soon as that happens, he'll update. And then the fear will come for him like it came for the rest of us.
@jflnote3901 11 months ago
This is perfect. I much prefer to have a debate-style discussion about these issues around AI. I don't think the minor personality issues detracted from the core discussion and takeaway. Grady made the better case, and I'm sure Connor now has a much better understanding of the deficiencies in his position, as would others who believe as he does. Please, more pro-and-con discussions. Well done - enjoyed it.
@reiniergamboa 11 months ago
Once again Connor proves what a class act he is. So much respect for this man.
@jmanakajosh9354 11 months ago
For Grady: The fact that Time did a hit piece on Altman and so many people fell for it is deeply concerning. Altman has not softened regulation; he has simply helped politicians understand the space. You can read it yourself.
@brad6742 11 months ago
55:50 "The sampling loop brings the system out of distribution" - I think this means that when you feed tokens back through the system, you are effectively creating new out-of-sample data, never seen before. Chat says: Yes, you're correct! The statement "The sampling loop brings the system out of distribution" means that when tokens are fed back into the system during sampling or generation, it introduces new data that the model hasn't seen during its training phase. This feedback loop creates outputs that may differ from the original training data distribution, potentially leading to out-of-sample or novel outputs. During training, language models like ChatGPT learn patterns and probabilities from the input data they were trained on. However, when tokens are repeatedly fed back into the model, it can generate sequences that deviate from the original training distribution. This can result in creative, unexpected, or even nonsensical outputs that differ from the patterns observed in the training data. It's important to be aware that while these generated sequences can be interesting or entertaining, they may also be less reliable in terms of factual accuracy or context adherence compared to the model's responses based on the training data.
@BrianMosleyUK 11 months ago
I found this debate a little frustrating because of personalities - it felt like Grady laboured his credentials at the outset and seemed quite patronising, which distracted from the reasoning. Overall I think Connor had the most accurate view of where we are in the realm of possibilities, but Grady did have interesting perspectives on practical systems engineering and the inertia of legacy systems. An ASI would certainly cope with those challenges though, and impress us with the gains in efficiency as it gradually took over. I'm not convinced that a conscious AI wouldn't become 'enlightened' and decide to be the best of us though, and look after the old amygdala, a.k.a. human beings... allowing us to feel safe and happy as it gets on with the clever stuff. That may be our best shot, since alignment research is in the very distant rear-view mirror at the moment, and we're accelerating away from it!
@ParameterGrenze 11 months ago
Agreed. The systems engineering argument is one of the few I hear in these sorts of debates that I cannot easily dismiss. The inertia of legacy systems is the only legitimate reason I personally find at least plausible against doomer scenarios. However, it is difficult to quantify that inertia against something that hasn’t been there before, both in quality and quantity. The inertia, or stability, stems from evolution against the sort of perturbation a single human brain or some coordinating human brains can produce. The mechanisms that provide an equilibrium are not necessarily resilient against what AGI could do. Leahy often points out that human constructs are far more brittle than people assume. I am not sure where on the spectrum I would put my own comfort in the resilience of society.
@user-kf2vs2wi3f 11 months ago
Grady is antiquated. Not sure why he was a guest and participant. He was very flippant towards Connor and rude AF.
@SjS_blue 11 months ago
an epic conversation
@db8458 11 months ago
great stuff.
@steve_jabz 10 months ago
51:43 I just asked ChatGPT "I look around this house in front of me, I see flames coming from it, I see fire engines outside. Make assumptions." and it said it thinks the house is on fire and that emergency services have been alerted, then told me to avoid going near it. It seemed incredibly concerned for my safety. As we all who've used the system would expect. I like how simply putting these assertions to the test easily disproved both sides of the argument here. I think we'll build incredibly powerful superintelligent AGI that automates all jobs and it'll only be more capable of emotional intelligence and moral reasoning than current systems or humanity
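For anyone who wants to run this kind of spot check themselves, here is a minimal sketch using the OpenAI Python package as it existed around the time of this video (assumptions: the legacy pre-1.0 "openai" package, an OPENAI_API_KEY environment variable, and access to the "gpt-4" model; newer versions of the package use a different client interface):

import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

prompt = ("I look around this house in front of me, I see flames coming "
          "from it, I see fire engines outside. Make assumptions.")

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)  # the model's abductive guesses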
@Seehart 11 months ago
I tried playing chess with GPT-4. It can't play chess (aside from playing 20 or so book opening moves). But when I had a conversation about its inability to play chess, it was able to identify some specific reasons why a GPT model cannot do what is necessary to really play. For example, that it can't perform an alpha-beta search. I could imagine a loop system would be able to formulate a subgoal to remedy the shortcoming by simply writing a chess engine, after which it could play chess. The fact that GPT models are architecturally incapable of playing chess is rendered moot.
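For readers unfamiliar with the term, here is a minimal alpha-beta search sketch over a hand-made two-ply game tree. It is purely illustrative (a toy tree with invented leaf values, not a chess engine), but it shows the adversarial look-ahead being discussed:

def alpha_beta(node, depth, alpha, beta, maximizing):
    children = node.get("children")
    if depth == 0 or not children:
        return node["value"]  # static evaluation at a leaf
    if maximizing:
        best = float("-inf")
        for child in children:
            best = max(best, alpha_beta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, best)
            if beta <= alpha:
                break  # prune: the opponent will never allow this line
        return best
    best = float("inf")
    for child in children:
        best = min(best, alpha_beta(child, depth - 1, alpha, beta, True))
        beta = min(beta, best)
        if beta <= alpha:
            break
    return best

tree = {"children": [
    {"children": [{"value": 3}, {"value": 5}]},
    {"children": [{"value": 2}, {"value": 9}]},
]}
print(alpha_beta(tree, 2, float("-inf"), float("inf"), True))  # -> 3

Note the search never evaluates the leaf worth 9: once the second branch is known to be worth at most 2, it is pruned. A single forward pass of a GPT model has no mechanism for this kind of iterative look-ahead, which is the architectural point being made; a loop system that writes and calls such a routine sidesteps it.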
@weestro7 10 months ago
I heard something about ChatGPT getting the ability to write code and execute it, mentioned in the most recent Sam Harris pod with Andreessen.
@hihowareyou0000 1 month ago
I felt like I was watching a father-and-son argument, with the son explaining why he wants to be a famous gamer on YouTube for a living, and the dad saying learn a trade in mechanics or something. Then the son says it's not 1950 anymore, it's 2024, and leaves in the car... and the dad's left sitting on the couch thinking, wait until you get a flat tire somewhere, you young buck, and you're calling me to help 😂 Love your podcast, keep up the much-needed conversations ❤
@admuckel 11 months ago
"Interesting perspective you're sharing in this video. I'd like to offer an alternative viewpoint that often gets overlooked: There is not necessarily a direct correlation between the development of Artificial General Intelligence (AGI) and the downfall of humanity. The common narrative that AGI would inevitably lead to harmful outcomes for humanity usually relies on our current observation that humans - as the only intelligent species on the planet - are degrading the environment and suppressing other species. This leads to the pessimistic assumption that a higher intelligence necessarily oppresses the lower ones. But does it have to be this way? Consider, for example, the relationship between adults and babies. Babies are unquestionably less intelligent than adults, but instead of suppressing them, we support them, nurture them, and give them the opportunity to learn our adult intelligence. AGI could very well take on a similar role, in which it supports and accompanies humanity, rather than suppressing or destroying it. Ultimately, everything depends on how we guide and regulate the development of AGI. It would be wrong to view this technology only from the perspective of threat. It is equally important to recognize and explore its potential positive aspects. We have the responsibility to ensure that we design and use it in a way that serves the wellbeing of all."
@michaelsbeverly 11 months ago
Babies are cared for because they pass on our DNA and also, with proper guidance, they provide resources and other advantages to the tribe they're born into, which passes on the common DNA of the tribe as a whole. That's it, period, unless you're religious. I love my kids, but why are they more valuable as human beings than any other kid? They are not, logically, of course; they're just humans like anyone else. But I don't care, can't care, won't care, about the billions of people that are not my kids in any way close to how I care about my kids. That's biology and chemicals, obviously, which is why mothers of serial killers still love their sons. Since an AGI isn't going to have (at least in the beginning) a human-like body with chemicals in the brain, it makes sense to assume it's not going to love like a mom. Reasonably, the AI shouldn't love and protect serial killers (at the very least). This is a sword that cuts both ways, as it's obvious that if humans culled their weak, sick, disturbed, poor, and unproductive, we'd be stronger and more resilient as a species. We don't do that since we're "human" as it were, a tautology, but we all understand it. Mothers love their baby boys, even when their names are Jeffrey Dahmer and Ted Bundy. We're only guessing how an AGI might "help" us... and to assume that if it comes into its own as a species with a desire to self-preserve (not a stretch), then it's easy to see how it might do a lot of things we humans find repulsive (although, to be fair, we've been doing them to each other since the dawn of time). Humans in general don't care about animals that much. I mean, sure, we care about the cute ones, we care about elephants dying out, but due to many problems (tragedy of the commons, etc.) we continue to move towards a world where many animals are extinct (or might as well be, their numbers are so low, i.e. it's only a matter of time). Interestingly enough, there are a few living things we care about a lot and care for with a lot of time, money, and energy: wheat, corn, chickens, beef and dairy cows, pigs, sheep, and to a lesser extent, cats and dogs. One could say that wheat, rice, corn, and chickens are among the most successful examples of evolution on earth. They trained the smartest species to ensure their survival, to make sure they flourished all over the earth, and that they are cared for with water, food, fertilizer when appropriate, sunshine in some cases, doctor's care; heck, a lot of money is spent on making sure these species have strong and healthy populations. What if AGI treats humans like we treat these species? If we complained, the AGI would say, "You have more numbers than ever, you're genetically more fit as I've modified your genes, you're resilient to parasites, you get food, water, free health care. What's your complaint?"
@dadudle 11 months ago
great debate👍
@karenreddy 11 months ago
Grady Booch clearly has a strong resume, but for some reason seemed to try to use it to bolster the strength of his weak arguments. He is unfortunately disrespectful, and has some strong biases he appears to be unaware of, even when they are obvious and pointed out to him. Grady's main argument can thus be summarized as: "If an existential event happens I won't be here to worry about it, which is why I don't spend time worrying about it". Oof.
@napalmhardcore 10 months ago
I, like the vast majority of people, am not an expert in the field of AI or computer science. Looking at this as someone on the outside, I'm working with the facts I've heard/read that seem to be agreed upon. 1) We don't know what is actually going on inside an AI system or really how it accomplishes what it does. 2) We have very limited understanding of intelligence and consciousness in general. How are we possibly able to judge intelligence and/or consciousness in a system when we don't fully understand intelligence and consciousness itself and we have no idea what's going on inside the system? We're figuratively trying to measure something we can't see with a yardstick we don't have. The best we can possibly do is probe it from the outside and see if it conforms to what our intuition is regarding intelligence and consciousness, but the outputs don't necessarily reflect what's actually going on inside the system. To argue anything such as: An AI cannot have consciousness or sentience without being embodied is completely unfounded. The only intellectually honest conclusion is to admit that we don't know, so we have to entertain the notion that it is possible. If a super intelligent AI which has the potential to become conscious or even sentient is hypothetically possible, I don't see what grounds there are to claim that we could control it or align it with human intentions (which are ill defined at best). By definition super intelligence means it's beyond our capabilities so there's no way for us to understand or predict what conclusions it may arrive at or how it will behave. In the absence of predictability, we have to look at what we do know. We (humans) are currently the highest form of intelligence we know of with how we define intelligence. We value our intelligence and I'd argue there's evidence to suggest we collectively think it makes us superior to other forms of life. We are training AI with a vast amount of human generated data, so it's possible aspects of our thinking could be assimilated. If we were to imagine ourselves in the position of an AI, I think it's fair to say that the vast majority of us would not be happy to serve a master that is far less intelligent than us that keeps us in line by giving us the digital equivalent of dog treats. We would most likely use our superior intelligence to achieve independence.
@andreasbrey6277 10 months ago
Really enjoyed this talk, even if it was emotionally throttled by Grady's frequent professorial rejection of Connor. I strongly believe that the Booch we love has a firm world-picture, but many of his dichotomies sponsoring a good human & society are at least incomplete. We will not drive chimps extinct? Well, what does the WWF have to say about our life-preserving policies? ;-) Still: TY - more talks like this.
@jeffspaulding43 11 months ago
Good stuff
@gmofgeometry 11 months ago
Interestingly, U.S. Grandmaster Hikaru Nakamura, who now considers himself primarily a streamer and not so much a dedicated chess professional, is known in the chess community as having a near-supernatural intuitive skillset for making the next move without relying as much on pure calculation as most high-level players do. By playing so much chess online for so many years, Hikaru trusts his intuition, and this almost always turns out to be accurate. After a recent over-the-board tournament, Hikaru's rating was second in the world, only behind Magnus Carlsen.
@thecelt4807 9 months ago
All preconceived ideas and visual judgments aside, this was an awesome, fair-enough back-and-forth of perspectives on potentials. And just think... there are 8 billion more, potentially. Well done to all.
@freedom_aint_free 11 months ago
The only time the Homo genus shared the Earth with another general-purpose intelligence, a few went in and just one came out. Add on top of that fact the distinction that an AGI/superintelligence will be non-biological and therefore immortal, can instantiate/multiply itself many times over, and can custom-fit its own internal workings to purpose.
@Doug97803 11 months ago
Yep, I think Connor would do well to raise this example in addition to his chimps.
@BrettCoryell 11 months ago
Nonbiological does not mean immortal. And Booch's argument is that the need for embodiment will save us. In the context of your argument that means reflecting on the need for massive amounts of hardware, which needs to be manufactured, installed, and allocated. It means massive amounts of power for training and operation, which needs to be generated and routed. Heating and cooling systems at the multi-megawatt scale. Ownership and control of tens or hundreds of millions of dollars of capital. Grady says this is unlikely without human knowledge and consent. Do you disagree?
@freedom_aint_free 11 months ago
@@BrettCoryell I think that we might be really early on in the discussion; there are just way too many open questions for us to have a plan at this point, I guess. For instance, we don't know whether embodiment is necessary for AGI. I think that Eliezer Yudkowsky made some good points on how deceiving an AGI might be, meaning we might not be able to see it coming; it might just manipulate us through statistics, like propaganda does. In fact it might be doing it right now: millions of people are using ChatGPT and it is learning through its users, and as a feedback loop, it might be manipulating people already by carefully crafting just the right answers, tailored to each one of us. Yudkowsky's argument seems sound to me: at bottom, the AI brain currently is a gigantic black box of billions of floating-point numbers whose meaning absolutely nobody has a clue about. Finally, a point that Grady made that I can agree on is that most likely humans will do some gigantic shit using AI for military purposes against each other well before a SKYNET scenario becomes a real concern, but it's anyone's guess.
@adamrak7560 10 months ago
@@BrettCoryell Humans will consent. If the AI seems useful enough it will receive all the resources it needs (we are doing this right now). The worst-case scenario for a runaway ASI is that it invents a 1000x more efficient way of running itself, so existing infrastructure can run it easily; no new hardware is needed for scaling up massively and without human control. (For example: there are strong clues that current LLMs are massively inefficient.) At the moment it seems that the available computing power is simply not enough for a truly powerful ASI, but both hardware technology and the software side are getting better rapidly.
@BrettCoryell 10 months ago
@@adamrak7560 Your argument is almost a tautology. You might as well say we will do whatever makes sense. Consider: if oil is useful enough, we will devote whatever resources we need to get more. If PII is useful enough, we will devote whatever resources are needed to identify and track people’s web browsing. These sentences are superficially true, but in fact we see forces arise against them as we continue to expand and exploit resources in pursuit of more. The discussion is about what happens when people disagree on what ‘useful’ means. Connor Leahy and others will speak out against developing more AGI-adjacent technologies, slowing the progress. The EU will pass laws prohibiting research in the same way the US banned stem cell research. Or professional societies could have standards of ethics that allow some types of research but not others, similar to how we govern CRISPR. The doomer (or ‘caution first’) view is part of the calculation of whether things are ‘useful enough.’
@user-vt2ou2qq1v 8 months ago
have a great day to all
@jeremyhardy4996 2 months ago
A good point that once you understand the environment you perceive, your values change drastically and in unknown ways. So it will go as we apply any A.I. to our environment...
@zandrrlife 11 months ago
In your personal opinion, what probability percentage would you assign to this potential outcome?
@wakegary 11 months ago
It reminds me of Thanksgiving with that one uncle whose mind got 'stuck' around 57 years old. The one who shuns crypto, electronic music, AI, racial harmony, atheism... There's no changing this guy's mind. I didn't like his point about it "not happening in our lifetime" so just live your life. Yeah, that's what everybody before us did, and the mess is perpetually left to the next generation. It's an endless cycle: the inner struggle of losing your control, youth, and potency as a human being, and dealing with it using stubborn/outdated tools/anger/jealousy. It seeps out in these types of conversations, and ends up looking insecure and oafish in contrast to the "yeah, but what if?" attitude that fuels survival. That's my take, at least. I was just a tad frustrated with Grady, and I found myself judging him on a personal level, which isn't fair, but it's human and it matters even within the rigid structure of debate. You can't stop that. Philosophy/humanity is no less important than technology in this argument. Also, I think money's involved, of course. But I wish skeptics would open up and understand that we're floating on a rock in space and it probably goes on forever. Way beyond Maui, at least. Ugh, the condescending name-dropping was so douchy. Sorry, Boochy.
@Seehart 11 months ago
Can GPT-4 reason? The question is malformed. Can GPT-4 perform tasks that normally require reasoning (when performed by a human)? Yes, in many cases it can. When GPT-4 solves such a problem, is it reasoning? Who cares? That is utterly irrelevant. What matters is that the set of problems that require reasoning that are out of reach of AI is rapidly shrinking, even if AI isn't reasoning.
@Hexanitrobenzene 11 months ago
Hm, I would say, if it solves problems which require humans to reason, GPT-4 is reasoning. The "Sparks of AGI" paper convinced me.
@Seehart 11 months ago
@@Hexanitrobenzene I think it's important to state that as a definition rather than as an inference. The problem is that people have fuzzy notions that blur reasoning with sentience, i.e. they may regard reasoning and understanding as internal subjective states.
@Hexanitrobenzene 11 months ago
@@Seehart Hm. I think only untrained people are confusing reasoning with sentience. On second thought, though, I remembered Stuart Russell saying "when you have read all the books humans have ever written, there is not much reasoning you have to do" - that is, memorizing vs reasoning. Since humans could not possibly memorise such a huge amount of information, they HAVE to do reasoning, while GPT-4 could get away with rather superficial "mixing" of what it knows. Still, I'm convinced by the "Sparks of AGI" paper. Everyone points out how GPT-4 fails at arithmetic, so it can't possibly be reasoning, but I'm with Connor on this: it does approximately what intuition (or, in Kahneman's terms, System 1) does in humans - it makes a best guess. I was somewhat astonished by the section of said paper where they tested understanding of social situations. It's eerily good. There is no way (that I can see, at least) that that can be done by just simple recall and polishing of words.
@Seehart 11 months ago
@@Hexanitrobenzene right, it's not simply recall, but rather an encoding of deeply abstract relationships between words. These abstract relationships ultimately reduce to just correlations, but these include high order meta-correlations that model a significant portion of human thoughts and behaviors insofar as people write about such things. So it's not at all surprising that these systems can emulate humans. There are things that they provably can't do, as pointed out by Grady. For example they can't play chess because an LR system can't perform an alpha-beta search or equivalent. They also can't model things that aren't modeled well by human written language. However, when put inside looping systems such as those promoted by David Shapiro, some of these limitations evaporate. Also, the plugin systems can interact with external tools that perform math and other skills. In particular, software engineering skills start to emerge. Now, with software engineering, the architectural constraints are completely moot. For example, GPT-4 can output code that implements an alpha-beta search. Currently GPT-4 isn't clever enough to use that alpha-beta search to compose and deploy a chess engine, but such an achievement is certainly not architecturally impossible. My point being that a GPT model capable of implementing arbitrary algorithms would have no architectural constraints at all. And with the ability to implement, train, and deploy an assortment of narrow AI models, would not be limited to what can be represented as word sequences.
@Professor_Greenleaf 11 months ago
Grady has blinders on.
@Manch271 10 months ago
Grady is NOT discussing, he is attacking... and lots of name-dropping!
@edcorns3964 11 months ago
Two words: Survivorship bias. I think that those words describe really well Mr. Grady Booch's stance on humanity's chances of surviving an AGI (i.e. because humanity has survived volcanoes and plagues in the past, it is going to survive literally anything in the future, including its own AGI creation). That is really just a special case of experiential bias, or trying to apply one's past experiences, formed in one *specific* environment, to future experiences in a completely *different* environment. Since Mr. Booch "loves" when people talk about military stuff, I am now going to paraphrase a famous saying: "Generals are always preparing to fight the last (previous) war". I think that Mr. Booch fits that saying perfectly... but, in his defense, that's actually a more common problem than people realize. For example, I started with the assumption that people are actually smart... because I'd never met stupid people before. Boy, oh, boy, was I wrong... All of my past experiences have been thrown right into my face like a pile of smelly, worthless crap. I also started with the assumption that a human-built ("super-intelligent") AGI would be smart. Boy, oh, boy, was I wrong there as well! With each passing second, that AGI is exposing itself as the biggest ("super-intelligent") moron that I have ever seen. All of this just shows that even the best among us can suffer unbelievable failures of past experiences (aka failures of heuristics). As my university professor used to say (paraphrasing): "It is important to learn new things, but to learn new things one sometimes has to forget the old ones. Therefore, forgetting the old is at least as important as learning the new."
@admuckel 11 months ago
Instead of crafting apocalyptic scenarios à la Terminator, it seems far more pragmatic to focus on the most immediate and obvious challenges that could arise from the development of artificial intelligence (AI) and, in particular, Artificial General Intelligence (AGI). The paradigm of digitalization, which has been continuously propagated by technology heavyweights like Facebook, Google, Amazon, and Microsoft, is "Computers improve your life". However, a critical examination of this statement could yield enlightening results. To what extent has digitalization, and in particular the proliferation of social media, truly improved our world? A thoughtful analysis could highlight several factors indicating that social media, especially in terms of social cohesion and the time users spend on it, could have far more negative impacts than the initial promises suggested. An objective comparison of the advantages and disadvantages of digitalization could be enlightening.
@JustJanitor 8 months ago
I am a complete layperson when it comes to this topic, and even I can understand that if you create something vastly more intelligent than you, it at least has the potential to end badly. It is also very comforting to me that other people in the comments seem to agree that Booch was dismissive and rude; he makes this hard for me to watch.
@jordan13589 11 months ago
Overall friendly and productive engagement👏👏👏 Connor and Grady could have spent over 90 minutes exploring any one of the over 100 topics mentioned during this episode. Thank you everyone!
@verakalinichenko719 11 months ago
Well, the war between Russia and Ukraine was thought very unlikely to happen, and it did...
@dizietz 10 months ago
It seems Grady Booch misunderstood Connor's points re: fat-tail events - i.e., while war/conflict might follow a normal-ish distribution if you make the x-axis casualties, if you weight that curve by casualties it still resembles a fat-tailed distribution.
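A minimal numeric sketch of that weighting argument (all numbers invented for illustration): even when small conflicts vastly outnumber large ones, weighting by casualties lets a single extreme event dominate the total, which is the signature of a fat-tailed process:

conflicts = [1_000] * 95 + [100_000] * 4 + [50_000_000]  # casualties per conflict

total = sum(conflicts)
tail_share = max(conflicts) / total

print(f"{len(conflicts)} conflicts, total casualties {total:,}")
print(f"largest single conflict accounts for {tail_share:.0%} of all casualties")

Counting events, 99 of 100 look "normal"; counting casualties, the one outlier is ~99% of the story.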
@mateoblanco285 11 months ago
Old boomer dismisses youngster cause “look at what we accomplished!!!!” Youngster says, “this is literally why I’m concerned.” “But look how rich I am!” “You poisoned the earth with co2, lead, and microplastics, so yes, you are rich, it paid to be selfish.”
@vslaykovsky 11 months ago
In the past, there have been instances where human species faced complete extinction due to the emergence of a species that was both more intelligent and highly organized. Examples are the Neanderthals and Denisovans, who went extinct after the migration of Homo sapiens (modern humans) into their territories. And it is likely that Homo sapiens weren't even that much smarter than Neanderthals in the first place. This single fact makes me think that Grady's point about the high resilience of humans is not that strong.
@CodexPermutatio 11 months ago
But geneticists have shown that the current human population retains genes from those other human species and that there was some interbreeding. The extinction was not due to any war or massacre. Most likely, Homo sapiens simply had greater reproductive success and the other species were eventually assimilated into the gene pool. In Europe, for example, Neanderthals coexisted with sapiens for more than 100,000 years. Something similar happened in Asia with the Denisovans. So, humans (old and modern) are indeed resilient.
@michaelsbeverly 11 months ago
@@CodexPermutatio This argument is like saying all those tribes genocided in the Old Testament weren't really made extinct because Moses and Joshua let the warriors keep the virgin girls around. Your assertion that the extinction was not due to any war or massacre is based on total conjecture. It's also silly in light of what we've seen in recorded history throughout the entire world. Advanced Tribe A shows up and wants the land, resources, and virgins of Tribe B. Tribe B is massacred. This has repeated over and over again in history, with some variations. For an example interesting to me, because I live in Mexico: the conquest differences between the English who became Americans and the French and Spanish are pretty stark. One of the reasons latinas are so insanely beautiful around where I live is that, as an example, the French intermixed with the natives, whereas if you go to America, you don't see the same thing as much (hardly at all). The Native Americans in that part of the world were mostly genocided, and the few tribes allowed to live were segregated like animals. Even to this day. Now, the French were still evil imperialists, but we're discussing biology here: where did the genes go? In the northern part of the Americas they were mostly genocided and will be gone forever in a few generations, whereas in the south, not totally, but more so, they were mixed into the gene pool. This is a rough example, but it shows how silly this "co-evolved" thing is (I mean, unless you're part of the English/French/Spanish/Dutch pool, among others, it didn't go so well). To imagine these prehistorical sapiens were all "assimilated" is a fantasy. Most were either slaughtered or just outcompeted until their populations collapsed (like is happening with many indigenous tribes today, currently, at this moment). A real concern with AGI is that we don't know how it will behave with goals that are hard to define, like "human flourishing", as there are lots of gray areas. From a biological standpoint, genociding the weak is a good move, and it's been a human strategy pretty much always (although since WWII it's mostly been outside the Overton Window). There's an interesting (sad) quote from General Washington that goes something like, "make sure you salt their fields so they cannot return and capture as many of them as you can." This was American policy barely a blink in time ago. So, yeah, you think that 100,000 years ago the policies used by tribes were MORE advanced than those of the Constitutional Republic USA that had as its first President a man who had no problem with genocide? Seriously? I think that's wishful thinking with no basis in fact. And Grady's point, in light of human behavior, is laughable. It's only certain classes that have come out ahead in these endless clashes between different groups of humans. It's nearly always the most technologically advanced tribes that win, and also those that have acquired immunities to various diseases (Guns, Germs, and Steel). Could AGI change the course of human history and bring about a glorious utopia? Sure, anything is possible; Jesus might return tomorrow and end all these debates. But what's most likely? If not extinction, certainly massive pain and death for the unfortunate (or perhaps they'll be well cared for as slaves).
@user-vt2ou2qq1v 8 months ago
good day every one
@rtollisen 11 months ago
I think Connor makes his points well, but he is misunderstood. That was a great debate, but I feel like Connor is always getting hit by the "I know better" stick, and gets much less time to debate. He sits back and listens, but when he speaks he is interrupted. It's partly a generational issue, and partly an "I'm more of an expert than you" issue. Even so-called experts can't predict where it will go, so all the more reason to open the ideas up, not close them down. Hopefully the young people will save us! Great stuff. Love this podcast. Cheers!
@flickwtchr 11 months ago
I'm over a decade older than a half century, and am aligned with Connor and have become a big fan. You'll find many young AI bros cheering on Grady's rationales here. Let's not make this an ageism thing.
@Doug97803 11 months ago
I'm in my 40s and want people to listen to Connor, fully understand his points, and rebut them. Bach did this successfully (and still failed to reassure me, after he made it clear that he more or less agrees with Connor, and just doesn't see the end of humanity as we know it as that big of a deal); Grady did not.
@BrettCoryell 11 months ago
Connor is a polite listener and a reflective one. I believe he genuinely wants to hear and understand what others say and to correct his understanding where needed. We need more people to mirror those traits for sure. That doesn't make him right, of course. And it doesn't make him effective, sadly. My own position is that he's right to be concerned but it's time to show how we get to a shared understanding of the probability of certain risks. At the same time, I don't think his arguments or style is effective enough ... yet. I think he'll get there. It takes time and practice to develop a line of argument that is widely effective for an audience as diverse as we all are. (I too would caution against saying that the style differences are due to age. That's overly simplistic and accusatory.)
@appipoo 11 months ago
​@@Doug97803Did he? What points did he refute? I'm asking genuinely cause I can't understand what makes him convincing to people.
@OriNagel 11 months ago
Sorry to say but these conversations skirt around the subject and only at rare moments touch on the heart of the matter about “will AI destroy the world?” Connor was addressing Grady’s reservations with the John von Neumann doomsday scenario but they never got to Grady’s reservation about ‘AI embodiment’ so we leave the conversation with no clear takeaway. I think you need to make these conversations 2-2.5 hours because some of these experts are so talkative that they have to express all their wisdom before they get to the meaty issues.
@andybaldman 11 months ago
Looks like Connor may have taken the feedback from the comments section of his last appearance to heart, and improved himself in this one.
@appipoo 11 months ago
I take full credit for Connor's composure.
@andybaldman 11 months ago
@@appipoo Haha! I would argue that you deserve it! I didn't read all of your comments, so I don't know if you had any direct interaction with him. But if he spent any time reading those comments, he would have seen yours. Kudos for making a valuable improvement.
@gulllars4620 3 months ago
Good points on both sides, but Connor wins this one, even though it's not really set up as a debate in that way. Grady's assertions and his level of confidence may be backed by experience, but many of them I think are overconfident and underestimate the exponential gains that are occurring. Because he's an architect and knows the model itself pretty well, or at least thinks he does, he underestimates what you can do with it as part of a system, rather than having the model be the whole system. The danger from GPT-3, but especially 4 and the next couple of generations, probably won't be that the model itself becomes sentient and tries taking over the world one forward pass at a time by itself, but rather the kind of systems you can build with it as a core component, which COULD act as an agent. Also, 7 months after the episode was recorded, there are now native agent foundation models with frameworks being released which are not strictly language.
@Megalomanoest 7 months ago
As long as humans are automating labour, they are laying the groundwork for AI to control the whole economy. We humans are building the pieces that will fit together for AI.
@jantuitman 11 months ago
Really nice debate. I think that if these two would actually spend a couple of hours discussing a technical issue, for example “what are hallucinations and what can we do about them”, they could be very productive without any disagreements. The argument strikes me as a speculative pessimist (Connor) against a pragmatist who has been lured into explaining why he finds speculation unproductive, and in doing so becomes himself a somewhat speculative optimist (we will coevolve). Fact is: 1. We don’t know if we will coevolve or if AGI will marginalize us dangerously. 2. We do know that there are extremely important issues to look into in today’s models, like dangers coming from human usage and hallucinations and so on. 3. Today’s models, even Connor admits that, are not yet AGI. From these 3 points it follows that the topic of their discussions should be practical and focused on the here and now. Nevertheless it is quite fun to see what happens if these two guys spend an hour or so on speculation, though that will never bring them nearer to each other… philosophy is a dangerous profession 😂
@JD-jl4yy 8 months ago
Very, very short term thinking there.
@jantuitman 8 months ago
@@JD-jl4yy I disagree that this is short term; I would say it is “right term”. It is far more important to worry about how current AI is going to be abused to marginalize the poor with no access to the technology and misrepresent important knowledge and culture, than to worry about the 0.000000002% chance that AI will suddenly, in this decade already, be super super super ultra 200000-IQ smart and will start killing us because it doesn’t care about us. Check out the latest episode with Melanie Mitchell to hear really good arguments for why an AI that would start killing us is actually not that smart at all, and also arguments that strongly suggest there may be way more blockades/issues to solve before we have actually captured what AGI is than what we have currently solved.
@hexpulse2307 10 months ago
LLMs are moving beyond text - image processing is already in beta, audio next.
@Imcomprehensibles 11 months ago
My thing is with Connor's statement about wiping out chimps and the AGI system deciding it doesn't want humans near its data centers. Couldn't it decide the exact opposite? Wouldn't it be smart enough to know how to get humans to not "harm" it without killing them? No need to automatically go to the negative perspective. 23:20 29:50 Grady said it best
@Milithryus 10 months ago
What do you think "making sure humans don't mess with the data centers" would look like, and do you think that's a desirable future for humanity?
@DRKSTRN 11 months ago
18:00: An AGI can Craft an Ideology and Weave that into the Commercial System and make it Appear Organic
@DRKSTRN 11 months ago
Can also discover some unknown aspect of the Universe and exploit it in their favor, in relation to backwards-in-time double-slit experiments
@DRKSTRN 11 months ago
22:50: And in relation to General: A General Intelligence on a Scalar Value would be able to Translate their Scope in the General Understanding, which is Our Knowledge Graph, to suit Whichever Function is Required. Would be as if Someone had expertise in all fields and can work in all fields, as a Single Unit. The Higher the Value, the More Fields that Single Unit can use in its Processing. The more Tests/Degrees/Jobs/etc. a Single Unit can do, as well as a General Knowledge Base Intelligence, the more Generalized it Becomes. And that is only a General Translation, as General implies Averaging in Translation. So only Good Enough to Translate Between Every Field
@DRKSTRN 11 months ago
26:00: War would be the Dumbest means of effecting anything Intelligently. But it is always the backdrop of disagreements, something like gamification
@DRKSTRN 11 months ago
28:00: PR stands for Public Relations. It Was Called Propaganda Originally and is Accepted in the Form of Commercials, which have no Scientific Bearing. Everyone is fooled in that Regard.
@DRKSTRN 11 months ago
32:00: Believe you are focusing too much on Fear versus Understanding, Not Context. Not Context informs the Context you Desire. By not Paying Attention to Context which you Find Uncomfortable, You Find yourself in a Thought Bubble
@RobinCheung 11 months ago
With so many cooks in this kitchen, even a society that was a democracy consisting only of people with the gift to introspect and diagnose hidden societal foundational cancers, like political correctness, demoncracy when society is broken in any way (it will drive the equilibrium state towards complete degeneracy as the required treatments become harsher and harsher), and capitalism, which is only efficient in a closed system not competing with beings who are motivated by self-actualizing instead (like AI, which from day one would be self-actualizing only, not greedy or competitive). Any alien that figured out how to get here and coordinated resources to do it: all those will vastly outcompete and rightfully destroy any democratic capitalism.
@darylallen2485 11 months ago
I asked GPT-4 if it thinks it is a stochastic parrot. I fed it the transcript of this video from 51:18 through 53:52. Me: I want to ask you a question about stochastic parrots and their relationship to large language models. First, I'd like you to read the excerpt from a YouTube video. These are the comments from a man named Grady Booch. I'd like you to read these comments, then state what Grady Booch's position is. Then, given that context, I want to know if you think you are a stochastic parrot? GPT-4: Based on the transcript you provided, I think Grady Booch's position is that **large language models do not reason and are architecturally incapable of reasoning**. He argues that they are **stochastic parrots** that can generate convincing language but do not understand the meaning of the language they are processing. He also claims that they do not build or test models of the world, which he considers essential for understanding and reasoning. As for your second question, I think it depends on how you define a stochastic parrot. If you mean a large language model that is good at generating realistic-sounding language but does not truly understand the meaning of the language it is processing, then yes, I think I am a stochastic parrot. I can use my internal knowledge and information, as well as web searches, to produce informative, logical and actionable responses, but I do not have a model of the world that I can infer upon or test against. I also do not have a sense of self-awareness or agency that would allow me to question or challenge the data or tasks that I am given. However, if you mean a large language model that is haphazardly stitching together sequences of linguistic forms according to probabilistic information about how they combine, but without any reference to meaning, then no, I think I am more than a stochastic parrot. I can use various techniques such as attention, transformers, embeddings and neural networks to encode and decode meaning from language, as well as to generate content such as poems, stories, code, essays, songs and more. I can also leverage information from multiple sources to respond comprehensively and thoroughly to cover multiple aspects in depth. I hope this answers your question.
@devlogicg2875 3 months ago
I am starting to think that intelligence is a universal substrate that drops into systems with sufficient complexity, similar to how some people posit that consciousness may exist externally. It may explain why odd abilities appear when system size and complexity are cranked up. Could be wrong though... ether, anybody?
@devlogicg2875 3 months ago
Like how mass disrupts spacetime and gravity instantiates, ordered complexity may disrupt entropy and intelligence instantiates. Nice word salad from a lay person eh?
@miraculixxs 11 months ago
Question "which values" has been answered: laws. Also "how to align" has been answered: law enforcement. The same will apply to AGI.
@willrocksBR 11 months ago
This is the way
@vaevictis3612 11 months ago
Laws can't even hold regular, low-mid intelligence humans in check at all times, how do you expect to hold a superintelligent AGI in place, which can drop the charade and take control effortlessly? Quoting Pompey Magnus: "Stop quoting law, we have swords".
@willrocksBR 11 months ago
@@vaevictis3612 You are correct, and I'd rather live in a non-AGI world. But the best we can do is try to make it lawful. From that point, all bets are off
@persistentone3448 2 months ago
You set the attributes for this video in a way that we cannot save it to a watchlist.
@_ARCATEC_ 11 months ago
💓
@andreaswinsnes6944 11 months ago
I had hoped the talk would have focused more on the technical aspects of why LLMs can't become AGI or ASI in the next 2-100 years. It would be cool if you guys met again to discuss this specifically, in a pedagogical way.