e/acc Leader Beff Jezos vs Doomer Connor Leahy

48,201 views

Machine Learning Street Talk

1 day ago

The world's second-most famous AI doomer, Connor Leahy, sits down with Beff Jezos, founder of the e/acc movement, to debate technology, AI policy, and human values.
Watch behind the scenes, get early access and join the private Discord by supporting us on Patreon. We have some amazing content going up there with Max Bennett and Kenneth Stanley this week!
/ mlst (public discord)
/ discord
/ mlstreettalk
As the two discuss technology, AI safety, civilization advancement, and the future of institutions, they clash over opposing perspectives on how to steer humanity towards a more optimal path.
Leahy, known for his critical perspectives on AI and technology, challenges Jezos on a variety of assertions related to the accelerationist movement, market dynamics, and the need for regulation in the face of rapid technological advancements. Jezos, on the other hand, provides insights into the e/acc movement's core philosophies, emphasizing growth, adaptability, and the dangers of over-legislation and centralized control in current institutions.
Throughout the discussion, both speakers explore the concept of entropy, the role of competition in fostering innovation, and the balance needed to mediate order and chaos to ensure the prosperity and survival of civilization. They weigh up the risks and rewards of AI, the importance of maintaining a power equilibrium in society, and the significance of cultural and institutional dynamism.
MORE CONTENT!
Post-interview with Beff and Connor: / 97905213
Pre-interview with Connor and his colleague Dan: / connor-leahy-and-97631416
This debate was mapped with the society library:
www.societylibrary.org/connor...
Beff Jezos (Guillaume Verdon):
/ basedbeffjezos
/ gillverd
Connor Leahy:
/ npcollapse
TOC:
00:00:00 - Intro
00:08:14 - Society library reference
00:08:44 - Debate starts
00:10:17 - Should any tech be banned?
00:25:48 - Leaded Gasoline
00:34:06 - False vacuum collapse method?
00:40:05 - What if there are dangerous aliens?
00:42:05 - Risk tolerances
00:44:35 - Optimizing for growth vs value
00:57:47 - Is vs ought
01:07:38 - AI discussion
01:12:47 - War / global competition
01:16:11 - Open source F16 designs
01:25:46 - Offense vs defense
01:33:58 - Morality / value
01:48:43 - What would Connor do
01:55:45 - Institutions/regulation
02:31:50 - Competition vs. Regulation Dilemma
02:37:59 - Existential Risks and Future Planning
02:46:55 - Conclusion and Reflection
Pod version: podcasters.spotify.com/pod/sh...

Comments: 843
@sofia.eris.bauhaus 3 months ago
starts at 8:12. i think that intro is way too long btw.
@CipherOne 3 months ago
Dear god, thank you 😂
@GarryGabriel 3 months ago
10000%
@rysw19 3 months ago
Disagree, the intro was the best part; downhill from there
@iFastee 2 months ago
pseud royale... none of these will be here in 5 years. especially Doomer Connor... clearly he lives off something that doesn't exist. it's cringe and sad. all these doomers are weak in the phenotype... i salute connor for not being as low of a phenotype as eliezer
@6AxisSage 1 month ago
Thanks, wtf do they have an 8 minute introduction?
@DrFlashburn 2 months ago
I can picture Beff Jezos driving a car at high speed, closing his eyes and yelling "JESUS TAKE THE WHEEL"
@megaslayercho 28 days ago
I can imagine a scenario in which Two-Face escapes Arkham Asylum and kidnaps Beff Jezos, hanging him over a barrel full of acid, and when Two-Face spins the coin and asks "Heads or tails!" Beff goes: "Well, I think you should really consider the fact that the Roman Empire's downfall was partially caused by the degradation of their coins. Originally the Roman coins contained more gold, which over time began to be replaced with more and more copper. I mean, really, is there an objective difference between heads and tails? From the inside of the coin it's all just technically atoms, therefore I don't believe that..." *boom* (Two-Face shot himself)
@skitcostanza5130 3 months ago
This is my Super Bowl
@shauryai 3 months ago
True
@TTGTanner 3 months ago
Real
@TheManinBlack9054 3 months ago
Tbh, i really haven't heard any compelling arguments from the accelerationist side on how it is really safe to create smarter-than-human AI systems. So far it's just pseudophilosophic bs. If this is your intellectual originator, then you're not convincing me with those arguments. I really do think it would be wiser to be prudent than to be sorry.
@jdietzVispop 3 months ago
Can Beff be the Packers and the other guy be the Panthers please?
@cliddily 3 months ago
The cheerleaders flip when they wiggle their hips.
@Alice_Fumo 3 months ago
I find this extremely agitating to listen to. Responding to a hypothetical with "I don't believe in that" is the most useless response anyone could possibly give. Beff seems to try insanely hard to avoid answering ANY questions to not fall into any 'gotchas' or whatever and thus goes off on these insane tangents which do nothing other than try to get the conversation off track. This makes it impossible for Connor to ever figure out at which point their reasoning actually has disagreements and really just makes any useful discussion impossible. Let's say someone asked me what should be done if we found a way to collapse a false vacuum, I'd answer "Destroy all the research, commit suicide or set up misinformation campaigns designed to prevent anyone else from ever figuring this out - unless it is likely this is going to naturally occur anyways in which case well firstly we're fully fucked, but also pour more research into this to figure out how to prevent this from happening." Honestly, the way Beff gets off topic is like trying to talk with schizophrenics.
@--LZ--- 2 months ago
Sadly some people who seem intelligent on some topics aren't intelligent in all topics or lack social / conversational skills. For me this is also a learning experience, what not to do in a conversation and how to steer it in a more productive way. I also highly dislike this type of answer to hypotheticals, and it means either a person is conflicted within himself, confused, or trying to hide his evil. Which one of these options is better for people working on AI, I'm not sure, but all the options sound really bad.
@megaslayercho 1 month ago
Yup, watching Beff talk makes me wonder how such a smart person can be so clueless and utterly unable to give a straight answer to a single question or understand the concept of a hypothetical example. Like I can't figure out if he is just trolling and being opaque on purpose or if he genuinely fails to understand what Connor is asking him.
@OzFush 3 months ago
Connor: If we keep inventing more and more powerful technologies without caring about safety, eventually we’ll destroy civilisation by accident. Guillaume: Nah, humanity has plot armour. 1:03:25
@danielbrown001 2 months ago
Well, I mean, we’re not dead so far despite trying pretty hard to destroy one another. So maybe we do! 😂
@OzFush 3 months ago
Implicit in Guillaume’s position of maximising growth is that you need to ensure survival to achieve this over time. He considers threats to civilisational survival to be very low probability and excludes them as a rounding error, leaving only “maximise growth” as the plan to be followed. Connor is more concerned about these threats and considers them to be much more likely, with historical and logical justification.
@TechyBen 3 months ago
I'd argue there isn't historical and logical justification here. As one side is claiming it's like a nuke, the other it's like the printing press. Thus we can't yet decide which parts of history apply.
@RedmotionGames 3 months ago
The threat to civilisation is - currently and very obviously (and as determined by multiple corroborated scientific metrics and ironically) the maximisation of growth. That's not a rounding error. lol.
@NullHand 3 months ago
My argument is that the advent of a non-human, non-biological intelligence is Out-of-Scope for any historical analysis. More akin to the emergence of humanity itself, or even aerobic multicellular life. Predicting the machine Singularity is futile.
@2CSST2 3 months ago
We're talking about there being threats at all or about the odds of surviving them? There have been countless threats to humanity and to life in general on planet Earth, yet all of them have been survived and led to us being here, more advanced than ever and living in the best overall conditions of life. So in fact the data is on Beff's side. In terms of logic, refer to thermodynamics: the system exponentially favors changes of state that lead to greater dissipation of heat, hence life and complexity, not end of existence or annihilation. So still on Beff's side, and also coherent with the historical data.
@LucidLiquidity 3 months ago
The second someone tries to bring religion into a conversation like this, it's a little hard for me to trust their ability to think more practically, which is of dire importance given the stakes. We don't have time to be muddying the waters with religious ideology. We just need some solutions, and fast lol.
@MitchellPorter2025 3 months ago
This is like an updated version of the opposition that used to exist between Robin Hanson and Eliezer Yudkowsky, in the late 00s. Hanson is a transhumanist but also an economist, and thinks in terms of social systems. Eliezer thought in terms of self-enhancement and a single agent bootstrapping to power over the entire world. Robin and Guillaume emphasize holistic principles like trade, self-organization, and robustness through decentralization and redundancy; Eliezer and Connor emphasize the contingency of human-friendly values and the need for policy precision.
@MitchellPorter2025 3 months ago
@@NathanielKrefman Yes, and all earthly affairs are just epiphenomena of the surface chemistry of a cosmic dust mote... In other words, if you're a busy person who has no time or interest for any further details, then yes, you can boil it down to that. But if you are interested in history of ideas, historical context, or how any of these people think, the comparison is informative.
@kreek22 3 months ago
"the opposition that used to exist between Robin Hanson and Eliezer Yudkowsky" They never came to terms. Otherwise, great summary. The Hanson/Guillaume approach violates the Kelly Criterion.
@Levi7hart 3 months ago
it's a good example because robin has now moved to full-on antihumanism like beff. Except I do think robin's intellectual life is much more robust with original ideas; whereas beff is extremely smart, but isn't developing any new ideas. (note robin is an economist and beff is working on actual tech, so the time allotted to these things is different) But.. the concerning thing to me and most people is that guillaume and robin both believe humanity dying out for any form of technology (AI) is a good thing and an evolutionary stepping stone, and that's like the most antithetical to morality and sanity view a human can have for the world imo
@Gnaritas42 3 months ago
@@NathanielKrefman no, it's more like Connor and Eliezer suggesting we just have to control everyone on earth and Beff and Hanson are like, nah, that's not even a possibility, that's not how reality works, go sit at the kids table you idiots.
@darklordvadermort 3 months ago
@@Levi7hart on a long enough timeline humans are toast anyway without help, because our civilizational reproduction time (time to terraform venus (without AI) (mars honestly sux more than the moon, for humans)) is much greater than our expected civilizational lifespan (e.g. 10k years vs like maybe 700 years), and that's ignoring risk of total human extinction, or the concerns of the individual (we, you and i, are all gonna die much quicker than 700 years)
@martinkunev9911 3 months ago
Beff Jezos really has a problem with counterfactuals.
@ryanbigguy 1 month ago
I wonder what he would have done if he didn't have breakfast this morning.
@AstroGray 3 months ago
Starts at 8:44
@XShollaj 3 months ago
Thank you!
@covle9180 3 months ago
Doing the ai lord's work
@a97807 3 months ago
Wish I'd have scrolled down to read this first. Thanks!
@siroutrage1045 3 months ago
YouTube Nobel prize award coming
@TranshumanismVideos 3 months ago
Just watched a waterfall vs agile argument
@JohnVandivier 3 months ago
underrated comment
@matinusdisseque 3 months ago
Waterfall is linear and phase-based, criticized for its rigidity. Agile is iterative, promotes flexibility, and adapts to changes well. These are IT project concepts.
@willrocksBR 3 months ago
Where are those 'agile' guys? We have massive technical safety uncertainty and they aren't doing shit. Connor is the one doing the AI safety startup, not Beff.
@nessbrawlaaja 3 months ago
This is a surprising take to me, I would have said 100% agile vs 95% agile or something 🤷
@christopherspavins9250 3 months ago
More than machine or man.
@tearnfourstar 3 months ago
This would be so much better if it was just the debate, imo there was too much pre-roll and it was hard to find where the debate even started without any timestamps. Also there's a few minute portion played twice near the start of the debate, I think less is more when it comes to editing on these types of videos. Aside from that I'm enjoying it very much, and thank you MLST for putting it together!
@Matt-yp7io 3 months ago
yeh the editing in general on this channel is a mess. I don't even know how to describe it. It's like a soup of videos chained together with no structure and u don't even know what ur supposed to be watching
@ArtOfTheProblem 3 months ago
he's havin fun
@dungeon_architect 3 months ago
I believe Tim is looking to hire a video editor. He's aware he's not the best in the world at video editing. Fortunately his podcast is the best AI podcast so we let it slide 😁
@Hexanitrobenzene 3 months ago
@@dungeon_architect The best AI podcast may be "AI insiders" by AI Explained, but it's behind a paywall...
@dungeon_architect 3 months ago
@@Hexanitrobenzene I enjoy AI Explained (I'll try to check out AI Insiders) but it can't really match the hardcore philosophical bent of MLST, which is really its unique selling point
@jwilliamcase 3 months ago
I didn't realize Beff was so fluent in Yappanese.
@EricDMMiller 3 months ago
He sure knows how to speak! But he doesn't know how to say anything.
@amonkeysden 1 month ago
I was blown away by how he is unable to form a coherent view on the world state and current risks and opportunities. I say I was blown away, then he was unable to respond to simple questions like "should weapons of mass destruction be open sourced". He needs to watch Team America and ask himself some serious questions! 😢
@Rugg-qk4pl 1 month ago
First 25 minutes is actually crazy yapping
@alancollins8294 3 months ago
Acceleration isn't bad. In fact we should accelerate alignment research. However, accelerating at the *cost* of safety is the problem. Any life saved on the way to hurtling towards unaligned AGI is meaningless as it's ultimately destroyed. We can save more lives with safe AGI without the cost of long-term extinction in exchange for short-term benefits.
@jamesmedina2062 3 months ago
very well stated sir. I concur.
@gotgunpowder 1 month ago
There is literally zero evidence supporting the idea that AGI acceleration will lead to extinction. That is pure fear mongering not based on facts or research. The fact that you treat it as a given to the point where you think any life saved to create AGI is a waste of time speaks to how you've been brainwashed by it. The actually realistic issues with AI are not nearly as dramatic and their solutions are not as drastic as alignment zealots want you to believe.
@andrewcampbell7011 3 months ago
Man this is painful. It’s Socratic hypotheticals vs jargon laden tangents. Everyone loses
@superresistant8041 3 months ago
yeah it's a pain
@gulllars4620 3 months ago
It's an asymmetrical debate skill level discussion, and yes a bit cringe, but overall very informative. I think this could have been distilled down to about 30-45 minutes of proper debate if Beff was more of a debater and less abstract visionary optimist. Hopefully he comes back better prepared after having holes in his model(s) pointed out, like no short or long term even hypothetical/contingency plans and the naturalistic fallacy (is does not prescribe ought). Beff basically conceded E/ACC is a version of might makes right without specifically calling it that, and he's not wrong factually looking back or projecting current world state forward, but that isn't necessarily what we want or think should happen. He has sort of surrendered some agency and human-centric value systems to align his philosophy with the mechanics underlying his world model to more predictably have a future aligned with him rather than having it as an emergent separate guide for mutating the state of the world into something which fits his concrete contextualized values. I would definitely watch their follow up chats or podcasts, as they have a lot of common ground in interesting areas and seemingly a good take on that but slightly different perspectives. Creds to Connor here for being a good spirit but critical debater and not just going for trying to destroy Beff in politics style.
@DaveKeil 3 months ago
got to 10:49 with the question about should any technologies be banned, and he ducks it with "I don't think it's enforceable". I mean, come on. OBVIOUS technologies to ban - Concentration camps. Is it enforceable - yes, some country starts using them everyone else invades them to stop it. SMH.
@TechyBen 3 months ago
@@DaveKeil Those are not technologies. Like... that's not even what the word means.
@EdFormer 3 months ago
@@DaveKeil I mean, it is potentially one of the most poorly thought out questions I've heard recently. What did Connor mean by "ban"? As @TechyBen pointed out, concentration camps are not a technology, so I'll ignore that, but most technologies that are restricted in many countries (e.g. firearms) are still available to those with a license or access to the black market. Are those things "banned" by Connor's definition? If not, Beff is right to question the idea that any technology can be banned and, if so, Connor's point is meaningless.
@ekhadley 3 months ago
I feel like Connor gave up right at the finish line with the is vs ought tangent. I wish he'd asked if 'growth' was Guillaume's terminal goal or an instrumental one. I imagine Guillaume would probably say it is terminal, but this isn't compatible with his response to saying he wouldn't get rid of all humans if an ASI told him it was growth optimal. I think this is what Connor meant when he said Guillaume "doesn't really want growth". If growth is an instrumental goal for him then they probably both just want human flourishing and can move directly to the 'what are good policies' tangent.
@reidelliot1972 3 months ago
You heard the man, his terminal goal is entropy.
@eSKAone- 3 months ago
Today's humans would perish to evolutionary change anyway. Over time species change into other species 💟🌌☮️
@eSKAone- 3 months ago
It's inevitable. Biology is just one step of evolution. So just chill out and enjoy life 💟🌌☮️
@mackiej 3 months ago
For other readers: instrumental goals are pursued to help reach other goals, not necessarily for their own sake (Regulation of AI and Tech, Promotion of Open Access to AI, Adaptable and Flexible Policy). Terminal goals are pursued for their own sake, representing intrinsic values (Growth Maximization, Balance Innovation and Stability, Preserve and Enhance Civilization). Of course, we can disagree whether a goal is instrumental or terminal. This info came by feeding the transcript in two halves into GPT-4 and asking, "What are instrumental and terminal goals in the context of the full transcript?" I haven't watched the video yet.
@Dan-hw9iu 3 months ago
Connor wants a plan right now. Provides no plan. Demands a plan from guy who said we don't need one. 10/10
@blackmartini7684 3 months ago
😂 the perfect summary. To add one thing it's not that Beff doesn't think there should be a plan. It's that at the current moment implementing one could be detrimental and too early. Like he said, it needs to stabilize first.
@caparcher2074 3 months ago
It's not like that. Connor just wants him to admit that we need a plan. He's not asking him to solve alignment
@kensho123456 3 months ago
They both made it clear they were talking thematically, so no need to reduce it to "for and against" - they just expressed their differing viewpoints. BTW I agree with Conger Thingby.
@Dan-hw9iu 3 months ago
@@caparcher2074 I believe Beff repeatedly retorted that we might not need one, etc. That said, Beff also called two things similar by _saying that their dot product was large._ You know, like socialized adults often say. So if Connor missed some of Beff's points, I frankly don't blame him...
@rickevans7941 3 months ago
Because it's maybe impossible but he's trying. What's the problem here with asserting we need a plan immediately WHILE TRYING TO MAKE IT? He's not just in armchair mode, he's putting in effort towards the change he claims is necessary!!
@Qumeric 3 months ago
I learned that answering questions is apparently not aligned with growth maximisation
@nitroyetevn 3 months ago
LOL
@andreipaven4388 3 months ago
GOLD
@EliudLamboy 3 months ago
😅
@benjaminkemper5876 2 months ago
Lmao. Well to be fair he didn't want to be led into a trap that presupposes a flimsy analogy, so he was trying to cut through the analogies, a little bit too preemptively in some cases.
@ts4gv 2 months ago
@@benjaminkemper5876 AI safety guys don't tend to "trap" people with analogies. they're almost always used just to clarify opponent's position.
@dexterdrax 3 months ago
The way he fumbled with the first question tells you everything...
@matten_zero 3 months ago
2:08:00 I align with e/acc but I agree with that position. We had free markets and they devolved into the situation we have today. Maximally free markets are unstable because of power asymmetry within populations. Someone always wins and does things to maximize their own benefits
@blahblahsaurus2458 3 months ago
45:10 Connor asks why an AI that spends resources on protecting humans and making us happy would win a war against an AI that doesn't. Beff says "you can ask that about countries or companies". Well, for one thing, Saudi Arabia is very repressive and uncaring, but very successful. But more importantly: in an oppressive country people can rebel! The government is just a minority of people, they don't have the kind of advantage over the rest of the population that an AI would. Also, companies in particular can and do use child labor, pay pennies for back breaking work, and straight up use slave labor where they can get away with it. But it would probably be even worse if companies didn't care about their image and public opinion, another thing AI would not be vulnerable to.
@andersfant4997 3 months ago
Can people rebel in North Korea, Iran, Russia? Good luck with that🙂
@Rugg-qk4pl 1 month ago
There's no reason to think a sufficiently smart AI wouldn't care about its outward appearance. Safe to assume it will know that certain actions will end up leading to its shutdown
@blahblahsaurus2458 1 month ago
@@Rugg-qk4pl that's certainly possible and a fair point. And that's one reason I've always been much less concerned about the medium term danger of ASI disobeying its creators, and more concerned about the short term danger of AGI that's happy to obey its creators. What if the AGI is controlled by a dictator or evil billionaire? A small group of people could have an AGI automate a bunch of factories that build robots, and those robots could serve as an army. And as the AGI becomes more competent, the number of people necessary to build a robot army shrinks. It drives me insane when everyone assumes that all humans are on the same side, and skip the question of what humans could do with AI that *doesn't* go rogue. That will be a problem much sooner, and may be worse than anything ASI would choose to do.
@dexterdrax 3 months ago
It's better to have discussions rather than debates. Debates just highlight the merit of the speaker but not the topic itself. It's better to have questions prepared by either party beforehand so that the answers can be a bit more precise and understandable...
@randylefebvre3151 3 months ago
Makes me think about using a high discount factor vs low discount factor in RL. Guillaume is saying that the system is so chaotic that we can't and shouldn't plan, kinda like in a very hard POMDP which could resemble a bandit setting. Connor proposes to try and plan anyway, which could lead to a suboptimal policy in such a system.
@darklordvadermort 3 months ago
high quality comment
@JD-jl4yy 3 months ago
Yep. And what makes more sense, trying our hardest to optimize for a best plan, or shrug our shoulders, not even try and only accelerate? We're already accelerating at light speed atm. Do we really want to be so cynical that we shouldn't even try to come up with plans to steer things in better directions? Is that the best humanity has to offer?
@GuillaumeVerdonA 3 months ago
good comment.
@SmittyWerbenjagermanjensen 3 months ago
@@JD-jl4yy Yes, what are you talking about? We know for sure that we can't predict the weather or the market more than a couple of days out, even for the best of us. The best route is acceleration on computers; what makes computers great is the level of control, and what science is about is control, prediction and whatnot are byproducts, it's about control. Acceleration on current computers is speeding up the understanding whilst the medium is still limited, imagine slowing it down and it being still explored on much more capable hardware in the future? Not that I care, I'm just fascinated and would like to see development faster than it needs to be. Imagine externalities off a computer in the first place, wtf, if that was the case, ban photoshop and clip studio, making false images of people or bros materializing their lolicon-fun lol
@jamesmedina2062 3 months ago
@Werbenjagermanjensen no. We were accelerating very quickly when we raced to throw astronauts onto the moon. But there were at least attempts at doing this safely and to basically pull out all the stops in favor of safety whilst still accomplishing the missions. So, the costs were only money and some astronaut lives. Today, the safety is not even being prioritized at all and yet the penalties are not just a handful of astronauts but millions of human beings and possibly even the fitness of the planet for organic life. Plus the freedom of humanity from machines is at stake.
@codeantlers485 3 months ago
Wait, wait, wait. At 8:00, "Like, I don't know how to say this in a polite way, but death is evil. Like Guillaume isn't evil. Beff is evil, like Beff is an evil character. And I think you wrote him intentionally to be evil." But that's during the intro promo part. Where in the rest of the video is that part? I don't think it's there. You can't be throwing around the word evil in a promo, and then not include it in the actual video. Why isn't it part of the edit? I want to see Guillaume's response!
@SBalajii 3 months ago
agreed, that's quite important
@danielbrown001 2 months ago
I think it might be part of the pre or post-interview parts that you have to be a member of their Patreon to access unfortunately.
@smileifyoudontexist6320 3 months ago
Important Topics.. I’d like to see the key points here expanded on more., … Ahhh Yess … I like the unseen 3rd person chiming in. For a moment the discussion felt like i was scrolling pointlessly through important topics . Thanks for great work/ Perspectives … Appreciate!
@Mynestrone 3 months ago
Don't read the comments for an opinion. Watch it first.
@zachschillaci9533 3 months ago
As a physicist myself, I hate the way Beff abuses and relies on physics analogies. It’s just cringe
@Hexanitrobenzene 3 months ago
Yeah, I don't think you can apply statistical mechanics to agents who always try to outsmart "the rules of the game". Electrons in the material do not try to "outsmart" the measurement. Connor has mentioned in some other podcast, I believe "Future of Life", that we don't really have a framework to describe interactions between adversarial systems, such as in economy.
@jondor654 3 days ago
10:11 12:57 This is a very cogent response to Connor's question. However, what is wrong with "leaving something on the table" in a world of wide uncertainty? And indeed the phrase begs a second interpretation of the apparent refusal of some to leave anything on the table. The name e/acc also looks untimely in connotation, considering that we ourselves might benefit from a more restrained model of development.
@the3rdworlder293 3 months ago
Nuhh the editsss are funnnnny 😂😂 I love ittt. Who ever came up with it is my hero
@1stPrinciplesFM 3 months ago
I don't agree with Connor on much, but the insane quality of his camera setup makes me WANT to agree with him
@ramonarobot 3 months ago
He even captures himself in different angles 😅
@Aziz0938 3 months ago
That's the trick
@karasira2696 3 months ago
@@ramonarobot that was super cringe 🤣
@ageresequituresse 1 month ago
As a photographer, his camera isn't even particularly quality. He just turns up the equivalent of Photoshop's "luminance" in whatever software he's using. A rookie move.
@damianlewis7550 3 months ago
Thanks Dr Tim!
@naesone2653 3 months ago
More of these longer talks please
@micheldavidovich6940 3 months ago
Why does the e/acc guy speak like that? It seems like he could say the same thing in simpler terms. If you are a layman watching this, it's very hard to understand him
@Hexanitrobenzene 3 months ago
This was the best summary I have found in the comments: "@NathanielKrefman 4 days ago I think his [Connor's] aim was to force Beff to actually make positive assertions about values or policy and to find opportunities to point out inconsistencies/contradictions/hypocrisies. Beff sensed Connor was baiting him, and I think he avoided agreeing or disagreeing to evade being trapped. It would have gone better if Beff had just answered plainly and trusted that he could make an argument against Connor's follow-up tactic. He might have also challenged the apparent posture of both of them that Beff was the only person who needed to justify his views. Beff never forced Connor to defend a position."
@megaslayercho 28 days ago
I think Beff is intentionally using complicated words and being vague on purpose. He just either doesn't seem to understand Connor's questions or most likely feels like he is about to lose a certain argument, and rather than concede a point he just tries to sound as complicated as possible in the hope everyone gets confused. But if you are actually used to the terms he is using and that doesn't throw you off and you follow what he is saying, you'll quickly understand he is speaking high volumes of gibberish with very little volume of actual meaning/points being made.
@10produz90 3 months ago
This was a great debate. Many new things to think through
@DeadtomGCthe2nd 3 months ago
Connor Leahy - "If the church said to murder all babies and people in the world, would you do it?" Any Christian - "no" Connor Leahy - "Then you're not Christian" Real good argument there 👏 😂
@happyduck1 3 months ago
Connor's response would have been "Then you don't follow everything the church says", and if the other person would have previously claimed "I always follow everything the church says" then that would have been a very useful argument, to show that that claim was actually false.
@federico-bayarea 3 months ago
Fascinating dialogue! Love how both your visions complement each other. I add a peaceful thought. The gradient towards the reduction of violence also favors the coexistence of multiple subsystems, regardless of the values or speeds they choose. Some may choose to go at the highest speeds, while others may choose to live with the values of the current or previous eras. There's still nomadic people in some corners of the world, aren't there? And because it's not a zero-sum game, they can benefit too. I guess it's a way of saying "we're all in this together" in creating this fireball on the Earth while traveling at light speed through spacetime.
@Aldraz 3 months ago
Great conversation, so to sum up everything:

Beff Jezos (optimist) wants to leave everything to "chance"; he feels there is still a lot of time for action, wants to open-source models no matter what their abilities are (maybe up to a certain point), wants to hold off on regulation for now and wait a couple of years until things get clearer, and wants regulation to be gradual - very general and not very impactful at first - so a compute cap is an extreme regulation to him. He also believes we should fight for new decentralization methods that will replace democracy, while knowing that some centralization will likely always be a bigger entity and the two will co-exist. He wants to optimize for growth (natural progression or competition).

Connor Leahy (doomer) thinks we can actually create smart laws and not leave everything to chance and natural evolution (or physics); he feels there is not a lot of time left for action, wants to open-source models only up to a certain level of intelligence, wants to create new institutions rather than rely on government, and wants to see more cooperation in the world. He also supports some decentralization, although he thinks it would be extremely hard to implement. He wants to optimize for civilization happiness.

Both agree on a lot of points and both make good arguments, but they seem to miss the optimal solution here, which is quite obvious to me: just do everything in the middle. Yes, start regulating now, but very slowly, with laws that will not harm anyone, including the companies. No hard caps, limits, etc. For example, start with laws that define what AI is, how it differs from other algorithms, how the data for it can be gathered, etc. Maybe over time say that you can't have more than 50% of data that is malicious in content, etc. Just do it gradually with the rate of AI progression.
@bobbsurname3140 3 months ago
I don't trust the current bureaucratic consensus on what "malicious" is.
@Aldraz 3 months ago
@@bobbsurname3140 Oh me neither, that was just an example to imagine what's possible. Such a rule would be stupid.
@kreek22 3 months ago
Your solution isn't simple because the danger AI poses, now and in future, is not known and not agreed upon. The most dangerous system is no more likely to announce its intentions than Bernie Madoff was.
@potatodog7910 3 months ago
Ya
@Victor-kd9dh 3 months ago
Nuance is always key
@drhxa 3 months ago
This is hilarious, thank you for sharing!
@seanbradley562 3 months ago
Holy fucking shit. This is my evening now❤️🫡
@joshismyhandle 3 months ago
Amazing discussion
@JD-jl4yy 3 months ago
55:03 - 55:55 This seals the deal for me. I've never seen e/acc people give a good response to this.
@darklordvadermort 3 months ago
are you kidding that was the biggest fail in the conversation on the part of connor up to that point lol. LLM is optimized to predict the next word/token - but so much grows out of that. Human (or maybe genes if you like selfish gene thesis) is optimized to reproduce... just obviously not true
@Hexanitrobenzene 3 months ago
The "AI discussion" section was also illustrative. First Beff said that the best way forward is to decentralize control, and then said that the entity with the most capital (that is, compute) wins. I agree with Connor - Beff doesn't follow his own premises to their logical conclusions. He is a classic libertarian. These people are fine with destroying the world as long as their freedom is not touched... without understanding somehow that they would be destroyed along with said world.
@onagain2796 3 months ago
@@Hexanitrobenzene DRUMPF ALER!!!
@onagain2796 3 months ago
This is actually utter horse shit. All of unsupervised learning is about doing exactly what he says but getting results out of it. Optimize for X to get Y result.
@Hexanitrobenzene 3 months ago
@@darklordvadermort This is what happens when the concepts are not rigorous enough. Misunderstanding. Sure, if you program ASI to optimise for growth, you could get nanorobots, thermonuclear fusion, quantum computing, etc., but all these would be in service of growth. Such a system would terraform the Earth into an entity best suited to spread to the rest of the Solar system, which would almost certainly make it unlivable. "Oh, you need air, water and food? Sorry, not in my objective function." ...and then nanobots disassemble you, because ASI calculated a trillion ways to arrange your atoms into more useful things. More useful for growth, that is...
@DrFlashburn 2 months ago
How does this entire debate happen without discussing the difficulty of control or aligning superintelligence on short, accelerated timelines. It seems the assumption that alignment of superintelligence is possible and happens easily on accelerated AGI timelines was granted and the entire discussion was about who would control the superintelligence(s).
@shauryai 3 months ago
These debates should be premiered on tv! prime time XD
@RickDelmonico 3 months ago
"Robust versus resilient. Levee versus estuary." Dave Snowden
@Hexanitrobenzene 3 months ago
1:36:26 "Oh, you fed Cthulhu, he'll be nice to you!" Connor's sarcasm is off the scale :D
@shodan6401 1 month ago
What was the line in Lord of the Rings? "Don't you understand? There won't BE a Shire anymore."
@rubic0n2008 3 months ago
2:50:11 😂 You're pro death ☠️. That killed me!
@DJWESG1 3 months ago
No, Nick Land isn't the 'only....' I'd be more than happy to expand on this area. Please see the work of Ulrich Beck.
@masonlee9109 3 months ago
Thanks! Any specific work of Beck we should check out? Ray Kurzweil also comes to mind, seemingly in favor of an outcome where today's biological life is replaced.
@vinith3773 3 months ago
"is is not ought" doesn't mean that literally every "is" is not "ought" -_- or that from a particular "is" you CANNOT derive an "ought" You need a deeper discussion to see if you have a framework to go from that particular "is" to an "ought" If every person had to summarise the other person and make sure they are on the same page before going too deep this would have been valuable. This is mostly just people talking over each other. The intro/thing before the debate is super confusing. It's pretty cool we're having these open discussions though
@TonyJMatos 3 months ago
Connor's analogies are a little wide though, wish he would stick to the specific arguments regarding AI specifically
@matten_zero 3 months ago
That's because he's an ethicist. He enjoys talking down to people as an "authority" because he's "concerned" about humanity. Or at least likes posturing like it because it's a socially powerful position. Doesn't have to build anything, just dictate morality to others
@arde4 3 months ago
I wish he weren't so rude.
@Hexanitrobenzene 3 months ago
@@NathanielKrefman Very good summary.
@OnigoroshiZero 3 months ago
He can't support his bs logic that way. He is just a doomer that wants to create drama.
@denishclarke4470 3 months ago
I've heard this is one of the fiercest debates. Let's see
@metaronin 1 month ago
Finally watching this way overdue
@arinco3817 3 months ago
Holy shit! This is like the ultimate!
@6AxisSage 1 month ago
I used to hold myself back generating content because I thought it was so cringe but this video has shown me that cringe is not a barrier to success.
@AndersHansgaard 3 months ago
Maybe Connor Leahy's rolling eyes, blatant smugness, disinterest in questioning, skipping any and every step in all arguments to arrive at the most extreme and his guardedness aren't the best ingredients for a thoughtful debate.
@osuf3581 3 months ago
Don't think that's the side that is unable to actually argue
@ideacharlie 3 months ago
Thinking his perfect hair got to his head
@hehehe991 3 months ago
Dude is insufferable
@willrocksBR 3 months ago
Your comment added nothing to the debate. Zero substance.
@2CSST2 3 months ago
@@willrocksBR Neither did yours...
@averyhaskell1577 3 months ago
This is literally the weirdest drama I’ve ever seen in tech and I’ve lived in Silicon Valley since 2012
@joecunningham6939 3 months ago
Absolutely painful. Connor constantly interrupting with pompous condescension and Beff rambling on about simplistic economic ideologies and refusing to answer any questions or take any moral stances. No moderation to speak of. Just terrible, I'm sorry, and I am a die-hard fan of the channel
@Scott_Raynor 3 months ago
Surely interrupting pointless rambling is good though?
@joecunningham6939 3 months ago
@@Scott_Raynornot when it's just more pointless rambling and grandstanding
@user-cf1iw7tf3k 3 months ago
"e/acc movement vs AI doomerism vs some rando editing tf out of the whole discussion"
@simo4875 3 months ago
Did it not loop at some point early on? Thought I was having a stroke.
@captaincaption 3 months ago
​@@simo4875 yea I thought I somehow clicked a button somewhere, but yep it did loop probably around 5 minutes in the beginning.
@espenglomsvoll 3 months ago
Just look at the world right now, in 2024. Peace and love is easy to say, not so easy to practice. We still have a long way to go and I don't think we are ready for this AI-race. Love from Norway.
@EricDMMiller 3 months ago
We have never been able to align humans. And most of them are dumb.
@ErolCanAkbaba 3 months ago
IMO, Beff is demonstrating a great example of missing the forest for the trees.
@danielillner8187 3 months ago
Haha nice for bringing them together. Great choice
@davidrichards1302 3 months ago
This discussion was inevitable. (vide "Determined", by Robert Sapolsky)
@melasonos6132 3 months ago
Everyone should read this book
@tombjornebark 3 months ago
As our understanding deepens, it becomes clear that there's still much we don't comprehend about why certain algorithms yield the results they do. We recognize that some algorithms perform better than others under specific conditions, but the underlying reasons remain elusive. What concerns me isn't the technology itself but the oversimplified way in which it's often perceived by the younger generation. I frequently encounter the notion that happiness can be maximized by reducing our workload-a concept that, while appealing on the surface, overlooks the deeper value of having a purpose and the journey required to achieve it. It's through this journey, with its challenges and achievements, that we experience genuine moments of happiness.
@andreamairani1512 3 months ago
Connor's facts vs Beff's faith in futurism, the ultimate battle of brainiacs.
@vfwh 3 months ago
Imagine Tom Cotton or Nancy Pelosi listening to this conversation before drafting ai legislation.
@Houshalter 3 months ago
Do you really think they write anything, or even read it?
@vfwh 3 months ago
@@Houshalter Not really, no, I just liked to imagine the scene.
@danielbrown001 2 months ago
@@vfwhThey’d ignore the entire conversation, not understanding any of it. They’d ask the lobbyists who own them, “Hey, what laws should we pass with regards to this stuff?” The lobbyists would hand them a bill, and they’d put their signature on it. Then they’d go on CNN or Fox to talk about some random social issue to distract people into caring about that and ignore the AI stuff.
@masonlee9109 3 months ago
Call me a luddite, but I don't think developing autopoietic computronium bombs should be legal right now.
@TheManinBlack9054 3 months ago
Doesn't matter, we'll open source it anyways.
@41-Haiku 3 months ago
Hear hear!
@jdietzVispop 3 months ago
Oh thank god for this.
@dylanalexisalfaromonroy9468 2 months ago
These guys should do this more often, interesting discussion.
@arde4 3 months ago
Connor's argument is wrong. He assumes a regulation, even prohibition, can be imposed on world-dominating technologies, which is simply absurd. Nuclear regulations are imposed by nuclear powers on others, but nobody imposes them on them. Any restriction they obey is simply one that they have agreed on with its peers for their own benefit. The fact is one can endlessly think about impossible things, and it may be interesting but ultimately it is a waste of time. Effective prohibition of high power technologies is impossible. Deal with it: use it to your advantage, despair or spend your last days with your loved ones until its power is imposed on you, but the genie is not going back to the bottle.
@jeffrey970 3 months ago
China and Russia just announced they're discontinuing research on superintelligence because Connor doesn't think it's safe. Oh wait..
@Rugg-qk4pl 1 month ago
Is this not just a summary of beffs claim which just reduces down to ought-is? "You'll die if you try, therefore it's not right"
@ajohny8954 3 months ago
I am not a fan of the debate so far, BUT I love that you just let these guys talk, I hate structured / moderated debates Edit: I have now listened to the whole thing. It gets slightly better towards the end, but Connor really was not interested in talking it seems. He was hyperfixated on 2 things he wanted to say, and was trying to “guide” Beff into giving him the best entry for saying those 2 things. A bit disappointing
@Eggs-n-Jakey 3 months ago
I deleted my previous comment. Over an hour in and it doesn't seem like either have taken a position, or hell even said anything of substance.
@Pianoblook 3 months ago
I couldn't make it past ~1:15, I tried my best. Feels like the Beff guy is trying to have a chat and share views, while Connor is stuck in a loop of swing-for-a-gotcha --> cry fallacy when it doesn't work -> rinse&repeat. Would love the channel to consider bringing in real ethicists or philosophers to discuss these very fascinating topics! Feels like this is what I'd expect lighting up in undergrad.
@2CSST2 3 months ago
"was trying to “guide” Beff into giving him the best entry for saying those 2 things" I think you hit the nail on the head.
@jeffrey5602 3 months ago
Connor trying so hard to establish a single point of absolute truth with these weird analogies and get Beff to agree so he can then derive his whole beliefs from that and checkmate him. If in the limit we are all gonna die anyway, why do you even get up in the morning, Connor?
@caelumforder9710 3 months ago
I think Connor was trying to establish the bounds to Beff's position, which he persisted to be blurry about. Rather than trying to understand the spirit of Connor's questions, Beff was trying to answer in as favourable spirit as possible. It was good Connor didn't let that fly. It would have been better if Beff were less defensive and more eagerly shared his true model. I suspect the reason he was obscuring his true model is because he hasn't actually thought about the bounds of his own position very much. I guess he is under pressure from other growth ideologues not to show nuance, or risk getting replaced
@abby5493 3 months ago
The edits are so good and I can tell you put lots of time and effort into creating this. Thank you so much MLST.
@MachineLearningStreetTalk 3 months ago
Thank you Abby!! 🙏🐸🦁
@JohnVandivier 3 months ago
'a few edit distance away' - it repeated the whole related block twice...
@nakedsquirtle 3 months ago
Okay, but it's funny how no one is addressing that this dude's name is literally Jeff Bezos but swapping the first letters xD
@user-mq2kt1kx1c 1 month ago
👍
@kyneticist 1 month ago
1:27:15 How does one maintain equilibrium, a "careful balance" in a high entropy system with many empowered actors (remembering that e/acc says that policing, government and/or regulations are inherently not compatible with a self-balancing structure)?
@TheFrenchGenius 3 months ago
This was amazing! Gives me hope for a future where government is decided by a younger generation than the one in power right now.
@ahabkapitany 3 months ago
"Libertarians are like house cats. Fully dependent upon a system they neither understand nor appreciate." lmao the perfect summary
@Eggs-n-Jakey 3 months ago
It's really hard to imagine this conversation having value to a normal person e.g. Policy makers. There will be a party line drawn and it will be fought on that line without deep contemplation.
@kreek22 3 months ago
That usually happens. It does not always happen. Both parties in America currently support the tech restrictions imposed on China a couple of years ago. Those are really 95% intended as AI restrictions.
@Hexanitrobenzene 3 months ago
I have seen a policy discussion on AI, where Stuart Russell participated. It was so boring... Basically only Russell had something of substance to add. This one is at least interesting.
@aaronweiss3294 3 months ago
There is always some common ground
@coryc9040 2 months ago
I think nothing will happen until there's some catastrophe or it hurts a lot of people financially.
@Eggs-n-Jakey 2 months ago
@@coryc9040 maybe, it will likely be a lot of small crimes like fraud (already happening) that build until policy makers act. It's really hard to say because of the lobbying power behind this tech is insane.
@atheistbushman 3 months ago
Respect, excellent discussion
@aaronweiss3294 3 months ago
The epistemic crux here (aside from the value debates and the cruxes at 2:48:00 and 2:52:00 )is modeling Black Swans. Beff believes in being optimized based on the data we already have - bureaucratic failures, governmental corruption, technophobia, tech improving civilization- and as we get more data on AI itself, we'll worry more about alignment. By analogy: no point in trying to create a martial art from 1st principles before you have ever tried throwing a punch. We aren't at the precipice of a FOOM yet, so any plan we'll make now will be irrelevant compared to plans we can make immediately prior to AGI onset. Connor is worried that we are headed in the direction of extinction, and we aren't making real plans yet. We don't have adults who are capable of saying 'timeout, let's start making a gameplan' if alignment isn't solved and we're about to create AGI.
@ikiphoenix9505 3 months ago
Decentralised systems are hard, all fall on this point. That's why Ohad Asor needs to come on MLST. Look at "Nullary second order logic with recurrence" and what their work means for IT. Thanks for your show by the way!
@allinballsout1 3 months ago
😂😂😂 This was hilarious and rightly exposes both these bozos. Now, please no more acid tripping phd students either. Just some really good technical talk on machine learning 🙏🏽
@LuisManuelLealDias 3 months ago
Beff Jezos is just like any Silicon Valley tech genius: very proficient in maths, science and sci-fi, incredibly dumb at philosophy, morality, ethics. He refused to even begin to understand the Is-Ought fallacy. He just couldn't even begin to understand that there was anything to understand here. His insistence on an objective measurement of a moral reference being in evolution (and even in increasing entropy) has to be the dumbest smartest idea I have ever encountered in my life. You really have to be really smart to not just come up with the concept but create an entire moral framework about Entropy in this manner, and you really have to be incredibly dumb not to realise that this is an operation that you are just not allowed to do. A cognitive dissonance the likes of which might just kill us all in the name of Entropy increase.
@Hexanitrobenzene 3 months ago
Interesting summary :)
@potatodog7910 3 months ago
Interesting
@Hexanitrobenzene 3 months ago
Wait, why is it "an operation that you are just not allowed to do" ? I think it's unacceptable given a reasonable moral understanding, but not allowed ? Logically not allowed, you mean ?
@LuisManuelLealDias 3 months ago
@@Hexanitrobenzene Yes. it does not follow logically. You cannot say, X *should* be this way because it's the way it *is*. This is not an acceptable syllogism.
@Hexanitrobenzene 3 months ago
@@LuisManuelLealDias Oh, you mean the same point as Connor makes - "is" is not "ought", Hume's guillotine.
@Thedeepseanomad 3 months ago
They are both right, but about different parts. At the heart lies thermodynamics and the efficient use of energy to do work (cause change or resist it) according to preferences; it is just that they have to be physically and socially sustainable in a win-win type of setting.
@LuisManuelLealDias 3 months ago
I have a last comment to make. It was actually impressive how absent the moderation was. It was literally unnecessary, and this is a kudos to both participants of the debate, who always kept it professional, objective, and respectful of each other's time. I don't remember listening to a debate that had this level of non-moderation.
@optimusprimevil1646 3 months ago
the market isn't going to save us from skynet, the market is skynet.
@Hexanitrobenzene 3 months ago
The market is Moloch.
@deku6737 3 months ago
Joe Rogan: Did you hear they're genetically engineering super tigers? 1:51:40
@rerrer3346 3 months ago
Connor must be getting money now the hair gets smoother every debate😂😂😂 I wish he expanded on his plan more I wanted to hear him out but his cynicism got the best of him. Hope there is a part 2 in 6 months.
@ngbrother 3 months ago
The discussion about "new institutions" sounded hopeful. For me it was the silver-lining from this debate. But I'm skeptical that I'll see these new institutions realized in my lifetime - or even in my children's lifetimes. IMHO, The trajectory we are on points toward greater consolidation of power through memetic control and regulatory capture by incumbents. No amount of "open sourcing of ideas" is going to redistribute access to capital or compute in the next 50 years. The incumbents have enough existing power to stretch out their lead to the point where it becomes impractical to talk about "maintaining a narrow power differential" between levels in the hierarchy.
@KCM25NJL
@KCM25NJL 3 ай бұрын
I'm commenting early on this one, from around the 1 hour 5 mark.... trying to soak in what I've listened to so far. I think this is an important debate that comes from two extremes and desperately attempts to seek common ground. I feel however, that like many catalysing events, discoveries, paradigm shifts we go through as a species, the one thing we have never been able to do..... is "Unrub the Lantern". The genie is well and truly out of his little golden cave and we'll probably do what we always do..... we'll adapt and overcome. If we aren't capable of overcoming the age of AI as a species, then quite frankly, we were never supposed to. We often romanticise about how simple things were in the past, but they never were simple. They were just times with much less knowledge, every bit as fraught with pressure, anxiety and danger as they are today. We only made it this far because we clung on to hope and dared to press forward.... and I think that's all we have today.. and tomorrow... and maybe even the day after that.
@2CSST2
@2CSST2 3 ай бұрын
I think a crux of the whole discussion for me is this belief that the physical world naturally tends to do good things rather than not. Connor spent a lot of time trying to get Beff to say that he just wants to let physics do its thing REGARDLESS of whether that ends up being good or not, the "is versus ought". Beff, on the other hand, doesn't necessarily disagree, in that he does want to let physics do its thing, but he resists acknowledging what Connor wants him to acknowledge, because Connor tries to make him do it in a package-deal way. The package always ends up being a kind of "but really physics will end up doing terrible things, we'll all end up dying from one accident at some point. Physics is all Might is Right, physics is evil."
Beff does acknowledge there's no guarantee that physics playing out will end up well for us all. On the other hand, he does think it probably will be good; after all, it is pure physics that gave birth to humanity and everything Connor cares for, and there wasn't any "should" involved. It's not like genes thought "we should evolve in a way that lets people feel love and happiness", so clearly physics can bring about beauty and goodness without someone steering it toward what they think "should" happen.
But more importantly, Beff's point is that it's not like we have a choice anyway, and that's where I agree with him. Maybe Connor is right to argue from the viewpoint that we have things we care for, and we should fight for them and avoid risks. But if that entails going against the forces of physics, specifically thermodynamics, then however much more right your view is in the moral sphere, you'll end up losing anyway. And you'll probably increase the chances of losing by trying so hard to go against physics. In other words, if you think your values are worth fighting for, whether you "should" or not, you HAVE to ally the forces of physics with you, one way or the other; otherwise you yourself are condemning your values to a quick death. And furthermore, you have real freedom in how you ally yourself with physics. That is why Beff's e/acc is a framework more than a prescription.
But Connor doesn't want any of it, because for him physics is basically pure evil, and there's no chance of an alignment with it that preserves what he considers good. Therefore, doom, basically.
@DJWESG1
@DJWESG1 3 ай бұрын
I don't see the same "is vs ought" dynamic here, nor can I hear any proper sociological language or reference.
@JD-jl4yy
@JD-jl4yy 3 ай бұрын
If we "don't have a choice anyway", then that defeats both of their endeavors for advocating what they think is a better course of action. Fighting regulatory and safety work doesn't make sense either if all there is is "physics playing out".
@2CSST2
@2CSST2 3 ай бұрын
@@JD-jl4yy No, not necessarily. That's the last point I brought up, about there being freedom in the ways you can align with physics, and the fact that e/acc is a framework, not a prescription. The reason is that the physics we're talking about, thermodynamics, is statistical in nature; it's not something completely deterministic like gravity. For example, if indeed what you want is to fight regulatory practices and exploitable safety rules, you ought to align yourself with physics in the way you do it (which, for example, is why he engineered e/acc the way he did). Thermodynamics doesn't say "you can't fight regulatory practice"; it says the system tends to optimize to detect and use free energy where available. There's a lot of room to do many different things that make sense within that.
@JD-jl4yy
@JD-jl4yy 3 ай бұрын
@@2CSST2 This is incoherent. Determinism still holds, so where do those degrees of freedom come from? Thermodynamics is only "statistical" because of a lack of compute. If you know the position and momentum of all particles, it becomes deterministic.
@2CSST2
@2CSST2 3 ай бұрын
@@JD-jl4yy It's not incoherent at all. Again, you're conflating thermodynamics with determinism. Thermodynamics is not "statistical" because of a lack of compute; statistics is *literally* part of it. If you do a physics degree, you'll find courses called "statistical mechanics", and they're a huge part of thermodynamics. Entropy is by *design* a measure of the total number of states of a system; that's a feature, not a bug. And the point is that a system will statistically trend towards the state of maximum entropy; nowhere in there is there any attempt at computing what every single particle is doing, nor would that ever actually be possible. All the laws of thermodynamics you hear of are derived from studying the *statistics* of a mechanical system, and they're all statements about the *statistics* of that system, about the average macroscopic state.
To fully and correctly simulate the universe, you would actually need to run the entire universe. This is something that comes up in Stephen Wolfram's work on cellular automata: there is no other or shorter way to find the exact outcome of the universe than to let the universe run itself.
BUT, again, please don't jump immediately to "ah, there it is, it's determinism, can't do anything about it, therefore useless and pointless". The thing is, there's a *wide* range of possibilities between being able to predict the future exactly and not being able to say anything about it. What thermodynamics says is that there are STRONG general statements we can make about HOW the system evolves. I really hope you're trying to think about what I'm saying, because otherwise it feels like you're just parroting the same thing and not listening. I think I've explained well enough that this isn't determinism, and therefore there are many, many degrees of freedom.
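A minimal sketch of what "a system statistically trends towards the state of maximum entropy" means, using the classic Ehrenfest two-urn toy model; the particle count, step count, and printout are illustrative assumptions, not anything specified in the debate itself.
```python
# Ehrenfest two-urn model: N particles split between two urns; at each step a
# uniformly random particle hops to the other urn. The entropy of the
# macrostate "n_left particles on the left" is the log of its microstate count.
import math
import random

N = 100          # total particles (illustrative choice)
left = N         # start far from equilibrium: everything in the left urn
steps = 2000

def entropy(n_left: int, n_total: int) -> float:
    """Boltzmann entropy (k_B = 1): log of the number of microstates."""
    return math.log(math.comb(n_total, n_left))

for t in range(steps):
    # A randomly chosen particle is in the left urn with probability left/N.
    if random.randrange(N) < left:
        left -= 1
    else:
        left += 1
    if t % 400 == 0:
        print(f"step {t:4d}  left={left:3d}  S={entropy(left, N):6.2f}")

# Each individual move is random, yet the macrostate reliably drifts toward
# the 50/50 split where entropy is maximal -- a statistical statement about
# the ensemble, not a deterministic prediction about any single particle.
```
Nothing here tracks individual trajectories or assumes extra compute; the trend toward maximum entropy falls out of counting microstates, which is the sense in which thermodynamic statements are statistical rather than deterministic.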
@Neomadra
@Neomadra 3 ай бұрын
I really don't like how heated this debate is. I wish it was more academic
@injustice4194
@injustice4194 Ай бұрын
How do you enforce "Thou shalt not kill"?
@Guy_Reverse
@Guy_Reverse 3 ай бұрын
What was that 8-minute intro? Just let me watch the debate.
@jwulf
@jwulf Ай бұрын
Does anyone else think that they both could be variants of each other? They both look like eventual outcomes from the same seed person.
@ricksminecraft
@ricksminecraft 3 ай бұрын
One side makes the case for his beliefs and goals, while the other side continuously tries to trap him in word games by "misunderstanding" what he said. I couldn't make it the whole way through!!!
@dungeon_architect
@dungeon_architect 3 ай бұрын
Connor is just using the Socratic method to expose the incoherence of Beff Jezos' purported beliefs. Though I don't think his beliefs are actually incoherent; he just communicates them that way, since if he communicated his desires clearly, he would reveal his real low-empathy, anti-human worldview.
@ricksminecraft
@ricksminecraft 3 ай бұрын
@@dungeon_architect I find myself aligning with your view on this. While Beff Jezos' ideas come across as fairly clear to me, despite their complexity, it's Connor's approach that adds a layer of frustration to the dialogue. His reliance on the Socratic method seems less about seeking clarity and more about obscuring his own stance. As a software engineer deeply immersed in AI and programming, I appreciate straightforwardness and precision in communication, especially when discussing nuanced topics like technology ethics. This conversation could have been a rich platform for exploring differing viewpoints on such critical issues. However, Connor's reluctance to articulate his own beliefs clearly detracts from the potential depth and constructiveness of the exchange. It’s a missed opportunity for a meaningful dialogue that could benefit the wider audience, particularly those of us in the tech field who grapple with these ethical dilemmas in our work.
@Low_commotion
@Low_commotion 3 ай бұрын
This conversation has made both of them seem more likable & reasonable to me than any other appearance I've seen of them. I think Connor underestimates the possibility of nuclear-fission-style indefinite stagnation, with fear becoming a self-reinforcing cultural loop (I'm bearish on new/modular nuclear), but I also think the line between that & _his_ fear of accelerating faster & faster while understanding less, until catastrophe, might be razor-thin in the case of AI. I don't think it's unthinkable that democracies outright ban the technology out of the currently prevalent techno-pessimism, and China, at least, seems eager to shut the door on AI (at least on consumer AI). But at the same time it might be hard to scale even _just_ interpretability as quickly as capability, and I think we all know interpretability is not alignment.
@41-Haiku
@41-Haiku 3 ай бұрын
For clarity: The "AI Doomer" crowd -- the experts and others who claim that AI is somewhat or very likely to kill all of humanity in the relatively near future -- are mostly techno-optimists. If you asked almost any of these people "would you be glad if we completely stopped work on AGI, but accelerated most other technologies including narrow AI," they would say "Of course! That would be a wonderful world!" For most experts, the reasons they have for expecting doom from AI are mostly technical reasons related specifically to AI. Half of all published AI researchers think there's a 10% or greater chance that AGI will lead to a bad outcome on par with human extinction.
@Appleloucious
@Appleloucious 3 ай бұрын
One Love! Always forward, never ever backward!! ☀☀☀ 💚💛❤ 🙏🏿🙏🙏🏼
@josephgorka
@josephgorka 3 ай бұрын
What are they talking about at 48:55? What are 'EAC' and 'EA'?
@Victor-kd9dh
@Victor-kd9dh 3 ай бұрын
e/acc refers to effective accelerationism, maximising growth because thermodynamics and the laws of physics seem to push systems that way. EA is effective altruism, a philosophy and community focused on answering the question, "How can we best use our resources to help others?" But it's kind of a cult with fucked-up takes as well. Both movements have become polarized.
@josephgorka
@josephgorka 3 ай бұрын
@@Victor-kd9dh thank you for getting back to me about this one. Much appreciated 🙏🙏🙏
@XOPOIIIO
@XOPOIIIO 3 ай бұрын
We should focus on using our knowledge to live as simple and sustainable a life as possible.
@GalenMatson
@GalenMatson 2 ай бұрын
More!
@disarmyouwitha
@disarmyouwitha 3 ай бұрын
Line goes up!