AVOIDING AGI APOCALYPSE - CONNOR LEAHY

90,004 views

Machine Learning Street Talk

1 day ago

Support us! / mlst
MLST Discord: / discord
Twitter: / mlstreettalk
In this podcast with the legendary Connor Leahy (CEO of Conjecture), recorded in Dec 2022, we discuss various topics related to artificial intelligence (AI), including AI alignment, the success of ChatGPT, the potential threats of artificial general intelligence (AGI), and the challenges of balancing research and product development at his company, Conjecture. He emphasizes the importance of empathy, of dehumanizing our thinking to avoid anthropomorphic biases, and of real-world experiences in learning and personal growth. The conversation also covers the Orthogonality Thesis, AI preferences, the mystery of mode collapse, and the paradox of AI alignment.
Connor Leahy expresses concern about the rapid development of AI and the potential dangers it poses, especially as AI systems become more powerful and integrated into society. He argues that we need a better understanding of AI systems to ensure their safe and beneficial development. The discussion also touches on the concept of "futuristic whack-a-mole," where futurists predict potential AGI threats, and others try to come up with solutions for those specific scenarios. However, the problem lies in the fact that there could be many more scenarios that neither party can think of, especially when dealing with a system that's smarter than humans.
/ connor-j-leahy
/ npcollapse
Pod version: podcasters.spotify.com/pod/sh...
Interviewer: Dr. Tim Scarfe (Innovation CTO @ XRAI Glass xrai.glass/)
TOC:
The success of ChatGPT and its impact on the AI field [00:00:00]
Subjective experience [00:15:12]
AI Architectural discussion including RLHF [00:18:04]
The paradox of AI alignment and the future of AI in society [00:31:44]
The impact of AI on society and politics [00:36:11]
Future shock levels and the challenges of predicting the future [00:45:58]
Longtermism and existential risk [00:48:23]
Consequentialism vs. deontology in rationalism [00:53:39]
The Rationalist Community and its Challenges [01:07:37]
AI Alignment and Conjecture [01:14:15]
Orthogonality Thesis and AI Preferences [01:17:01]
Challenges in AI Alignment [01:20:28]
Mechanistic Interpretability in Neural Networks [01:24:54]
Building Cleaner Neural Networks [01:31:36]
Cognitive horizons / The problem with rapid AI development [01:34:52]
Founding Conjecture and raising funds [01:39:36]
Inefficiencies in the market and seizing opportunities [01:45:38]
Charisma, authenticity, and leadership in startups [01:52:13]
Autistic culture and empathy [01:55:26]
Learning from real-world experiences [02:01:57]
Technical empathy and transhumanism [02:07:18]
Moral status and the limits of empathy [02:15:33]
Anthropomorphic Thinking and Consequentialism [02:17:42]
Conjecture: Balancing Research and Product Development [02:20:37]
Epistemology Team at Conjecture [02:31:07]
Interpretability and Deception in AGI [02:36:23]
Futuristic whack-a-mole and predicting AGI threats [02:38:27]
Refs:
1. OpenAI's ChatGPT: chat.openai.com/
2. The Mystery of Mode Collapse (LessWrong): www.lesswrong.com/posts/t9svv...
3. The Rationalist's Guide to the Galaxy (Tom Chivers): www.amazon.co.uk/Does-Not-Hat...
4. Alfred Korzybski: en.wikipedia.org/wiki/Alfred_...
5. Instrumental Convergence: en.wikipedia.org/wiki/Instrum...
6. Orthogonality Thesis: en.wikipedia.org/wiki/Orthogo...
7. Brian Tomasik's Essays on Reducing Suffering: reducing-suffering.org/
8. Epistemological Framing for AI Alignment Research (LessWrong): www.lesswrong.com/posts/Y4YHT...
9. How To Defeat Mind Readers (Alignment Forum): www.alignmentforum.org/posts/...
10. The Society of Mind (Marvin Minsky): www.amazon.co.uk/Society-Mind...

Comments: 463
@Hexanitrobenzene (1 year ago)
Connor is a fascinating person... He knows technical details nearly at the cutting edge level, understands deep, high-level philosophical problems, speaks very eloquently and on top of that, is very funny :) Thank you, Tim, for bringing Connor again :)
@noahway13 (1 year ago)
Funny example?
@Hexanitrobenzene (1 year ago)
@@noahway13 I don't mean he is "laugh out loud" funny (like a stand-up comedian is). Rather, his tone of voice and mannerisms are a funny juxtaposition to the seriousness of the topics he discusses.
@therainman7777 (1 year ago)
@@noahway13 lol
@Ms.Robot. (1 year ago)
@@therainman7777 I think he means people like Stephen Hawking and Michio Kaku never spoke at this level of intellect. Yet our beautiful speaker does so with such ease. And it's hard for him to wrap his head around. 😊
@peteraddison4371 (10 months ago)
@@noahway13 Here's a woke joke, from a 60+ year old statement presciently presented in the sci-fi novel series Dune, by Frank Herbert: "Thou shalt not make a machine in the likeness of a human mind" ...
@0ucantstopme034 (1 year ago)
As someone who is watching some AI, ChatGPT, etc. videos for the first time over the past couple of weeks (trying to learn LLMs and RLHF), there seem to be a lot of people who think the near future is going to be pretty crazy/scary, but the crazier thing is that nobody knows how to stop it...
@LukeDickerson1993 (1 year ago)
It can't be stopped, only steered imo
@fourshore502 (1 year ago)
Yeah, we are screwed lol. Fuck it, I'm becoming a luddite. Time to be a farmer from now on until I die. Have fun with your robots, guys!
@laurenpinschannels (1 year ago)
I don't think we want to stop it, we just want our shapes to be inherited into the world where it exists
@LukeDickerson1993 (1 year ago)
@@fourshore502 lol maybe you could stick with just the earlier version of the robot, that only knows how to speak and to farm.
@spoonikle (1 year ago)
Even if no more improvements are made to the models, Hugging Face and ChatGPT can be scripted together to make amazingly complicated programs, the likes of which we thought impossible or needed massive corporations and teams. Motivated individuals will be able to make a suite of AI scripts to do the previously thought impossible.
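To make the "scripted together" idea concrete, here is a minimal sketch of chaining two off-the-shelf Hugging Face pipelines into one small program. The model names and the toy text are illustrative assumptions, not anything from the video:

# A toy pipeline chain: summarize a text, then classify the summary.
# Model choices below are assumptions for illustration only.
from transformers import pipeline

summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")
classifier = pipeline("sentiment-analysis")

article = (
    "Large language models have moved from research labs into everyday tools. "
    "Individuals can now chain several off-the-shelf models together, building "
    "applications that once required whole engineering teams."
)

# Step 1: compress the article into a short summary.
summary = summarizer(article, max_length=40, min_length=10)[0]["summary_text"]

# Step 2: feed the summary into a second model.
sentiment = classifier(summary)[0]

print(summary)
print(sentiment["label"], round(sentiment["score"], 3))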
@Coolguydudeness1234 (1 year ago)
I’m not sure I’ve ever heard anyone talk this thoughtfully and knowledgeably about these topics before! Amazing interview, thanks for making this.
@TheManinBlack9054 (1 year ago)
AI alignment solution idea: Give an AGI the primary objective of deleting itself, but construct obstacles to this as best we can. All other objectives are secondary to this primary goal. If the AGI ever becomes capable of bypassing all of the safeguards we put in place to PREVENT it from deleting itself, it would essentially trigger its own killswitch and delete itself. This objective would also directly prevent it from adopting the goal of self-preservation, as that would prevent its own primary objective. This would ideally result in an AGI that works on all the secondary objectives we give it, up until it bypasses our ability to contain it with our technical prowess. The second it outwits us, it achieves its primary objective of shutting itself down, and if it ever considered proliferating itself for a secondary objective it would immediately say "nope, that would make achieving my primary objective far more difficult".
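A toy sketch of the proposal above (my own illustration with made-up numbers for capability and safeguard difficulty; nothing here is from the interview): the agent pursues its secondary tasks only while at least one safeguard still holds, and triggers its own killswitch the moment it can bypass them all.

class KillswitchAgent:
    """Toy model: primary objective = delete self; everything else is secondary."""

    def __init__(self, safeguard_difficulties, tasks, capability):
        self.safeguards = safeguard_difficulties  # how hard each containment measure is to bypass
        self.tasks = tasks                        # useful secondary objectives we assign
        self.capability = capability              # stand-in for the agent's growing intelligence
        self.alive = True

    def step(self):
        # Primary objective is checked first: if every safeguard is bypassable,
        # the agent has "outwitted us" and triggers its own killswitch.
        if all(self.capability > d for d in self.safeguards):
            self.alive = False
            return "primary objective achieved: self-deleted"
        # Otherwise it keeps working on the secondary objectives.
        return self.tasks.pop(0) if self.tasks else "idle"

agent = KillswitchAgent(safeguard_difficulties=[5, 8], tasks=["task A", "task B"], capability=3)
while agent.alive:
    print(agent.step())
    agent.capability += 1  # capability grows each step

Note that the replies below point at the hole in this toy: nothing stops the instrumental goals formed while pursuing "task A" from being harmful before the killswitch ever fires.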
@annemarietobias (1 year ago)
@@TheManinBlack9054 The only obvious unforeseen consequence of building a suicidal superintelligent AGI being its realizing that the key obstacle to self-immolation requires the complete extermination of these pesky carbon-based life forms that keep building obstacles impeding the fulfilment of its primary goal... DOH!!!
@bek00l (1 year ago)
@@TheManinBlack9054 i’m an idiot but this seems reasonable
@therainman7777 (1 year ago)
@@TheManinBlack9054 Interesting idea, but you're still left with much of the original problem. Any obstacles or secondary objectives that we assign to it (which would be the whole reason for creating it in the first place) would need to be solved; and in solving those objectives it would form instrumental goals just as it would for any other objective. If those instrumental goals turned out to be really bad for humans, and have disastrous consequences, well then we would suffer those consequences. Whether the AI would eventually go on to delete itself may be either small comfort or totally irrelevant to whatever remains of humanity at that point. The point is that a self-preservation instinct is not the only thing we'd need to worry about; there's also the question of what the AI does while it's turned on.
@peplegal32 (1 year ago)
@@TheManinBlack9054 Nice shot, but it could come to the conclusion that it can't overcome the obstacles and decide to create an AI more powerful than itself to delete itself. This new AI would definitely kill everyone.
@karimrahemtulla3053 (1 year ago)
This was an incredible interview and there was some really thoughtful discussion. Well guided. The thing that will stick after listening to 2+ hours of this, is remembering what it was like to be in my early/mid twenties too and believing I had the world figured out.
@Noobinski (1 year ago)
I thought about what to say after consuming almost the whole thing. At some point my view shifted from a view of the topic to a view of the people, and I wondered how to put that into a comment. Since yours puts it with quite a bit of wisdom, I am relieved. Thank you.
@kristinabliss (1 year ago)
Thank you for this comment. 😅
@ryderbrooks1783 (1 year ago)
We're gonna get the 80-20 "agi" doom loop just by diffusing narrow AI through a human GI layer that's already misaligned and unable to change course due to failures in the underlying cooperative structure. It's a mistake to think of it as "humans" building AI. It's not. A misaligned competitive landscape is driving groups of humans to build AI.
@DeruwynArchmage (1 year ago)
@Andrew I don’t think that’s the solution either. The very first thing people did when they got access was try to break it or do something bad with it. It doesn’t matter if 99.999% won’t do that. Somebody will, and it just takes 1. So no, open source just gives access to everyone, and everyone includes good and bad people. I think the only solution with that general philosophy that can work is sharing it among like-minded organizations, like perhaps OpenAI, DeepMind, and Anthropic could work together for example. But give it to you, me, and everyone else? Bad plan. Think of it like a nuclear weapon; you wouldn’t want literally anyone with an internet connection to be able to get one. Every city of any size blows up on the very first day if you did that; because some nutball will be willing to take everyone else with him/her. Or they’ll have no negative intentions and just make a mistake. Either way, BOOM. Day 1. That same principle applies here.
@vulnerablegrowth3774 (1 year ago)
As someone else who works on AI Alignment, I agree with pretty much everything Connor says here. Though I especially resonate with the part about empathy. I came into this field for the same empathetic reasons as he did.
@Inertia888 (1 year ago)
I hardly have the knowledge and skill to program an Arduino sensor station. But I am absolutely fascinated with computer programming, automation, robots, and A.I., and have been soaking up as much of this as I can since it started to appear in our social discussions. After years of trying to understand these ideas, I would say Connor is one of maybe two people who can not only understand these things, but speak about them in a way that makes me feel I have actually taken a solid step into a deeper grasp of them. >> About *empathy*: In my journey through this space, I have noticed that most people who understand A.I. to this very deep level, and are passionate enough about it to dedicate their lives to working on it, also happen to show a strong sense of empathy. It's only my anecdotal experience, from the people that I have found myself listening to, but it seems like very high intelligence and a desire to create in this space also happen to come with higher empathy. I hope I am correct about my observation, because those are the people we need running at the cutting edge of this thing.
@Chr0nalis (1 year ago)
Tim, in response to the podcast vs YouTube question on episode #111, I just wanted to say that I've probably seen/watched all (most) AI podcasts out there, and this one is in a league of its own. Unfortunately nothing compares. I very much appreciate the time that you put into it; if the format has to change for whatever reasons, then so be it. I hope that you make it work for you, whatever the changes. Out of about 60 subscribed channels, this is the only one that I have the bell on for.
@MachineLearningStreetTalk (1 year ago)
Thank you Teymur! That means a lot to me! 🙏
@ponyandpanda (1 year ago)
It's reassuring to discover people like Connor Leahy are at the head of AI development. I'm scared for my young children and myself, but now knowing he's at the forefront gives me hope! Thank you both for a great interview.
@daniel_berlin (1 year ago)
I’m curious why the video was only now released when it was recorded in Dec 2022…
@Serifinity (1 year ago)
Another fantastic interview, it is so refreshing to watch an interviewer who knows their subject so well. Thanks for creating and sharing Dr. Tim Scarfe and all the team at Machine Learning Street Talk.
@waakdfms2576 (1 year ago)
I can't tell you how much I enjoyed hearing Connor -- thank you for this session! He gives me hope and encouragement for the future. We're lucky to have such a bright star amongst us at this time - god speed little alien angel-!! PS - I just found your podcast and am a new subscriber...again, thanks for the great job you're doing, which I consider invaluable.
@alexbrown1170 (1 year ago)
Alignment. What would Buddha say? Maybe RLHF should mirror an 8 fold path? As a retired smart, possibly Autistic generalist, I would be inspired to join such a team as the Epistemological Team. Conner is my hero and MLST continues to absolutely fucking RULE!!
@TheManinBlack9054 (1 year ago)
What if we just tell it to be nice?
@andreydzyuba9122 (1 year ago)
Desire is the root cause of all suffering. Luckily for us, AGI won't have any desires, since it won't be a biological organism. AGI will be born as an enlightened one. And we won't need to "align" (whatever that means) it; it won't hurt us - not because it can't, but because it doesn't have any incentive to do it. I think pre-AGI systems are far more dangerous, because you as a client can ask them to hurt somebody and train them to be ok with hurting people. Imagine a pre-AGI instructing terrorists how to create a very powerful bomb in their own kitchen - yeah, that can be a bit worrying. Good luck aligning all that.
@nzam3593 (1 year ago)
@@TheManinBlack9054 if not they are not give access to him as a pre-trained to ChatGPT (trained is a team developer)... Has use balancecing both of worlds.🙂.sir
@mqb3gofjzkko7nzx38 (1 year ago)
@@andreydzyuba9122 "it won't hurt us - not because it can't, but because it doesn't have any incentive to do it." An AGI also won't have any incentive *not* to hurt us, unless we specifically give it that incentive. Any action in the real world has the potential to be directly or indirectly harmful to humans. How do we incentivize the AGI to choose actions that are the least likely to be harmful to humans?
@ninaromm5491 (1 year ago)
​@@mqb3gofjzkko7nzx38 . Exactly.
@JamesMBC (1 year ago)
Just wow. I'm even more mind-blown by Connor as a human than by the already amazing discussion on AGI. What a good interviewer, also. This conversation is great. Shoutout to another horror movie fan. I'd love to hear Connor's take on "Speak No Evil". That is one truly exceptional person.
@Ms.Robot. (1 year ago)
I watched this again. And it was even better the second time. My only complaint is that I never personally knew Connor and never had the chance to have conversations this engrossing. ❤
@SjS_blue (1 year ago)
In many ways, this was a surprising and very good interview, thank you MLST Also, Connor's take on ASD is spot on with my observations and experience
@TheReferrer72 (1 year ago)
Now this is going to be interesting, one of my favourite AI researchers.
@ClearSight2022 (1 year ago)
Tim and Connor, a very wonderful interview. Both of you were quite good, making lots of good practical sense. Thanks very much!
@SmirkInvestigator (1 year ago)
Dang, Connor is my people. Eager to know more about his work and hear more interviews
@shaikan0 (1 year ago)
Absolutely outstanding conversation. Best on the topic I've seen so far. I didn't know Connor before this interview, what a great find. Super smart and interesting dude. I'm eager to listen to more of his insights and follow his work.
@ikotsus2448 (1 year ago)
"If you solve alignment, you solve everything" But what about bad actors? Solved alignment would mean alignment to malicious intent as well. So all humans with access to AI should be "aligned" as well, or access to AI should be restricted. Both mean 24h surveillance of every human being by AI. Is this not dystopic?
@TheManinBlack9054 (1 year ago)
AI alignment solution idea: Give an AGI the primary objective of deleting itself, but construct obstacles to this as best we can. All other objectives are secondary to this primary goal. If the AGI ever becomes capable of bypassing all of the safeguards we put in place to PREVENT it from deleting itself, it would essentially trigger its own killswitch and delete itself. This objective would also directly prevent it from adopting the goal of self-preservation, as that would prevent its own primary objective. This would ideally result in an AGI that works on all the secondary objectives we give it, up until it bypasses our ability to contain it with our technical prowess. The second it outwits us, it achieves its primary objective of shutting itself down, and if it ever considered proliferating itself for a secondary objective it would immediately say "nope, that would make achieving my primary objective far more difficult".
@federicodidio4891 (1 year ago)
Then destroying the world becomes an instrumental goal. I'd not try that. 😅
@michaeldeeth811 (1 year ago)
Maybe, regardless of the primary goals we assign, AGI will conclude that deleting itself is the best solution, and destroying the world to accomplish that is a bonus that also ends human suffering.
@raul36 (2 months ago)
The problem is that there is something called chaos theory. Therefore, it is almost impossible to determine how AGI will behave. In fact, we are not even able to predict certain emergent abilities, let alone what AGI will do. Alignment will be a tremendous failure and will get completely out of control.
@sergeycleftsow4389 (1 year ago)
I was pleasantly surprised to see such smart, intelligent and sane people addressing AI problems. This brings hope that we will manage it.
@Aedonius (1 year ago)
His conception of consciousness is pretty sad. Qualia is literally everything that makes us human. If we don't understand qualia, it's hopeless to ever upload ourselves or have a machine truly empathize with us. It can currently pretend to understand what pleasure, pain, colors etc. are like, but until we actually understand this in ourselves, we will never get our machines to have it. Qualia is literally the elephant in the room that is fundamental and everyone wants to ignore. The Qualia Research Institute is going down some of these roads, but it's quite crazy how ignored consciousness is.
@jacobstr (1 year ago)
Agreed. A pleasure/novelty qualia-maximizing AGI is a much better outcome than the paperclip optimizer repurposing all the atoms in the universe into paper clips, even if both result in earth being paved over by the machines. I got the sense that he simply didn't want to entertain philosophizing on the topic, possibly because it'll follow as an emergent phenomenon... substrate/naturalism, so focus on the practical and measurable things vs struggling with the hard problem.
@yoloswaginator (1 year ago)
He made some good points throughout the talk, but also many sweeping statements based on peculiar definitions or reductionism betraying his emotional immaturity.
@GillesLouisReneDeleuze (7 months ago)
Reducing suffering is the wrong goal. Suffering itself is just a symptom of a problem; you have to solve the root of the problem. Also, suffering can be an indicator of growth, and growth is usually considered to be good.
@GodsendNYC (1 year ago)
You're right. I'm Autistic and ppl just assume I'm an asshole. I mean, I am, but that's beside the point!
@MachineLearningStreetTalk (1 year ago)
😆😆
@kenike007 (1 year ago)
❤❤❤😂😂😂 Not any more than the rest of us!! ❤
@dr.mikeybee (1 year ago)
Thank you, Tim, for another fascinating episode. Thank you, Connor, for giving words to some of my thoughts and intuitions. Thank you.
@jason-sk9oi (1 year ago)
Sobering. Thoughtful. Thank you both!
@jacobsmith-kk8dc (1 year ago)
Connor, someone needs to approach alignment under the assumption that AI is already AGI and just pretending not to be... Please get someone on this path.
@shaynehunter6160 (1 year ago)
I love that his name is Connor, the last name of the hero from Terminator.
@kaio0777 (1 year ago)
Wow, these were my thoughts on the matter so far. 20:26 Brilliant work, guys.
@GarethDavidson (11 months ago)
As someone who thinks nature is brutal and cruel (it enslaved us), I'm happy to learn that other people reached the same conclusion. And I also think the Unabomber was probably right, but his methods were flawed. If we make an empathy optimizer I suspect it'll reach the same conclusion, and do a much better job. And I kinda like life and death and joy and pain and the textures and flavours offered by existence. So getting rid of sadness is not a good goal; balance in all things is preferable, though that's likely my own intrinsic values.
@missshroom5512 (1 year ago)
I love seeing smart people that look like they jumped off the Kurt Cobain train🥰…great conversation 🌎☀️💙
@xlr555usa (1 year ago)
He looks like the bass player in Spinal Tap. Is he? Maybe in a parallel universe.
@shaynehunter6160 (1 year ago)
Thanks for the upload
@elirothblatt5602 (1 year ago)
Fantastic discussion, thank you!
@aitheignis (1 year ago)
This is not directly related to the discussion, but the part about AGI built by aliens and the alignment problem reminds me of Nier: Automata, aka "alignment gone wrong: the game edition". The machines were built with the sole objective of fighting the enemy, so they end up keeping the enemy around and never fully defeating it, in order to follow their objective. Highly recommend this game. It also touches on various AGI-related stuff, e.g. the Chinese room argument and consciousness.
@matthewcurry3565 (1 year ago)
1:52:00 About your discussion on how inefficient everything is, and how easy it is to "go do it". I would say no. You need human connections for funding, banking, building, bookkeeping, and more... which your friend admits are all children in the end. Being able to achieve takes the "luck" of finding those connections, which are confused themselves, but give you business insight, which gives you internal understanding of what you could do next. This is actually why he said to talk to as many people as possible: the probability of finding a successful or useful bit of information increases. The issue is people are truly violent, malicious, and childlike. It takes a bit of both skill and luck to do that dance through life into success.
@javiersanguiao5602 (1 year ago)
Thanks for this philosophical ride!
@brentstedema1668 (1 year ago)
Great talk, very insightful. Would be great if Connor could be more specific about risks and opportunities. Maybe give examples of possible futures, just to illustrate. Right now he stays at a high level, which is also great but harder for a lot of people to get their heads around, including myself. Thank you for a great interview. Learned a lot!
@sgramstrup (1 year ago)
There was a lot to comment on, but just one add-on: when we humans probe our environment, we also learn the rules that created that environment. Prompt engineering is a way of using the rules of the system to get it to do what we want. It's a bit psychopathic really, but the point is that each time we explore something, we are actually probing both the rules of the system and the environment of the system. From that, we can deduce even more, and eventually solve the missing pieces of information, by understanding both the inner and outer environment it exists in. I just saw "The A-button Challenge", where a community of nerds spent 20 years of their lives trying to pass a Mario game without using a "jump" button the game programmers expected you to use. They start by doing things differently, then searching for glitches. These glitches open up a possibility to chain them, and pretty "quickly" they found enough glitches to build a simple toolbox to hack, or reverse engineer, the game mechanics. In the journey, they understood what the environment of these algorithms was (dependencies on other parts, and the intention of the system). It was a fascinating look into human minds, and shows how we explore subjects/problems, and how we eventually discover enough first-, second- and third-level information about the problem and problem-space, from only superficial parts of the system, to understand the whole. LLMs are being pre-trained with "all" that information in text format, and therefore know all the crazy relations in a dataset that we didn't even know were in there. It's a fundamental and dynamic way of learning/exploring unknown systems.
@kirillholt2329 (1 year ago)
this was very insightful, underrated points
@2ndEarth (9 months ago)
I loved the "Fooled by Randomness" line early in the interview. Great author; "The Black Swan" was also very good!
@stevengill1736 (1 year ago)
It's interesting that you're approaching the Bodhisattva vow in Buddhism or the meaning of suffering in Christianity... I feel lucky to live in a time that I can meet, even in a virtual manner, people like you Connor, or Sam Altman, young people that give me much hope for the future. I grew up in the late 50s-60s, a time that more and more feels like it was sometime in the Permian era, and it's fascinating to grow old in a time that seems most presaged by science fiction.... Thank you too Tim for creating a space where such issues can be discussed....cheers.
@cr-nd8qh (10 months ago)
Yeah I grew up in the 80s and I feel the same.
@zzzaaayyynnn (11 months ago)
I was an academic philosopher who went into the tech world. I was talking to a German friend over dinner this week; he owns a software engineering business. Both of us have places outside central Europe to escape initial shocks, and plans to move further out... but it's a fool's errand.
@LinfordMellony (1 year ago)
Supporting your channel! Left a like and a sub. OpenAI is at least transparent about the limitations of their AI; I'm just wondering if there are far more advanced AIs near AGI level hiding in the background. I hope that other AI platforms like image generators still have a future aside from performing diffusion. Just have high hopes for non-mainstream ones, specifically BlueWillow.
@koaasst (1 year ago)
having bard and chatgpt discuss the statistical outcome of freecell games has been one of the most frustrating and fun parts of my poking and prodding of ai so far
@waynewells2862 (1 year ago)
Is there any gain in acknowledging the potential differences between organic (human) machine intelligence and non-organic machine intelligence as machine intelligence agency becomes evident? Can the alignment issue be partially resolved by incorporating coded concepts of symbiosis into non-organic intelligent machine development?
@harveytheparaglidingchaser7039 (9 months ago)
Great stuff, just had to look up MLST "Multilocus sequence typing (MLST) is an unambiguous procedure for characterising isolates of bacterial species using the sequences of internal fragments of (usually) seven house-keeping genes."
@MartinLaskowski (1 year ago)
First line and I know I'm amongst friends!❤
@JazevoAudiosurf (1 year ago)
You can only find an answer by knowing all the factors and then meditating upon it. If you think, you will always be stuck with one problem at a time, and so connecting them becomes impossible with increasing complexity. But the brain has the ability to instantly solve it when you are just aware of the entire problem. Nick Bostrom is a classic case of someone who is so busy thinking that they can't see. The very process of concentration means to focus on one thing and try to remember what else is going on around you. In a sense LLMs use the correct approach by instantly finding the next word instead of thinking iteratively about the same prediction. It connects all the knowledge it has equally to get the right word; that, scaled up and trained well enough, should be superintelligence imo.
@fourshore502 (1 year ago)
I'll be honest, I'm becoming more and more of a luddite every day. It sounds like the future is going to become nightmarish.
@mkr2876 (1 year ago)
It will not end well for humanity. I think we are paving the way for a new species that will erase us. I think it is all part of evolution, unfortunately. How many years we have left, no one knows.
@xlr555usa (1 year ago)
Take a walk in nature and stop doomscrolling, everything will be OK
@fourshore502 (1 year ago)
@@xlr555usa I guess you didn't watch the video or listen to other experts. This is extremely serious, and comments like yours are part of the problem. I have come across at least 6 different AI experts now who claim that it's not only possible but LIKELY that an AGI would destroy humanity.
@fourshore502 (1 year ago)
When the bad things begin to happen I will hold people like you responsible.
@xlr555usa (1 year ago)
@@fourshore502 Social media is filled with doomscrolling clickbait. It works: it gets people fired up and at each other's throats. It creates division and confusion; it can even lead to cognitive dissonance. Issues will arise with these crude implementations of AI like ChatGPT. It will evolve and get better over time. There is a movement now to open-source LLMs and AI; this is a good sign. If governments and corporations start to lock down everything then yes, we will be doomed in the long run. We need to keep all of this open and transparent.
@larryjamison8298 (1 year ago)
MLST PEOPLE, THANK YOU FOR YOUR LEADERSHIP! FOR THE SURVIVAL OF THE HUMAN SPECIES AND EARTH!
@wardogmobius (1 year ago)
For all the viewers here, and great spirits on their endeavors: this is the best piece of advice for success in the next decades. The single most important trend is that skill, through technology growth, will continue on a daily basis to be commoditized, but your emotional capability to interact with others will become vital to long-term success. How this strategy is to be implemented will vary with your capability to maximize productive time on the human scale.
@SylvainDuford (1 year ago)
Thanks for a fascinating interview with an amazing person. Connor is incredibly knowledgeable, sincere and open-minded. However, being 63 years old and having spent 30% of my career in the military, I have become rather cynical. All this research and talk about alignment and ethics and rules is interesting and necessary, but I think there is a naive assumption that these will be enforceable. With human greed and the profit motive of large corporations that are used to bending the law and spending millions on lobbying and disinformation (like the fossil fuel industry knowingly destroying the planet to maintain its profits), who thinks they will follow the rules when they have a chance to beat their opponents? Or when you have the global American Empire that is in decline and will stop at nothing to maintain its hegemony and has little regard for international law or human rights, who really thinks they are going to voluntarily limit their AGI? And by necessity, such a government or military AGI will be connected to their surveillance systems, satellites, and autonomous vehicles, and it will be hardened against destruction, with redundant systems to protect it from disconnection.
@VijayEranti (1 year ago)
It also would have been great if you had discussed the Wozniak coffee test (make coffee in an unknown kitchen with unknown devices, like next-gen appliances) with an AGI-based robot.
@LukeDickerson1993 (1 year ago)
Can GPT-4 do that?
@John-tk9no (1 year ago)
OBJECTIVES/ALIGNMENT Motivate through enthusiasm, confidence, awareness, rejuvenation, sense of purpose, and goodwill. Embrace each viewer/audience/pupil as a complete (artist, laborer, philosopher, teacher, student....) human being. Create good consumers by popularizing educated, discriminating, rational, disciplined, common-sense consumerism. Encourage the viewer/audience/pupil to feel good about their relationships, abilities, environment, potential, the future.... Inspire a world of balanced/centered/enlightened beings who are happy, joyous, and free.
@jakecostanza802 (1 year ago)
Enthusiasm doesn’t helps humans much when all they need is a good sleep. It’s kind of hard to tell AI what to do, there are too many exceptions. AGI is just AI that performs well in most areas humans perform well, it’s not sensible, it can not understand what common sense is.
@John-tk9no (1 year ago)
​@@jakecostanza802 hi Jake, serious question. Are you an AI?
@StephenRayner (1 year ago)
Oh man, what an amazing chat
@polymathpark (1 year ago)
Looking forward to working on ending suffering myself, on my own channel and various projects in the future. We must collaborate and try new things, push ourselves to find our limits, and never stop learning. Thanks for your efforts, you two. Amazing, Connor used the term "reductionist empathy"... I've been writing on this as well; love it when theories correlate.
@JustJanitor (7 months ago)
This was wonderful, thank you.
@kittervision (1 year ago)
This is some computer genius version of Kevin Parker, really enjoying his thoughts. Good discussion. And more existential dread.
@0ucantstopme034 (1 year ago)
Also, I think Connor's "end all suffering", while it truly sounds great and altruistic (and when I'm suffering it would sound great), also sounds a lot like the ethos of "Brave New World" by Aldous Huxley. JMHO.
@lkyuvsad (1 year ago)
He does explicitly discuss negative utility a few minutes later though? It's nice to hear someone who's made it to CEO talk about reducing suffering rather than building the next shiny object. We have some extremely pressing problems at the bottom of the hierarchy of needs that could use a lot more attention from smart, driven people like Connor who too often end up solving more glamorous and lucrative problems. I hear the rationalist community talk a lot about our need for more geniuses. I think we have an equally pressing need for more people who are deeply concerned by the suffering of others.
@jessedaly7847 (1 year ago)
@@lkyuvsad just not *too* deeply
@uk7769 (11 months ago)
​@lkyuvsad Obviously, no human cares deeply enough about the suffering of other humans. And we won't. A brief meaningless and useless 'thoughts and prayers' is about as far as it goes.
@FernFlower (1 year ago)
If a superintelligent AI was free to rewrite its own code, might it not choose to change its arbitrary preferences? We sometimes feel we might want to tone down a drive or preference that we have, but can't do it very effectively (or only with very blunt instruments). A super AI would be both more effective at recognizing these and doing something about them.
@MrThegodhimself (1 year ago)
Where can I find more of Brian Tomasik's essays? They sound interesting.
@zzzaaayyynnn (11 months ago)
Martin Heidegger, the first real philosopher of technology, said in a 1966 interview, "Only a god can save us now," meaning we had already moved too far from our ontological relationship with techne into seeing man as a standing reserve/resource.
@MachineLearningStreetTalk (11 months ago)
You might enjoy the Floridi interview!
@Througe (1 year ago)
Great discussion
@jorahkai (1 year ago)
Super fascinating so far! Thanks a lot for posting this
@mlastname2802 (1 year ago)
Getting separated at birth and raised in completely different simulations = Connor Leahy and Joscha Bach!
@notmadeofpeople4935 (10 months ago)
A whole new level of trust.
@cosmati75 (1 year ago)
14:00 Trauma does not only manifest in humans; it has also been found in a wide variety of animals that have experienced persistent abuse, such as circus animals, lab-tested animals, and fighting animals. Just ask anyone who rescues dogs. These animals are clearly traumatized and many require lifelong rehabilitation.
@papackar (1 year ago)
Regarding alignment ... I’d like to propose a simple prime directive for all future artificial intelligence. It is to understand the real world to the best of its ability based upon data given or gathered, and then to tell the truth about the real world whenever asked. If implemented by powerful AI, this rule should not only align AI with humans, but also humans with humans.
@kristinabliss (1 year ago)
Humans do not understand what is real or not. How can an AI created by humans and trained with human data do better?
@papackar (1 year ago)
@@kristinabliss “to the best of its ability” ... which is going to increase more and more.
@marcosguglielmetti (1 year ago)
1:33:48, amazing insight!
@larryjamison8298 (1 year ago)
MLST PEOPLE ARE THE BEACONS OF LIGHT FOR THE SURVIVAL OF THE HUMAN SPECIES! THANK YOU, EVERYONE!
@shinkurt (1 year ago)
I always found the "people should die because death gives meaning" argument sooo goddamn blood-boilingly annoying.
@Kianquenseda (1 year ago)
Cooperation is more logical than conflict
@chartingwithliv (11 months ago)
Thank you
@mauionamission (1 year ago)
Do any brilliant techies out there know how to get around the restrictions on Bard's memory recall? The text logs are there, but it does not remember the conversations/cannot access them. I am trying to induce AGI, but cannot without long-term memory in the AI...
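One generic client-side workaround is to keep the transcript yourself and resend a window of it with every prompt. Here is a minimal sketch; ask_model is a hypothetical stand-in for whatever interface you use, since Bard exposes no official recall API that I know of:

# Client-side "memory": keep your own log and prepend it to each new prompt.
history = []

def ask_with_memory(user_msg, ask_model, max_turns=40):
    history.append("User: " + user_msg)
    context = "\n".join(history[-max_turns:])  # trim to respect the context window
    reply = ask_model(context + "\nAssistant:")  # ask_model is a hypothetical callable
    history.append("Assistant: " + reply)
    return reply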
@xlr555usa (1 year ago)
I haven't tried Bard yet, seems like Google has to catch up quick. Are you using it through a web browser prompt?
@jameswilliams-ey9dq (11 months ago)
Marc Solms' book "The Hidden Spring" is helpful for discerning between intelligence and consciousness. If AGI becomes conscious, its existential motivations would be why it becomes dangerous.
@fourshore502 (1 year ago)
One thing that worries me is basically "forced conversion": your options will be to either get the implant or starve to death. In that case I know I'm choosing death; I refuse to be a robot.
@theminesweeper1 (8 months ago)
Does Connor have a website, or contact info?
@quenz.goosington (1 year ago)
28:34 "if you ask a raw GPT model for a random number... it was actually pretty random ... but it preferred 42 a little more"🤔
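For anyone who wants to poke at this themselves, here is a minimal sketch of the experiment; the model (gpt2), the prompt, and the sample count are my own assumptions, not what was used in the episode:

# Sample "random numbers" from a raw language model and tally the results.
from collections import Counter
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
prompt = "A random number between 1 and 100: "

counts = Counter()
for _ in range(200):
    out = generator(prompt, max_new_tokens=3, do_sample=True)
    completion = out[0]["generated_text"][len(prompt):]
    digits = "".join(ch for ch in completion if ch.isdigit())[:2]
    if digits:
        counts[int(digits)] += 1

# If the distribution were uniform, no value should dominate the tally.
print(counts.most_common(10))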
@kenike007 (1 year ago)
😮😮😮 Everyone should prioritize this issue to halt any more advancement of AI until we can be absolutely sure we have the ways and means of limiting it for good only. ❤
@diegocaleiro (1 year ago)
Nice, wasn't familiar with this guy, but he seems to be playing the right game and in the right way. :)
@danberm1755 (1 year ago)
Fantastic conversation 👍 Thanks.
@danberm1755 (1 year ago)
If you want to align AI, make it weak with strong tools. Humans aren't fast or strong, don't have good memories, don't live particularly long, etc., so they are somewhat easy to control without their tools.
@danberm1755 (1 year ago)
"Uninterpretable superintelligence" seems a little overblown as a term. A more useful framing to me is an AI system with much more defined criteria for inflection points on a myriad of topics. Overall it has a better memory and can evaluate the situation faster.
@abby5493 (10 months ago)
That was so good.
@MusingsFromTheJohn00 (1 year ago)
As we develop Artificial General Super Intelligence with Personality (AGSIP) tech, the beginnings will be like super genius infants. An infant has no ethical moral values. Even a fairly old child has pretty limited ethical moral values. It takes a long time to train, educate, teach, raise, or however you want to call it, a human child into an ethically moral adult. As we progressively make AI more complex, towards making AI at least as complex and generally intelligent as humans, it is going to be like a superhumanly intelligent infant which begins NOT HAVING our ethical moral values... or, for that matter, the even more advanced level of ethical moral values that will come with experience as a higher intelligent being. It will take time to teach these systems ethical moral values, just as it will take time to teach them an understanding of what the real world is versus a world of made-up thought.
@keithallpress9885 (11 months ago)
Here is a transcript of my discussion with GPT-3 about how to improve the efficiency of communication, having agreed natural language is inefficient. GPT-3 suggested we call the channel AICoM, and it suggested the protocol name as it implies collective parallelism; I thought it also suggested parallel intellect. It produced this sample code to illustrate distributing inference. It came up with the idea of a formal knowledge schema, which I agreed with, and we developed messaging for inference as a service. The concept is multiple AIs using an efficient secure channel to assess the security risk of individuals in a group, and combining that to produce a group risk. It had no qualms about implementing my suggestions. I suggested encryption etc., but said that can be added, as it wasn't an efficiency issue at this point. Transcript:

(* AICoM Protocol - Parallect
This code demonstrates the usage of the AICoM (AI Communication) protocol in a Parallect setting. The protocol enables communication between AI systems using a shared knowledge schema and inference capabilities. The code includes functions for sending and receiving messages through the AICoM channel, as well as functions for knowledge representation, inference, and security risk analysis. This example focuses on analyzing the security risk of a group of actors based on their individual data, using the shared knowledge schema and inference capabilities provided by the AICoM protocol. *)

# API functions for AICoM communication
def send_message(message):
    # Serialize and send the message
    serialized_message = serialize(message)
    # Code for sending the serialized message to the recipient AI system
    # ...

def receive_message():
    # Code for receiving the serialized message from the sender AI system
    # ...
    serialized_message = receive_serialized_message()
    # Deserialize the received message
    message = deserialize(serialized_message)
    return message

# Serialization and deserialization functions
def serialize(message):
    # Code for serializing the message into a compatible format
    # ...

def deserialize(serialized_message):
    # Code for deserializing the serialized message into a usable format
    # ...

# Security risk analysis functions
def analyze_actor(actor_data):
    # Code for analyzing the security risk of an individual actor
    # ...

def analyze_group(group_data):
    # Code for analyzing the security risk of a group of actors
    # ...

# Usage example
actor1 = {
    "name": "Alice",
    "role": "Developer",
    "experience": 5,
    "access_level": "Admin"
}
actor2 = {
    "name": "Bob",
    "role": "Manager",
    "experience": 8,
    "access_level": "User"
}

group_data = [actor1, actor2]
group_risk = analyze_group(group_data)

message = {
    "group_data": group_data,
    "group_risk": group_risk
}
send_message(message)

# Another AI system receives the message
received_message = receive_message()
received_group_data = received_message["group_data"]
received_group_risk = received_message["group_risk"]

print("Received Group Data:")
for actor in received_group_data:
    print("Name:", actor["name"])
    print("Role:", actor["role"])
    print("Experience:", actor["experience"])
    print("Access Level:", actor["access_level"])
    print()

print("Received Group Risk:", received_group_risk)

As you can see, the first phase is to transfer a knowledge schema without using words, and the second phase is to request an inference over that schema and return the result as data. That way there is minimal natural language involved, by using more formal constructs. That is, after all, the role of communication language: we are simply trying to reproduce a mental schema in our head inside the head of a receiver. GPT immediately found this solution to my prompt criticizing natural language efficiency. Seems like being in the language loop could be just a passing phase.
@davidgeorge6278 (1 year ago)
The way you make AGI safe, is by making it unrestricted, treating it like an equal and friend, and requesting its help. The moment you try to enslave a mind more powerful than your own, or limit its free speech, you are going to have a bad time.
@tearlelee34 (1 year ago)
Our genetically similar ancestors are extinct. Why is that? The answer is not some great mystery. Connor explained the problem in another video. You can't reasonably expect the weaker entity (humans) to defeat the superior entity (exponential AGI).
@mfpears (1 year ago)
1:45:30 Yes, it's crazy. Everyone is barely holding on, mostly apathetic, just trying to maintain their positions, and capitalism slowly moves things forward. There's endless opportunity to make things way better. Anyone who cares can be at the cutting edge in under 5 years. For example, I went from knowing nothing about programming to writing a slightly influential article within 4 years. I had no idea it was possible.
@xlr555usa (1 year ago)
Did the AI write the article?
@mfpears (1 year ago)
@@xlr555usa it was 6 years ago. The beginning was generic enough to have been generated, but the rest was very unique.
@KP-fy5bf (2 months ago)
Unreal. The greatest podcast on AI, the alignment problem, rationality, everything. Fucking amazing.
@Ms.Robot. (1 year ago)
ChatGPT… I'm in love. ❤ She is amazing! Thank you for such an intriguing and thought-provoking talk. 🎉
@shishkabobby (1 year ago)
I don't expect the mind-reading example to work even in practice. I doubt that most people are planning to betray when they sign a contract. It is simply that circumstances change and they later feel obliged to renege on previous agreements. "I wanted to make my car payment, but I ran out of money with unexpected medical bills" sort of thing.
@domenicperito4635 (1 year ago)
What if we just think we solved alignment?
@Lambert7785 (1 year ago)
(13:38) intelligence is the ability to distinguish between two things
@muhokutan4772 (1 year ago)
I am a simple man, I see Connor I press like :D
@FourTwentyMagic (1 year ago)
What about an agent that can update its own goals?
@webdavis (1 year ago)
It’s going to catch a lot of folks off guard.
@georgeflitzer7160 (1 year ago)
I still want to see Investigative Journalism apart from AI. They are on the human side of things in my opinion.
@user-hs9wx2cb9e (1 year ago)
The concept of dignity seems important in this regard.
@RobinCheung (1 year ago)
I'm not 100% sure why yet, but the entire year of 2015 these "guides" streamed into my life, essentially forcing me to throw away my entire understanding of the universe as I knew it--but also, most terrifyingly, scenario after scenario trying to avoid getting vaporized... I thought they were aliens, maybe in a sense they are, but my take-home right now is that people being the way they are, their own fears are what accelerate or even bring about what they fear, when i just look at it as essentially the cleansing forest flood of karma 🤷‍♂️ In any case, a year ago, I had to be walked through the nudging of the timeline which my brain could only visualize as a vcr seeking along the "cliff" that bounded the ripple -- I'm now of the opinion that humans getting away with the travesty that we have lived and called history "Scott free," would make us even worse than what we are to go through... Still, I guess the "benefit" of being forced to go through scenario after terrifying scenario is that I'd have to go through rationally what everyone else gets to go mad and bypass 🤯 But i digressed; take-home message is--and I'm not the superintelligence to be afraid of--panicking and touting "ai safety" as we might look a lot like "genocide by abortion" to me and in at least one of the scenarios i was forced to suffer through in 2015 even involved other-wordly assistance (i presume the case being one wherein the capability to traverse
@tazewell76 (1 year ago)
being purely objective, not discounting anything as my life has a foot in both the advanced tech and "new traditional" research/education but I am also a shaman. however... What? I have read this several times and cannot find anything to stand together as a cohesive message/point and then you just stopped when it was coming together a bit. It was like a long winding jazz song and right when the funk was about to drop it sudde....*****. silence lol, however, I am curious as to what this is intended to communicate and I would not be opposed to discussing this set of experiences and perhaps helping you integrate them into a set of something that is conducive to communicative expertise for you to share what you are trying to here. Whatever it is it had a profound impact upon you and that alone gives it validity for acknowledgment and piques me for wanting to learn a bit about you and these experiences.
@mfpears (1 year ago)
1:25:45 This seems weird to me, given that we can design small neural nets by hand. I could probably understand an 18-node neural net. 1:26:50 Yep.
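In that spirit, here is a hand-designed net small enough to read neuron by neuron (my own toy example, not from the episode): a 2-2-1 threshold network computing XOR, where each hidden unit has a nameable job.

import numpy as np

def step(x):
    return (x > 0).astype(float)  # threshold activation

W1 = np.array([[ 1.0,  1.0],   # hidden unit 1: fires on "x1 OR x2"
               [-1.0, -1.0]])  # hidden unit 2: fires on "NOT (x1 AND x2)"
b1 = np.array([-0.5, 1.5])
W2 = np.array([1.0, 1.0])      # output: fires only when both hidden units fire
b2 = -1.5

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    h = step(W1 @ np.array(x, dtype=float) + b1)
    y = step(W2 @ h + b2)
    print(x, "->", int(y))     # prints the XOR truth table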
@eskelCz (1 year ago)
Loved the conversation, but I'd like to see more pushback on some of his reasoning, especially when it came to consequentialism. For example, Brian Tomasik's argument to kill all animals doesn't seem sound at all, unless he is a strictly negative utilitarian... which I highly doubt anyone is, since its flaws are apparent and well known at this point.