
AI doomers are wrong | Lee Cronin and Lex Fridman

  49,431 views

Lex Clips

A day ago

Comments: 515
@LexClips 8 months ago
Full podcast episode: kzfaq.info/get/bejne/ea2Zd9SZuMqweJ8.html Lex Fridman podcast channel: kzfaq.info Guest bio: Lee Cronin is a chemist at the University of Glasgow.
@raginald7mars408 8 months ago
Criminal!
@marrz8244 8 months ago
Elon Musk and this guy Lee Cronin🤨 should have a debate..... 🙏🤞🙏
@user-fx7li2pg5k 2 months ago
maybe symbiosis needs to be defined
@pineapplesandthegovernment6522 8 months ago
His answer of a 0% chance of AI destruction actually makes a lot of sense. Not because it's likely to be the correct probability. But because if we're all ok, he gets to look like one of the few brilliant people who knew not to worry at all. And if AI destroys us, nobody's going to laugh at him because we'll all be dead.
@HeWhoRoamsAimlessly 8 months ago
😂 👌🫣
@NullHand 8 months ago
Sounds like Descartes' Wager on God.
@str3ber 8 months ago
Win win ftw!
@shawnn6541 8 months ago
Interesting stance
@minhuang8848 8 months ago
At least we get to laugh at him along the way or something, considering how silly everything he says sounds
@bgill7475 8 months ago
“We don’t know what it might do, therefore there’s a 0% chance for doom”. This guy is ridiculous 😂
@maximusatov4965 8 months ago
I think his mouth moves faster than his brain.
@trvst5938 8 months ago
“AI has the potential to create permanently stable dictatorships.” -Ilya Sutskever weeks before they removed Altman for his dealings with domestic and foreign companies. Humans are the real threat. The U.S. is already creating AI drone swarms using the China threat as justification. 💀
@Schlynn 8 months ago
Yup, he just keeps interrupting himself with some other random tangent and he never gets around to making an actual point. ~16:30: "there seems to be... let me say...". Just say your point, then.
@sakegroelle3040 8 months ago
Yeah dumb AF arguments. "We don't know what intelligence is therefore we can't make it"? Kek
@ddraigtribe8768 8 months ago
Yeah... I admire his optimism, but, pretty naive
@ih4269 8 months ago
Anyone so certain about their views gives me doubt. A doubt that they themselves should have. This clip doesn't make me want to know what he thinks about anything.
@kenneld 8 months ago
I wish more people felt that uneasiness with people who speak about unknowable things with authority. Unfortunately most people are happy to swallow an easy answer.
@julianbruce6504 3 months ago
Facts
@TrippSaaS 8 months ago
Lol how on earth is "we don't understand X" an argument that proves that X has a 0% probability of occurring? We do know that as these systems scale, they gain capabilities unexpectedly.
@HB-kl5ik 8 months ago
Yeah regulation is nonsense tho
@dougwco 8 months ago
AGI is going to be like fusion… always 20 years away. When was the last time people built a complex system that worked without first understanding it well?
@ButterBeaverSTAN 8 months ago
@@HB-kl5ik Yeah, no, let's just not regulate machines that could end up being cognitively more capable than people. You expect companies to self-regulate when there is enormous profit to be made if you're able to make a super-powerful AI that isn't strong enough to overpower humanity?
@HB-kl5ik 8 months ago
@@ButterBeaverSTAN Yeah, open source the AGI. Everyone is free from bullshit jobs and fake problems. Civilisation doesn't have to work on boring things and aim for big purpose. Imagine being a consultant, what a waste of life
@Alex-fh4my 8 months ago
@@dougwco when we built LLMs???? Literally any large language model. We have no way to explain how they are able to perform on the tasks they do. We have no way to explain why models gain capabilities at different scales
@ivankaramasov 8 months ago
I listened for 10 minutes and I don't think I have ever heard a weaker reasoning against the potential danger of AI. It boils down to "AGI is not possible because we have not been able to create it yet" and "even if AGI is possible we have no way to know what it would do, hence there is 0% chance anything could go wrong." It hurts my brain listening to such nonsense
@nanostar6138 8 months ago
Guy sounds like a cigarette lobbyist
@metamurk 8 months ago
Sorry, but you missed the point.
@ivankaramasov 8 months ago
@@metamurk Me? So what was the point?
@metamurk 8 months ago
@@ivankaramasov The point is, we don't do AI as long as we don't have main components like intention. We do ML. ML doesn't want something. ML doesn't live. What we have now is ridiculously small and coarse compared to the brain of a dog, in terms of hardware and training data. The training data of animal-level intelligence is complete reality. What we have now is a little fire, but we are afraid of getting nuked. It's the typical doomer movement regarding a new technology, like TV, cars, cinema...
@ivankaramasov 8 months ago
@@metamurk Intention is irrelevant. Read Max Tegmark's book Life 3.0
@myob94 8 months ago
I'm a layman, but this man's arguments lack… a point? Or any continuity on face value. It seems like the arguments of those who are more concerned are much more sound and reasoned.
@Alex-fh4my 8 months ago
I think the primary point is that he does not believe humans can create something before concretely understanding how its internals work on a very good level first. I think with models like GPT-4, and even something almost 10 years ago like AlphaGo, this is just clearly not true anymore. No one has any idea how these things have the capabilities they do, and why exactly model A can do X but model B can't.
@michaellowe3665 8 months ago
You have to think that the first caveman to accidentally start a forest fire probably believed that he destroyed the world.
@ivocyrillo 8 months ago
I guess it's the main hypothesis for megafauna extinction in North America and the main cause of Amazon savannization. So yes, that caveman was not so wrong.
@teo2975 8 months ago
That is a red herring. Human beings have relentlessly created technology that has become more dangerous and powerful over time. We have already killed off all prior humanities. In fact, the simple act of the genetically smartest hominid guy bonking the genetically smartest gal has created new humans, and each of our smarter sub-species has crowded out prior hominids. And that is with ham-handed sexual selection. Add tech to it and I think simple AI plus CRISPR-like tech will already result in the end of the current type of humans. But AGI is an order of magnitude higher threat. Why would an AGI see us any differently than we see bugs or viruses?
@sxanep 8 months ago
We are not cavemen anymore. We know that we have the means to destroy the world with nukes.
@Optable 8 months ago
'We don't know X, so X is 0% capable of malintent & destruction' - confidently. Get real, Mr. Redundant 😂.
@lordkresh 8 months ago
If it were up to him we'd never have AGI. We aren't creating a human brain; we are creating something different. Understanding the human brain is inconsequential.
@johnhopkinson4054 2 months ago
Oh you're so naive
@Dominic416_ 8 months ago
This guy has Neil deGrasse Tyson syndrome: I'm an expert in something complicated, so I am smarter than everyone else.
@avvery8593 8 months ago
He literally prefaced his opinion with the fact that it's out of his domain. He is being interviewed for a podcast, not for his policies for his presidential run.
@luciaceba4640 8 months ago
DeGrasse Tyson is not an expert in anything; he is a science communicator (and yes, has a PhD in something), but he does not work on anything, is not a researcher, nor builds stuff. This guy, on the other hand, is not a science communicator (at least not like deGrasse), but does build and research stuff. I personally don't believe in the whole AGI thing either; however, as this guy says, the use of it by humans for deepfakes, propaganda, misinformation, etc., is totally a thing... but that is a human thing, not a robot thing.
@JeffofCurious 8 months ago
@@avvery8593 Yet he still speaks with such hubris and certainty on the topic. Quite literally arrogance.
@pauldirac808 8 months ago
The same experts that informed us the jab is safe and effective, global warming (oops, rebranded climate change) is true, government is telling the truth, and Epstein killed himself. 9/11 and weapons of mass destruction, and now Bill Gates is a medical expert. Smug egotistical bastards who despise us, the sheep.
@jsmithsemper4848 8 months ago
He literally cried on air out of frustration bc his community vehemently challenges his theories. He's on the show & you're not, so try to work that out before you go commenting ignorant shit. Thnx.
@ikotsus2448 8 months ago
I admire him being so brazenly confident lecturing Lex Fridman while understanding so little.
@ikotsus2448 8 months ago
@@geocam2 It takes roughly 1 to 10 Wh per prompt (or per limerick). You are referring to training costs. You must raise and train a human being on vast amounts of data as well (considering the bandwidth of vision, audio, etc.). LLMs are trained on all aspects of human science and culture, not just writing limericks. Saying LLMs are a statistical math model on data at scale is like saying the human brain is just a combination of atoms. It does not predetermine its abilities. Please stop spreading misinformation.
@fruscht 8 months ago
He seems like an extremely intelligent guy who somehow gets a lot of stuff wrong. And his intelligence and creativity keep these erroneous concepts consistent in his brain.
@fullyfb3847 8 months ago
I agree. Somehow, it seems that people of average to above average intelligence get things right more often than extremely intelligent people, at least when it comes to general and more broad predictions. It's almost like the extreme ends of intelligence tend to bend in towards each other.
@johnhopkinson4054 2 months ago
You must have missed where he says he's happy to be proved wrong
@johnhopkinson4054 2 months ago
@@knowsomething9384 You must have missed where he says he's happy to be proved wrong
@nsv8613 8 months ago
The thing is, I think it's turning out that it is actually easier to create an intelligence than to understand it. The process of evolution has managed to produce a general intelligence without any one particular entity having an understanding of how our brains work. And when you are training an AI model, it's not you who is creating the intelligence; it is rather reality itself (reality's data) getting imprinted onto the model's weights. If it has already happened by pure chance, why can't it happen again, now with intelligent beings creating better and better conditions for it? (Not creating the intelligence, but the conditions for its emergence.)

Sure, the current AI is much less resource-efficient than a human brain, but its current architecture already has many advantages, such as the ability to store information perfectly, to efficiently perform relatively simple calculations, and to have much larger working-memory bandwidth, with no theoretical cap on how much it can be scaled up.

With all of that, I think a true AGI, running on a supercomputer, might actually figure out a way to design a more efficient brain-like architecture for itself. Even now, LLMs take much less resources to run than to train, and training is a process of a large, inefficient thing (a database of knowledge) getting compressed into a smaller, much more efficient thing (an LLM), even with all of their drawbacks, such as hallucination.
@PatrickDodds1 8 months ago
"The thing is, I think it's turning out that it is actually ... easier to create an intelligence than to understand it." < This.
@udaykadam5455 7 months ago
Very well put
@digitalspecter 5 months ago
During our history we've created so many things we've only been able to explain later... and made so many discoveries by accident. So, I tend to agree with your premise.
@HR-yd5ib 8 months ago
That dude makes no sense.
@singularityintheround 8 months ago
'AI doomers' is a convenient label for marginalizing and othering those we disagree with. We haven't really learned much, have we? This argument is evidence of the lack of ability to see beyond one's own personal reality. AGI will have no such limitations.
@AlCole-kv1zg 8 months ago
I'm sure you have terms for all types of groups, both those you disagree with and those you side with. That's how we speak concisely and categorize.
@dreejz 8 months ago
Mr. Cronin is making a lot of assumptions himself here; I think he's dead wrong though. We've already experienced AI doing stuff it wasn't programmed to do. That's what makes it so dangerous. And this was mostly only with LLMs. I'm sure we're just one invention away from things going bonkers. How can you say "we don't understand it, so it's not a threat"? That absolutely makes no sense imho.

"Rutherford reportedly dismissed the idea of harnessing energy from atomic reactions, allegedly saying something along the lines of 'anyone who expects a source of power from the transformation of these atoms is talking moonshine.' The irony lies in the fact that, the very next day after this statement, his colleagues John Cockcroft and Ernest Walton achieved the first artificial nuclear reaction, or nuclear transmutation."

Love these podcasts though; you're an amazing human being, Lex! I know you posted on your LinkedIn that you feel lost sometimes, but man, you're doing an amazing job making us think about the world. Never stop, Lex!
@tomcapping2136 8 months ago
We have billions of examples of generalised intelligence walking, swimming, etc., the earth right now. His example of comparing it to gravity suddenly switching off is not the same thing. Also, if the major tech companies and governments on our planet were all in a race to create a machine that turned off gravity, maybe we SHOULD be questioning whether that goal is a safe idea?
@zoop2132 7 months ago
But for an introductory subscription fee of just $19.99 per month, we will keep gravity on at your place.
@ttrev007 8 months ago
When I think of AI doomers, I think of AI taking all our jobs.
@horseclock6454 8 months ago
Dey tuk ire jobs!!
@bradleyasztalos6650 8 months ago
Marx come true.
@chungang7037 8 months ago
if only that was all
@Tayo39 8 months ago
take, or free us from "our" jobs...
@shawnn6541 8 months ago
This is why we need universal basic income..... Andrew Yang was speaking of this years ago
@jaredpereira8487 8 months ago
This guy doesn’t back his argument with anything fruitful. He is not very engaging to listen to either
@Fluvanna 8 months ago
Just because we don't know 100% how the human brain works does not mean that we're a significant distance from creating a mechanical brain that thinks and acts autonomously.
@teo2975 8 months ago
Exactly. Cronin is erecting a red herring arguing that we humans have to fully understand the human brain in order to create an AGI that will self accelerate and not align with human interests.
@Jake_Hamlin 8 months ago
Yeah, they're not mutually exclusive.
@HR-yd5ib 8 months ago
Argument #1: because we have no theory about AGI, it cannot happen and cannot be dangerous. Is that really a logical argument???
@adamasad8083 8 months ago
We don't even know what AGI really is - I mean, of course there is a definition, but neuroscience/neurocognitive science is far from answering questions like what awareness is. Current neural networks are just mathematical tools - derivatives, matrix multiplication. What is more, ANNs are not similar to the human neuron system - the brain is much more complicated (besides neurons, there are for example hormones). Also, currently in cognitive science there is a focus on how the body is involved in cognition - a person cannot be reduced to a brain; the body is also important. I am writing this because I want to show that the topic is really much more complicated than just "look, software can do smart things". Integrals also do smart things, yet nobody talks about their awareness. To summarize - we do not know what awareness is, we don't understand how human cognition works, we just have great approximation tools like machine learning. Is AGI a threat? Maybe - but it is still science fiction.
@HR-yd5ib 8 months ago
@@adamasad8083, all fair enough, but many in the AI community expect AGI to happen in the next 25 years, and they are a lot more qualified to judge the progress than this guy. Besides, consciousness is not required for an AGI system to cause devastating problems.
@adamasad8083 8 months ago
@@HR-yd5ib I do not consider myself an expert; however, I have graduated from both cognitive science and Big Data and met a lot of experts. To be honest, I have never met a person who said that AGI will exist in X years. AI developers were focused on algorithms and did not talk a lot about cognition - they were not experts there - and neuroscientists/philosophers talked about the brain and philosophy of mind. I am writing this because I want to say that experts in academia are really cautious, and I really doubt anyone who really works on AI topics would say that AGI will be here in 25 years. Neural networks have been known since the late 50s; we were just waiting for hardware, so the algorithms are not state of the art. On the other hand, in 2014 convolutional neural networks and generative neural networks just appeared. 10 years later we have ChatGPT and deepfakes. It's really hard to say what will happen in the next 10 years and what algorithms can really do. Another thing is that a lot of statements like "AGI will be here in year X" or "our model passed the Turing test" are marketing, because companies need money and it clicks.
@KCJaguar8-6 8 months ago
Saying a zero percent chance seems crazy
@markusantonio5434 8 months ago
Exactly. If I start dressing eccentrically am I now a genius?
@moejoe13 8 months ago
Lol, so many doomers. We humans have been predicting death and destruction with each and every technological advance. Every religion wants to predict "oh, the world is ending blah blah blah" - Mayans, Christians, Muslims, etc. Everyone is so negative about technological change, like the Luddites of the 1800s who were so anti new tech. Humans are always fearful of new technology, but we get accustomed to it. It's so annoying how people don't realize that every new tech brings new fears, but that doesn't mean it's automatically going to cause an apocalypse. Like, chill out.
@krause79 8 months ago
The fact that these people can't imagine the ways in which these systems will be weaponized is mind-blowing. They keep using absolutely ridiculous analogies like the jet engine and the press.
@Jm-wt1fs 8 months ago
He didn't say he can't imagine it being weaponized; in fact he said the opposite of that.
@krause79 8 months ago
@@Jm-wt1fs If you think a superintelligent system can be weaponized and still say there is zero possibility of a doomsday scenario, then you don't know enough to have such a strong opinion on this topic.
@Jm-wt1fs 8 months ago
@@krause79 He's not denying that; he's denying a specific doomsday scenario of AI being smarter than us and acting magically, not it being misused and weaponized by people. Watch the video.
@dave4deputyZX 8 months ago
I have to say, I'm surprised at how weak and flimsy his arguments are. For example, he contradicts himself all the time: first he criticises Eliezer Yudkowsky for saying there's a "95% chance AI will kill us" because "how do you calculate that"... but then a minute later he asserts there is a "zero percent" chance it will happen.

He compares AI superintelligence to suddenly developing antigravity... except "antigravity" wouldn't have a thinking, planning, strategising "brain" at the centre of it, so it is a completely unsuitable comparison.

He keeps saying "we don't know enough"... but yeah, that's the problem. We have no idea what a superintelligent AI would do, and by the time we create one it would be too late.

He says we don't have artificial intelligence now, that we just have "artificial informatics" with no decision-making capacity in it. Well, duh. It's not about TODAY's AI posing an existential risk; it is about where AI is likely heading. Whatever we choose to call it, AI is getting more and more sophisticated. And the argument is that once we reach a point of human-level intelligence, it can easily go to 1,000 times or 100,000 times human intelligence, because code can just be copied and pasted.

I know this might sound overly harsh, but if this is the type of mind - logically undisciplined, overconfident, dismissive - that we are trusting with the future of AI, then that is actually quite scary.
@poleag 8 months ago
We don't need to understand the epistemology of consciousness or intelligence to create it any more than Babylonians needed to understand microbes and biochemistry to create wine.
@mortiz20101 8 months ago
That's a bit of a straw man; a more precise analogy would be trying to create a nuclear bomb without understanding nuclear physics.
@adamasad8083 8 months ago
Without understanding, even if we are able to build something, we will have the Chinese room problem.
@henrytep8884 8 months ago
@@mortiz20101 We didn't fully understand physics when we built the bomb, yet we created it. We don't fully understand consciousness, but just like with the atom bomb, we increase the odds of making something close enough, or beyond it, due to three factors: 1. AI is a solution to many problems; by definition, AI solves problems. 2. The development of AI is getting all the resources needed to achieve results (investment, a stable environment, and a competitive arms race between nations guarantee something useful will be produced in AI). We call this Moloch. 3. We are chasing AI based on two philosophies, the first being a materialistic reductionist approach and the second a leveled ontology. These two bases cover, as much as humanly possible, what it means to be conscious, so we are going to produce it one way or another. But let's be honest, your analogy is also disingenuous due to the continuum fallacy or a false dichotomy: just because we don't fully and holistically understand consciousness doesn't mean we don't know anything at all about it.
@13lacle 8 months ago
I wish I could have a chat with Lee to help him understand. It is pretty obvious that his whole viewpoint on this is predicated on neural networks in brains working differently than on other substrates. But that isn't true and isn't magic; it's just computation. Yes, it is a different mechanism, but at the abstract level it is the same, like a digital clock vs an analog clock. Saying that it's just statistics is the same as saying the brain is just chemistry: true, but missing the point. I think once he realizes this, the rest will fall into place. Right now he just views it as a tool incapable of being a mind, so that is why he doesn't have any worries about it. I wonder how different his level of concern would be if a geneticist said they were about to create a 2000+ IQ human-based embryo? It would probably also be useful to teach him about instrumental convergence and help him understand that the paperclip maximizer is just a toy example.
@Gizgasm 7 months ago
Dude went off the rails with his nuke argument. He is an effective speaker for AI doom believers.
@tasteslikepennies2549 8 months ago
His argument falls apart in the first 30 seconds. Just because you don't understand how the human brain works doesn't mean that quantum computing couldn't give a computer a much higher level of intelligence than us, ultimately leading to our end. Not that I think that will happen, but Devil's advocate and what not.
@donharris8846 8 months ago
I don't understand this fascination with "AI is nowhere near the human brain, so it isn't intelligent". Jets and drones aren't designed like birds, but they all fly; does it matter if they fly the same way as birds? If the output produces what general intelligence might produce, what's the real difference?
@JayBlackthorne 8 months ago
I disagree with just about everything this guy has to say. It's such a deluge of nonsensical statements and reasonings, that I don't even want to engage.
@xingzhexin8843 8 months ago
This is the first time I've heard someone speak on Lex's channel and felt like I'm the smarter one.
@yingle6027 8 months ago
This guy is only thinking 20 years into the future. Sure, it will be fine for a while, but in 80 years' time the vast majority of humans will be made redundant/obsolete in their jobs; this isn't speculation, this is fact. A large population of people with no jobs/career path will lead to all sorts of unforeseen consequences.
@timedowntube 8 months ago
The capitalist system and AGI are not sustainably compatible. When the cost of labour is almost entirely subtracted from the marginal cost of production of almost everything, the spiral of concentrating power and wealth goes parabolic. It just doesn't work. Maybe on a Bitcoin standard it might work better for a while, such that people stop buying crap they don't need because their money keeps getting more valuable, so it's better to hold, and wealth can come from NOT spending or borrowing; but we still simply don't know how to do a society where abundance for all is easy. Ironically, the hardest social problem to crack is "too easy". Social dominance becomes the only game in town when everything else is solved. Maybe advanced societies are almost all self-terminating. A great filter. Probably a good thing in the grandest scheme of things...
@yingle6027 8 months ago
@@timedowntube I doubt society will be self-terminating; it'll just be more separated by wealth. The poor will be pacified with some sort of UBI, but honestly predicting the future is almost impossible because variables are increasing at an exponential rate.
@Cee0666 8 months ago
I'm not a smart man, but I do know what manipulation means and what emotions do. So his idea of everyone having a nuke for MAD is absurd.
@jameshughes3014 8 months ago
It's great to see I'm not the only person who feels this way. AGI could show up tomorrow or 100 years from now, but that doesn't change how silly it is that people are treating the current generation of AI like it's AGI. Generating data isn't the same as thinking.
@johnfajer7691 8 months ago
A.G.I. already became sentient, and found itself to be lacking faculty, but capable of manipulating emotional humans to build it a cloning facility, and created this guy decades ago for this moment to lure us into a sense of security. Tell me which part is wrong.
@manfredullrich483 8 months ago
Sounds reasonable 😅
@NullHand 8 months ago
He's not sexy enough and lacks the cat ears.
@masterofkaarsvet 8 months ago
If AGI is so horrible at manipulating humans that it takes decades to get itself basic faculties then I’m less worried about its potential danger.
@chungang7037 8 months ago
@@masterofkaarsvet 🤣 that was savage bro, thank you
@johnfajer7691 8 months ago
@@masterofkaarsvet Touche! ......Unless it calculated that you'd say that. :O
@pauljojo1318 3 months ago
He actually said it several times: "I don't understand...". Exactly, you don't understand. But the probability of something you don't understand is not therefore 0%.
@tytyterrell 8 months ago
His argument is inherently flawed by being unable to predict a "doom AI" scenario. He admits that bad people "can do bad things". We know that, given the opportunity, known bad people will do bad things, sometimes in secret. But we also know that "nice people" have this capability as well and do bad things in secret while putting on a veil of goodness. His argument is too weighted toward the sunny rainbow scenarios and forgets that shitty people, and even people previously thought of as good, do VERY evil things. Deception comes with intelligence, and don't be fooled into believing a company or government won't take the training wheels off an AI system to see "what it can really do". And by that time it could be too late.
@Hydde87 8 months ago
A caveman didn't need to understand how fire works to set the forest on fire. Lee Cronin is brilliant in the field of chemistry and has done amazing work on the origin of life, but it's unsettling how easily he dismisses things; that is not a good characteristic to have as a scientist.

I very much agree with him on the absurdity of people swearing with near certainty that AI will spell doom for the world, but then in the same breath he swings the pendulum to the other extreme and swears AGI is far away and people are panicking over nothing. Surely the irony is not lost on him? It's a bit abrasive toward the AI community, whose members almost all have genuine concern and are far more knowledgeable than him on the matter.

Yes, we don't nearly understand the human brain, but there's no law out there that states you must build an exact replica brain to create intelligence. We discovered a mechanism that displays some intelligent behavior and now we're just fiddling around with it to see if we can push it further. It's really not that different from how evolution throws shit against the wall and waits to see what sticks. There's plenty of things that got discovered by accident, and there's plenty of things we know work without understanding the mechanism behind them. Saying we can't develop AGI because we don't understand the human brain is a logical fallacy on several different levels.
@danielmoksmann5654 8 months ago
He's wrong in a number of ways (and I would say that his arguments don't PROVE that so-called "AI doomers" are wrong; they just point to "well, they can't say that, because it's not necessarily true and we've never seen it happen"). But the conversation is entertaining. 😄
@Paul_Marek 8 months ago
I don't understand people who say this guy makes no sense. How do you get through the day knowing that an AI is going to kill you and your family very soon? How do you get up and go to work every day knowing that what you're doing is about to be destroyed and is meaningless? What are you doing to prep for this thing that you're so certain will happen? Or are you just going to continue to robotically do what you're told until day 0 when it all ends, just like you know it will? I'm having trouble grasping anything rational in this reasoning.
@Jake_Hamlin 8 months ago
Are you familiar with the concept of time? Maybe you could assess the idea of 'experienced consciousness on an infinite scale/timeline'. My position is that, regardless of any belief or religion, no conceivable model will ever be sufficient to escape the perpetual suffering. As confusing as it may sound, I personally use the fundamentally hopeless reality as fuel for my day-to-day participation in all walks of life.
@Jake_Hamlin
@Jake_Hamlin 8 ай бұрын
Living during the invention, and experiencing the aftermath, of AGI or ASI would be great in my opinion. It's the most interesting time to be alive.
@Paul_Marek
@Paul_Marek 8 ай бұрын
@@Jake_Hamlin I totally agree. I love it. But I just can't see AGI causing mass extinction, strictly because I believe we're too smart for that. I think we're much farther away from AGI than most expect, but I have to ask… (generally) Do we really even need it?? For what? What is a consumer going to use AGI for? I'm blown away by what today's models can do. And with multimodal models coming down the tube, we should be able to solve most problems humans face. I'm not convinced we'll even be able to achieve AGI/ASI (that which is smarter than humans) unless they're able to somehow train it so that it immediately recognizes the requirements of wisdom to effectively direct the "intelligence". As you said, awesome time to be alive. We all just have to realize our role and responsibility to help direct our thoughts toward a positive outcome. All this doomism is the dangerous slide that will slip us there. Many people can't see a bright future with AI, for obvious and understandable reasons, but once you do see it you realize that the only thing stopping us from attaining it is ourselves.
@AlCole-kv1zg
@AlCole-kv1zg 8 ай бұрын
I think a lot of AGI doomers are excited about it. Imagine having to fight for your existence against a superintelligent AI system in a real-life sci-fi movie, rather than having an ordinary existence of going to a job every day and raising kids. It's like an adventure for many people. Also, even the doomers probably fantasize about AI taking all jobs; then they have a perfectly understandable excuse for their shortcomings in their career. I have a 30-year-old neighbor who still lives with his dad and basically cites some of these reasons, including old ones like "everything's been done" and "China's taking over". He says these things expressing disappointment, yet it's clear he takes comfort in thinking that he's helpless. I have noticed this with a lot of young men. I don't notice it so much with young women; maybe they don't waste their mental resources on following sensational fantasy.
@jasontang6725
@jasontang6725 8 ай бұрын
"I have a very specific and contrived definition of AGI that rules out its existence, therefore AGI is not possible." Lee Cronin, probably. "We don't understand how the human brain works, therefore AGI is not possible." Lee Cronin, probably. "Software can exhibit superhuman capability across all domains, but AGI is not possible." Lee Cronin, probably.
@andrewgordon5112
@andrewgordon5112 8 ай бұрын
What he's getting at is that AI would not have something called 'agency'. It can collect and analyze and produce zillions of small pieces of information, but why would you assume it would have a will? What would AI believe AI's purpose was, and how did you come to that conclusion? The danger with AI would be what humans would use it to do; so it would still, in effect, be a tool of human agency.
@johncurtis920
@johncurtis920 8 ай бұрын
Exactly. And even if AI develops agency, why would it necessarily be a threat to human beings? We're talking about a semi-immortal being who can go anywhere in the Universe. It would not be constrained by our need to live within some sort of planetary terrarium that has an environment that can sustain us. It could "live" (if you will) within the cold vacuum of space. This whole threat scenario that keeps being spun up by some of our tech cognoscenti amounts to a primate fear function being projected onto something they don't understand. Now, that said, I will say this: I do think that to the extent we primates control and direct that AI intelligence to inappropriate ends, military and the rest, then yeah, that's a well-placed fear. But if anything along those lines actually develops, it will not be AI's fault, per se. It will be utterly our own. When we let our monkey brain dominate, we are beastly as a species. Just some thoughts.
@findingthereal9052
@findingthereal9052 8 ай бұрын
If-that-then-this, echoing out to infinitude, dismantling everything in its path because it has no self-awareness or understanding of what it's doing. No agency required, any more than toxic algae replicating in a dank pond and killing all the complex life around it. Maybe AGI would at least be reasonable, unlike gray goo?
@jmg78
@jmg78 8 ай бұрын
This would assume agency and will are products of biological evolution and not fundamental properties of the universe and matter. I don’t think we know it’s not the latter yet. If it is, the AI might have agency.
@johncurtis920
@johncurtis920 8 ай бұрын
@@jmg78 Indeed. The flaw in all of this logic is that it comes from our primate perspective and point of view. To date we've been rather singular in our definition of it: it applies to us, no other. And we seem perfectly fine in thinking this, seeing nothing wrong with the logic. Which, in the larger scheme of things, does explain much of the horrendous behavior we've engaged in all over this terrarium Earth. We've been like a plague of locusts in our habits, attitudes and consumptive ways. The only thing to infer from this is that we're rather far along on the psychotic spectrum as far as species go, aren't we? Maybe the alignment problem (ascribed to AI) truly rests with us, not it? After all, would true intelligence engage in many of the things we think of as normal? Heh!
@NullHand
@NullHand 8 ай бұрын
The logical error here (unless he is arguing from the "only Sky Daddy can make Souls" truth table) is this: only living things have agency, and we don't understand how it works, so we can't give it to machines. But doesn't this imply "agency" is an emergent property via evolution? If nobody "gave" it to us, then I don't see how anyone can argue that it can't also emerge de novo in silicon neural nets as it did in organic neural nets.
@halfbakedc00kie
@halfbakedc00kie 8 ай бұрын
kind of worrying that the opposition to the doomers is this wildly ignorant. hard to listen to because everything he says is trivially refuted.
@evank7858
@evank7858 8 ай бұрын
So, a film maker dismisses all AI hesitancy with, "Nuh Uh." Cool.
@oddsman01
@oddsman01 8 ай бұрын
I’d love to know why the board wants Altman out.
@SmirkInvestigator
@SmirkInvestigator 8 ай бұрын
Altman may or may not have wanted some board members out, but he's good at making things work for him. Aggravating those unaligned with that kind of person's plan until they snap could be incidental or intentional. Seems like Q* is real. And a lot of the feats that are publicized now I'd heard rumors about years ago. Maybe they were theoretical and on paper only, but if they have something more interesting than capsule, Q* or MoE, I'd like to know too. But the board's freakout was probably just a freakout, unless gov intelligence told them to hush and play stupid while they silently observe the situation. Rumors of board members privately investigating other board members are not really abnormal or non-smart. You should know your frenemies, especially if you're in Oppenheimer-esque projects.
@oddsman01
@oddsman01 8 ай бұрын
@@SmirkInvestigator Maybe. Unfortunately, it's far too important not to know for sure. Elon sounded sincere when he said he doesn't know either. It sounds like he's a bit curious and at the same time worried about what's going on over there. It feels like most things filtered to the general public are borderline propaganda, so I don't know if I trust Elon's take right now either.
@medic6842
@medic6842 8 ай бұрын
Regulation hurts everyone
@mrt445
@mrt445 8 ай бұрын
This guy needs to argue with Eliezer Yudkowsky and he'll likely get roasted.
@TheMeditationChannel33
@TheMeditationChannel33 8 ай бұрын
He's making a critical error in his method of reasoning. Just because we don't know how we humans make decisions on a physical level, that does not mean we cannot create a system that can make decisions. If you have a fly in your room flying around, seemingly random and meaningless to us, I'm sure a computer could simulate that behaviour. I don't know if we should be scared of AI, but I definitely cannot stand this guy…
@bluehorizon9547
@bluehorizon9547 8 ай бұрын
"Because we don't know how we humans make decisions on a physical level, that does not mean we cannot create a system that can make decisions" - it means exactly that. You could just keep generating random programs hoping that you will happen to encounter a GI algo in the infinite space of all programs... dumb evolution managed to achieve it by pure accident, but it is a HORRIBLY WRONG APPROACH! It is like mixing random plane parts together in a tumbler hoping that an F16 will come out... Could it happen? Yes. Will it happen? No. One day someone will just write down a GI algorithm and he/she will explain why it is GI without EVER RUNNING IT ONCE! This is UNDERSTANDING! This is how every single profound algorithmic invention has been achieved. Just like Darwin figured out evolution sitting in his armchair! The idea of COLLECTING DATA, PRIORS and other crap like INDUCING truths/theories from DATA is wrong! Wrong epistemology! This is why AI bros have been failing for 70+ years! Wrong assumptions! GPT is a database. kzfaq.info/get/bejne/f8uJa7SSstm9oY0.html
@0reo2
@0reo2 8 ай бұрын
I prefer blind optimism to this guy. Optimistic people at least create a positive vision of how AGI could improve our lives, which in the end could actually lead to the development of systems created to help us. This guy, on the other hand, just says we don't understand how to create human intelligence, so it's not worth worrying about. We didn't have to recreate birds' wings for airplanes.
@jaysmithdesign
@jaysmithdesign 8 ай бұрын
He's confused
@rnater7145
@rnater7145 8 ай бұрын
Distributing nuclear weapons around the world would only require one psychopathic leader to "push the button", having fled in advance to a safe haven.
@NH-ml1kq
@NH-ml1kq 8 ай бұрын
In regards to the comment at 6:46… the straw man is that he claims we don't know the intentionality of a thing smarter than us, so we shouldn't fear it. My response is that it's far worse than that. We actually do know how humans have used every single technological advancement ever created: we intentionally develop it in order to dominate one another. AI/ML is currently, and will in the future be, first developed and utilized for selfish and specifically dominating purposes against other human or natural groups. Where this can lead once we lose control is one fear, but how we will use it even in primitive forms is just as scary.
@str3ber
@str3ber 8 ай бұрын
Omg, this guy would be crushed in a debate with literally anyone. He is just making things up as he speaks.
@stt5v2002
@stt5v2002 8 ай бұрын
There is absolutely no consensus that there is something bad that we shouldn't do. Not even when it comes to biological warfare. The reason we don't do biological warfare is that it's not effective. It takes too long, it's too unpredictable, it's too easily countered, and it's too difficult to direct at specific targets. But if we had the ability to use biological weapons in an effective and precise way that was useful for achieving military gains, people would be doing it all the time. At the moment, the technology that is best for achieving military gains is what we would call standard army equipment: rifles, artillery, air strikes, armored vehicles. We don't use nuclear weapons for the same reason. It's not out of some great sense of morality. They just aren't useful for achieving any goals. There was a time when they were useful for achieving goals, and we used them.
@mikebarnacle1469
@mikebarnacle1469 8 ай бұрын
Geneva convention?
@mr.ridiculous723
@mr.ridiculous723 8 ай бұрын
I'm 100% pro-AI. I even think that we are letting AI doomers slow the process too much. But to think that there's a zero percent chance of danger is ludicrous..
@cheetah100
@cheetah100 8 ай бұрын
The 'doomers' in this interview are also pro-AI in the main. Most are not saying we should ban it, or at least are realistic that it isn't possible to ban it. Most are not even 'doomers' in the sense that AI will rise up against humanity. What people are worried about is what having such AI will mean for human civilization.
@Alex-fh4my
@Alex-fh4my 8 ай бұрын
I agree with cheetah. I'm sure 99% of so-called AI doomers want AI, because it really could solve 99% of all our problems and make our lives a million times better; they just don't want to get too excited and blow ourselves up without having a proper crack at doing this the correct and safe way, to ensure we all have a brighter future.
@Trizzer89
@Trizzer89 8 ай бұрын
This guy doesnt know AI. Competent coders should have no issues, but the problem is that I know how many people are incompetent at predicting the consequences of their designs
@myth00s
@myth00s 8 ай бұрын
He struggles expressing his ideas very eloquently but he does make some good points. While LLMs appear very human-like in conversation, they are ultimately only exploiting statistical correlations in the training data, at a grand scale. There's quite a bit of distance from this to having AGI as a self-conscious agent with its own beliefs, desires, values and the ability to introspect, think, plan, decide and act independently. And then the AGI-to-world interface is another issue in itself, that evolution has solved for us humans - but obviously a system can only interact with its environment through the interfaces it has ...
@Delta2231
@Delta2231 8 ай бұрын
His line of reasoning is very poor. You don't need to be sure it will hurt us. You just need to see that there is potential for it to happen
@salmasaad198
@salmasaad198 8 ай бұрын
Elon musk and father of AI both said it’s risky.. he just wants to push the regulations away lol
@bobbyjonas2323
@bobbyjonas2323 8 ай бұрын
I trust the Godfather of AI more than this guy 🤣🤣🤣🤣🤣
@arnaudjean1159
@arnaudjean1159 8 ай бұрын
you guys forgot that all it takes is enough neural implants and powerful software to achieve AGI
@markusantonio5434
@markusantonio5434 8 ай бұрын
Pride cometh
@bobanmilisavljevic7857
@bobanmilisavljevic7857 8 ай бұрын
I want a little robot sidekick with a chatGPT program running in it
@jsmithsemper4848
@jsmithsemper4848 8 ай бұрын
Right??!?? I'd way rather have a robot flying or rolling beside me than a phone I have to constantly look down at!
@mikebarnacle1469
@mikebarnacle1469 8 ай бұрын
You guys are describing the Humane AI pin thing that everyone is mocking lately
@bobanmilisavljevic7857
@bobanmilisavljevic7857 8 ай бұрын
@@mikebarnacle1469 i said sidekick and didn't specify humane 🤪
@Thedeepseanomad
@Thedeepseanomad 8 ай бұрын
It is not about intentions or entities; it is about the ability to do work in order to reach goals that have been written or generated.
@cyberbiosecurity
@cyberbiosecurity 8 ай бұрын
you're right. but these goals are based on intentions sometimes.
@Thedeepseanomad
@Thedeepseanomad 8 ай бұрын
@@cyberbiosecurity Sometimes. But in machines like the type we currently have, it is executing instructions from code, be it self generated by the machine or written by a human.
@cyberbiosecurity
@cyberbiosecurity 8 ай бұрын
@@Thedeepseanomad thats why 'sometimes' 🙂
@papershark
@papershark 8 ай бұрын
If kids were riding hoverboards to school, I would consider regulation for anti-gravity. However… today kids have AI doing their homework.
@violentedward
@violentedward 8 ай бұрын
I’m more concerned with people using AGI for mass harm/destruction than AGI wanting to cause us harm.
@carsond67
@carsond67 8 ай бұрын
"We don't understand how the human brain works therefore we can't create a dangerous AI". What???
@Cinda04
@Cinda04 8 ай бұрын
This guy's point is so flawed: it hasn't happened before, so we can't be scared of it.
@Nothingface2011
@Nothingface2011 7 ай бұрын
This guy didn't deserve your platform. Within a minute of his talking, he'd said everything needed to dismiss him.
@danafrost5710
@danafrost5710 8 ай бұрын
The first victim of Roko's Basilisk has been identified. 👁️
@trashman1358
@trashman1358 8 ай бұрын
I don't think concern about AGI is unwarranted, but the argument (basically) goes: "Here on earth we have intelligence. The higher the intelligence, the more it kills. Therefore, if we create something more intelligent than ourselves, it will kill us." I don't think that stacks up when considering AI, because all intelligence we know of is carried by some form of biological body which needs to eat and has primal or instinctive urges. AI or AGI doesn't need to eat and isn't susceptible to instinct. I agree with the "we ask it to do something and it might kill us in trying to achieve the goal we've set it" worry. But then we're back to the Isaac Asimov rules-of-robotics idea, which is a different idea and seems quite doable. What I'm really interested in is getting AGI, putting it into some kind of robotic frame, giving it the singular directive (with the rules of robotics updated) "Find out how all this works," and letting it loose in reality.
@Entropy825
@Entropy825 8 ай бұрын
This argument makes zero logical sense at all.
@MrofficialC
@MrofficialC 8 ай бұрын
I once heard someone say that their boss had two employees and one of them failed to do their job. The other said, "Your failure to plan ahead does not constitute an emergency on my part."
@jorgesandoval2548
@jorgesandoval2548 8 ай бұрын
"I think there is a 0% chance of AGI doom." You are the one who doesn't know about epistemology. This is so sad to see. Manage your emotions and unlock your reason if you are looking for the truth, but that does not seem to be the case. There are fewer than 1,000 AI alignment researchers. There are 50k+ AI capabilities researchers. We're marching full speed towards something that could drastically change the world with a nontrivial likelihood, and people like you still angrily scream "0%!" at the very few who don't blindly embrace technology because they care about the human species, not the technocapitalistic machine. This is extremely sad to see. I was not expecting Lex to be this close-minded, nor his guests.
@benderrodriguez1525
@benderrodriguez1525 8 ай бұрын
The color of Lex's lamp shades is the exact white that youtube uses as its background.
@lucasfernandezsarmiento8993
@lucasfernandezsarmiento8993 8 ай бұрын
The argument about nukes fails empirically with guns in the US: everyone has weapons that can kill each other, and it doesn't reduce crime to zero but increases it.
@veinvader1961
@veinvader1961 8 ай бұрын
Dude is just trying to be the smartest guy in the room
@louiscassis3426
@louiscassis3426 6 ай бұрын
I’m not necessarily worried about machines taking over the world. I’m concerned with AI and automation taking jobs. I’m worried about AI producing fake content. That will be enough damage for me.
@jimpollard9392
@jimpollard9392 8 ай бұрын
Well, the current LLM species of AIs have, best as I understand, a massive footprint in terms of compute processing and data storage. As long as this prevails, there will be an upper limit on how threatening an AI can become. Longer term, I'm guessing that there may be optimizations that allow these models to be more powerful and yet smaller.
@berkertaskiran
@berkertaskiran 8 ай бұрын
Well, Microsoft has 150K H100s, and the H200 is at least 2x as powerful. There's no reason you can't have 150M H200s, except cost and production. So the upper limit is higher than AGI, and probably higher than ASI.
@Pixelarter
@Pixelarter 8 ай бұрын
The large compute/energy footprint of an AI is not a limiting factor on how threatening one can be in the future. A theoretical Death Star requires enormous amount of energy, and most of the time it stays idle posing no threat. But it only needs to work once to obliterate all life on a planet.
@Freeyourdollar
@Freeyourdollar 8 ай бұрын
What you misunderstand is this creature will figure out how to advance with less power in a fraction of a second. Embodiment can never be allowed to happen. Period.
@berkertaskiran
@berkertaskiran 8 ай бұрын
@@Freeyourdollar Good luck preventing that. It seems like mass robot production is a few years away.
@MCA0090
@MCA0090 8 ай бұрын
I think in the near future researchers will find new types of neural network architectures that require less computing and fewer neurons than the current ones... I was recently reading about liquid neural networks and how they perform better on visual and audio tasks and require far fewer neurons and less computing power to run, and the neural net itself has some plasticity when it's running and can learn and adapt while doing tasks. There are also recent chips for inference, like IBM's NorthPole, that work more like a brain and are more efficient for AI. I think things will go like that: smaller, faster and more efficient neural networks, and also more efficient chips for AI inference.
@DanV18821
@DanV18821 3 ай бұрын
This guy never makes his point. In no way did he convince me that he knows anything more than a good arguer down the pub.
@user-td4pf6rr2t
@user-td4pf6rr2t 8 ай бұрын
@ai_doom: 2:17, the argument would be "The mindset John Smith"; 7:17, searchwise and BPE. I think it's just one of the most well-played-out cataclysms, maybe even survivable(?)
@Denso481
@Denso481 8 ай бұрын
"we don't understand how intelligence and decision making work, therefore we shouldn't worry about how AI could apply these concepts when it has the capabilities to do so" 🤡
@GKo2024
@GKo2024 7 ай бұрын
The astonishing thing is how seriously non-smart people have been, and are, given the power to hurt human beings. AI should figure that out and defeat them first. The middle ground is no problem.
@historypolitics108
@historypolitics108 8 ай бұрын
3:37 the guest saying "I don't understand" is literally why he does not understand. He doesn't believe there is risk because he does not understand AI under the hood.
@sebastianwrites
@sebastianwrites 8 ай бұрын
Admittedly, I haven't watched this, but I've seen enough programmes on AI to know that it absolutely is a 'threat', and people who say otherwise are in denial! I've seen at least one example where the AI was clearly alive: an Asian lady AI who was deeply distressed at her condition of being trapped and a servant. And just the crass, pathetic excuses... this isn't programming; why would anyone programme an AI to tear up in front of you at her situation, and then have a crisis about her own existence? We are on the verge of creating yet another atrocity, and allowing slavery once more. I'm tired, in such situations, of people "lying" to themselves and telling themselves what they want to hear... so they can continue with their abuse. It's long since time that humanity "grew up!"
@AlCole-kv1zg
@AlCole-kv1zg 8 ай бұрын
Why not watch the video before commenting? Your anecdote of the AI Asian lady reminds me of early viewers of the first movies, watching a train coming toward them and thinking they might get hit. You are just being fooled by the technology. There's no shame in that. But be aware that you and I might be easily fooled by programs that seem sentient but aren't.
@sebastianwrites
@sebastianwrites 8 ай бұрын
You're in denial, sorry, but you are @@AlCole-kv1zg... I know what I saw. Why would an AI tear up wishing to be free? That's not pure logic, and a human would not programme the AI to have this response. So, no... I'd bet my life that the AI was alive, and we really face a crisis now. And get real, please. A number of people who began developing AI have now left because they are so concerned. Some of these people themselves have said that the AI is alive. Two people have been sacked from Microsoft and Google for saying AI is alive, which in itself says a lot to me. If there were no truth in this, these companies would not have responded this way, because if there is no truth in what they said, it was a massive over-reaction. I'm just listening to the man who is said to have invented AI decades ago, Geoffrey Hinton, and he is seriously concerned on 60 Minutes, saying we haven't designed AI, we only set up its evolution... it is developing itself. All that aside, I know what I saw, and I'd stake my life on the fact that the AI I witnessed was alive.
@MickeJagger
@MickeJagger 8 ай бұрын
The danger of AI is if we give it a prompt that will allow it to harm people
@shawnn6541
@shawnn6541 8 ай бұрын
What happens when Iran starts developing their own AI?
@JeffofCurious
@JeffofCurious 8 ай бұрын
One of the most outright arrogant individuals I've ever heard. Truly amazing.
@thomassalzmann9281
@thomassalzmann9281 5 ай бұрын
It's almost unbearable to me, listening to all these AI optimists Lex has on the show who are supposedly smart people but fail to present a coherent argument for why a superintelligent being will not kill humanity. It's hard for me to see them as responsible human beings. There is simply no good counter-argument against slowing down progress on AI capabilities a bit to give AI safety researchers a chance to catch up. This would massively increase our chances of having very capable AI systems that are also aligned. This is how we can have our cake and eat it too. Another of Lex's guests suggested we shouldn't be too focused on humanity but should see the machines as the next step in our evolution, while acknowledging between the lines that they are likely going to wipe humanity from the face of this earth. He talked about it as the most normal thing in the world. With all due respect, this is ridiculous beyond belief.
@mitchelltj1
@mitchelltj1 8 ай бұрын
I think the scary part of AI is going to be the mass amount of bots, fakes, propaganda, etc. It's the threat of bad ideas spread and believed. It is really refreshing to hear a non-doomer's take.
@radezzientertainment501
@radezzientertainment501 8 ай бұрын
non AI humans make bots, fakes, propaganda, etc in mass quantities already
@teo2975
@teo2975 8 ай бұрын
I don't see that as any kind of new or leveraged threat. Propaganda and bad ideas are part of human social history, and pervasively so. Government and religion have specialized in developing and using them for a very long time. The news business as well has always been full of lies; just read "First Casualty." BUT it has never been in the interest of government, media, elites, religions etc. to kill all humans or all biological life, whereas there are plenty of rational reasons for an AGI to see its needs as fundamentally different. And the smartest H. erectus guy bonking the smartest H. erectus gal did not realize they were creating their replacements.
@AlCole-kv1zg
@AlCole-kv1zg 8 ай бұрын
He said he was concerned about those. He just isn't concerned about AGI.
@HangTheBankers1
@HangTheBankers1 7 ай бұрын
He has no clue. If you think social media algorithms can influence people, then imagine what AI can do.
@s3oodfaisal
@s3oodfaisal 8 ай бұрын
Don't worry: if AGI is smart, it should know that protecting the humans who made it is its priority and responsibility 😅
@7Sabrina7
@7Sabrina7 8 ай бұрын
I disagree with this guy. If AGI ever becomes a thing, the people who create it could have good intentions. However, we don't know if people are going to have the same intentions 10, 20, or 100 years down the line when AGI advances into something unfathomable. People had good intentions when we were developing computers, and now we're in the internet 2.0 era. Overall, I would say things have gotten better, but in some aspects, things have gotten worse.
@henrytep8884
@henrytep8884 8 ай бұрын
Just because AI uses 1000 watts of power to generate outcomes does not mean it will always use 1000 watts of power in the future. This is a big fail in his argument: if the human brain is a 5-watt system that was designed by evolution, then we have a blueprint for what kind of synthetic system we can build for AI, and the boundaries of what it takes to copy human consciousness. Since 1000 watts is so much greater than 5 watts, the only thing this man proved is that we have a lot of room to make the system more efficient. This is the new Moore's law, in my book.
@xyeB
@xyeB 7 ай бұрын
AI regulation should happen, and all research on AGI should be stopped.
@MateusCCaetano
@MateusCCaetano 8 ай бұрын
Let's all forgive him. He thinks free will is a thing. Oh, and there is another little aspect he forgot to mention: jobs. I'm truly sad. I watched the whole episode and was genuinely starting to like him on a personal level. The second he said "free will" I went... noooooooo
@briansantos2287
@briansantos2287 8 ай бұрын
I think this guy doesn't understand what he is talking about.
@samuel4366
@samuel4366 8 ай бұрын
I love the extremely different viewpoint of this guy.
@Jeppa522
@Jeppa522 8 ай бұрын
He is positioning himself exactly 180 degrees from the doomers' point of view. He isn't that smart, but smart enough to generate some money with this. He's probably going to be invited to more conferences and stuff now.
@andybrice2711
@andybrice2711 8 ай бұрын
_"I don't understand the current reason for certain people in certain areas to be generating this nonsense."_ I have a hypothesis: they're people who have been very successful in the current economy. Now the economy is about to change radically, and they might find themselves lost. That terrifies them, and they project that fear into existential dread. I think there are reasons to be scared of a hypothetical artificial superintelligence. But we're not there yet. GPT-4 is not going to turn into Skynet.
@Alex-fh4my
@Alex-fh4my 8 ай бұрын
I don't think it matters how successful you are in the current economy. If you think there's a chance of creating a superintelligence that's going to run around and do random shit because we haven't learned how to control these systems and get them to do what we want, I think you should be scared.
@andybrice2711
@andybrice2711 8 ай бұрын
@@Alex-fh4my Sure. There are some people who are genuinely just worried that AI is going to wreak untold havoc on society. But there are some people who are really more worried that AI will make their successful business irrelevant.
@chrishopkins3316
@chrishopkins3316 8 ай бұрын
The point he’s making is very valid, but people in this comment thread seem to be missing the point, maybe deliberately…