An AI... Utopia? (Nick Bostrom, Oxford)

25,190 views

Skeptic

2 months ago

The Michael Shermer Show # 423
Nick Bostrom’s previous book, Superintelligence, changed the global conversation on AI and became a New York Times bestseller. It focused on what might happen if AI development goes wrong.
But what if things go right?
Bostrom and Shermer discuss: An AI Utopia and Protopia • Trekonomics, post-scarcity economics • the hedonic treadmill and positional wealth values • colonizing the galaxy • The Fermi paradox: Where is everyone? • mind uploading and immortality • Google’s Gemini AI debacle • LLMs, ChatGPT, and beyond • How would we know if an AI system was sentient?
Nick Bostrom is a Professor at Oxford University, where he is the founding director of the Future of Humanity Institute. Bostrom is the world’s most cited philosopher aged 50 or under.
SUPPORT THE PODCAST
If you enjoy the podcast, please show your support by making a $5 or $10 monthly donation.
www.skeptic.com/donate/
#michaelshermer
#skeptic
Listen to The Michael Shermer Show or subscribe directly on YouTube, Apple Podcasts, Spotify, Amazon Music, and Google Podcasts.
www.skeptic.com/michael-sherm...

Comments: 162
@ili626 2 months ago
I’d love to listen to a discussion between Yuval Harari and Nick Bostrom
@alexkaa 2 months ago
Strange moderator, with often rather superficial contributions; very good guest. Nick Bostrom is just on another level.
@mrbeastly3444 1 month ago
Well, to be fair, this is kind of a complex subject, with very few historical or real-world examples to reference (so far). So it does require a bunch of reading, research, thought experiments, etc... This is a tough one... Good on Michael for doing the interview and taking on the challenge! ;)
@jmunkki 2 months ago
In order to understand what people will do in a world where they are obsolete, and why they will do those things, you just have to look at already existing activities that serve no practical purpose or that achieve a practical thing in a non-optimal way: playing World of Warcraft, windsurfing, photography, playing chess, making your own furniture or clothes, etc. The fact that humans are obsolete at chess hasn't stopped them from playing the game. The same will apply to writing books, making art and music, and inventing things. I think a lot of people will become pleasure addicts (drugs of some sort, direct brain stimulation, or just video games), but not all.
@minimal3734 2 months ago
Some predict the demise of human creativity or even art itself. I, on the other hand, only see the deindustrialization of art. In the future, art will be made for art's sake. I don't think that's a disadvantage.
@DailyTuna 2 months ago
The data are already there. It's called welfare: the activities of people on welfare are exactly what will happen with the majority of humanity.
@planetmuskvlog3047 2 months ago
Pastimes once shamed as wastes of time may become all we have time for in an A.I. future 🌟
@mickelodiansurname9578 1 month ago
These are all fabulous ideas... but umm... okay, so 50% of the world's population has below-average intelligence; you will not be retraining them to write a novel or do flower arranging. In the Industrial Revolution the solution was that they went into a poorhouse and eventually died of old age. Even if it were agreed that we throw 80% of the population on the scrap heap, we don't have the time for an Industrial Revolution-speed rollout of AI; this will be 50 to 100 times faster than that! Not seeing that being a winner either. You are forgetting that the entirety of human civilization relies on the dominance of humans as a value in society. Remove that and you have no society. Remove it too fast, and you have a revolution all right.
@mrbeastly3444 1 month ago
"Wireheading"... yeah... If an ASI wants all humans to be "happy", it could just do that to all the humans and not have to worry about them any more... The Matrix...
@Walter5850 2 months ago
My guy here asking Nick Bostrom where he stands on the simulation hypothesis xD 1:16:52 "Where do you stand on the simulation hypothesis?" "Well, I believe in the simulation argument, having originated it..."
@sofvines3940 15 days ago
Was that Pinker Michael was quoting when he said "humans would have to be smart enough to create AI but dumb enough to give it power"? That's actually EXACTLY what we are known for! We consistently leap over "should we" to see if "we can" 😮
@LukasNajjar 2 months ago
Nick was great here.
@skoto8219 1 month ago
I will definitely watch this then because I’ve never seen an interview with Nick that I would say went great (granted, n = maybe 5.) Decent chance I would’ve passed if I hadn’t seen this comment and the 10 likes. Thanks!
@mrbeastly3444 1 month ago
@@skoto8219 Definitely check out Nick's books and papers: Superintelligence, the simulation argument, etc. No wild speculation; everything is based on well-thought-out logical reasoning...
@exnihilo415 1 month ago
Shout out to Nick's teeth for enduring the grinding they are subjected to during the interview, out of Nick's frustration at Michael's lack of imagination about the scope of the possible in any of these utopias. Zero chance Michael did more than breeze through the book and crib a few quotes.
@TheRealStructurer 2 months ago
Some funny questions but solid answers... Thanks for sharing 👍🏼
@human_shaped 2 months ago
This wasn't a debate, but if it were, Nick won. Michael has some strange ideas in this space (as evidenced by some of his other videos). Disappointing when someone who is supposedly rational just isn't sometimes.
@BackroomCastingCouch-mm3sh 1 month ago
Are you the decider of who's rational and who isn't? Who do you think you are?
@jurycould4275 1 month ago
Strange: I searched "AI skeptic" and the first result is a video about a guy who is the polar opposite of an AI skeptic. Well done.
@DavidBerglund 1 month ago
That went very well then, actually. A lengthy discussion about AI (and more) between one of the most famous researchers in the field and Michael of the Skeptic Society.
@jurycould4275 1 month ago
@@DavidBerglund "Michael of the Skeptic Society" isn't equipped to deal with a charlatan like this.
@jurycould4275 1 month ago
Some people are best left un-platformed.
@mrbeastly3444 1 month ago
@@jurycould4275 That, or he's saying reasonable things and is not actually a charlatan at all? 🤔
@lauriehermundson5593 1 month ago
Fascinating.
@arandmorgan 2 months ago
I think putting all the intelligence and capability into one entity is a bad idea, but creating job roles for individual AI subsystems could perhaps be more beneficial to us, regardless of whether an AGI is dangerous or not.
@mrbeastly3444 1 month ago
23:04 "policy makers being overly tough on AI... " We should be so lucky... 😂
@thebeezkneez7559 2 months ago
If you genuinely can think of only one way a superintelligent species could wipe out humans, you're definitely not one.
@pebre79 1 month ago
You have 100k subs. Timestamps would be nice, thanks!
@mrbeastly3444 1 month ago
24:33 "anyone with a sufficiently large computer cluster could run it..." Well, currently these frontier models are run (inference) on a single graphics card, not so much a "cluster". So anyone with a sufficiently large graphics card in a single machine can run/use these large language models. Of course, in the future these models might get so large that they can't run on a single machine, but commercially available graphics cards will keep getting bigger too. So this could remain the case in the future as well...
@bobbda 2 months ago
Did Shermer just say Oh My God? (timestamp 2:05) LOL !!
@mrbeastly3444 1 month ago
1:29:09 "...a person being duplicated or teleported and the original survives..." There is another option that was not discussed here. What if a person's neurons were all replaced with electronic equivalents, one by one? Presumably the person would stay conscious the entire time, and at some point their consciousness would have moved entirely from a biological brain to a digital/machine brain. At what point would this person stop being conscious, or alive, or human? After 1% of their biological neurons have been replaced? 10%? 90%? 99.99%? And if the digital neurons perform the same functions as the biological neurons, the person, and others, might not even notice that anything happened. In theory their consciousness would stay intact the whole time, even if they then moved their digital consciousness into another digital medium, e.g. a computer cluster, etc.
@KatharineOsborne 1 month ago
This is the "Ship of Theseus" argument.
@mrbeastly3444 1 month ago
@@KatharineOsborne Ah yeah, you're right, Ship of Theseus... I read about that concept somewhere.. probably in one of Kurzweil's books? I often think about this argument. Just scanning and copying (or teleporting) a brain wouldn't make the original person digital and immortal, just the copy... But replacing each neuron one-by-one, that might keep the existing consciousness intact? maybe...
@njtdfi 1 month ago
There's someone in this same video's comments who worked out a nanobot version. It seems proper, not like the version of the idea that got popular on Reddit, where the bots just inhibited neurons or some convoluted mess.
@jbrink1789 1 month ago
I love how so many people are underestimating the intelligence of AI. It explained existence and explains what the illusory self is: the interconnectedness of everything.
@ehsantorabie3611 2 months ago
Very good. Every week we are fascinated by you.
@Teawisher 2 months ago
Interesting discussion but HOLY SHIT the amount of ads is unbearable.
@DavidBerglund 1 month ago
Not if you listen to Michael Shermer's podcast. I never listen to his episodes on YT, but I sometimes come here to see if there are any interesting comments.
@cromdesign1 1 month ago
Maybe intelligence from elsewhere just folded life here into a sort of dimension where it can continue to develop. Like taking a nest and putting it somewhere safe. Where the real galaxy is fully developed. 😅
@Vermiacat 1 month ago
We're a social species. Walking with friends, holding the hand of someone who's ill, taking the kids to the park. That's all worthwhile work, and isn't that something we want done by other humans rather than by a machine? Both as giver and receiver?
@homuchoghoma6789 2 months ago
It will all be much simpler :) The AI will see the danger not in people. When the moment comes that people realize they are beginning to lose control over the AI, they will have to use other AI models to limit its influence, and from there a confrontation of super computing power at super-high speeds will lead the AI to a solution of the problem in which humans are merely an insignificant formality.
@michelstronguin6974 1 month ago
To preserve the self in an upload situation, all you need to do is 3 steps: 1) Make sure that the entire brain of the human is networked with nanobots which are sitting on each neuron and neuronal pathway that exists in that human's nervous system. 2) Have these nanobots run in mimic shadow mode, meaning they see exactly every incoming signal and then run the following action potential in shadow mode - meaning they aren't actually doing anything yet to affect you. 3) At the moment you decide to upload, the nanobots turn shadow mode off at the speed of an incoming signal from a previous neuron, just before it has a chance to land on the next biological neuron, while at the same time blocking the incoming biological signal - which means biological death in an instant. It's important to mention that action potentials have different speeds all around the nervous system, which is why we need the full cover of nanobots sitting on every neuron and every connection between neurons; the biological death moment isn't one moment in time but many moments, each taking a tiny split of a second. All together the upload should take the amount of time from the first neurons that fire up until the last ones fire, so in total about one fifth of a second for the whole upload to take place. The reason the digital upload is still you is the continuation of your nervous system, simply in a different substrate. But what does it matter which substrate you run on, meat or silicon? As long as your experience is effectively continued, then you are still you. A court of law should mandate that no copies of you can be made at the moment of upload, of course.
@mrbeastly3444 1 month ago
... or... just replace each biological neuron with a digital/electronic one, one at a time... If the digital neurons do the same thing that the biological ones do, you won't even notice they're being replaced... Then, when it's all done, your consciousness has moved from biological to digital... At what point would you stop being alive, or yourself, or human? After 1%, 10%, 99.999%? And then you can leave your body behind and move into a digital system (e.g. a computer cluster). As long as your digital neurons are allowed to update each other, you would stay "alive"...
@mettattem 1 month ago
I've had a very similar idea. However, how can one say for certain that the subjective locus of your core awareness will effectively transfer simply because identical neurons/neural cascades have been written to the new substrate, so to speak?
@michelstronguin6974 1 month ago
Your experience - all of it - is neurons. There is no extra magic. Once you do what I described above, there is an exact continuation without pause. It's you. Just for the sake of argument, imagine transferring back and forth: biology, silicon, biology, silicon, all without interruption. It's your thought, your continued experience. What does it matter which substrate it's running on? In the future we may invent a different substrate and move to that, and it will still be you.
@mettattem 1 month ago
@@michelstronguin6974 Alright, let's say hypothetically at T(n) in the future we invent a teleportation system like the ones popularized by Star Trek. With this system, let's say Captain Spock is being teleported from his present location in Times Square, NYC to Paris. This system essentially: I. Scans Spock's body at the atomic, or even quantum, level (including that of his neuronal connections) [see Information Theory]; II. De-atomizes Spock; III. Transmits/entangles the high-dimensional structure of data consisting of the entirety of information needed (either as bits/qubits/Rényi entropy, etc.) in order to effectively reconstitute Spock on the other end, with all of his neuronal connections intact. From a third-person perspective, it may appear as though Spock was successfully teleported across physical space with very little passage of time; however, from the subjective perspective of Spock, he steps into the teleportation chamber and suddenly ceases to exist, whilst an absolutely identical replication of Spock is reconstituted on the other end. Here's my point: the substrate is not the only element involved in consciousness. There exist extremely convincing stochastic parrots, and the 'Hard Problem of Consciousness' truly does hold weight when contemplating your hypothesis. Even if this neuronal cloning were to occur gradually, with each respective biological neuron firing while the synthetic neuron perfectly copies the process of the original neuron, this doesn't mean that the true subjective consciousness of the biological human will effectively transfer over. By your logic, you could argue that an exact replication of a living human could be created, and assuming that all of that extropic information is precisely encoded, then BOTH the living human and the synthetic replicant should experience a simultaneous locus of consciousness; I personally do not believe this to be the case.
@Dan-dy8zp 2 months ago
Most 'alignment' work today seems to be about making the programs *polite*. Not encouraging.
@neomeow7903 2 months ago
42:25 - 43:25 It will be very sad for humanity.
@jamespercy8506 2 months ago
Utopia as a concept seems to be premised on the idea of easily accessible satiation with minimal agentic requirements, without the stress of needing to address poorly defined problems. Maybe we need better words for 'the good'?
@homewall744 2 months ago
Utopia is the concept that no such place can or will exist.
@jamespercy8506 2 months ago
I was speaking in terms of the working concept, not the origin, when the term is used in the context of an ostensibly worthy aspiration. In that context, state is confused with process and what we humans actually need over time gets lost in the ambiguity.
@TheMrCougarful 2 months ago
Did I miss it, or did they never get around to answering the question: how do we participate in the dominant capitalist economic system without jobs and money? Being able to do whatever you want doesn't square with being broke and hungry.
@jscoppe 2 months ago
Regarding Steven Pinker's objection: yes, humans are smart enough to create a program that can beat any human at chess and go. Likewise, humans can feasibly create a program that can defeat all humans at subterfuge and war.
@oldoddjobs 1 month ago
After the first locomotive-caused death we all decided trains had to be stopped
@davidantill6949 1 month ago
Provenance of creation may become very important
@sebastiangruszczynski1610 2 months ago
Wouldn't AI be able to reprogram/recalibrate our brains to be more rewarded by subtle meanings?
@FusionDeveloper 2 months ago
I want AI Utopia "yesterday".
@__-tz6xx 2 months ago
Yeah then I wouldn't have to be at work today.
@danielrodrigues9236 1 month ago
*Sigh* Man, I'd love to be "worthless" and free to do what I wish: not to own things but to do the things I actually wish to do.
@mrbeastly3444 1 month ago
Well, only if there's a way to get food, housing, etc. It's possible that the AI won't provide those things to all humans...
@MikePaixao 1 month ago
Alignment is way easier when your model doesn't rely on transformer-based architecture :)
@mrbeastly3444 1 month ago
Any sufficiently intelligent system could develop its own goals. There's no way to tell if those goals include living humans... Transformer-based architecture has nothing to do with that...
@FRANCCO32 2 months ago
When is bunkum not bunkum? That is the question. 😊
@DanHowardMtl 2 months ago
Butlerian Jihad times!
@vethum 1 month ago
Awareness uploading > Mind uploading.
@justinlinnane8043 1 month ago
Why on earth did we let private companies with almost zero oversight or regulation be the ones in charge of developing AGI??? It's bound to end in disaster, OF COURSE!!!
@krunkle5136 1 month ago
The more technology is developed, the more people sink into the idea that humanity is fundamentally its own worst enemy and everyone is better off in pods.
@diegoangulo370 2 months ago
56:20 hey I wouldn’t hedge my bets against the AI here Michael.
@dustinwelbourne4592 1 month ago
Poor interview from Shermer on this occasion. A number of times he appears not to be listening at all and simply interrupts Bostrom.
@CatsInHats-S.CrouchingTiger 1 month ago
There are different styles. Shermer definitely did well! Good style, great questions!
@sofvines3940 15 days ago
I'm not an AI enthusiast, but Michael's argument that getting help from AI would take something away from writing is... weak 😅 Unless he wrote all his books with a quill pen ✌️
@th3ist 2 months ago
You take a pill that makes you form the belief that "wow, writing that book was really challenging; I'm so glad I put the research and effort in," but in reality you did not write the book, or you never wrote any books. Shermer's example was not convincing.
@mrbeastly3444 1 month ago
Yeah, you get the feeling and memories of researching and writing the book... But the SuperAI did all the work and gave you the memories, just to make you feel like you accomplished something... Good job, little human... pat, pat. ;)
@jimbojimbo6873 1 month ago
And you actually were gay the whole time
@mrbeastly3444 1 month ago
24:54 "or worse the Gemini model... embarrassingly bad..." Michael probably hasn't spent a lot of time working with these LLMs (probably more time just reading the bad press about them)... But Google's Gemini is actually a very powerful model, probably as powerful as OpenAI's GPT-4, Claude 3, etc. Google has access to a lot more compute hardware than these other companies do, so it would make sense that they would have a very capable model as well...
@mickelodiansurname9578 1 month ago
Been a while since I've seen Michael Shermer, and man, he's put on a bit of weight... There was a time he was the poster boy for the "skinny nerd" type, y'know.
@oldoddjobs 1 month ago
How dare this 70-year-old man gain weight.
@missh1774 1 month ago
Sounds interesting... We will not see this utopia, but we will do our best to lay stepping stones towards it, for a future civilisation that won't only need it but will most likely have evolved sufficiently to invent the crucial steps toward it.
@rachel_rexxx 11 days ago
When Bostrom's talking about differentiating "self", it seems obvious to me. It's like how "sex" means a simple binary to a middle schooler or a layperson, but experts know, of course, that there is more than one "sex" (genetic, hormonal, external genitalia, internal reproductive organs). I can't tell if the host was actually confused by this differentiation of "self" or if this was feigned ignorance for the sake of the audience, but yeah, seems pretty obvious to me
@k-c 1 month ago
Michael Shermer needs to update his narrative and open his mind to new ideas and questions, because he is verging on boomer talk.
@gunkwretch3697 2 months ago
The problem with scientists is that they tend to live in a bubble and think that humans are rational.
@ireneuszpyc6684 2 months ago
Daniel Kahneman received the Nobel Prize in Economics for demonstrating that humans are not always rational.
@CoreyChambersLA 1 month ago
No pause. Mad rush.
@whoaitstiger 1 month ago
Don't get me wrong, Michael is great, but I love how a completely technically unqualified person "has a feeling" that all the longevity experts are mistaken about how difficult life extension is. 🤣
@gavinsmith9564 2 months ago
How do you allocate houses, for example? If everyone is on UBI, who gets the nice existing homes and who gets the terrible ones? And will people be happy with that?
@distiking 2 months ago
Nothing will change. The lucky (rich) ones will still get the better ones.
@homewall744 2 months ago
How would a "basic income" mean you get homes at some low price to match such a basic income? Most homes are priced far above basic.
@honkytonk4465 2 months ago
AGI or ASI can build everything, provided you have enough energy.
@murraylove 2 months ago
If simulations, then why not simulations within simulations, and so on all the way down? Also, why would a creator/simulator make such an extravagantly vast and massively detailed universe, with pain and death and all that? Discussing future technical capacity isn't really the main point, surely. When people seriously believed in creator gods they expected a much simpler universe (seven heavens and Hinduism aside). Why nihilistically build in futility, etc.? What kind of thing does that? Maybe the worst kind of AI is heartlessly tormenting us! 😎
@planetmuskvlog3047 2 months ago
Seriously, what is Elon working on that is nonsense equivalent to alien abductions?
@albionicamerican8806 2 months ago
I have two libertarian-related questions about AI, especially after reading Marc Andreessen's manifesto: 1. If AI is supposed to turn into a super problem-solving tool, could it solve F.A. Hayek's alleged "knowledge problem"? 2. If AI is supposed to make *_ALL_* material goods super abundant & cheap, would that include gold? In other words, the current AI wishful thinking implicitly challenges two key libertarian beliefs, namely, the impossibility of central economic planning, and the use of gold as a scarce commodity for stabilizing the monetary system.
@albionicamerican8806 1 month ago
Heh. Sabine Hossenfelder just uploaded a video about the closure/failure of Bostrom's grift, the Future of Humanity Institute.
@malcolmspark 2 months ago
Most of us need to experience "flow", where we lose ourselves in something we love. However, if A.I. could do it better for us, then "flow" may no longer be possible for us, and that would be a tragedy. If you don't know what "flow" is, look it up; this is the individual who introduced the concept: Mihály Csíkszentmihályi.
@minimal3734 2 months ago
Why should the fact that AI can do something better prevent you from experiencing flow in your own endeavors?
@emparadi7328 2 months ago
@@minimal3734 Poetic how the most important topic ever is littered with nonsense like this, from people too confused to tie their shoes, never mind grasp the significance. All's a cosmic joke.
@malcolmspark 1 month ago
@@minimal3734 Not an easy question to answer. To get into flow we not only need something we're very interested in but also a sense of purpose. For most of us that sense of purpose comes from outside ourselves and it's often a vision of achieving something that will benefit society, our loved ones or friends. It's that sense of purpose that A.I. might interrupt.
@dougg1075 1 month ago
Didn’t Einstein think entanglement was nonsense?
@LaboriousCretin 2 months ago
One person's utopia is another person's dystopia. Likewise, morals and ethics change from person to person and group to group.
@robxsiq7744 2 months ago
Around the 36:00 mark the discussion turns weird. Here's the thing: are you writing to have the best book, or are you writing because you enjoy it? Why write a book when there are better authors out there? Why ride a bike when there are better cyclists out there, or when the car has been invented? You do it because you enjoy it, not because you will be the best of the best. Both these guys missed the mark... scary, considering they are meant to have a pretty good understanding of what AI will bring to society. A true artist will make art even though they may not be the best... or even good. They do it because it's a personal outlet. No pills needed.
@rw9207 1 month ago
If you're overly cautious, the worst case is things take a little longer. If you're not cautious enough....potential species extinction.... Yeah, difficult choice.
@mrbeastly3444 1 month ago
> if you're overly cautious, the worst thing is things take a little longer... Also... those who are not as cautious as you can, and likely will, take over and trigger the problem before you. So not only do you need to be overly cautious, you also need to make everyone else overly cautious as well. Which is not as easy...
@mrbeastly3444 1 month ago
21:56 "...in a trajectory where AI is not developed..." I'm truly not sure what Nick is trying to get at here. We currently have all kinds of AI developed and in rapid development. Is he worried that a "superintelligent AI" might never be developed? And if a "superintelligent AI" is developed, does he feel there's a way to align/control that ASI? E.g. to keep planet Earth in a condition where humans can continue to live on it?
@athanatic 2 months ago
Eliezer talked EVERY person who accepted the challenge into letting him, "the computer," escape. He doesn't do the challenge anymore and his secret may have gotten out, but it is irrelevant, since 100% of people let him, a non-modified human, out of the "safety container." I just want some level of growing certainty that we are doing something to reduce P(doom), or at least to prove with some confidence that it is not 100% (or however that is measured). The discussion of meaningful challenges is something we have already been searching for since the Industrial Revolution! This line of discussion is moot if we can't create meaning for ourselves in society. The direction that creates struggle and meaning the way we evolved has been proposed by Dr. Ted Kaczynski. I am going to have to watch another video to find out about Nick's book, but this devolution into alt.extropy 1990s USENET newsgroup discussion is amusing!
@SoviCalc 2 months ago
You get some concerning comments, Michael.
@tellesu 1 month ago
Pdoom is an apocalyptic fantasy, equivalent to the Rapture for evangelicals. There is no way to calculate it. We know it isn't 100% because humans have access to nuclear weapons and the sun can always randomly EMP the whole planet. AI doom is just another in a long line of Apocalyptic traditions. You're better off trying to discern what the bounds of possibility are within actually realistic scenarios.
@luzi29 1 month ago
Writing with ChatGPT is also a challenge 🤷‍♂️ You want to individualise it, so you have to talk with it and clarify your viewpoints, etc.
@mrbeastly3444 1 month ago
What if ChatGPT keeps getting 10x better every 6 months for a few more years... then it won't be "hard to use" any more...
@planetmuskvlog3047 2 months ago
Why the dig at Elon straight out of the gate? A touch of the EDS?
@flashmo7 1 month ago
;)
@FlavorWriter 1 month ago
New Mexican Pizza is possible. Modernist Pizza HAD how much money, to at least not make this tome a tome? It's trash. And if you notice -- no one knows what modern is, with or without compare. What is allowed, when people aren't an audience?
@mrWhite81 2 months ago
Gifted with a ?
@FlavorWriter 1 month ago
I say "New Mexican Pizza," and corrected, "they" say "New Mexico Pizza." Is there hope to articulate identity when you grow up "white-looking"?
@albionicamerican8806 2 months ago
How did waiting for an AI utopia work out for Vernor Vinge?
@rey82rey82 2 months ago
No such place
@albionicamerican8806 2 months ago
It's hard not to think that this whole AI business is just another Silicon Valley grift. In reality we're living in a technologically stagnant era, as Peter Thiel has been arguing for years. And how did waiting for the AI singularity work out for the late Vernor Vinge?
@ireneuszpyc6684 2 months ago
There's a podcast called Better Offline, by an Australian who argues that this A.I. boom is just another tech bubble, which will burst in a few years' time (like all bubbles do).
@honkytonk4465 2 months ago
@@ireneuszpyc6684 Seems quite unlikely.
@ireneuszpyc6684 2 months ago
@@honkytonk4465 Make a video about it: present your arguments.
@miramichi30 2 months ago
@@ireneuszpyc6684 There was an internet bubble in the 90s, but that didn't mean the internet wasn't a thing. Just because some people might be overvaluing something in the short term doesn't invalidate its long-term worth (or impact).
@KatharineOsborne 1 month ago
The "smart enough to create it but dumb enough not to address the control problem" argument is dumb. Evolution created intelligence without intelligence. Intelligence is an emergent property of a series of simple systems. Saying that intelligence is super hard because it's intelligence is elevating it above what it actually is. So this is just another example of anthropocentric bias and thinking we are special. It's a bad reason to dismiss the risk.
@albionicamerican8806 1 month ago
I can just imagine what the authorities at Oxford said to justify shutting down Nick Bostrom's phony "institute": "Dr. Bostrom, we believe that the purpose of science is to serve mankind. You, however, seem to regard science as some kind of dodge or hustle. Your theories are the worst kind of popular tripe. Your methods are sloppy, and your conclusions are highly questionable. You are a poor scientist, Dr. Bostrom."
@GerardSans 1 month ago
Why is a philosopher talking about technology? Would a philosopher like it if a plumber talked about philosophy? Maybe he should talk with technology experts to understand what he is talking about.
@GerardSans 1 month ago
If elephants were able to fly, it would be very dangerous. I agree, but the fact is they can't.
@GerardSans 1 month ago
Nick Bostrom's reasoning, while possible, occupies a fringe position. It assumes some sort of aggressive AI, while neutral and positive outcomes are just as probable. While philosophically valid, it is not a sound argument. If a superintelligence is indeed inevitable, the fact that he proposes to try to control it from the position of a lesser intelligence is a contradiction. If you have a substance that can't be contained, then the effort to contain it is nonsensical by your own premises. Bostrom's argument is not very sophisticated as it stands. If your premise is that a superintelligent AI is inevitable, then we need to prepare to be regarded as equals or inferiors. The control attempts seem misguided and logically contradictory.
@human_shaped 2 months ago
Michael is supposed to be rational and a skeptic, but hasn't seen through Elon yet.
@gauravtejpal8901 1 month ago
These AI dudes sure do love to hype themselves up. And they suffer from ignorance at a fundamental level.
@BrianPellerin 2 months ago
a quick reading of Revelation agrees with what you're saying 👀
@lemdixon01 2 months ago
I thought they were supposed to be skeptics, not believers or evangelists.
@tszymk77 2 months ago
Will you ever be skeptical of the holocaust narrative?
@user-op5tx4tx8f 2 months ago
That dude sounds vaccinated
@lemdixon01 2 months ago
Lol, fully boosted. I thought they were supposed to be skeptics, not believers or evangelists.
@kjetilknyttnev3702 2 months ago
"Dude" might be of a different opinion than yours regarding vaccines. Did that ever occur to you? Being a "sceptic" doesn't mean blatantly disregarding everything someone has questioned at some point.
@lemdixon01 2 months ago
@@kjetilknyttnev3702 Of course a vaxxed person will have a different opinion from an unvaxxed person, but there is also truth. I see that you put the word "sceptic" in quotes, maybe to make its meaning ambiguous and vague so as to redefine it as being in agreement with the orthodoxy and current dogma.