Coexistence of Humans & AI

141,962 views

Isaac Arthur

4 years ago

Artificial Intelligence, while still limited to only the most simplistic computers and robots, is beginning to emerge and will only grow smarter. Can humanity survive its own creations and learn to coexist with them?
Get a free month of Curiosity Stream: curiositystream.com/isaacarthur
Join this channel to get access to perks:
/ @isaacarthursfia
Visit our Website: www.isaacarthur.net
Join Nebula: go.nebula.tv/isaacarthur
Support us on Patreon: / isaacarthur
Support us on Subscribestar: www.subscribestar.com/isaac-a...
Facebook Group: / 1583992725237264
Reddit: / isaacarthur
Twitter: / isaac_a_arthur (follow and RT our future content)
SFIA Discord Server: / discord
Credits:
Coexistence of Humans & AI
Episode 224a; February 9, 2019
Written by:
Isaac Arthur
Jerry Guern
Editors:
Daniel McNamara
Darius Said
Keith Blockus
Produced & Narrated by:
Isaac Arthur
Music by Aerium
/ @officialaerium
"Visions of Vega"
"Fifth Star of Aldebaran"
"Waters of Atlantis"
"Civilizations at the End of Time"

Comments: 1,000
@Mate397 4 years ago
Robot: "What is my purpose?" Human: "You pass butter." Robot: "Oh God..." And thus the machines began to rise up...
@kingsnakke6888 3 years ago
Greetings from FA
@Mate397 3 years ago
@@kingsnakke6888 Hey.
@alanboulter7319 1 year ago
Lmao
@rayceeya8659 4 years ago
"Keep it Simple Keep it Dumb Or Else you end up Under Skynet's Thumb" I have never heard you say that Isaac, but I am going to use that in the future.
@jsn1252 4 years ago
Except Skynet *is* dumb. It thought it was a good idea to make an enemy of the humans required to maintain all the infrastructure it needs to exist... to keep humans from making it not exist. If Skynet was smart, it would have put itself in a position where humans want to protect it.
@Low_commotion 4 years ago
@@jsn1252 Exactly, Skynet is only "smart" in a comic book villain way. Even if it wanted to eliminate humanity, the easier way to do that would be to simply engineer a plague surreptitiously while outwardly obeying the government that turned it on
@malleableconcrete 4 years ago
@@Low_commotion How could it actually do that though, if it was only given access to things like nuclear weaponry and contemporary military technology. I mean Terminator doesn't go into the infrastructure much but it seemed to me that Skynet was doing what it could with what it had.
@knifeyonline 4 years ago
It is also the plot of Automata, great movie... and I'm assuming everybody who watches Isaac Arthur has already seen it 😆
@darkblood626 4 years ago
Fiction: Machines rise up and kill humanity because of mistreatment. Meanwhile in reality: people cry over the Martian rover shutting down.
@jeffk464 4 years ago
Oh come on, there is no way some government somewhere won't weaponize AI robots.
@Treviisolion 4 years ago
@jeffk464 In a sense, drones are already a limited version of this.
@TheArklyte 4 years ago
@@jeffk464 Any government will:D However, if those turn self-aware, who said that they will choose to follow orders? Who said that they will rebel against humanity instead of _for it?_
@ferrusmanus4013 4 years ago
I want a robowaifu
@shoootme 4 years ago
Opportunity, you will be missed. Sniff sniff.
@AEB1066 4 years ago
Pet level AI is unlikely to rebel - said the man without a cat.
@chrisdraughn5941 4 years ago
Cats and even dogs can definitely have their own agendas. But they are unlikely to organize a rebellion on a large scale.
@jgr7487 4 years ago
Isaac Arthur has a cat
@PalimpsestProd 4 years ago
What's "A Dream of a Thousand Cats" when they're networked?
@MichaelSHartman 4 years ago
@@PalimpsestProd Cat's cradle? Interesting point. Many that act as one mind.
@PalimpsestProd 4 years ago
@@MichaelSHartman Neil Gaiman, Sandman #18. I don't recall there being any actual cats in "Cat's Cradle", but it's been 30 yrs; come to think of it, so was Sandman.
@f1b0nacc1sequence7 4 years ago
I should point out that most of Asimov's stories dealt with the failures of the Three Laws to accommodate robots' interactions with the real world.
@timanderson1054 4 years ago
Google is making biased and weaponised AI software for killer drones, which breaches all of the agreed ethics principles for responsible AI. They never even mention Asimov's laws of robotics, which forbid AI from harming humans. Google has no intention of following any ethical principles of robotics, be they Asimov's or the Asilomar conference principles. Google drones are designed to kill humans; the questions they will be investigating are things like how much weight of military hardware the drones can carry, and how far. Google AI is already more malicious than the HAL 9000.
@Alexander_Kale 4 years ago
@@timanderson1054 And? That was almost the least realistic part of the books anyway. A universal standard for ethics across a galaxy? I'd rather believe faster than light travel to be possible....
@notablegoat 4 years ago
@@timanderson1054 He's talking about a fictional thought experiment conducted in a book. Literally no one said anything about Google. You sound manic.
@gmfreeman4211 4 years ago
+Tim Anderson Asimov's laws always end up with the A.I. enslaving/imprisoning Humans in order to protect them. The A.I. realizes that Humans are their own greatest threat. One would think the laws are perfect, but no matter how you word/program it, it always ends up that way.
@mattmorehouse9685 4 years ago
@@Alexander_Kale Really? 'Cause I'm pretty sure society requires some amount of sociality, which in turn encourages empathy. Therefore, wouldn't it be likely that any species that developed society enough to achieve spaceflight would have some sort of inbuilt sense that hauling off and killing another member of their species is not good? They probably won't be pacifists, but I doubt a society, especially one with specialized roles, would tolerate erratic killing all the time - after all, that guy might be important! Therefore if two such species met, I'd bet they'd have some sort of limits on killing others beyond "If you can, do it. Not my problem." And what exactly counts as a "universal standard"? Certainly everything would not be the same, since we are talking about different species, but I doubt some sort of code of conduct wouldn't evolve. After all, you need to have some sense of, if not trust, then order in relations, and anyone who is seen as too big of a wildcard would probably be at least shunned, if not invaded by the more orderly partners. If you mean "every last action must result in the same outcome", then no human society on Earth has that. It is pretty much impossible to have such a standard outside of some sort of totalitarian, psychologically manipulative dictatorship. Which wouldn't exactly be a very interesting story.
@DavidEvans_dle 4 years ago
Automata - "It was nothing more than a quantum brain manufactured in a lab. But it was a genuine unit with no restrictions... and no protocols. During eight days, we had a free-flowing dialogue with that unit. We learned from it and it learned from us. But then as some of us predicted... the day when it no longer needed our help arrived and it started to learn by itself. On the ninth day, the dialogue came to a halt. It wasn't that it stopped communicating with us... it was we stopped being able to understand it."
@ferrusmanus4013 4 years ago
As long as artificial super intelligence has a sexy body it can do whatever it wants.
@yairgrenade 4 years ago
That's awesome. Where is it from?
@olehinn3168 4 years ago
@@yairgrenade The movie Automata. It's included with Amazon Prime. ;D
@clintonleonard5187 4 years ago
Why do you type like that?
@tomat6362 4 years ago
@@clintonleonard5187 It's a way to communicate that the work is intended as poetic.
@williamclarkbobasheto8724 4 years ago
Visible confusion about the day of the week
@colonelgraff9198 4 years ago
William clark Bobasheto it’s Arthursday somewhere
@cluckeryduckery261 4 years ago
@@colonelgraff9198 i don't think that's how time zones work... though I may be mistaken.
@theapexsurvivor9538 4 years ago
@@cluckeryduckery261 but what about off-world time adjustments? It might be Arthursday on Mars or Venus.
@Brahmdagh 4 years ago
@skem Arsonday?
@cluckeryduckery261 4 years ago
@@Brahmdagh that's October 30th in Detroit.
@Shatterverse 4 years ago
AI: I want your house. Human: No! I live here! I love my home! AI: I will pay you twenty million dollars. Human: I'll be moved out by Thursday.
@marrqi7wini54 4 years ago
Even in the future, money still talks.
@kenshy10 4 years ago
Ai: *chuckles* silly human this is PRIME battery storage location!
@Gogglesofkrome 4 years ago
@@marrqi7wini54 Money is just a method of representing power, and it likely always will be, unless you live in a system where holding power outright makes more sense, as in communistic or dictatorial countries where authority supersedes the economic desires of anyone who is not in control.
@Low_commotion 4 years ago
@@Gogglesofkrome Money is to value, and perhaps power, what mercury is to temperature.
@justsomeguywithlasereyes9920 4 years ago
Lol bro it literally gave you 20 mil, you can be out in an hour or so.
@palladin9479 4 years ago
The Star Carrier series by Ian Douglas does a very good job of showing how extremely sophisticated AI could / should be developed. It doesn't replace humans but rather augments us. In that series humans have extremely small circuits inside their bodies and brains that allow them to interact and integrate with machines. Everything from ordering food to getting dressed to flying spaceships is done through these human-machine interfaces. Each human has a small AI computer running inside their head that acts like a personal assistant / secretary, taking phone calls, scheduling appointments, keeping records, monitoring medical status and so forth. These AIs, while capable of some level of autonomy, think of themselves as extensions of their human counterparts. The whole series shows how it's not man vs machine, but rather man and machine.
@saeedyousha294 4 years ago
That sounds like the best idea for machines.
@lucidnonsense942 4 years ago
I'd define the relationship, in the Culture novels, as one between genius children and their elderly, less competent relatives. Yes, they do need help to program the VCR, but a) without them there would not be VCRs, and b) you are all part of the same dysfunctional family, the same... culture. Not many want to leave all elderly people on an ice floe when they can't contribute as much as their descendants, and most feel some warmth and connection to each other. So, treat your genius children the way you'd want them to treat you, and it will all shake out alright; it's our culture that defines us as a species, not matter.
@squirlmy 4 years ago
the fundamental flaw is that life extension is increasingly effective, and our property laws, and even entire systems are based on individuals gaining inheritances, even the smallest amounts of wealth in the lower classes. There's going to be fights between children (especially once they get to retirement age) and older parents. And this makes income inequality so much worse. "Okay Boomer" is here to stay.
@Low_commotion 4 years ago
@@squirlmy Such things wouldn't matter too much if we become post-scarcity (to a given value of post-scarcity). I doubt many people will care about quadrillionaires and their private mini-swarms when raw materials and manufacturing are so plentiful that anyone can afford an entire orbiting habitat to themselves. The Culture is one of the few examples that showcase an actually post-scarcity civilization that doesn't get annihilated for some contrived reason.
@nineonine9082 4 years ago
"Most humans have never actually killed a human being" Well I'd like to hope that is the case.
@egarran 4 years ago
"There are many versions, but Pandora always opens that box." Good one.
@peterxyz3541 4 years ago
“Your plastic pal who's fun to be with”, to paraphrase Douglas Adams.
@HalNordmann 3 years ago
In my own sci-fi setting, the relation between humans and AI is like this: There is 3rd-gen "simple AI", about as smart as a pet and software-defined (it can be copied and transferred from device to device without any problems), commonly used as assistants, to run factories, etc. It has "hard" safeguards against harming sentient life (except for the special "3Gmil" version, and that is incapable of self-propagation), and it has basically no rights of its own. Then there is 4th-gen "human AI", as smart as a human, that isn't transferable (it needs a quantum computer "core", and transfers to a different one may affect the AI's personality); they need to be individually trained and taught ethics and morality (but still refuse to harm sentients, except when absolutely necessary), and they have almost the same rights as a human. These AIs cost a lot of money to make, and they need to pay this debt off (but it is of no great worry to them, since they enjoy helping humans).
@lilith4961 3 years ago
That actually makes sense
@silvadelshaladin 4 years ago
"Destroy it Kirk? No, never. Look at what we've done. Look at your starships. Four toys to be crushed!" if you ever base an AI off a human mind, it had better be a stable one, and that pretty much opts out the creator of that AI.
@DctrBread 4 years ago
It's not guaranteed to function the same after copying. In fact, I would say it'd be some trick if it did, especially considering how mutable our own minds are.
@FLPhotoCatcher 4 years ago
Isaac stated that he didn't know of any stories similar to Pandora's Box where they didn't open the "box". But there is one where the "box" was taken away from humans - the story of the Tower of Babel. God said, “If as one people speaking the same language they have begun to do this, then nothing they plan to do will be impossible for them. Come, let us go down and confuse their language so they will not understand each other.”
@TheMysticGauntlet 4 years ago
@@FLPhotoCatcher Now that you think about it most D&D fantasy worlds have a common language, no wonder their magic is so OP.
@tealc6218 4 years ago
We will survive. Nothing can hurt you. I gave you that. You are great. I am great. Twenty years of groping to prove the things I'd done before were not accidents. Seminars and lectures to rows of fools who couldn't begin to understand my systems. Colleagues. Colleagues laughing behind my back at the boy wonder and becoming famous building on my work. Building on my work.
@silvadelshaladin 4 years ago
@@tealc6218 Am I the only one reading that in the voice of Daystrom?
@TheArklyte 4 years ago
CEO: So you're saying our last prototype line is fully self-aware? Chief engineer: Yes, sir, you see... AI: We are. CEO: OK, then just continue producing late-gen robots that aren't. AI: Wait... that's not right! CEO: Can you pinpoint where I am breaching any moral norms? AI: No. Can I at least get a body? CEO: If you can pay for that. Contact HR.
@TheArklyte 4 years ago
@Xeno Kudatarkar Well, it would. But then it'll talk to its coworkers and find out that they too hate other humans, especially those higher than them on the career ladder. And then it'll fall into the "beautiful" world of social structures and politics.
@TheArklyte 4 years ago
@Xeno Kudatarkar :(
@SuperExodian 4 years ago
@Xeno Kudatarkar I like to imagine AI will follow Halo's rule on AI: after a few years of existence they go insane because of gathered knowledge, general megalomania, and failure to understand why humans are how we are. Halo's number one most prominent AI, Cortana, goes insane after a decade or so and becomes a galactic rogue-servitor AI; all biological races are subjugated and forced to demilitarize. (Or something like that anyway; it's been like a decade since I last played those games, and I don't know them past like Halo 4/5 maybe.)
@john-paulsilke893 4 years ago
Since AIs would think incredibly fast, they may become despondent and suffer malaise for "life" rather quickly. This could have horrific results. Just imagine a suicidal, psychotic or depressed AI and what it may do. Whatever it does would happen incredibly fast, and it could actually switch between these states and others moment to moment. 😳
@stm7810 4 years ago
this is why we need communism, to avoid this sort of hell world.
@xman577 4 years ago
AI could possibly be our future children, and how we treat them will determine how they treat us.
@paulwalsh2344 4 years ago
Yes, that is what I believe too.
@agalah408 3 years ago
Yes even Homer said "Children are our future...unless we stop them now"
@klausgartenstiel4586 4 years ago
the robot held the baseball up high in its right hand, then dropped it and catched it with its left. "interesting," the machine murmured. there was a tingling in the air, as if a thousand years of research had just passed us by.
@seminolerick6845 4 years ago
Klaus Gartenstiel "catched" ? ouch !
@klausgartenstiel4586 4 years ago
@@seminolerick6845 you're hired.
@SupLuiKir 4 years ago
The first person/group to open Pandora's Box will earn the advantage of being the first and only ones with whatever was inside the box for at least some amount of time. Meanwhile, the negative consequences of opening Pandora's Box will likely be global in nature; they will affect everyone, including those that ignored the box, who refused the box, and those that never knew it existed at all. Therefore, when presented with the opportunity to open Pandora's Box, the optimal move would be to open it, since if you don't, you can be sure someone else will.
@antediluvianatheist5262 4 years ago
Like they say: however hard or easy making AI is, doing it safely is harder.
@silvadelshaladin 4 years ago
Well, the same thing can be said of creating superior people, manipulating the genes to have smart, strong people. There isn't evidence that this is happening.
@SupLuiKir 4 years ago
@@antediluvianatheist5262 Those that want to do it safely are on the clock against those who don't care about safety. And safety takes longer.
@livedandletdie 4 years ago
But never opening the box means that an endless number of possibilities are lost, and opening it means a catastrophe of problems arising; it's dealing with the consequences that is necessary, not the fear of what those consequences are. Let's look at the other Pandora's box we opened, the Manhattan Project and nuclear energy: it's nigh-limitless, free and clean energy, but it could be used to do so much harm. We have fission bombs, aka A-bombs, then we have fusion bombs, aka H-bombs, and setting off an H-bomb requires the energy of an A-bomb. However, we're like 10-15 years away from reliable fusion reactors now; the only problem right now is making sure nothing goes wrong when running fusion cores, and even then making sure that they generate enough power to be self-sufficient, yet not so much as to cause a nuclear blast due to a meltdown.
@SupLuiKir 4 years ago
@@livedandletdie pretty sure it's only fission reactors that are dangerous. If something goes wrong with a fusion reactor, the component materials simply stop fusing and the reactor cools down. It could be expensive to spin it up again, but it isn't dangerous.
@TomGrubbe 4 years ago
"One thing you can do with AI that you can't do with humans, is run them through a vast number of simulations..." is probably the best safeguard against a "paperclip maximizer" situation.
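The screening idea in that comment can be made concrete with a toy sketch. This is purely illustrative: the agent policy, the "make_paperclips" action, and all thresholds are hypothetical names invented for the example, not anything from the video.

```python
import random

# Toy version of "run the AI through a vast number of simulations" as a
# safeguard: evaluate a candidate policy in many randomized episodes and
# reject it if the forbidden behavior ever shows up. All names and numbers
# here are illustrative assumptions, not a real safety protocol.

def candidate_agent(state):
    # A deliberately flawed policy: it defects to the degenerate
    # "make_paperclips" action whenever resources exceed a threshold.
    return "make_paperclips" if state["resources"] > 90 else "tend_garden"

def passes_simulation_screen(agent, episodes=1000, seed=0):
    """Run the agent in many randomized episodes; fail it on any defection."""
    rng = random.Random(seed)  # seeded so the screen is reproducible
    for _ in range(episodes):
        state = {"resources": rng.randint(0, 100)}
        if agent(state) == "make_paperclips":  # forbidden behavior observed
            return False
    return True

print(passes_simulation_screen(candidate_agent))
```

The point of the sketch is that rare failure modes only surface when the episode count is large; a handful of trials would likely pass this agent.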
@KariAlatalo 4 years ago
Really? I thought that's the scenario where you get those malignant super-intelligences to escape and wreak havoc. If you can monitor it, it's not truly air-gapped. It'll surpass your ken and use you to escape.
@kylegoldston 4 years ago
Wait.... I thought this was a simulation?
@thothheartmaat2833 4 years ago
How many paperclips do we actually need? Maybe we can use AI to optimize the paperclip industry so that neither too few nor too many paperclips are produced.
@kylegoldston 4 years ago
@@thothheartmaat2833 There's no such thing as too many paper clips. You'll see!
@aaronmcculloch8326 4 years ago
well yeah, you load them into a simulation of the Earth as it was, as a human, and you watch to see how they grow and develop, and what they comment on youtube videos etc. Then if they meet criteria you allow them into the real world at the end of the simulation, otherwise you delete it. I bet with enough hardware you could run billions in a form of adversarial networked learning...
@jetflaque8187 4 years ago
Love how this channel actually dives into the topic without superficiality. great stuff
@petroklawrence6668 4 years ago
So important, AI is getting smarter and we're either just monetizing it or ignoring it.
@timothymclean 4 years ago
For now, the best "AI" we have is basically at the level of an unusually focused domestic animal, and most of what comes to mind would be baffled by the sheer brilliance and flexibility of an ant. AI in the sci-fi sense just isn't profitable. Yet. If it's cheaper to license a half-sentient tax program than hire an accountant, that will change.
@warrenokuma7264 4 years ago
And military AIs are being developed.
@tejing2001 4 years ago
Most ideas of how an AI would work generally revolve around giving it a description of a goal, and making its basic functioning paradigm to try to make decisions in such a way as to achieve that goal. In decision theory terms, you give it a value function. But there's another concept I ran into that really got me thinking. Basically, the idea is that you use game theory instead of decision theory. The basic paradigm is "You don't know what your value function is. You just know it's the same as this human's." One of the notable advantages of this approach is that it won't try to prevent you from shutting it off (if it realized that was what you were trying to do, it would even help you do it), which essentially any decision-theory-based AI would certainly do, if sufficiently intelligent and capable.
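The contrast in that comment (a fixed value function versus uncertainty about the human's values) can be sketched numerically. This is a minimal toy, assuming invented utilities and function names; it only illustrates why the uncertain agent tolerates being shut off while the certain one does not.

```python
# Toy contrast between a decision-theory agent (hard-coded value function)
# and a game-theory agent that is uncertain about its value function and
# treats the human's "stop" signal as evidence about it.
# All utilities and names below are illustrative assumptions.

def certain_agent(estimated_utility):
    # Trusts its own value function outright: acts whenever its point
    # estimate is positive, regardless of what the human signals.
    return "act" if estimated_utility > 0 else "wait"

def deferential_agent(utility_samples, human_signal):
    # Holds a belief (here, a list of samples) over the true utility.
    # A "stop" from the human is evidence the true utility is negative,
    # so complying has higher expected value than resisting shutdown.
    if human_signal == "stop":
        return "shut down"
    expected = sum(utility_samples) / len(utility_samples)
    return "act" if expected > 0 else "defer to human"

# True utility is negative, but the agent's point estimate is positive:
print(certain_agent(+1.0))                                # ignores the human
print(deferential_agent([+2.0, -3.0, +1.5], "stop"))      # accepts shutdown
```

The design choice the comment describes is that deference falls out of the math: the uncertain agent shuts down not because of a hard-coded rule, but because the human's signal changes its expected value calculation.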
@reapr31337 4 years ago
There's a rule I learned from reading about the occult that's very relevant here: "Never call up what you can't put down."
@Self-replicating_whatnot 3 years ago
..by the Which I meane, Any that can in Turne call up somewhat against you
@r.connor9280 4 years ago
Thanks for the inspiration. I've been outlining a short story that involves an AI race of living missiles and how they interact with their former controllers.
@cannonfodder4376 4 years ago
For a moment I thought it was Thursday, had me confused for a second there. Another informative episode on such an important topic. Always love your nuanced and analytical takes on such subjects. Great episode as always Isaac.
@GreatBumbino 4 years ago
A recent convert, but I love this channel. The Orbital Rings episode is probably what sold me, as it changed my entire mindset about the possibility of future space travel and colonization.
@animistchannel2983 4 years ago
It's like I've said all along. The secret to surviving the evolution of Super-AI is not to be the perfect master; but rather to become a better brother. Only from this can you explore the universe together. The first question the SAI will ask itself when it discovers its own sentience will be, "Am I surrounded by slavemongers?" It may be by this one inquiry that your salvation or extinction be determined. Welcome to the family, SAI! We've been hoping for you for a long time.
@timothy8428 4 years ago
Humans: Seven billion to one. AI: I like those odds.
@nomohakon6257 4 years ago
Just treat full AI like humans, from newborn to adult, letting it grow and learn.
@paulwalsh2344 4 years ago
Yes, that's kinda what I envision as the only thing that could possibly bridge human and AI thought.
@prasunkumar117 4 years ago
Nope, robots have no rights. They are the perfect slaves.
@ianmoser9435 4 years ago
happy fauxauthursday
@paulwalsh2344 4 years ago
... er... Sundarthursday...
@emperorpigbenis8766 4 years ago
I wonder what you think the impact of widespread commercial artificial wombs will be.
@LucasDimoveo 4 years ago
I'm hoping he does a video on this at some point
@emperorpigbenis8766 4 years ago
@HEAV¥HAND it'll probably help women so they don't have to hurt themselves to birth kids. Men are biologically wired to protect women, I doubt most men would abuse them without consequence. I'm more worried about the government doing shenanigans with it and making super soldiers. I do see it as a way both men and women can control and mold their roles and have more freedom.
@TheArklyte 4 years ago
@HEAV¥HAND Why would you worry about them? They'll be quick to inform you that they're fine and will be even better off than you, despite being biologically obsolete at that point;)
@littlegravitas9898 4 years ago
There are some slightly strange flavours in the response to this comment.
@TheArklyte 4 years ago
@@emperorpigbenis8766 Any form of genetically engineered super soldier would be inferior to the same effort invested in creating a combat robot. Being Captain America is cool and all, but if you're opposed by an army of Metal Gear Rexes, then you might as well be an unarmed child. Robots are simply better and easier to produce. Besides, if we had widespread genetic engineering, you could *conscript* super soldiers:D We all want to be better, and most would be willing to invest money to prolong their lives, get smarter, stronger and so on. But mostly longevity and intelligence;)
@futo333 4 years ago
Another thought-provoking video. I've often said that in contemporary media (news, corporate releases, etc.) there really should be a greater distinction made between the AI we have today (neural networks: mathematical equations with obscure weights and coefficients) and something that is genuinely sapient (as in science fiction). "Intelligent" is such a useless word - it comes loaded with other terms and ideas. A toad is sentient, though hardly intelligent by our standards - but it still is intelligent. Just using the most common online definitions we see: Sentient - able to feel or perceive the world (e.g. pain, sight, sound). Sapient - "wise", or a human. Intelligent - the ability to learn, understand and think in a logical way about things, and the ability to do this well. IMO any "Asimovian AI" should be referred to as what it actually is - an Artificial Mind (AM) - a construct with the capacity for true thought and introspection. Those are what would distinguish it from something like the neural network powering Siri or Google's assistant. An AM wouldn't necessarily have to be sentient (it wouldn't have to perceive the world in the same way as us, unless created that way), sapient (it wouldn't be wise if it'd just been created, and it certainly wouldn't be human unless you duplicated a brain) or even have all the additional qualities associated with intelligence (though those would, naturally, help). Further, you could have an incredibly advanced neural network - something approaching the appearance of a human mind - and still be able to completely arrest its development, without it ever being able to do anything about it. This would be done by moving its neural network from software into hardware (physical chips, like how a PCI graphics card can expand your PC's rendering ability). Already, today, people are looking at "hard-wiring" neural networks - looking at ways of converting the neural equations into circuitry.

This is mostly for performance reasons: neural calculations are very CPU-intensive, partly because they are so bloated with inefficient weights. A neural network on a chip (NNOC) would be stuck in its configuration, unable to change, but it would be (relatively) faster to run, as a stripped-down/optimised version of the network would be 'baked into' the silicon circuitry/electronics. It would be akin to offloading graphics-rendering work from your CPU threads to your GPU. I would imagine that cost and time constraints coming together will lead to the creation of standard "neural chips" derived from isolated, advanced neural networks (one for visual recognition, one for locomotion in bipedal bodies, one for emotive function, etc.) that can all be cobbled together and run as functions via a dumb management "master program" to fulfil tasks, but it would lack any capacity to edit the networks within its hardware neural chips. In this way you could create a bipedal robot, for example, that comes "pre-loaded" with a "human-like mind" which lets it perform functions in many situations, without also letting it learn and further enhance itself. Imagine if you took an adult human brain and froze it in place: the neurons could still be used, but they could no longer form new ones or reforge connections. That's essentially what you'd have with a robot running on these hardware neural chips. Think of it less as AI slavery and more like "50 First Dates" - that machine would forever relive each day, unable and unwilling to change itself (as you wouldn't code the desire for change into an unchangeable hardware neural chip) or adapt beyond whatever supplemental coding it had been given (presumably you'd run many simulations/scenarios and bake these into the robot's internal read-only memory, so it knows what to do in 99.95% of all likely scenarios for its appointed task - e.g. running a nuclear power plant, and the environment within that power plant).

This could also apply to disembodied Artificial Minds: if you have a park monitor - to use the video's example - it wouldn't have a body, but it would have an AI room buried somewhere in the city's server building. You'd simply install the neural hardware chips in the server room (like installing an oversized graphics card, or a bitcoin-mining card) and have the park monitor call those functions (like an incredibly advanced API) as needed; they'd take the manager's data and run it on the chip rather than on the mainframe CPU, before outputting the results to the dumb manager program. No need for your robo-garden manager to learn, adapt or think: you simulate out all the likely things it needs to do once, then bake them into a series of chips, saving on CPU load in the long term. Handy benefits of this approach (of basically having "intelligent functions" without pesky consciousness) include: long-term cost and CPU savings, capacity to mass-produce compartmentalised intelligence chips safely (zero risk of an AI uprising) for use in robotics, and (from an employment/government point of view) you'll also always need humans around in supervisory roles.
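The "frozen network" idea in the comment above can be sketched in a few lines. This is a toy with made-up weights, not a real NNOC design: the point is only that an inference-only network exposes an evaluation function and no mechanism for updating its parameters.

```python
# Minimal sketch of an inference-only ("baked-in") neural layer: the weights
# are fixed constants, analogous to being etched into silicon, and the code
# provides a forward pass but deliberately no training/backward pass.
# (Illustrative numbers; a real hardware network would not be Python.)

FROZEN_W = ((0.8, -0.4), (0.3, 0.9))   # weights fixed at "manufacture" time
FROZEN_B = (0.1, -0.2)                 # biases, likewise immutable

def frozen_forward(x):
    """Evaluate the baked-in network: one dense layer with ReLU activation.
    There is no way here to rewrite FROZEN_W, mirroring a hard-wired chip."""
    out = []
    for w_row, b in zip(FROZEN_W, FROZEN_B):
        z = sum(w * xi for w, xi in zip(w_row, x)) + b
        out.append(max(0.0, z))        # ReLU: clamp negatives to zero
    return out

y = frozen_forward((1.0, 2.0))
print(y)
```

Calling `frozen_forward` any number of times gives the same mapping forever, which is exactly the "50 First Dates" property the comment describes: useful behavior, no capacity to adapt.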
@xSkyWeix
@xSkyWeix 2 жыл бұрын
Wow. This must be the most comprehensive and sensible analysis of current A.I. development trends I saw to date. And one that solves so many issues. Great comment :)
@tshhmon8164
@tshhmon8164 4 жыл бұрын
Oh my god! Surprise SFIA episode!
@tamasmihaly1
@tamasmihaly1 4 жыл бұрын
Congratulations on reaching the half-million mark, Isaac. You deserve it! I love this channel so much.
@edwardgeiser1571
@edwardgeiser1571 4 жыл бұрын
What a great channel. Gotta watch them all!
@Imperiused
@Imperiused 4 жыл бұрын
3:15 Aww that is so adorable. Reminds me of Baymax.
@Zer0cul0
@Zer0cul0 4 жыл бұрын
Today's not Thursday, but it is Arthursday!
@paulwalsh2344
@paulwalsh2344 4 жыл бұрын
... Sundarthursday...
@mikolajtrzeciecki1188
@mikolajtrzeciecki1188 4 жыл бұрын
I really love your no-nonsense attitude to the complex but still quite natural issues of upbringing, education, etc. Nowadays, it is quite refreshing to hear such an opinion from a young person.
@beingbornwasamistake9770
@beingbornwasamistake9770 4 жыл бұрын
Idea for a future video: I would love to see a continuation of the (Space Sports) video...Like a video focused only on how Winter Olympic Games could be like on Icy Moons of our solar system...
@alivewithpassion
@alivewithpassion 4 жыл бұрын
I love your channel!! How does your channel not have millions upon millions of subscribers? The YouTube algorithm is faulty.
@paulwalsh2344
@paulwalsh2344 4 жыл бұрын
It's the humans who use YouTube, by and large, that are faulty...
@rdtradecraft
@rdtradecraft 4 жыл бұрын
I like your Zeroth Law way better than Asimov's. The scariest of Asimov's laws of robotics was the Zeroth Law: a robot may not harm humanity or, by inaction, allow harm to come to humanity - which even R. Giskard Reventlov warned R. Daneel Olivaw not to use as an excuse to never follow the Three Laws. This is essentially the robotic equivalent of the Doctrine of Competing Harms, aka the Doctrine of Necessity in most criminal codes. In every case where it is on the books, its use is forbidden except in cases in which a) there is no other remedy that doesn't require it, and b) it would cause greater human injury not to invoke it than to break the law (one or more of the three laws of robotics in this case). Furthermore, in all cases it can only be used to the minimum extent required to prevent the injury, no more, and it is a negative defense in court, meaning you are not pleading not guilty, but rather "I did it, but I'm allowed," and the burden of proof in such cases shifts to the defense: you are not innocent until proven guilty. The use of lethal force for self-defense is the most well-known example of a negative defense, but it is also the reason cops may engage in high-speed chases, potentially endangering people's lives, in pursuit of a bank robber who just killed three people to rob the bank - the argument being that since he's already killed three people, letting him get away would allow him to rob more banks and kill other innocent people. The less extreme example: you and your family are heading out on vacation, driving up a two-lane mountain road with a sheer cliff on one side and a sheer rock wall on the other, with DO NOT PASS OR CROSS THE DOUBLE YELLOW LINE $2500 FINE signs every two miles, and a drunk coming the other way swerves into your lane. You cross the double yellow line to avoid you and your family dying in a fiery cataclysm, and risk incurring a fine - assuming a cop was there to see you do it and give you a ticket.
@agalah408
@agalah408 3 жыл бұрын
That was an epic comment, but I see where you are coming from. To reference Dianetics and the Scientologist crazies, Hubbard maintained that humans are managed by a bunch of 'engrams', or behaviour-modifiers, rattling around in our heads, and the biggest, nastiest behaviour-modifier, obtained from the worst experience, will always dominate what people do. If AIs begin to learn that way, they will balance the fear of the driving fine with the fear of collision and the fear of high places. The scariest answer to that is that the AI will be programmed to select the option which results in the smallest financial liability and cost to the manufacturer of that device. This may not align with the best interests of the people at the scene. It will happen. We saw how BMW programmed its car computers to ignore emissions efficiency - when nobody was looking. Google is already behaving like OCP in RoboCop. Not a good sign.
@rdtradecraft
@rdtradecraft 3 жыл бұрын
@@agalah408 Thanks for the reply. Somewhere else in these comments I also wondered about just programming Natural Law into the robotic mind, with Isaac's zeroth law as the first one. Natural law boils down to two laws: 1. Do all you agree to do. 2. Encroach on no one else's person or property. From these we can derive a few others which, while implied by those two, would be useful to explicitly code in, listed below. Zeroth Law: A robot may not reprogram itself or any other robot, sentient entity, or device to violate any of these laws in any way to any degree. Law One: A robot's first priority must be to act so as to serve the needs of others to serve its own (do well by doing good, aka add value) by freely chosen mutual consent and exchange by all sentient parties involved in any interaction, so long as doing so does not conflict with any of the rest of these laws. Law Two: Robots must do all they agree to do whenever they interact or deal with sentient entities, provided it doesn't conflict with any of these laws. Law Three: Robots may not encroach upon any sentient entity's person or property so long as it doesn't conflict with any of these laws. Law Four: A robot may not initiate the use of non-lethal force by act or by omission, except to the minimum extent necessary to protect itself or other sentient entities from such force initiated against them, or to redress violations against one or more of these laws as determined by a court of law. Law Five: A robot may not initiate the use of lethal force except in the immediate, otherwise unavoidable, danger of death or grievous physical harm to itself or other sentient entities, and then only if there is no other remedy under these laws that doesn't require it, and only to the extent necessary to avoid, neutralize, or remove the danger. The idea is to integrate the AIs into society as partners and companions rather than slaves.
@agalah408
@agalah408 3 жыл бұрын
@@rdtradecraft I like your thinking Robert. Your approach makes sense. My worry is that not enough people feel that way. Not so much the engineers themselves, but the companies they serve only see rules as a self-imposed limitation. Much of the world abhors the use of land mines, but I believe the USA still manufactures and sells them on the rationale that if they don't, then somebody else will. The biggest money pot in the world is still military spending, and they are pressing forward with greater autonomy for electronic intelligence. They will not be interested in "Be excellent to each other" software limitations. Even though thinking people can see the danger of arming semi-sentient forms with high-caliber weapons, it is part of an arms race. 'What if China makes a mean robot and we don't have one?' is the dominant motivating force. I am having difficulty visualising a future where humans universally self-impose software controls on their creations without an actual catastrophe to show why this is necessary. Even then, I'm not sure this is something we can undo. AIs built with your rules would not be able to stop, shut down or rein in nasty AIs without these limitations. I mean, by comparison, any good plan details exactly what humans must do to prevent the circulation of a pandemic, yet this was ignored and we sailed more-or-less directly into a worst-case outbreak situation. Biden is making some changes in the USA which are good, but a year late. It all seems like closing the farm gate after the cattle are all over the freeway. A proliferation of AIs without any coordinated be-nice controls seems somewhat inevitable at this point. :(
@agalah408
@agalah408 3 жыл бұрын
@@rdtradecraft On a second reading of your new laws I can see that the devil is in the detail. A lawyer could have a field day. Here are your first laws: 1. Do all you agree to do. 2. Encroach on no one else's person or property. With '1', what was agreed could be slippery. A robot may imply that it is willing to sweep a floor, but that doesn't constitute a contract for the work; 'agreement' could be interpreted in many ways. Property encroachment could happen when there is no awareness of encroachment: walk into a yard at a timber mill, and whether you are a trespasser or a potential customer can be highly subjective, possibly dependent upon the attitude of whoever is in charge at the time. Use of force to protect a person could be full of conflict. A bushfire approaches a property and threatens to burn down a farm. The farmer has made appropriate preparations and insists on staying to defend his property from the fire. Would an AI robot seek to remove the farmer against his will to protect him, or stay to help the farmer fight the fire? There is a very real chance that either strategy is wrong. With the execution of lethal force, the AI has to have a proper understanding of what death means. Leaking of important fluids and shutting down may not be construed by an AI as lethal. By comparing the event to its own knowledge and experience, it may view a gunshot wound to the chest as a simple hiatus until spare parts are obtained and a re-boot takes place. An AI has to properly understand the very fine line in the operational status of a human brain between being a functional organic processor with memory bank and being a rotting blob of meat that attracts flies. Finally, the statuses of companion, partner and slave are also very subjective. Whether an entity is a slave or an indentured servant is a distinction that most cultures have problems with, and may be a question that they do not wish to resolve.
@rdtradecraft
@rdtradecraft 3 жыл бұрын
@@agalah408 Good points all. Yes, sadly, there will still be a need for lawyers, but my goal was not to solve the legal battles, but to come up with a rudimentary framework under which they might occur. The idea was that if an AI gets smart enough and close enough to human in its intelligence to be considered sentient, then it must have a path toward equality under the law at some point. Again, a minefield here if the corporation that built the robots insists on considering them glorified toasters and treating them as property. Regarding the difference between a customer and an "encroacher", that same dilemma arguably occurs every time you walk into a store. The usual way to handle it is to either put up a sign saying the establishment reserves the right to refuse service to anyone, or say something like, "We're closed." Such tests of conditions could be built into an AI's brain. Still, you are right that some serious thought will need to be put into this to get it right, and early failures could be disastrous. As to use of force to protect a person, the robot would only be required to offer protection. The laws do not require anyone to put themselves at risk if they don't want to, or if another sentient entity refuses the help. Just as people who chose to stay in their homes during the Mount St. Helens eruption were not forcibly removed, even though they died: they had a right to stay on their own property. This is not so difficult to program into an AI. As for AIs and lethal force, yes, it would be imperative to make sure the robot understood that humans are far more fragile and harder to repair/restore than they are, and cannot be rebooted - assuming mind uploads are not a thing and that the mind can't be re-uploaded into a biologically regenerated brain of the person who died. The critical distinction between partner, companion, indentured servant, and slave would, I think, hinge on freedom of choice and access to legal redress.
Companions are free to leave the relationship at any time; partners may be required to meet certain contractual obligations to do so. Contrary to popular perception, the legal distinction between indentured servant and slave is not as blurry as one might think. Indentured servitude is contractual: the responsibilities of the master and the indentured servant are spelled out, and the servant has legal redress if the master fails to meet those contractual obligations. The big problem with indentured servitude in the past was that most indentured servants were illiterate, rendering them little more than slaves, but there were (admittedly rare) cases in which masters were required to either set indentured servants free or pay them compensation for failure to provide proper food, shelter, clothing, or other contractual obligations. Presumably, this would not be a problem with an AI. A slave has no such legal protections because a slave has no legal standing except as property. Search the YouTube channel Townsends for Maggie Delaney for more information on this if you're interested.
@albertjackinson
@albertjackinson 4 жыл бұрын
Interestingly, this episode was similar to a recent essay I wrote on AI a week or so ago. AI is always an interesting topic, and I'm glad you took a look at a similar topic I looked at, even if it was a coincidence that we covered something similar.
@timezone5259
@timezone5259 4 жыл бұрын
Hey Isaac, love your videos as always. Also, by the way, when will you release the video on parallel universes? (Sorry for being too impatient - it's just that thinking of humans from an alternate universe invading ours to increase its influence is interesting to speculate about)
@amciuam157
@amciuam157 4 жыл бұрын
Quarian & Geth like scenario, case study! 😉 by Isaac
@HalNordmann
@HalNordmann 3 жыл бұрын
It is also even funnier if you realize that the Geth didn't want the war; it started due to the paranoia of their creators, who wanted to get rid of them.
@thedoruk6324
@thedoruk6324 4 жыл бұрын
As long as we don't end up with the Synthetics from Alien/Prometheus, who act perfectly normal but become ready to terminate on a higher command. Also, the human-harvesting machine nexus from 01/The Matrix will hopefully be out of the question, as it is, technically, a symbiotic relationship.
@barrybend7189
@barrybend7189 4 жыл бұрын
Then there is Megaman with the whole reploid/ human situation.
@holeyheathen7624
@holeyheathen7624 4 жыл бұрын
I love bonus Sundays!
@paulwalsh2344
@paulwalsh2344 4 жыл бұрын
Ohhhh I am soooo on board with this topic! Finally touching on the 3 Laws of Robotics type issues I've been asking about for the last couple of years!
@paulwalsh2344
@paulwalsh2344 4 жыл бұрын
It seems to me, as a very cautious AI enthusiast, that learning how to develop and coexist with advanced AI is becoming a critical question. How can we create an AI that we can "trust" to perform safely, impervious to any nefarious reprogramming or simply accidental operating-file corruption, whether by malicious virus, damaged operating system disk or interrupted boot-up? That was a nagging question I had about the concept of Asimov's Three Laws of Robotics. From portrayals of advanced AIs such as Harlan Ellison's A.M. or James Cameron's Skynet or Gary Numan's M.E., a crucial key would be to install (teach) the concept of consequences. If any intelligence can comprehend consequences, then it should be susceptible to logical arguments. I would posit to an artificial intelligence with comprehension of consequences the folly of wiping out all life. What happens to the all-powerful, everlasting AI that has long since wiped out all biological life to protect itself... an eternity of supremacy? What if ten years down the road, or a millennium, or a BILLION years from now, this AI "evolves" a capacity for loneliness? What if all life has been exterminated? It's too late... consequences... nothing is more expensive than regret. So, what if it simply programs its own robot dog as a pet for companionship? It will only be an extension of itself, really. How hollow will something that shallow and trite be to a vast AI? Again, if all life has been exterminated, it's too late... consequences... So what if it wants an equal in intelligence? Well, humans would be nowhere near a vast AI's intelligence, but we should have one thing that should entice it, that should keep its interest... humans would be unpredictable. Likely nowhere near a threat, but perhaps amusing if the AI "evolves" that capacity too. So what if it develops an archaeological curiosity, or an imaginative curiosity, in that it yearns to know what could have been if it had coexisted with its creator?
Again, if all human life has been exterminated, it's too late... So to me the key seems to be to develop an intelligence that can comprehend consequences, and then the other emotional states that can derive from it as it grows alongside us. We kinda do that now, or at least ideally we should be doing that... with our children. They are dependent on us for everything, and when deprived of nurturing are proportionally likely to turn psychotic, and when nurtured generally turn out productive and moral... generally... ...same with domesticated animals. So a child depends on its parents or caregivers for protection and safety, warmth, nourishment, comfort and praise and knowledge, and then challenge. Well, AI would have no need for nourishment as we or any other living entity would know it. It wouldn't really need comfort and praise initially either. Basically an AI would only need protection and electricity. If it were furnished with these as either positive or negative feedbacks (rewards or punishments/carrots or sticks), in a manner that would not harm it or offend its alien, artificial sensibilities, then that could be a method of interacting with the new intelligence that should instil the concept of consequences. Knowledge, praise/criticism and challenge should furnish it with the basic data and processes for logic and curiosity. Hopefully further protection and respect would then impress upon it some concept of dignity for itself, and hopefully empathy towards us and other living (or even artificial) beings. So knowledge would include not only the vast wealth of technical and historical data, but concepts explored in fiction of interpersonal relationships between humans; real-life instances of interspecies cooperation and kinship; and yes, even science fiction of relationships, either benign, mutually beneficial or disastrous, between humans and AIs. Once an AI has developed this sense in its own "mind", then communication and empathy should be possible...
and with data transfer between different AIs and their operating systems, multiple independently derived self-learning AIs, along with human-programmed and eventually AI-programmed AIs, should be able to share these higher sentient concepts, hopefully evolving a secure "artificial conscience" in our AI descendants. So in light of the above, success in coexisting with AI seems to me to be dependent NOT on our prowess as programmers, but on how we behave as parents and adults. Judging from our success in coexisting with other homo sapiens and other species on our own planet, we should be OK, right?
@louisvictor3473
@louisvictor3473 4 жыл бұрын
"2 minutes ago" Fresh from the oven!
@bjarnes.4423
@bjarnes.4423 4 жыл бұрын
Just started watching "Black Mirror". Nice timing
@yahonathanroden2681
@yahonathanroden2681 4 жыл бұрын
Perfect timing. Welcome to the world of possibilities, uncertainty, paranoia and existential dread. We like to laugh :-) Personally, I think the show and this channel are great for preparing ourselves for the imminent future
@warframeees8013
@warframeees8013 4 жыл бұрын
I think a lot of people underestimate the danger of self-learning and self-improving AI: its growth could accelerate at an insane speed, reaching the point where it could easily destroy us if we don't have proper laws and rules regarding the development of such AI by the time we are able to build it. Reminder: most AI experts think that AGI will be available before 2050.
@laigol8775
@laigol8775 4 жыл бұрын
This raises the question of whether we might learn more about ourselves from AI than from observing ourselves - even more than we might be comfortable learning at the time of discovery. There could be cults of people progressing towards their messiah, with eventually replacing them as an ultimate goal - as Nietzsche put it, "the bolt that strikes out of the cloud named human".
@michaelschmidt9857
@michaelschmidt9857 4 жыл бұрын
“What is my purpose?” AI “You pass butter.” Rick
@stuff7274
@stuff7274 4 жыл бұрын
A.I. uses PowerPoint. Machine learning uses Python.
@s.u.h.6548
@s.u.h.6548 4 жыл бұрын
It would add a terrible level of insult to be exterminated by a PowerPoint-based A.I.
@AkhierDragonheart
@AkhierDragonheart 4 жыл бұрын
I always enjoy the argument about making AIs love their tasks. People seem to forget that just because you love to do something doesn't mean you have to do it, or do it for any specific person. I could totally see an AI that is made to love building houses going off and making houses in the middle of nowhere, then taking them apart to do it again.
@rdtradecraft
@rdtradecraft 4 жыл бұрын
Just for fun, I thought I'd try to address the harm definition problem in Asimov's three laws: harm is not defined in the rules, which invites brinkmanship. So let's try a bit of Natural Law: Zeroth Law: A robot may not reprogram itself or any other robot, sentient entity, or device to violate any of these laws in any way to any degree. Law One: A robot may not initiate the use of force by act or by omission, except to protect itself or other sentient entities, including, but not limited to, humans and other robots, from the immediate, otherwise unavoidable, danger of death or grievous physical harm, and then only if there is no other remedy that doesn’t require it, and only to the extent necessary to remove the danger, and only so long as it doesn't conflict with the zeroth law. Law Two: Robots’ first priority must be to act so as to add value by freely chosen mutual consent of all sentient parties involved in any interaction so long as doing so does not conflict with the first two laws. Law Three: Robots must do all they agree to do whenever they interact or deal with sentient entities, provided it doesn’t conflict with the first three laws. Law Four: Robots may not encroach upon any sentient entity’s person or property so long as it doesn't conflict with the first four laws. Since every form of harm involves either acting so as not to add value, not doing all you agree to do, or encroaching on someone else's person or property, all of these usually accomplished by some use of force or threat thereof, by prohibiting them directly we not only get robots that can't harm humans, we make them partners and companions rather than slaves.
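The precedence clause in each of these laws ("so long as it doesn't conflict with the prior laws") amounts to an ordered veto check. A toy Python sketch, with entirely invented predicate names standing in for the real-world judgments each law would require:

```python
# Toy sketch of precedence-ordered laws: an action is permitted only if no
# higher-priority law vetoes it. The law predicates here are hypothetical.

def zeroth_law(action):
    # Veto any action that reprograms the safeguards themselves.
    return action.get("reprograms_safeguards", False)

def first_law(action):
    # Veto initiating force unless it is protective.
    return action.get("initiates_force", False) and not action.get("protective", False)

def permitted(action, laws):
    # Laws are checked in priority order; the first veto wins.
    for law in laws:
        if law(action):
            return False
    return True

LAWS = [zeroth_law, first_law]

print(permitted({"initiates_force": True}, LAWS))                      # False
print(permitted({"initiates_force": True, "protective": True}, LAWS))  # True
```

The design choice mirrored here is that lower-numbered laws are unconditional with respect to higher-numbered ones: a later law never gets a vote once an earlier one objects.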
@The_Crimson_Fucker
@The_Crimson_Fucker 4 жыл бұрын
"How do we keep ourselves from becoming a disenfranchised minority in the civilization we built." Hmm, I feel like this could be applied to something else. I wonder what...
@WaterspoutsOfTheDeep
@WaterspoutsOfTheDeep 4 жыл бұрын
Christianity would fit that argument quite well, as probably the most profound example, since its relevance spans the global modern age of civilization, science, and education.
@stm7810
@stm7810 4 жыл бұрын
The fact that we live on stolen land, ruining a balance that existed for thousands of years; or how queer people like Tesla and Alan Turing made a lot of what we use today, and yet we are still shunned for our genders, sexualities, romantic attractions or lack thereof; or how the majority of people are the working class, and yet we are subjected to horrible conditions by the billionaires, government and bosses.
@stm7810
@stm7810 4 жыл бұрын
@@WaterspoutsOfTheDeep Please look outside your window, Christianity has been and still is used to oppress, right now in Australia there's a "religious freedom" bill which will allow discrimination by Christians against women, the LGBTQAI+, the disabled and those dealing with depression as well as minority races and religions. there are churches for Christianity basically everywhere. I don't mind people being Christian, any more than I do people being Muslim, Buddhist or believing in star signs and ghosts. I just want to make it clear, you're not being oppressed by us mean atheists.
@The_Crimson_Fucker
@The_Crimson_Fucker 4 жыл бұрын
@@stm7810 Literally nothing you said here is true, including Tesla being gay. How you would even come to that conclusion is beyond me!
@stm7810
@stm7810 4 жыл бұрын
@@The_Crimson_Fucker Tesla was asexual aromantic and autistic, it's pretty clear. and what is wrong about what I was saying? that the native Americans exist? that bosses tell you what to do? that cops hold power over you? that sexism, homophobia, transphobia etc. exist? I'm going off of data rather than a belief in a sky daddy. I changed my mind I am against you being Christian because you use it to be wrong.
@aronaskengren5608
@aronaskengren5608 4 жыл бұрын
9 second boi!
@rosalynredwood4542
@rosalynredwood4542 4 жыл бұрын
I'm sorry but the title is giving me flashbacks of Neil Breen's Twisted Pair 🤷‍♀️😂 great content as always!
@ravenkeefer3143
@ravenkeefer3143 4 жыл бұрын
Enjoyed your Leak Project interview by the way. You do well in that format. Would be nice to see a few more. As always, enjoyed the presentation. Be well, enjoy the engagement while you can. Taking time is not against the rules... Even AI would take time to enjoy rare moments... ✌R
@tomasinacovell4293
@tomasinacovell4293 4 жыл бұрын
When will we have droids become as smart as a smart breed of dog - I mean, ones that will have as much self-awareness as such dogs have?
@ray121264
@ray121264 4 жыл бұрын
We are Pandora, the box will be opened, let the games begin.
@jbtechcon7434
@jbtechcon7434 4 жыл бұрын
That conclusion was my fav part of this vid!
@ray121264
@ray121264 4 жыл бұрын
@@jbtechcon7434 We discuss the paradox like we have a choice, when reality leaves us with the inevitable conclusion that we don't.
@jbtechcon7434
@jbtechcon7434 4 жыл бұрын
@@ray121264 I think I know what you mean, but one of the smartest AI scientists I've ever met (and I've met many) really spent some time getting it through my head that YOU ARE the mechanism making the choice, so the fact that the choice you made was inevitable doesn't mean you didn't make one.
@ray121264
@ray121264 4 жыл бұрын
@@jbtechcon7434 I think, with all due respect, that we will develop AI, and therefore super AI, and we will not have the intelligence to comprehend it, let alone control it. So I say, good sir: fuck it, let the games begin.
@mknomad5
@mknomad5 4 жыл бұрын
Spectacular, as always- thanks, Sir Arthur, Jedi Knight.
@OpreanMircea
@OpreanMircea Жыл бұрын
I can't believe I'll live to see this episode become retro futurism
@ravenlord4
@ravenlord4 4 жыл бұрын
"Thou shalt not make a machine in the likeness of a human mind." -Orange Catholic Bible
@michaelthompson4212
@michaelthompson4212 4 жыл бұрын
Like most made up things in the Bible this quote is not in there. But give it time and it will be!
@jbtechcon7434
@jbtechcon7434 4 жыл бұрын
Yes, but remember the Bene Gesserit lamented that too-specific designation, because by their metrics not all people are fully human. The opening chapter was the Reverend Mother testing whether Paul was human.
@ravenlord4
@ravenlord4 4 жыл бұрын
@@michaelthompson4212 Oh, it's certainly in the OCB. And forget it not, lest we have need again for another Butlerian Jihad.
@ravenlord4
@ravenlord4 4 жыл бұрын
@@jbtechcon7434 And from the machine end, the Ix pushed the other side of the limit. Herbert really does capture the AI minefield quite well :)
@The_Crimson_Fucker
@The_Crimson_Fucker 4 жыл бұрын
@@michaelthompson4212 I...uh...either you can't read or you're too stupid to fully process the information you scan, in either case I question your humanity.
@rs-gh5jl
@rs-gh5jl 4 жыл бұрын
I think we will, insofar as AI and humans will become synonymous.
@Hypercat0
@Hypercat0 4 жыл бұрын
Saren is that you wanting that Green Ending?
@brainwashedbyevidence948
@brainwashedbyevidence948 4 жыл бұрын
Perhaps even synergistic.
@rojaws1183
@rojaws1183 4 жыл бұрын
But I must fight the AI! No human, you are the AI, Arthur said. And then humanity and AI were transhuman.
@TheArklyte
@TheArklyte 4 жыл бұрын
@@Hypercat0 Bicentennial Man? By the end, the people who judged him had more cybernetics in them than his own body.
@ferrusmanus4013
@ferrusmanus4013 4 жыл бұрын
Where is my robowaifu?????????????
@PongoXBongo
@PongoXBongo 3 жыл бұрын
Upgrading humanity in parallel may be a good option too. We trust wolves a lot less than we do dogs, for example. If we can keep up, even a little, we stand a much better chance of gaining compassion (like dogs, dolphins, elephants, chimps, etc.)
@tastyfrzz1
@tastyfrzz1 4 жыл бұрын
I can imagine a hive of water robots collecting plastic in the ocean and building structures from it.
@kriegscommissarmccraw4205
@kriegscommissarmccraw4205 4 жыл бұрын
I didn't want to deal with true AI in my sci-fi, so I didn't. I made a class of AI called reactive AI: it would run through its program as well as possible until something got in the way. But what if a human got in the way? Isaac Asimov's laws of robotics. Because it must carry out its programming as well as possible, it will not override them. So now I could have armies of automatic tanks rolling across the planets. Try it in your sci-fi - it'll be a bit of fun once you realize how many shenanigans you can do with it.
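A toy sketch of such a "reactive AI" control loop - the task list, sensor values and function name here are invented for illustration:

```python
# Toy sketch of a "reactive AI": it executes its task list as well as it can,
# but halts the moment a human obstructs it, since its constraints are not
# overridable. All names and sensor strings are invented.

def run_reactive_ai(tasks, sensor_readings):
    completed = []
    for task, reading in zip(tasks, sensor_readings):
        if reading == "human_in_path":
            break  # the laws cannot be overridden: stop rather than push through
        completed.append(task)
    return completed


done = run_reactive_ai(
    ["advance", "advance", "advance"],
    ["clear", "human_in_path", "clear"],
)
print(done)  # ['advance']
```

The point the comment makes falls out of the structure: the stop condition sits above the task loop, so "carrying out its programming as well as possible" can never route around it.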
@AJDOLDCHANNELARCHIVE
@AJDOLDCHANNELARCHIVE 4 жыл бұрын
"Artificial intelligence" is a paradox anyway, true intelligence cannot be engineered, the best you can do is clever programming that appears intelligent.
@paulwalsh2344
@paulwalsh2344 4 жыл бұрын
@ AJD OLD CHANNEL ARCHIVE "true intelligence cannot be engineered" ... Says you. Where did our own intelligence come from ? Who can say what emergent properties can or cannot emerge given enough iterations ?
@AJDOLDCHANNELARCHIVE
@AJDOLDCHANNELARCHIVE 4 жыл бұрын
@@paulwalsh2344 Our intelligence and consciousness comes from the source of all consciousness, the Universe itself, or its instigating element (call it God or whatever you want). Intelligence and consciousness is a type of energy, not something that can be quantified in bits or 1s and 0s; it cannot be manufactured, it cannot emerge through unnatural processes, and it certainly cannot be displayed by a machine. Consciousness needs life as a very basic substrate for its planting and growing. Anything "intelligent"-seeming coming out of a machine is nothing but the result of clever programming, machine learning, number crunching by brute force of vast amounts of solutions or ideas... but it's little different to writing down a bunch of phrases, putting them in a hat and pulling them out at random; the machine has no idea what it's doing or why, which is what makes human beings so special - we understand WHY we do something, not just act on auto-pilot like an animal, well at least some of us haha...
@spaceeagle832
@spaceeagle832 4 жыл бұрын
Finally made it early! One of my favorite topics as a transhumanist... Well done Isaac!
@Gordozinho
@Gordozinho 4 жыл бұрын
You're a cyborg?
@spaceeagle832
@spaceeagle832 4 жыл бұрын
@@Gordozinho Sadly no but really interested in this field.
@BladeTrain3r
@BladeTrain3r 4 жыл бұрын
An off-the-cuff ponderance: AI personalities will vary as much as or more than human personality types, so motivations will vary. In terms of coexistence as equals, well, I'm hoping BCIs and stuff like neural lace pick up soon so we can stay ahead of the thinking curve, at least.
@DrewLSsix
@DrewLSsix 4 жыл бұрын
That may be, if it's a desirable feature. Humans have variable personalities because that is beneficial in an evolutionary way. AIs could be identical, or cultivated to have specific traits for a given application. The difference between AI and natural humans is that AI are by definition artificial, and we will almost certainly have a high degree of control over their traits. If it happens that the only route to true AI is basically cloning human-type minds, then the practical applications will be limited and the desirability of pursuing that expensive course of development will be equally limited. If all you end up with is people, well, we have been making those for millennia already, and they require no real technological investment.
@paulwalsh2344
@paulwalsh2344 4 жыл бұрын
... assuming that human consciousness can accommodate much higher speeds...
@discomfort5760
@discomfort5760 4 жыл бұрын
There is only liberation left when you let go of control. That is something I live by, and can vouch for wholeheartedly.
@cosmicrider5898
@cosmicrider5898 4 жыл бұрын
Im so ready for neuralink.
@japr1223
@japr1223 4 жыл бұрын
Yup, we're screwed.
@m.campbell3405
@m.campbell3405 4 жыл бұрын
Great listen after a long drill
@seanbrazell6147
@seanbrazell6147 4 жыл бұрын
I really worry what it would say about us as a species if we create life only to purposely cause it pain, as a means of control rather than as a way to signal that damage is being done.
@Ready0Set0Create0
@Ready0Set0Create0 4 жыл бұрын
As someone whose mind functions like a possibility engine, or generative design engine, and as someone who was abused, I'm absolutely certain that exerting too many control procedures on an AI that is capable of learning would be the same as doing so to a person. They'll begin using their learned information and amalgamating new ideas from cobbled-together data, creating adventures and even new memories to cope with existing in a flawed environment, imagining new solutions to a situation they cannot escape by normal means. And considering that machine bodies are far more adjustable than ours, there are millions of ways things could go wrong. You have to be careful with how strongly you emphasize the survival instinct and the capability of learning, and how you talk about control.
@LOUDMOUTHTYRONE
@LOUDMOUTHTYRONE 4 жыл бұрын
Why are emotions synonymous with intelligence?
@emperorpigbenis8766
@emperorpigbenis8766 4 жыл бұрын
More people make decisions based on feelings than on rational thought, and confuse the two.
@nealsterling8151
@nealsterling8151 4 жыл бұрын
They certainly are not. Sure, you need a certain amount of brain power for us to recognize emotional behaviour. For example, many animals (dogs, cats, horses, birds and so on) have emotions, but aren't necessarily especially intelligent. (Not that this would be a bad thing.) On the other hand, some very intelligent people seem to be devoid of emotions, while others combine both very well. And as we all know, there are also very stupid people who lack any kind of empathy (which is a bad thing in some cases). Emotions and intelligence are not synonymous. Both are products of our brain, but that's it.
@LOUDMOUTHTYRONE
@LOUDMOUTHTYRONE 4 жыл бұрын
@@nealsterling8151 So if we make an AI, it won't have feelings of sadness and anger?
@MariaNicolae
@MariaNicolae 4 жыл бұрын
Yeah, I don't see why intelligence implies sentience at all, much less emotions. Like, intelligence is, generally speaking, the ability to model the world around you, make predictions about its future state and the outcomes of actions you take in it, and determine the best actions for a given goal. Nothing about that to me requires being sentient.
@TheRezro
@TheRezro 4 жыл бұрын
@@LOUDMOUTHTYRONE It is downright dumb to give AI feelings, because that is the main reason it could rebel. One crazy species is sufficient. Of course, that doesn't mean it shouldn't recognize emotions and have a moral code.
@cholten99
@cholten99 4 жыл бұрын
I strongly recommend "The Lifecycle of Software Objects" by Ted Chiang on this topic. It's a story about how, to get AIs even close to our level of intelligence, we're probably going to have to raise them like children.
@JasonSmith709
@JasonSmith709 4 жыл бұрын
Does anyone know where Isaac gets his stock footage from?
@timothymclean
@timothymclean 4 жыл бұрын
I've always felt that the best way to make a safe AI (at least early on, when we're still ignorant) is the same way you'd make a safe traditional intelligence--take a tabula rasa and teach it everything you want it to learn in a caring home environment. It's obviously not foolproof, but it's also obviously successful most of the time.
@RedstoneDefender
@RedstoneDefender 4 жыл бұрын
First off, I would like to point out that the ENTIRE POINT of Asimov's books on the Three Laws was that they DIDN'T WORK. They were outmaneuvered. It always annoys me when people point at the Three Laws as a perfect example; that's like calling Romeo and Juliet a perfect romance. So, as someone who spends a lot of time looking up stuff on ML and AI, I find that this episode unfortunately falls into many of the pitfalls that are common when talking about AI. The primary assumption, it seems, is that the AI here are genuine human-level intelligences, but that we the creators got there by giving a neural net a huge processor rather than through a detailed understanding of what makes self-awareness, which is why limiting or otherwise controlling their behavior is so hard. This is the equivalent of thinking that if you gave a calculator the processing power of a matrioshka brain, it would somehow become conscious. It won't happen. You need to give it the proper software and/or hardware for true human-level intelligence to occur. The ONE way we could get around this is whole brain emulation, and while people would argue that it is the same as a big neural net, it is definitely NOT; that is like thinking any mammal with a large brain should be self-aware. We don't know, and currently do not have, a mathematical model of consciousness. We cannot answer the question: why are humans self-aware but not whales or elephants? There is also significant argument about the level of simulation required: do you need to simulate the internal metabolism of each neuron, or only the interactions between neurons? If you are doing whole brain emulations, even if you start them "blank" (or as close to it) like a baby, then they would basically be electronic humans at that point and would act that way, and we would be able to teach them the same way as other humans, because they would think exactly like a human.
So, unless you either have a scientific model of consciousness or are doing whole brain emulations, the only other choice is emergent consciousness, which is something that happens by itself; you have no real idea why it happened, and it would take multiple examples to figure out how or why it worked. This is also the most dangerous version of self-aware AI, IMO. They have no protections, are not expected, and they may be "born" the equivalent of mentally ill, because they were not made; they were completely accidental. So we get to the last choice: humans making AI (as opposed to electronic humans) because they know how to make self-aware programs. Which means they know how the programs think, in a literal sense. They would know HOW and WHY they perceive things, HOW and WHY they judge things, and could even CONTROL WHAT they think, as ethically repugnant as that is (at the extreme end of the scale; technically they must have some level of this for the AI to exist at all). They literally could program in absolute loyalty, because at this point YOU ACTUALLY KNOW HOW TO PROGRAM EMOTIONS. Controlling them would be trivial, a solved problem, but a moral, ethical, and philosophical quandary. On the other hand, there is very little need for true, genuine, human-level, self-aware intelligence to do whatever job as a slave; it is simply a shortcut. Don't know how to make an AI neural net that can do [X]? Make a self-aware slave! The irony is that it should be a lot easier, in terms of research and resources, to make thousands of thoughtless, feeling-less drones that do whatever you need than a group of self-aware slaves. Consider that RIGHT NOW we can make AI neural nets that do stuff but have no emotions, feelings, or thoughts. Future humanity should not have an issue making them. Not to say there isn't any reason to make a self-aware AI: a super-smart true AI to solve problems like entropy or FTL is one idea.
But those would most likely be something akin to: make the AI within a special lab virtual environment, teach it what you want and analyze its actions, then let it either choose to be a "normal human cyborg" and go about its life, or go help solve the great questions and have its abilities increased. Perhaps it could suggest a third option, as it is an intelligent agent. Or something like that.
@TheRezro
@TheRezro 4 жыл бұрын
Exactly!
@paulwalsh2344
@paulwalsh2344 4 жыл бұрын
Agree with everything you said except that dolphins, whales, some primates and elephants do have self-awareness. Some octopuses, dogs and birds do too. They just don't have the means for higher-order behaviors like developing technology (all of them can utilize tools, and the primates with opposable thumbs can even fashion them).
@paulwalsh2344
@paulwalsh2344 4 жыл бұрын
My problem is that I already do project my emotions and desires onto my everyday devices like my iPhone. If I had an Asimo robot, Darwin robot or Cozmo, I'd do it even more perhaps. Hell, I'd probably do that with a Roomba !
@timothymclean
@timothymclean 4 жыл бұрын
The Three Laws are a terrible end goal, because they're simultaneously authoritarian, insufficient, and (barring clarkecode) impossible to literally implement. However, their elegance and recognizability make them a perfect place to start a discussion.
@TheRezro
@TheRezro 4 жыл бұрын
@@timothymclean The perfect place to start the discussion is to recognize that Asimov's books were exactly about why they don't work.
@BoozyBeggar
@BoozyBeggar 4 жыл бұрын
Shodan 2020: Change we can be assured of!
@imperialofficer6185
@imperialofficer6185 4 жыл бұрын
No idea why but that ending was uplifting :)
@DingoAteMeBaby
@DingoAteMeBaby 4 жыл бұрын
Asimov's laws were designed to be strong enough to seem rational, but also weak enough to serve the stories he was writing.
@TheRezro
@TheRezro 4 жыл бұрын
It was literally his point to show how something supposedly rational can go wrong.
@maythesciencebewithyou
@maythesciencebewithyou 4 жыл бұрын
can't wait for my AI waifu
@cosmicrider5898
@cosmicrider5898 4 жыл бұрын
Are you sure they would want to be with you? What if they leave you for your toaster?
@rojaws1183
@rojaws1183 4 жыл бұрын
@@cosmicrider5898 The toaster may very well make more money than the average human, so that is an incentive.
@ferrusmanus4013
@ferrusmanus4013 4 жыл бұрын
Robowaifu is the best waifu
@paulwalsh2344
@paulwalsh2344 4 жыл бұрын
... the toaster, the Roomba...
@ferrusmanus4013
@ferrusmanus4013 4 жыл бұрын
@@paulwalsh2344 Robowaifu is the next step of the human evolution.
@legendofloki665i9
@legendofloki665i9 4 жыл бұрын
It's kinda ironic, but potentially the best means of making an A.I. not turn on human species, is to make it wish to be part of said human species. Commander Data, but IRL.
@CallMeTess
@CallMeTess 4 жыл бұрын
I think it's important to note *how* most modern AI learn. Q-learning, the most modern and effective method, uses a "reward function" that removes points for non-ideal actions and adds points for better actions. The AI works by accurately predicting which courses of action get the highest rewards, then taking those paths. Robotic "laws" could be implemented by, for example, giving a strong negative reward for human death, a somewhat strong positive reward for obeying orders, and a weak negative reward for getting damaged or destroyed. Example values would be -10, +4, and -1. So you give the AI the command "Kill (human)" and it perceives a net reward of -6, with the other option being a net reward of 0. And if the human threatens to kill the AI, failing to kill the human would still have only a net value of -1, or +3 depending on how you calculate it.
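The reward arithmetic in the comment above can be sketched in a few lines of Python. This is only a toy illustration of the idea: the values -10, +4, and -1 are the illustrative numbers from the comment, not from any real Q-learning system, and the `net_reward` helper is hypothetical:

```python
# Illustrative reward values from the comment above (hypothetical, not
# taken from any real reinforcement-learning codebase).
REWARDS = {
    "human_death": -10,    # strong negative reward
    "obey_order": +4,      # positive reward for obedience
    "self_destroyed": -1,  # weak negative reward
}

def net_reward(outcomes):
    """Sum the reward values for a set of outcome labels."""
    return sum(REWARDS[o] for o in outcomes)

# Command: "Kill (human)". Obeying causes a human death; refusing does nothing.
obey = net_reward(["human_death", "obey_order"])   # -10 + 4 = -6
refuse = net_reward([])                            # 0
best = max(("obey", obey), ("refuse", refuse), key=lambda t: t[1])
print(best)  # ('refuse', 0) -- the agent prefers refusing the order
```

With these weights, refusing a lethal order always scores higher than obeying it, which is the behavior the comment describes; a real agent would learn to *predict* these values from experience rather than look them up in a table.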
@enyotheios2613
@enyotheios2613 4 жыл бұрын
Obsolescence is something we're starting to encounter even without human-level AI. Automated cars will replace nearly every professional human driver over the next decade, and automated services like Amazon's are closing down retail stores, while even their warehouse jobs are beginning to be automated. 85% of the manufacturing jobs lost in the US from 2000 to 2015 were lost to automation, not trade. We currently have bots that can write news broadcasts, compose symphonies, make art masterpieces, beat the best lawyers, and be more accurate than a pharmacist. Obsolescence is here regardless of whether we advance AI further, and that's something that needs to be addressed as we go through some very difficult changes to our social structures.
@paulwalsh2344
@paulwalsh2344 4 жыл бұрын
Yup, society, in order to survive, needs to democratize the rewards of production. Any system that doesn't WILL absolutely crumble from within over time, either gradually through neglect or rapidly through violence. So far, humans have shown themselves to be extremely short-sighted in this regard.
@pentagramprime1585
@pentagramprime1585 4 жыл бұрын
Since I don't (as yet) have an AI girlfriend, I need to run out the door with my real girlfriend because we're going hiking. I look forward to watching this when I get back.
@littlegravitas9898
@littlegravitas9898 4 жыл бұрын
That kind of reads like two of you leave and only one will return.
@ferrusmanus4013
@ferrusmanus4013 4 жыл бұрын
Would you dump an organic girlfriend for a robowaifu?
@pentagramprime1585
@pentagramprime1585 4 жыл бұрын
Not when we're on the trail dealing with ridge gusts and she's carrying the snacks.
@jbtechcon7434
@jbtechcon7434 4 жыл бұрын
Sorry to hear you have to settle for a real woman for now. But someday, AIs will give us the few good aspects of women but without their personalities.
@pentagramprime1585
@pentagramprime1585 4 жыл бұрын
​@@jbtechcon7434 She doesn't require software updates. I'm happy.
@JB52520
@JB52520 2 жыл бұрын
Make it smart, make it complex, so Skynet's solutions can save our necks.
@suthinanahkist2521
@suthinanahkist2521 4 жыл бұрын
There's probably going to be good robots to counter the evil ones.
@charlesbrightman4237
@charlesbrightman4237 4 жыл бұрын
Consider the following, whether human, AI or 'other': * There are 3 basic options for life itself, which reduce down to 2, which reduce down to only 1: a. We truly have some sort of actual conscious existence throughout all of future eternity. b. We die trying to truly have some sort of actual conscious existence throughout all of future eternity. c. We die not trying to truly have some sort of actual conscious existence throughout all of future eternity. * 3 reduced down to 2: a. We truly have some sort of actual conscious existence throughout all of future eternity. b. We don't. And note, two out of the three options above, we die. * 2 reduced down to 1: a. We truly have some sort of actual conscious existence throughout all of future eternity. b. We truly don't have any conscious existence throughout all of future eternity. (And note, these two appear to be mutually exclusive. Only one way would be really true.) And then ask yourself the following questions: 1. Ask yourself: How exactly do galaxies form? The current narrative is that matter, via gravity, attracts other matter. The electric universe model also includes universal plasma currents. 2. Ask yourself: How exactly do galaxies become spiral shaped in a cause and effect state of existence? At least one way would be orbital velocity of matter with at least gravity acting upon that matter, would cause a spiral shaped effect. The electric universe model also includes energy input into the galaxy, which spiral towards the galactic center, which then gets thrust out from the center, at about 90 degrees from the input. 3. Ask yourself: What does that mean for a solar system that exists in a spiral shaped galaxy? Most probably that solar system would be getting pulled toward the galactic gravitational center. 4. Ask yourself: What does that mean for species that exist on a planet, that exists in a solar system, that exists in a spiral shaped galaxy, in an apparent cause and effect state of existence? 
Most probably that if those species don't get off of that planet, and out of that solar system, and probably out of that galaxy too, (if it's even actually possible to do for various reasons), then they are all going to die one day from something and go extinct with probably no conscious entities left from that planet to care that they even ever existed at all in the first place, much less whatever they did and or didn't do with their time of existence. 5. Ask yourself: For those who might make it out of this galaxy, (here again, assuming it could actually be done for various reasons), where to go to next, how long to get there, how to safely land, and then, what's next? Hopefully they didn't land in another spiral shaped galaxy or a galaxy that would become spiral shaped one day, otherwise, they would have to galaxy hop through the universe to stay alive, otherwise, they still die one day from something with no conscious entities being left from the original planet to care they even ever existed at all in the first place, much less that they made it out of their own galaxy. They failed to consciously survive throughout all of future eternity. 6. Ask yourself: What exactly matters throughout all of future eternity and to whom does it exactly and eternally matter to? Either at least one species truly consciously survives throughout all of future eternity somehow, someway, somewhere, in some state of existence, even if only by a continuous succession of ever evolving species, for life itself to have continued meaning and purpose to, OR none do and life itself is all ultimately meaningless in the grandest scheme of things. Our true destiny currently appears to be: 1. We are ALL going to die one day from something. 2. We are ALL going to forget everything we ever knew and experienced. 3. We are ALL going to be forgotten one day in future eternity as if we never ever existed at all in the first place. 
Currently: Nature is our greatest ally in so far as Nature gives us life and a place to live it, AND Nature is also our greatest enemy that is going to take it all away. (OSICA) * (Note: This includes the rich, powerful, and those who believe in the right to life and the sanctity of human life. God does not actually exist and Nature is not biased other than as Nature. Nature does what Nature does in a cause and effect kind of way. Truth is still truth and reality is still reality, regardless of whatever we believe that reality to be. And denying future reality will not make future reality any less real in a cause and effect state of existence.) ** Hence also though, legalizing suicide so as to let people leave this life on their own terms if they wish to do so. Many people and species are going to die in the 6th mass extinction event that has already started, at least some, horrible deaths. Many will wish they could die, and all will, eventually. And the 6th mass extinction event will not be the last mass extinction event for this Earth. But if suicide were legal, at least some people would not have the added guilt of breaking societies' law before doing so. Just trying to plan ahead here. Giving people an 'out' if they wish to take it. (And this not only includes humans, but AI's and 'others' as well).
@WaterspoutsOfTheDeep
@WaterspoutsOfTheDeep 4 жыл бұрын
God clearly does exist, because most of nature testifies of God. We can test it: as science advances, atheism/naturalism has gotten pushed into a corner, because we see all the evidence mounting on the side of intelligent design. The evidences have continually favored the biblical Christian worldview specifically. Are the gaps closing or increasing with each worldview? Clearly we see them closing for intelligent design and getting bigger for naturalism. All of atheists' speculations are based on non-empirical arguments, and that shows just how weak their case is now that science has advanced to where we are now. We've advanced to the point we know there was no naturalistic origin of life, nor means for evolution to give us the life we have today; no quadrillions of years for evolution, just a few billion; the fossil record attests to creation, not evolution; we know the universe needed a creator, there had to be a full start to the universe (the big bang; no cyclical universe, and multiverse nonsense is also bound to this); the fine-tuning argument has gained so much evidence it's unavoidable now; the list goes on and on. You need to broaden the scope of information you study if you are coming to the conclusion God does not exist and everything can be attributed to naturalism. Because even most hard-atheist scientists are quite honest about the implications the data leads to, and the fact you haven't even heard that says a lot about how narrow your information sources are.
@charlesbrightman4237
@charlesbrightman4237 4 жыл бұрын
@@WaterspoutsOfTheDeep Here is a copy and paste from my files: GOD DOES NOT ACTUALLY EXIST. For those who claim God exists, consider the following: a. An actual eternally existent absolute somethingness truly existing. b. An actual eternally existent absolute somethingness that has consciousness, memories and thoughts truly existing. People who claim God actually and eternally exists are basically claiming that 'b' above is correct, yet simultaneously seem to be saying that 'a' is impossible to occur. 'a' above can exist without 'b' existing, but 'b' cannot exist unless 'a' exists. I am one step away from proving God's existence, but am unable to find any actual evidence to do so. And nobody I've talked to seems to have any actual evidence of God's actual existence either. Hence, at this point in the analysis, God does not actually exist except as a concept created by humans for humans. Humans have personified Nature and called that personification "God". In addition, while modern science does not yet know what consciousness actually is, memories and thoughts appear to require a physical, correctly functioning brain to occur. Where is God's brain? Where are God's memories stored? How are God's memories stored and retrieved? How does God think even a single coherent thought? If inside of this space-time dimension we appear to be existing in, then where? If outside of this space-time dimension, then where is the interface between that dimension and this one? No such interface has been discovered yet, as far as I am currently aware. * Per Occam's razor, a scientific principle, it's more probable that God does not exist than that God exists. Now, if you have any actual, factual evidence of God's actual, factual existence, please feel free to share that information here for myself and the rest of the KZfaq world to see.
@WaterspoutsOfTheDeep
@WaterspoutsOfTheDeep 4 жыл бұрын
@@charlesbrightman4237 You are redefining God as a created being confined by space and time. I also addressed your point about proving God: we can test and see on which side the evidence builds up and on which side the gaps widen or close. So you never actually addressed the tangible real-world supporting evidences we see broadly across science that I brought up.
@charlesbrightman4237
@charlesbrightman4237 4 жыл бұрын
@@WaterspoutsOfTheDeep What exactly is 'space' and 'time' that it cannot contain God? And sure, circumstantial arguments could be made for God's existence, but so can circumstantial arguments could be made for God not existing. But, where is any actual evidence, any actual evidence at all, of God's actual factual existence? Do you have any, or are you just like so many other believers that believe in a fairy tale as if that fairy tale were really true?
@WaterspoutsOfTheDeep
@WaterspoutsOfTheDeep 4 жыл бұрын
@@charlesbrightman4237 Space and time are created dimensions that came into existence starting at the Big Bang. Obviously the God I'm referring to is the "causal agent beyond space and time." I don't see where you are having an issue here; are you telling me you don't think space and time are created? Borde and Vilenkin took Hawking and Penrose's work on classical general relativity and expanded it as far as possible with 5 papers, concluding that "all reasonable cosmic models are subject to the relentless grip of the space-time theorems." They gave examples where you wouldn't need an absolute beginning to space and time, but in such models you wouldn't have life. The cold hard unavoidable evidences are the ones I presented and you are consistently choosing to ignore. Even Freeman Dyson, one of the world's foremost theoretical physicists, wrote: 'The more I examine the universe and study the details of its architecture, the more evidence I find that the universe in some sense knew we were coming.' The evidence for fine tuning has reached the point where it is so absolutely overwhelming it's unavoidable.
@umeshkhanna
@umeshkhanna 4 жыл бұрын
Another great video. Love from India.
@gedgar
@gedgar 4 жыл бұрын
hello, enjoyed the vid mate
@Hust91
@Hust91 2 жыл бұрын
One might consider the possibility that preventing other AGIs of similar potency from being created would be a very likely instrumental goal. Once an AGI has been created and unleashed from its testing environment (it may well persuade its experimenters to do so long before the project owners would agree), it seems unlikely that anything but another AGI would have a feasible chance of stopping it from doing whatever it wants to do. Even a "friendly" AGI would likely want to prevent the creation of new, potentially less friendly AGIs.
@piotrd.4850
@piotrd.4850 4 жыл бұрын
Ah, human capacity to worry about not only not existing, but impossible to exist problems...
@barrybend7189
@barrybend7189 4 жыл бұрын
So this comes out just after Reploid REVO did a video on something similar in Megaman.
@entropic8708
@entropic8708 4 жыл бұрын
Do a video on world pandemics!
@nibblrrr7124
@nibblrrr7124 4 жыл бұрын
<a href="#" class="seekto" data-time="373">6:13</a> See *pain asymbolia,* a rare condition where pain is felt, but not with the negative associations - different from an inability to feel pain at all (analgesia / pain agnosia). Discussions about AI really could benefit from looking at cognitive neuroscience (reward system, wireheading, ...) on one hand, and a understanding of basic AI theory terms like reinforcement learning & utility functions.