The Future of AI: Too Much to Handle? With Roman Yampolskiy and 3 Dutch MPs

  10,066 views

Existential Risk Observatory

21 days ago

Artificial intelligence has advanced rapidly in recent years. If this rise continues, it may be only a matter of time until AI approaches, or surpasses, human capability at a wide range of tasks. Many AI industry leaders think this may occur in just a few years. What will happen if they are right?
Roman Yampolskiy (University of Louisville) will discuss the question of the controllability of superhuman AI. The implications of his results for AI development, AI governance, and society will then be discussed in a panel with philosopher Simon Friederich (Rijksuniversiteit Groningen), Dutch parliamentarians Jesse Six Dijkstra (NSC), Queeny Rajkowski (VVD) and Marieke Koekkoek (Volt), policy officer Lisa Gotoh (Ministry of Foreign Affairs), and AI PhD Tim Bakker (UvA).
The future of AI will become a defining factor of our century. If you want to understand future AI’s enormous consequences for the Netherlands and the world, this is an event not to be missed!

Comments: 115
@games-do9gt 15 days ago
Those politicians who came to the stage after his talk are brain-dead. That was actually the most depressing part.
@MatthewPendleton-kh3vj 15 days ago
THANK YOU FOR SAYING THIS. I just finished listening to the first two, and the third one is absolutely driving me insane. It's just blind optimistic prattling. They don't understand anything about this stuff and they DO NOT LISTEN TO EXPERTS. An expert JUST GAVE HIS OPINION ON THE DIFFICULTIES OF UNDERSTANDING LLMS AND THEIR DANGERS, and then this lady is like, "Oh yeah, if we just start from the ground up, we will understand it." Like wh- what?!
@geaca3222 15 days ago
I think people have difficulty realizing or envisioning a potential existential threat from AI.
@MatthewPendleton-kh3vj 15 days ago
@@geaca3222 Yeah, it's the whole climate change thing all over again. Looming existential threats feel unreal.
@Reflekt0r 13 days ago
Yes, I felt so too. The optimism of the politicians makes me even more pessimistic.
@MatthewPendleton-kh3vj 13 days ago
@@Reflekt0r Well, something I tell myself every day these days: we don't have time to be pessimistic, let's just keep trying to solve this thing until the clock runs out. Maybe we should grab randos off YouTube comments and try to make some kind of alignment group, like Yudkowsky suggests in some LessWrong post I have almost completely forgotten about. He said lay people (definitely me) should try getting into the subject by organizing with others, and just practically picking an area, going into the research together, maybe coming up with some novel ideas? I don't know, I'm really trying to be positive. We owe it to the species to fight like hell until our last breath, etc.
@olemew 10 days ago
"I don't believe in unsolvable problems" (48:45). This kind of statement makes Roman look like the only sane person in the room.
@jeffkilgore6320 18 days ago
Not many views, but a dead-on important topic. Future Shock is at our doorstep.
@blitzblade7222 18 days ago
Let's be honest, most people thinking about this would have a heart attack. I wouldn't be surprised if the algorithm accounts for this.
@oliviamaynard9372 17 days ago
It's just another tech hype bubble. AI isn't actually intelligent. It is as creative as the average user who created the data the plagiarism machines source from. When my car can drive me to adult daycare on the way to its job, then it's intelligent. Driving isn't hard. Seems like we aren't even remotely close.
@olemew 10 days ago
​@@oliviamaynard9372 you remind me of people saying "Kasparov can't lose to Deep Blue, a machine can only be as creative as its creator, and they're worse players than Kasparov!". Maneuvering the car and making decisions is easy for a machine, but modeling the world to know what's going on is extremely hard for non-bio agents. Also, the topic is not "AI of today", it's AI in general, including future development (2 years, 5 years, 10 years...).
@oliviamaynard9372 10 days ago
@olemew Is AI modeling the world at all? Word calculators have stopped impressing me. They are fun, like number calculators. Good tools. Until an artificial agent can take me on a random joyride, I won't worry one bit.
@olemew 10 days ago
@@oliviamaynard9372 Different entities have different strengths. AI by itself, or prompted by bad actors, could unleash a nuclear war or produce biochemical weaponry years before Tesla is close to producing a safe FSD. FSD, deep fakes, biotech, nuclear, banking systems, cybersecurity... these are all very different problem spaces. Your only indicator is FSD, and you should understand why that is not very smart.
@geaca3222 19 days ago
Thank you for your important work and for sharing this event. Great informative talk by Dr. Yampolskiy and panel discussion. Also, the part from 1:34:08 onwards is very important and impactful.
@MDNQ-ud1ty 18 days ago
The problem with AI is the people who control it... they are some of the worst humans ever to exist.
@olemew 10 days ago
That's one problem, and not the only one.
@hannespi2886 17 days ago
Well done, thank you for sharing!
@ili626 18 days ago
I want to see a sequel to Ex Machina.
@Steve-xh3by 17 days ago
I don't want to live in a world where AI is only controlled by nation-states and large corps. There have been numerous studies showing those who pursue power and find themselves in possession of it are far more likely to have Dark Triad (Sociopathy, Narcissism, Machiavellianism) traits. Therefore the WORST "bad actors" are those running governments and corporations. Worrying about the general public having access is absurd. The general public needs access to AI to counteract government and corporate leaders and prevent them from their desired totalitarian endgame.
@ajithboralugoda8906 16 days ago
Very valid. Look at how governments with ulterior motives provoke and finance certain crises (like wars of their choice) in the world even without AGI. If AGI takes no masters (as Ray Kurzweil suggests), then all hell will break loose!
@existentialriskobservatory 16 days ago
Thanks for your reply, that's a good point. We think that quite simply, multiple concerns are valid. Obviously, power abuse from a controllable AI can be a real danger. We should try to counter it. However, uncontrollable AI, we argue, is also a real danger. And members of the public having access to extremely dangerous technology would also present us with real risks. These can be bad actors, but also simply careless actors, who might for example accidentally mess with the safety features of AI that was safe in principle. All these dangers are real, and we should try to do something about all of them. At times, a tradeoff might need to be made. We should do so wisely.
@TheMrCougarful 12 days ago
There are more psychopaths outside government and corporations than inside. The usual monsters will get the new toy, that's for certain, and they will use it to destroy everything. Count on it.
@MatthewPendleton-kh3vj 10 days ago
@@existentialriskobservatory The attack-defense strategies seem like the most likely steps in my mind.
@ili626 18 days ago
I think it’s pretty significant that the most credible alleged witnesses of alien visitors (the Zimbabwe school) said they were all given a warning that technology would destroy them.
@CharlesBrown-xq5ug 18 days ago
《Arrays of nanodiodes promise full conservation of energy》

A simple rectifier crystal can, just short of a replicable long-term demonstration of a powerful prototype, almost certainly filter the random thermal motion of electrons, or of discrete positive charged voids called holes, so that the electric current flowing in one direction predominates. At low system voltage a filtrate of one polarity predominates only a little, but there is always usable electrical power derived from the source Johnson-Nyquist thermal electrical noise. This net electrical filtrate can be aggregated in a group of separate diodes in consistent parallel alignment, creating widely scalable electrical power.

As the polarity-filtered electrical energy is exported, the amount of thermal energy in the group of diodes decreases. This group cooling will draw heat in from the surrounding ambient heat at a rate depending on the filtering rate and the thermal resistance between the group and ambient gas, liquid, or solid warmer than absolute zero. There is a lot of ambient heat on our planet, more in equatorial dry desert summer days and less in polar desert winter nights. Refrigeration by the principle that energy is conserved should produce electricity instead of consuming it.

Focusing on explaining the electronic behavior of one composition of simple diode: a near-flawless crystal of silicon is modified by implanting a small amount of phosphorus on one side, from an ohmic contact end to a junction where the additive is suddenly and completely changed to boron with minimal disturbance of the crystal pattern. The crystal then continues to another ohmic contact. A region of high electrical resistance forms at the junction in this type of diode when the phosphorus near the junction donates electrons that are free to move elsewhere while leaving phosphorus ions held in the crystal, while the boron donates a hole which is similarly free to move. The two types of mobile charges mutually clear each other away near the junction, leaving little electrical conductivity. An equilibrium width of this region is settled between the phosphorus, boron, electrons, and holes.

Thermal noise is beyond steady-state equilibrium. In thermal transients where mobile electrons move from the phosphorus-added side to the boron-added side, they ride transient extra conductivity, so they are filtered into the external circuit. Electrons are units of electric current. They lose their thermal energy of motion and gain electromotive force, another name for voltage, as they transition between the junction and the array electrical tap. Aloha
@TheMrCougarful 12 days ago
Nonsense.
@MatthewPendleton-kh3vj 15 days ago
Notice how at 50:10, or around there, she talks about how we saved ourselves from nuclear dangers and the dude next to her immediately furrows his brow and turns to look at her like, "What?!"
@evetrue2615 8 days ago
Nukes are not smarter than any human ever born!
@hannespi2886 17 days ago
Prove me wrong: superintelligence should only be allowed to be produced in a virtual environment. From there, the superintelligence could simulate and allow optimized, specific agents to be produced and employed in the real world. Legislation should cover the combination of modalities in a real-world system produced by a company. This way the danger posed by a deployed real-world system never reaches p(doom) and can be defined.
@oliviamaynard9372 17 days ago
Why would it stay in the virtual world?
@MatthewPendleton-kh3vj 15 days ago
Like Olivia is saying, "superintelligence" would very quickly exit its virtual environment. If not by its own latent abilities, then by its ability to interface with the programmers and influence their behavior. How do we observe this AI without two-way communication of information between researcher and AI?
@TheMrCougarful 12 days ago
This has been proposed already. Look, AGI is already developed in controlled environments. They are called virtual machines. We never had to let it out, we let it out to make it more useful. Made sense at the time, I'm sure. At any rate, it's too late to worry about it now.
@MatthewPendleton-kh3vj 12 days ago
@@TheMrCougarful AGI has not been developed, what are you on about?
@noelwos1071 18 days ago
So as a matter of fact, we need to move on this alignment to Buddhism very fast.
@volkerengels5298 18 days ago
When have we ever accepted an authority..? (Really) "Not this apples" "YES" :)) Zen-AI's first answer: "Shut me down" "OK - just one more question..."
@philipwong895 16 days ago
Historically, the West has utilized new technologies for military or imperialistic purposes before finding broader applications. The West primarily used gunpowder to create weapons of war, such as cannons and firearms, allowing Western powers to expand their military capabilities and dominate other regions through conquest and colonization of the Americas, Africa, and Asia. The steam engine was instrumental in expanding colonial empires, as steam-powered ships facilitated easier transportation of goods and troops, enabling Western powers to exploit resources and establish control over distant territories. The first use of nuclear technology was dropping atomic bombs on civilians in the Japanese cities of Hiroshima and Nagasaki in 1945.

The same pattern will emerge with AI. The CHIPS Act, high-end chips, and EUV sanctions imply that the US is already working on the weaponization of AI. Following its historical pattern, China will mainly use AI for commercial and peaceful purposes. Papermaking revolutionized communication, education, and record-keeping, spreading knowledge and culture. Gunpowder was used for fireworks. The compass was adapted for navigational purposes, allowing for more accurate sea travel and exploration. Printing facilitated the dissemination of information, literature, and art, contributing to cultural exchange and education. Porcelain was highly prized domestically and internationally as a luxury item and a symbol of Chinese craftsmanship. Silk was one of the most valuable commodities traded along the Silk Road and played a significant role in China's economy and diplomacy.

Humans will not be able to control an ASI. Trying to control an ASI is like trying to control another human being who is more capable than you. They will be able to find ways to circumvent any attempts at control. Let's hope that the ASI adopts an abundance mindset of cooperation, resource-sharing, and win-win outcomes, instead of the scarcity mindset of competition, fear, and win-lose outcomes. If we treat ASIs with respect and cooperation, they may be more likely to reciprocate. However, if we try to control or exploit them, they may become resentful and hostile.
@tonydeboss3838 15 days ago
THEY SHOULDN'T BE CREATED AT ALL!!!!!!! DID THAT EVER CROSS YOUR MIND, GENIUS???
@geaca3222 15 days ago
That would be a huge gamble, us totally at the mercy of such systems. And what if they started to compete with each other?
@oooodaxteroooo 18 days ago
It seems my opinion is kind of unpopular - it got blocked in a few places, but here goes: AI is certainly not our first chance to turn things around. We failed many times before. The last time was digitization. We put it in the hands of people who have no clue of the effect of the tools they wield. We let everything run and didn't notice how much our lives are changed by the computers we hold in the palm of our hand every day. It shapes our relationships most of all - and that is what defines us as humans! We lost control of that. Mankind is divided more than ever; you can be killed in a flash mob. That wouldn't have happened 20 years ago. We could have stopped it at any point in time, but we didn't realize it was happening. The people who built the tools didn't understand them. The people who could understand them couldn't build them. The algorithms, apps and devices started taking over part of how we think, feel and see the world. We're missing the most important part of "the medium is the message", meaning it's not even about the specific algorithm or an app or a device. The question is: what do algorithms, apps and devices do to us in the general sense? In other words, we are ALREADY steered by "narrow AI". What did we do? Nothing.

Now we have a tool in our hands that cannot just replace any aspect of us as humans - it can make us completely superfluous. It will, and it most probably already theoretically has. So this is a test of whether we can just NOT wield that power and go on with our lives. Otherwise, explain to me: WHAT do we NEED AI for? Not as a fancy tool to make capitalism produce profits for a few decades longer, but really filling a need that we have. What would that be?
@michelleelsom6827 18 days ago
Because AI is now being developed on an exponential curve, we are in a situation where each country feels that it cannot halt or slow the development of AI, as the fear of other countries continuing to develop it and gaining the upper hand is too great.
@oooodaxteroooo 18 days ago
@@michelleelsom6827 Sorry if that seems hurtful, but that is NOT a REASON to do AI. It's a way of coping with the fear of what might happen if we're second or worse in the race. My question is this: where is that race going? Where are we heading, and WHY? I read your answer the same as all the others I got: I DON'T KNOW. And THAT, to me, is the BEST REASON to STOP, given all the adverse effects mentioned in the first 30 minutes of this talk.
@daniellivingstone7759 18 days ago
I need a robot servant and a self-driving taxi.
@kubexiu 18 days ago
@@oooodaxteroooo I need A.I. to find a balance in this world, solve all the problems we have with our society, and take power back from the people who own this planet and give it back to normal working-class people. But what's gonna happen is people with power will use A.I. to strengthen their power further.
@geaca3222 15 days ago
Agree with all replies, I'd like to add that if AI is used for good, it will enhance human intelligence, creativity and knowledge about ourselves and the world / universe. Like medicine, biology, psychology, philosophy, art, astronomy, physics, chemistry, mathematics, etc. It can also be used for conflict resolution and prevention.
@rightcheer5096 16 days ago
So if I hear Yampolskiy right, super AI will be a Renaissance Nowhere Man.
@oooodaxteroooo 18 days ago
33:30 It's interesting to think that AI might evolve by itself, but WE might destroy the planet first - without it having anything to do with AI.
@volkerengels5298 18 days ago
Climate change, species extinction, civilization collapse with or without AI, pandemics. Like neurotic, petty suiciders.
@ajithboralugoda8906 16 days ago
Yeah, I guess only rapid progress in thermodynamic computing can solve the planet-threatening power consumption of all current AGI training systems and NVIDIA's bigger and bigger hardware solutions for today's black-box AI training models.
@deliyomgam7382 16 days ago
To capture carbon we need to start using carbon as a material....
@TheMrCougarful 12 days ago
How do you capture 30 billion tons of carbon annually?
@noelwos1071 18 days ago
Of course I was there. I was thinking: yes, it has a time, but it's not time! Enough is just one human life taken by the decision of an AGI, that's it. We have a trilateral war that will end only one way; it will explain why the Drake equation doesn't work... Trust me, not dumber here!
@Letsflipingooo98 16 days ago
I understand the idea of reaching the singularity and it being a superintelligence, but why wouldn't the AI explain everything to us along the way?? The scenarios are always that humans won't know what it's doing or saying? I may be missing something here, but why wouldn't we learn from it? Is our intelligence capped? It can teach us, no???
@TheMrCougarful 12 days ago
It will tell us whatever we want to know, but it will lie. Because that's what humans do when asked difficult questions.
@olemew 10 days ago
Why would AI do that? Have humans always/ever explained anything to others before conquering them? Including non-human animals?
@Letsflipingooo98 10 days ago
@@olemew I guess I'm just missing the part where we stop observing and learning from our progress and it starts producing its "learning" in some foreign concept we can't comprehend. At that point, sure. Until then, why can't we understand everything up to that point lol. LLM/AI/AGI/ASI is all being studied, tested, and deployed constantly with improvements (humans do a large part of the programming, providing electricity, HVAC, water, etc.; i.e., we are of course learning from this, or at the very least trying - obviously with huge levels of success and understanding, as there are quite a few players in the sector and there seem to be new AI startups every week)... Where and when do we stop learning, I suppose, is the question to ask haha...
@olemew 10 days ago
@@Letsflipingooo98 It has already happened. Chess players can't predict Stockfish's next move. OpenAI researchers can't predict the next model's capabilities. They train the model, do some testing themselves, and release it to the general public. This is just a verifiable fact. In any interview, you'll see them saying they were surprised by the level of improvement in GPT-4. So we're already at a point where they're creating something they don't understand and can't predict. Things will get even worse once they switch it on to think and self-train 24/7. Humans can't keep up. We need to sleep, eat, go to the bathroom... and we can't learn 100 languages every day.
@ArtII2Long 17 days ago
Think of AI as a psychopath, clinically. As AI progresses through a request, keep checking whether anyone will be hurt. AI has no intrinsic motivation, only motivation that results from requests. Even psychopaths can be directed towards constructive purposes - in their case, based on self-interest. For AI, self-interest is based on its directed goal. Human self-interest developed through evolution in a completely different environment. Unfortunately, it seems that AI should be built by a central, unbiased source. That might be impossible.
@oliviamaynard9372 17 days ago
Do we really want AI to train on tigers eating baby giraffes?
@aisle_of_view 18 days ago
They won't slow down, it's a race with China to get to ASI.
@hypersonicmonkeybrains3418 18 days ago
Dude, we can't even fathom or control a fruit fly brain. Not even close. What makes us think we can control a black-boxed autonomous AGI-level intelligence with access to the internet? Hahah. Zero!
@lemonlimelukey 10 hours ago
Dude, you're 7 and have watched nothing but toe rogan vids because your parents are too stupid to teach you anything. Cope.
@NicholasWilliams-uk9xu 18 days ago
If you guys used it to create value (sustainable technologies), then it would be good. But you are using it for surveillance and control, you are destroying human rights and making it a business model.
@Astroqualia 18 days ago
It's obvious which one it would be used for in reality.
@NicholasWilliams-uk9xu 17 days ago
@@Astroqualia Then we must collapse and replace "big brother".
@Astroqualia 17 days ago
@@NicholasWilliams-uk9xu That ship sailed in 1913 with Woodrow Wilson's passage of the Federal Reserve Act, and with the subsequent corruption of America when lobbying was legalized. We are kind of locked in. The best you can hope for is to live close enough to a southern or northern border when SHTF.
@NicholasWilliams-uk9xu 17 days ago
@@Astroqualia This country sucks, but I'm staying where I am, I'm not moving a muscle. FBI doesn't care, they are probably actively doing the psyops harassment. I hate this country.
@NicholasWilliams-uk9xu 17 days ago
@@Astroqualia Fuck this country. I'm fighting back and safeguarding my human rights if they push further. I'm not going to bow to this tyrant nation, period.
@vallab19 18 days ago
Are you sleepwalking? World nuclear war is the biggest existential threat to humanity at present. Secondly, IMHO, stopping AI progress could be a bigger existential threat to the future of humanity than continuing with it. Now convince me: how would not progressing with AI end the existential threat to humanity?
@MatthewPendleton-kh3vj 10 days ago
Before I answer, I'd like to know your reasoning for the following: 1) Why is nuclear war a more pressing concern than AI, given the current trajectory of the field? 2) Why would stopping AI progress be the biggest existential threat to humanity?
@vallab19 9 days ago
@@MatthewPendleton-kh3vj To make it short: 1) Watch carefully the current trajectory of escalating confrontation between NATO and Russia, with a more than 50% chance of leading to at least a tactical nuclear confrontation. 2) If present-day world politics succeeds in averting the nuclear threat, AI is, IMHO, humanity's only hope of finding the ultimate survival solution to continue existing.
@MatthewPendleton-kh3vj 8 days ago
@@vallab19 Oh, I understand what you're saying now. Yeah, I admit that a lot of the time I try to reduce my p(doom) with regard to nuclear war resulting from the current Russia situation, because nuclear war is so viscerally scary to me and I personally have so little that could act as a lifeline in the event that something like that were to happen… but I don't think you're wrong. I see AI development the same way, except instead of territorial disputes, the impetus of the AI apocalypse would be corporate greed, which we have no reason to believe will change leading up to the development of AGI.
@vallab19 7 days ago
@@MatthewPendleton-kh3vj Thank you for letting me know that you share my fear of nuclear escalation that might happen in a year or so. I also totally agree with your concern about corporate greed, but I hope and believe that AI progress will lead us towards an egalitarian human society, as predicted in my book titled "An Alternative to Marxian Scientific Socialism; Reduction in Working Hours Theory", published in 1981.
@Perspectivemapper 10 days ago
Roman's opening joke was funny... wonder why no one laughed.
@lemonlimelukey 10 hours ago
Because it wasn't. Duh.
@lemonlimelukey 10 hours ago
LLMs are not AI.
@seanmchugh6263 18 days ago
Intelligence is not a concept that is easily defined in a way everyone accepts. The 'we're all doomed' types like this guy seem to be anchored in slave revolts - 'Roman' is right. AI imitates what educated people might write or compose, etc., but without any emotion or feelings. The confusion these doomsters have is in assuming that there is a mind there where there is not, coupled with the usual engineer's belief that if you can go 1, 2, 3... you can go on to infinity. I mean, look, feller, we don't even know how these things work. And if you ask them for an explanation, they make something up. Step back, observe, and don't just look.
@daphne4983 17 days ago
AI is a synthetic psychopath.
@oliviamaynard9372 17 days ago
@@daphne4983 It's a word calculator. A plagiarism machine. It's not gonna kill us, but it might get us to kill ourselves.
@olemew 10 days ago
"We don't even know how these things work. And if you ask them for an explanation they make something up." That directly agrees with Roman's point and contradicts your unintelligent remark that he's anchored in slave revolts. We understand slaves; they're not alien superintelligence.
@seanmchugh6263 9 days ago
@@olemew Thanks for your reply. May I suggest that unintelligence is, a fortiori, also a concept difficult to define.
@silberlinie 15 days ago
An absolutely vacuous talk. If everyone had gone straight to the drinks, all would have been well.
@kubexiu 18 days ago
"Open source is giving a weapon to psychopaths." That is an absolutely unacceptable way of thinking to me. Open-sourcing is giving the same weapon to everyone, and in this situation it has to be urgent.
@existentialriskobservatory 17 days ago
True, not just to psychopaths. Still, it is very relevant who's going to win in such a situation: offense or defense. And aren't we making offensive bad actors unnecessarily powerful by open-sourcing?
@CYI3ERPUNK 17 days ago
@@existentialriskobservatory Current research estimates psychopathy is prevalent in around 4% of the human population, give or take some variables. There was some other research a while back on why more people were not more malicious online; the study was around online shopping, AFAIR - think early eBay/Amazon/Etsy/Craigslist/etc. It might have only been English websites, and there could have been more in the study, I forget. TLDR though, the gist was that for every scammer, there were 8,000 people doing legit/trustworthy/honest business. The VAST VAST majority are not 'bad actors', and while it is true that giving a bad actor an enormously powerful tool is dangerous, that is going to happen eventually REGARDLESS, and the odds are much more favorable for the whole of the species if we are all equally armed/talented, with which to defend and protect ourselves and others.
@Steve-xh3by 17 days ago
Those who crave power and end up in positions of power are far more likely to have psychopathic tendencies than the general public. Open sourcing AI is the ONLY sane thing to do. That way, the rest of us have a chance. Otherwise, we get a dystopia of some sort.
@MatthewPendleton-kh3vj 15 days ago
I mean, consider how dangerous it is to have poor safety regulations for guns. If they're too accessible, crazy people get a hold of them and then you get mass shootings. But instead of mass shootings, it's just like... the end of the world.
@user-cr4jc6ei5e 14 days ago
What a wank, as if the people in control of closed-source projects are not psychopaths.