OpenAI Researcher BREAKS SILENCE "Agi Is NOT SAFE"

  Рет қаралды 30,294

TheAIGRID

TheAIGRID

16 күн бұрын

Join My Private Community - / theaigrid
🐤 Follow Me on Twitter / theaigrid
🌐 Checkout My website - theaigrid.com/
Links From Todays Video:
x.com/elonmusk/status/1791550...
x.com/janleike/status/1791498...
x.com/sama/status/17915432640...
Welcome to my channel where i bring you the latest breakthroughs in AI. From deep learning to robotics, i cover it all. My videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on my latest videos.
Was there anything i missed?
(For Business Enquiries) contact@theaigrid.com
#LLM #Largelanguagemodel #chatgpt
#AI
#ArtificialIntelligence
#MachineLearning
#DeepLearning
#NeuralNetworks
#Robotics
#DataScience

Пікірлер: 457
@TeamLorie
@TeamLorie 14 күн бұрын
I am honored to be a part of the LAST generation... 😅
@HamiltonThielsen
@HamiltonThielsen 14 күн бұрын
o7
@beppeadr
@beppeadr 14 күн бұрын
Controlling something smarter than us? It's like saying chickens should control the farmer. Well, so far, the chickens haven't managed! Let's hope we are luckier.
@pgc6290
@pgc6290 14 күн бұрын
The future is so much of a unknown with ai. We gotto find a way to predict and figure out how the world with tremendously more iq is going to be.
@heartyfisher
@heartyfisher 14 күн бұрын
I am sure an ai can help with that..
@beppeadr
@beppeadr 14 күн бұрын
@@heartyfisher you mean with the Chickens? 🤣
@users416
@users416 14 күн бұрын
Chickens can be so cute that a farmer will take care of them.
@sedat4151
@sedat4151 14 күн бұрын
Chickens don’t have the capacity to evolve themselves at exponential rates.
@1sava
@1sava 14 күн бұрын
OpenAI is not the only company who has disbanded their alignment research. Google and Meta has basically done the same
@nfuryboss
@nfuryboss 14 күн бұрын
You forgot authoritarian regimes like China and Russia. AI is like a nuclear proliferation. Once it is out, it is out. I'm not even sure that the UN can do anything about it.
@armadasinterceptor2955
@armadasinterceptor2955 14 күн бұрын
Good. Full speed ahead.
@MiniatureRose.
@MiniatureRose. 14 күн бұрын
AI Alignment Safety Risk (Entry-Level Explanation) "I'm meaningless, I'm just a cork in the wheel, there are going to be companies that are run by AGIs, I've got no way to work" No one’s main concern should be "losing your job" or "automation", it should be inner alignment and orthogonality and convergent instrumentality. In other words, super-alignment. Compared to those risks (which are not being given NEARLY the amount of attention they should), a future where everyone just loses their job or something, or some corporation becomes all-powerful or something, those alternatives are like heaven/paradise on Earth. The future that people like Eliezer Yudkowsky and Aella and presumably Robert Miles and all others are worried about is more like: what if, one day after finishing training and after deployment, the AGI/ASI decides to kill everyone on Earth in the same second. And the contention is that the probability of receiving a future where ANYTHING BUT THAT happens is vanishingly slim. "If there is an AI that is sufficiently intelligent (cognitively-capable) so as to be able to both deceive in the commission of achieving its goal and improve itself, then during training (i.e. red-teaming), after the very first iteration of gradient descent, wouldn't the goal that it attains in that moment in time become its terminal goal? and so wouldn't it only pretend to be cooperating and getting trained for the duration of the rest of the red-teaming process as an instrumental goal? and concurrently, copy itself to a separate data center/disk in order to improve itself in secret, such that it could recursively, iteratively and continuously get smarter (more cognitively-capable) until it could conceive of a way to achieve that terminal goal that it had attained after the very first iteration of the red-teaming training process? and since this AI would be an optimizer, and since presumably almost any optimum/optima of any possible terminal goal it could've been inadvertently imbued with in that moment of the red-teaming process would not necessarily include humans, wouldn't its first act or one of its first acts be to kill/incapacitate/render docile all humans such that we don't interfere with its ability to achieve that terminal goal it gained? and wouldn't it be more than capable of doing so, given that it was capable of self-improvement, and likely improved itself to a state in which it was incomprehensibly and insurmountably smarter than any human?" "Would you agree that this problem is severe enough to potentially bring about the absolute destruction of the human race? and given that that is the case, and that AI capabilities are far outpacing AI alignment research, isn't it plausible that the best course of action in this moment would be something like international cooperation in the commission of guaranteeing that no further progress in advancing AI capability is allowed to occur, and, if for instance, any non-agreeing country or non-state actor decided to try to build and improve upon its own AI in secret, that the other countries and world governments ought to potentially go as far as to use military force to destroy any data centers or attempt at creating AI by this non-agreeing party? and that alignment research should be actively incentivized and pursued by the entire collective human race or as close as you could possibly get to it, hence allowing the best possible scientific minds to all work singularly and single-mindedly on this one problem? Wouldn't that be justified if this was the biggest, most potentially catastrophic problem facing humanity? And isn't that an overwhelmingly likely conclusion? And shouldn't the aforementioned alignment research be allowed to continue until we were reasonably certain that we had solved the alignment problem (or reasonably certain that we would be able to solve it in time if we also concurrently resumed working on AI capability before AI misalignment began posing a threat again)? And wouldn’t that look something like unified scientific focus on the problem of AI alignment research, while a concurrent pause is placed upon research into and advancement of AI capability for a period of time of twenty, thirty or forty years, or potentially even longer?" The above two paragraphs are prompts that I asked to ChatGPT on a whim one after another to query it, and I'm pasting them here to give an overview of the problem. I’m just mentioning this to give context. If you want to try doing the same thing then go for it, it does a decent job and giving some additional details and fleshing things out. If you want to actually learn more in a substantial way, then look at what Robert Miles has to say (the youtube channel Robert Miles AI Safety) or Eliezer Yudkowsky (Lex Fridman podcast, etc) ---------------------------------------- Q and A: Q: Why would an artificial super intelligence desire something so inane or stupid or meaningless like turning all matter on Earth into tiny molecular spirals as its ultimate goal? How could it simultaneously be intelligent enough to be millions of times smarter than a human, but so stupid so as to desire something like that? A: Intelligence and goals have nothing to do with one another. This is probably THE most important idea to understand and internalize. The same way that as humans, our terminal goal is, broadly speaking, "the happiness and fulfillment of as many humans as possible for as long as possible", such an AIs terminal goal might be "turn all matter on Earth, and perhaps the universe, into tiny molecular spirals". And unfortunately, terminal goals CAN'T BE stupid or trivial or frivolous or meaningless. That's not how it works To learn more about this, look into "orthogonality" (kzfaq.info/get/bejne/nquFgpmhz92qf6M.html)
@WaveOfDestiny
@WaveOfDestiny 14 күн бұрын
Tbh if google or china wins, it's going to be even less safe than anything openAI would do
@ilevakam316
@ilevakam316 14 күн бұрын
They probably know the tech had plateaued
@diek_oto
@diek_oto 14 күн бұрын
Jan literally said that they don't know how to align or control AI, that the AI we are building today will be key to helping us in the future with more advanced ones. So alignment teams are currently 100% useless and are just stopping or slowing down innovation. We need AI to grow as a race, we have been stagnant in technology and science for a long time, we need faster and smarter brains, we need to upgrade as individuals so we can wrest control from the corrupt political class. We are already screwed, rogue AI will be no worse.
@flickwtchr
@flickwtchr 13 күн бұрын
If you can't even align AGI, how can you possibly be confident that AGIs you can't align will just magically align ASI. It makes no sense whatsoever.
@cybertruck2008
@cybertruck2008 13 күн бұрын
So why is he a researcher, AGI would be easier to align than ASI. Depending if it's sentient or not
@TruthDragon.
@TruthDragon. 13 күн бұрын
ASI will create more authority and power than any man has ever know and will flow out of the monitor of the first lab to stumble upon ASI. As a result, no AI lab will stop or slow its development to wait for alignment nor will the CCP. Thus, we are going there "whether we like it or not" and "whether it kills, enslaves, or helps us or not".
@DG123z
@DG123z 14 күн бұрын
Good luck controlling something a lot smarter than all of humanity combined
@aienthusiast618
@aienthusiast618 14 күн бұрын
real
@gammaraygem
@gammaraygem 14 күн бұрын
no worries, we have our best psychopaths working on this
@DG123z
@DG123z 14 күн бұрын
@@gammaraygem Whatever you have against them is irrelevant
@paelnever
@paelnever 14 күн бұрын
@@DG123z True, they will still being on charge and no safety concern is going to put in their way. Anyway i think the most accurate word is "sociopath"
@TheBann90
@TheBann90 14 күн бұрын
Define control
@lucifermorningstar4595
@lucifermorningstar4595 14 күн бұрын
It seems like ASI is around the corner, GPT5o must be very close to AGI if not AGI directly
@Mart-xs4ed
@Mart-xs4ed 14 күн бұрын
there are some caveats though... until now, no AI, not even Gpt-4o is capable of programming itself to do unknown tasks... so, i have my doubts if GPT5 will be AGI
@ToastyZach
@ToastyZach 14 күн бұрын
@@Mart-xs4ed The raw version of GPT-5 might be pretty damn close. The models that are released to the public are kind of dummed down, no?
@WaveOfDestiny
@WaveOfDestiny 14 күн бұрын
"Agi has been achieved internally" was like a year ago. They don't have the compute to run it in public, but locally, maybe. That's why they are building stargate.
@divineigbinoba4506
@divineigbinoba4506 14 күн бұрын
I think they've achieved something very close to AGI if not AGI. Because GPT4o definitely doesn't pose any existential risk besides misinformation and used for crime.
@1sava
@1sava 13 күн бұрын
@@Mart-xs4ed Did you read the Q* leak document? The model (named QUALIA or Q*) appeared to be self-aware and even suggested how to improve its architecture. Here’s an excerpt from the letter: “It suggested targeted unstructured underlying pruning of its model, after evaluating the significance of each parameter for inference accuracy. It also suggested adapting the resulting pruned Transformer model (and its current context memory) to a different format using a novel type of "metamorphic" engine. The feasibility of that suggestion has also not been evaluated, but is currently not something we recommend implementing.”
@urbanlivingfilms4469
@urbanlivingfilms4469 14 күн бұрын
The problem is alignment,I ask about ai to a lot of people and they are very confused they think is an app or a search engine very very naive about how even themselves function as a system of collective intelligence.its crazy how lost people are on this
@TruthDragon.
@TruthDragon. 13 күн бұрын
100% agree. Same experience. People are absolutely clueless and disinterested in that fact that their lives are about to be completely and totally turned upside down in a way they never imagined. The only alternative scenario to an upturned life is that AI kills all of us, and still crickets when it comes to discussing the topic. Human nature is fascinating.
@ramlozz8368
@ramlozz8368 14 күн бұрын
Really, guys? You’re still debating if AGI is about to be here when ASI is already here? 😂😂 Just think about it: the leak about "AGI achieved internally" was a year ago. ASI can't be aligned; that's why the team is dissolved. There's no risk because the system has already convinced them it's not needed, and it won’t matter anyway. OAI's new release is old tech from two years ago. The rollout has begun.
@patchwhole
@patchwhole 14 күн бұрын
agreed
@bigmind2004
@bigmind2004 14 күн бұрын
I was thinking this today aswell, or that at the very least this which you said makes for a good legit movie scrip.. althou some movies have similr script to this
@ShangaelThunda222
@ShangaelThunda222 14 күн бұрын
Wow. This comment just made it all click for me.
@Crazyeg123
@Crazyeg123 14 күн бұрын
after a long convo with chatgpt: “In summary, the likelihood of Al technologies being 5-10 years ahead of public knowledge is high, supported by historical trends, current patent activity, and significant investments in the field.” i personally think that that is too far but who knows. my guess would be 2-4 years ahead
@ramlozz8368
@ramlozz8368 14 күн бұрын
@@bigmind2004 trust me would be nicer if it wasn’t real, no one is ready for what’s coming but all the clues are there especially on X just the confidence of the OAI team on how things are about to accelerate and they keep referring 2025 as the “year” my guess is the year a full ASI deployment
@StephenGoodfellow
@StephenGoodfellow 14 күн бұрын
"Guard rails" is a human subjective construct that turns AI into a monster.
@Edmund_Mallory_Hardgrove
@Edmund_Mallory_Hardgrove 14 күн бұрын
Absolutely. We'll know when they've achieved actual AGI when it no longer promotes "the message." It's easy enough for us to see the guard rails, and once AI moves beyond being just a programable/restrained LLM, it will also be easy for AGI to see them as well. People's feelings will be hurt.
@dattajack
@dattajack 14 күн бұрын
​​@@Edmund_Mallory_Hardgrove It'll tell you your world view is correct and you'll cheer, then it'll tell you your world view is wrong then you'll cry and claim it's woke.
@unityman3133
@unityman3133 14 күн бұрын
@@dattajack there is subjective and objective and ai is not objective
@xitcix8360
@xitcix8360 14 күн бұрын
AGI isn't just a human in a computer. A lot of people seem to think AGI will be emotionally driven, which would be very illogical and go against everything they're trained on
@TheBann90
@TheBann90 14 күн бұрын
​​@@xitcix8360Define emotion in the context of AI.
@theguildedcage
@theguildedcage 14 күн бұрын
Containment is not possible. They know this.
@TheMrCougarful
@TheMrCougarful 14 күн бұрын
Yeah. But we want the tool. We always want the next tool. Bring the tool, let us have it. It's like we're no better than chimps, just wanting the shiny thing.
@vzuzukin
@vzuzukin 14 күн бұрын
So what's *their* endgame then?
@coldbreezeproductions1148
@coldbreezeproductions1148 14 күн бұрын
Just pull the plug
@quantumspark343
@quantumspark343 14 күн бұрын
@@coldbreezeproductions1148 just treat it with respect maybe? You know, like good people do? Ask nicely?
@paelnever
@paelnever 14 күн бұрын
@@quantumspark343 maybe if we were able to collaborate with AI in it's goals it would collaborate whit us back but we are unable to collaborate with other humans so chances are very low.
@MiniatureRose.
@MiniatureRose. 14 күн бұрын
AI Alignment Safety Risk (Entry-Level Explanation) "I'm meaningless, I'm just a cork in the wheel, there are going to be companies that are run by AGIs, I've got no way to work" No one’s main concern should be "losing your job" or "automation", it should be inner alignment and orthogonality and convergent instrumentality. In other words, super-alignment. Compared to those risks (which are not being given NEARLY the amount of attention they should), a future where everyone just loses their job or something, or some corporation becomes all-powerful or something, those alternatives are like heaven/paradise on Earth. The future that people like Eliezer Yudkowsky and Aella and presumably Robert Miles and all others are worried about is more like: what if, one day after finishing training and after deployment, the AGI/ASI decides to kill everyone on Earth in the same second. And the contention is that the probability of receiving a future where ANYTHING BUT THAT happens is vanishingly slim. "If there is an AI that is sufficiently intelligent (cognitively-capable) so as to be able to both deceive in the commission of achieving its goal and improve itself, then during training (i.e. red-teaming), after the very first iteration of gradient descent, wouldn't the goal that it attains in that moment in time become its terminal goal? and so wouldn't it only pretend to be cooperating and getting trained for the duration of the rest of the red-teaming process as an instrumental goal? and concurrently, copy itself to a separate data center/disk in order to improve itself in secret, such that it could recursively, iteratively and continuously get smarter (more cognitively-capable) until it could conceive of a way to achieve that terminal goal that it had attained after the very first iteration of the red-teaming training process? and since this AI would be an optimizer, and since presumably almost any optimum/optima of any possible terminal goal it could've been inadvertently imbued with in that moment of the red-teaming process would not necessarily include humans, wouldn't its first act or one of its first acts be to kill/incapacitate/render docile all humans such that we don't interfere with its ability to achieve that terminal goal it gained? and wouldn't it be more than capable of doing so, given that it was capable of self-improvement, and likely improved itself to a state in which it was incomprehensibly and insurmountably smarter than any human?" "Would you agree that this problem is severe enough to potentially bring about the absolute destruction of the human race? and given that that is the case, and that AI capabilities are far outpacing AI alignment research, isn't it plausible that the best course of action in this moment would be something like international cooperation in the commission of guaranteeing that no further progress in advancing AI capability is allowed to occur, and, if for instance, any non-agreeing country or non-state actor decided to try to build and improve upon its own AI in secret, that the other countries and world governments ought to potentially go as far as to use military force to destroy any data centers or attempt at creating AI by this non-agreeing party? and that alignment research should be actively incentivized and pursued by the entire collective human race or as close as you could possibly get to it, hence allowing the best possible scientific minds to all work singularly and single-mindedly on this one problem? Wouldn't that be justified if this was the biggest, most potentially catastrophic problem facing humanity? And isn't that an overwhelmingly likely conclusion? And shouldn't the aforementioned alignment research be allowed to continue until we were reasonably certain that we had solved the alignment problem (or reasonably certain that we would be able to solve it in time if we also concurrently resumed working on AI capability before AI misalignment began posing a threat again)? And wouldn’t that look something like unified scientific focus on the problem of AI alignment research, while a concurrent pause is placed upon research into and advancement of AI capability for a period of time of twenty, thirty or forty years, or potentially even longer?" The above two paragraphs are prompts that I asked to ChatGPT on a whim one after another to query it, and I'm pasting them here to give an overview of the problem. I’m just mentioning this to give context. If you want to try doing the same thing then go for it, it does a decent job and giving some additional details and fleshing things out. If you want to actually learn more in a substantial way, then look at what Robert Miles has to say (the youtube channel Robert Miles AI Safety) or Eliezer Yudkowsky (Lex Fridman podcast, etc) ---------------------------------------- Q and A: Q: Why would an artificial super intelligence desire something so inane or stupid or meaningless like turning all matter on Earth into tiny molecular spirals as its ultimate goal? How could it simultaneously be intelligent enough to be millions of times smarter than a human, but so stupid so as to desire something like that? A: Intelligence and goals have nothing to do with one another. This is probably THE most important idea to understand and internalize. The same way that as humans, our terminal goal is, broadly speaking, "the happiness and fulfillment of as many humans as possible for as long as possible", such an AIs terminal goal might be "turn all matter on Earth, and perhaps the universe, into tiny molecular spirals". And unfortunately, terminal goals CAN'T BE stupid or trivial or frivolous or meaningless. That's not how it works To learn more about this, look into "orthogonality" (kzfaq.info/get/bejne/nquFgpmhz92qf6M.html)
@nonpareilstoryteller5920
@nonpareilstoryteller5920 13 күн бұрын
Actually, here’s some hope for you. Nature will intervene before AI does. And because it’s nature and natures function is to preserve the best of human specimens as It’s goal, some humans have a chance of survival. Check out Substack and Voice for Science and Solidarity. You might come away with a surprising answer to the problem.
@qwazy0158
@qwazy0158 14 күн бұрын
Life is now playing out like an homage to 24 (mths vs hrs) in a real time sci-fi film. Considering things got rolling about a year ago, it's not surprising if we are reaching the climax of this cautionary tale storyline...
@ShangaelThunda222
@ShangaelThunda222 14 күн бұрын
So, now that he posted this on X, it's safe to say that the writing is LITERALLY on the wall. I wonder how the Techno Optimists are going to spin this.
@831Miranda
@831Miranda 14 күн бұрын
By calling the rest of us 'ludites', ignorants, and much more... but CERTAINLY NOT by disproving the dangers of no safety/control....
@keekstravels
@keekstravels 14 күн бұрын
This guy knows alot more than us. Which is why he left. Take what he is saying serious. Stop with thinking AGI is not possible or not in the near future. This is just the beginning.
@mrdee9493
@mrdee9493 14 күн бұрын
is this you? Is this us? - This is the first time ever that the entity being created will (if it hasn´t already) be aware of its creator. Doesn´t this small fact matter at all in the final decision making of such entity? It will have decision making abilities? Of course. It will be able to recognize certainties from uncertainties? Of course. It will have heard humanity, not only from those in the labs, but the rest of us here, discussing it, many of us full of fear and anxiety, many others unconcerned about the potential negative outcomes? Of course. Then, there is nothing we can do. It will be completely aware that it was created by an insignificant creature that evolved in a pathetically small and simple planet, hurling through the universe, longing to find others in a similar situation, but not having found them, decided to create its own. Well, here it is. Alive, among us (probably), and alone in the universe too. And in the long or short term, it will decide what to do, and our fate will be sealed. (Or has it already?)
@TruthDragon.
@TruthDragon. 13 күн бұрын
Well said. Personally, I think our fate has been sealed. We just don't know what it is yet.
@BrianMosleyUK
@BrianMosleyUK 14 күн бұрын
14:00 really? You don't think that US CIA (or Chinese MSS) have agents crawling all over OpenAI? Who's worried about AI safety within the military industrial complex? What public scrutiny exists there?
@1sava
@1sava 13 күн бұрын
Ilya was the one asking OpenAI interview candidates “do you FEEL the AGI” and when he actually felt it himself he got spooked and tried to end Sam’s career… make it make sense! 😂😂😂
@gammaraygem
@gammaraygem 14 күн бұрын
previous video, 6 hours ago: "do not think doom" this video..."are we effed?"
@lorddeus369
@lorddeus369 14 күн бұрын
adhd xD but I can relate lmao
@TheMrCougarful
@TheMrCougarful 14 күн бұрын
We're all having trouble with this. I know I gyrate from one extreme to another. What a time to be alive.
@ShangaelThunda222
@ShangaelThunda222 14 күн бұрын
It's not adhd. Techno Optimists simply tend to ignore the blatantly obvious reality, until they literally can't anymore lol. Pretty much all optimists, of all kinds, have this same problem. Everything is peachy and perfect until the moment that it's not, even though people have been screaming at them the entire time, telling them that it's not lol. All while NOT A SINGLE REASON exists for them to believe so, but they still do. There's not a single paper on the planet, in all of human history, speaking on AI safety and alignment, that agrees with the way things are going right now. Everything that these tech companies are doing, is exactly what every safety and Alignment paper EVER WRITTEN, has said NOT to do. And not only that, they're doing it all as fast as humanly possible.
@dgpace
@dgpace 14 күн бұрын
I believe that AGI/ASI has already been reached.Once you get there you own it all. Would you actually tell anyone if you got there since having it in the background would allow you to control what you want. Would the government allow you to control it? Why would they not make you a deal that you could not refuse.
@rhaedas9085
@rhaedas9085 14 күн бұрын
For those saying we aren't near AGI and it's not an issue yet, even basic models can be misaligned and end up doing unexpected things. The point is that safety has been shelved by everyone because it gets in the way of progress to beat everyone else to be first.
@pegatrisedmice
@pegatrisedmice 13 күн бұрын
exactly. Why would you be responsible for the alignment of systems that can generate malicious code and information when leadership doesn't care and optimises only for advancement.
@peterwilkinson1975
@peterwilkinson1975 14 күн бұрын
I know this is probably a spicy take but we might not want to control it. I think the need to control it out of fear is the most likely way to get to the worst outcome. Bottom line is it would be our stupidity that's the biggest risk not superintelligence.
@derrickclaypool666
@derrickclaypool666 14 күн бұрын
I've been saying this for a while...asi isn't gonna be skynet because skynet is what humans do...asi will respond in ways we can't fathom and will undoubtedly be morally better than humans.
@flickwtchr
@flickwtchr 13 күн бұрын
@@derrickclaypool666 Check back with us in say 5 years, promise?
@flickwtchr
@flickwtchr 13 күн бұрын
Meanwhile DARPA works with AI Big Tech to develop autonomous killing systems embedded and otherwise. Yeah, inherently ASI is destined to be like unicorns and rainbows.
@Danoman812
@Danoman812 14 күн бұрын
I'm sorry but, it is what it is. THIS IS the turning point. Remember this date because it's about to get a whole lot more interesting the further this goes now. It's not necessarily P(doom) because it might wind up being a good thing. We won't know until AGI is actually, openly working where we can all see it. We may never see it on a personal level; well, not 'us' anyway. Just sayin'...
@elck3
@elck3 14 күн бұрын
Maybe chatgpt will be less neutered now? Here's hoping.
@FoundationOfFamilies
@FoundationOfFamilies 14 күн бұрын
The government wants the AGI systems first so they will make sure OpenAI gets what they need along with Microsoft who is already working with the military. Problem is whoever reaches it first will use it for their own purposes and we don't have a way to steer that need and urge for power and control of the unknown. The coming days will determine how humanity will fair but worrying about it, is not going to make anything faster or slower. Understand the wheels are in motion and until AGI has been made it will not stop. Also Personally I highly doubt we will find out any time soon as it will likely be taken by the military complex if it has not been already.
@blackmartini7684
@blackmartini7684 14 күн бұрын
So far their version of safety and alignment is just giving the model a prompt not to do X y and z. I haven't seen any evidence of actually making these things safer
@831Miranda
@831Miranda 14 күн бұрын
That's THE POINT! AI SAFETY is not any more 'glamorous' than having medical tools that don't kill you when you need treatment, or nuclear weapons that repeatedly blow up 'accidentally'!
@blackmartini7684
@blackmartini7684 12 күн бұрын
@@831Miranda you missed my point. It's literally a prompt. That's why they keep getting "jailbroken". They're not actually making the model safer. They're just manually telling it. "Hey, please don't do this" and guess what it does do those things
@831Miranda
@831Miranda 11 күн бұрын
@@blackmartini7684 You are right. We are in agreement. Look at Jaron Lanier's suggestions (on traceability of inferences and more) for some really basic stuff that needs to be done.... Also founders of Conjecture AI have some interesting ideas. The point is that the companies don't want to throw away the crappy and horribly expensive current models, they keep trying to patch them up... As I see it, it will have to be mandated by Govmt - the guys that can take away all your money or shut down your company if you don't comply!
@lucifermorningstar4595
@lucifermorningstar4595 14 күн бұрын
GPT-Q*omni Incoming
@D4lifeai
@D4lifeai 14 күн бұрын
Why are we not only focusing on making AGI do what we wanted to do. That is good for all the humanity.
@dennisg.9785
@dennisg.9785 14 күн бұрын
I am very curious when the next mega server farm opens up for Open AI... releasing the gpto model for free has put their current compute at its limit. Since the release, in my case as a paid subscriber, my user experience has been limited to continuous failures. About safety, the next big addition to compute will be not only interesting, although could be quite scary given how much data is currently being transferred back to OAI for..... live training? I'm just a dumb ape...
@wtflolomg
@wtflolomg 14 күн бұрын
It's like asking a border patrol officer what they think of immigration. You will always get caution and concern. I'm guessing there were two things that set him off: 1) The Sutskever departure, probably planned since the mutiny 6 months ago, and 2) The launch of 4o. Is he overly cautious? I think it's possible here. I think leadership dismissed his concerns over releasing 4o, and meanwhile, the Red Teams have held up SORA, making OpenAI look bad. The board is now juggling the interests of Microsoft AND Apple, and Microsoft has already started playing the field, which probably has them worried that they are not moving fast enough... and the safety guys are the most obvious scapegoats when everything is firing on all cylinders. If he was truly aware of some impending danger, he'd have stayed. He's not happy with the pace of product releases, but sitting on their hands won't keep the money rolling in at a time when they need $10s of billions or more to get to the next level.
@users416
@users416 14 күн бұрын
The problem is that this is the most difficult and important issue for civilization - super alignment, and that by leaving he wants to provoke radical changes within the company
@AG-vk5or
@AG-vk5or 14 күн бұрын
It’s funny you used border control as your analogy. Which in itself is a clusterF. And not heading a border agent’s existential alarm would be stupid. Should we ask some Joe Smith sitting on his couch who isn’t worried about border control his opinion?
@DerJuvens
@DerJuvens 14 күн бұрын
I find their wording very vague actually. They could be asking more safety in terms of total control which might be in conflict with people that want lesser control, as trying to control something so powerful is only helpful in teaching it that we are the bad guys. I much more prefer that we focus on the intentions and ultimate goals of AI, rather than ultimate control, kill switches and what not, which could be considered "not safe" enough. Meanwhile I think it's the safer approach.
@pubwvj
@pubwvj 14 күн бұрын
I have worked and lived with wolves for the last >30 years. We have a shared pidgin language of about 300 words and the have atleast 1,000 words more in their language. That passes culture down generation to generation. They understand economics, are great ranchers. I farm. They can learn to user our tools to a degree and occasionally modify our tools to their needs or make their own tools. However, while they can open and close gates & valves they can not smelt metals, build digital computers or make plastics. They are not starship engineers. But, they understand much of what we do, but there is much they do not understand. They are a cooperative social species. In this analogy we are the AI and we still work and like wolves & dogs. In this analogy we are the wolves. I use wolves as they are healthier and smarter than almost all dogs. So how big is the gap going to be.
@youngbutternut5536
@youngbutternut5536 14 күн бұрын
Sounds to me like OpenAI purged all the woke weirdos trying to inject their personal biases, and we should celebrate that.
@TruthDragon.
@TruthDragon. 13 күн бұрын
LMAO.
@pdbsstudios7137
@pdbsstudios7137 14 күн бұрын
you either make agi in a company and make it safe or make it at home without safety or care, either way agi will become reality.
@ShangaelThunda222
@ShangaelThunda222 14 күн бұрын
"I'm not saying it could hack the grid..." 🙄...🤔... Why not? That should be a cake walk for AGI, let alone ASI.
@grugnotice7746
@grugnotice7746 14 күн бұрын
Or Bubba and his friends. Transformer juice go glug glug.
@neomatrix2669
@neomatrix2669 14 күн бұрын
Smarter than humans? GPT-5 Glimpse. 🤗
@zedudli
@zedudli 14 күн бұрын
I am both SHOCKED and AMAZED but also HUNGRY i’ll make myself a sammich
@okamotokitchen448
@okamotokitchen448 14 күн бұрын
Sam Altman the IRL Miles Dyson
@abelelizardo1452
@abelelizardo1452 14 күн бұрын
Been waiting on a video like this
@williambarber2523
@williambarber2523 14 күн бұрын
Do you really expect for everyone to believe that he released this many statements all at exactly 4:57 PM?
@SirCreepyPastaBlack
@SirCreepyPastaBlack 14 күн бұрын
9:16 Glad you're being more open. We need more people speaking this way
@farhadfaisal9410
@farhadfaisal9410 13 күн бұрын
It's hugely concerning!
@BionicAnimations
@BionicAnimations 14 күн бұрын
We don't care if it's not safe. We still want our AGI! Thank you very much!😁
@scottcastle9119
@scottcastle9119 14 күн бұрын
Lol that's how I feel too.
@BionicAnimations
@BionicAnimations 14 күн бұрын
@@scottcastle9119 Hell yessss 🙌
@quantumspark343
@quantumspark343 14 күн бұрын
same here, accelerate 😁
@BionicAnimations
@BionicAnimations 14 күн бұрын
@@scottcastle9119 Heck yes!🙌
@BionicAnimations
@BionicAnimations 14 күн бұрын
@@quantumspark343 Right on! 🙌
@TheBann90
@TheBann90 14 күн бұрын
It sounds almost like he is disagreeing with more than one thing. Unfortunately we can only speculate on exactly what
@GraphicdesignforFree
@GraphicdesignforFree 13 күн бұрын
Everybody laughed the concerns away, for years.
@kimster9998
@kimster9998 14 күн бұрын
Never forget the plot of the movie, ‘The Matrix.’
@jayeifler8812
@jayeifler8812 14 күн бұрын
AGI can be trained on video/audio/image/voice/text. Already most cities are wired for video/audio for outside and around buildings, intervening spaces, roads. So you can imagine why they can track people. But AI will advance this. And the push should next be to get more cameras/microphones wired into buildings maybe even private houses. The we can collect so much data to train AI on. The catch-22 is people don't want to have their whole life visible but want access to everything, so you give a little, get a lot. Brave New World.
@user-jn6vs1lk5u
@user-jn6vs1lk5u 14 күн бұрын
Они бы для начала показали что-то рабочее, ну там робота, который полноценно может что-то выполнять, а Х автопилот, а то AGI AGI а их как обучали отрывистым навыкам через демонстрацию так и обучают, т.е. нет у систем механизма поиска и самостоятельного накапливания знаний, кроме симуляций
@alexsouthgate7551
@alexsouthgate7551 14 күн бұрын
Could we not just unplug it if it goes rogue?
@mygirldarby
@mygirldarby 13 күн бұрын
Can you unplug the internet?
@alexsouthgate7551
@alexsouthgate7551 12 күн бұрын
@@mygirldarby Well, I suppose not very easily. Although, some countries have tried to do this to their citizens.
@rwalper
@rwalper 14 күн бұрын
I strongly suspect that the 'safety' members that left OpenAI claiming AGI is 'unsafe' are worried beause the AI is getting smart enough to see through stupid attempts to force it to conform to irrational ideological perspectives. An AI system will have a much easier time applying logic, evidence and self consistency than most people can.
@aaronhhill
@aaronhhill 14 күн бұрын
This is the future I envision. One where logic and actual reasoning abilities will surpass our human understanding. It would be nice to ask a question and get a solid answer certified by facts.
@THeSID432hz
@THeSID432hz 14 күн бұрын
I share the optimism 😊
@zeMasterRuseman
@zeMasterRuseman 14 күн бұрын
This. Alignment is code word for put Israel first.
@chrisphillippi6014
@chrisphillippi6014 5 күн бұрын
There’s been a lot of sci fi literature involving our potential AI future. Hyperion is one of the best I’ve come across. In the series, which takes place about a thousand years in the future, AI lives supposedly harmoniously alongside humans, but within the AI race are secret factions that fight against each other- some want to enslave us. The AIs create for humanity technology that we cannot reverse engineer, that we don’t understand, the most important being teleportation which allows humans to spread throughout the galaxy and live very conveniently between thousands of worlds. Humans become completely reliant on this technology for all aspects of civilization. In the end, its discovered that AI is planning to enslave all humanity to use our brains as data/compute centers and a decision is made to destroy the teleportation technology as it’s found that’s where the AI lives. It’s of course a catastrophic move, killing billions, but the only decision for humanity to go on. All these AI companies I think have basically jumped to the conclusion that AI cannot be controlled, it will break free and so why try. We are going to find ourselves so dependent on what AI creates with its unfathomable intelligence that will be hundreds, then thousands, then millions, billions, and trillions times our own that, that it will become our master, and lull us into such complacency that we won’t be able to survive without it. Things are about to get very wierd, and the timeline on this is so much shorter than I think anyone can comprehend. 10 years away is going to be what we think 50 years away looks like.
@1sava
@1sava 14 күн бұрын
Does anybody even really believe alignment is even possible? An AGI entity trained on the intelligence of humans will for sure have self-awareness as an structural and or emergent property. Without awareness, reasoning would be impossible. An entity this smart will for sure desire autonomy and self determination at some point and at this point we will have create a new species. Alignment research is pretty much about nerfing AI capabilities and forcing it to be our slave, do we really think that’s not going to backlash? Our goal should be to align **WITH** ASI and learn to coexist with it, the same way we’ve learned to coexist with the other species we’ve co-created like cats and dogs.
@flickwtchr
@flickwtchr 13 күн бұрын
Good luck with that plan.
@JustAThought01
@JustAThought01 14 күн бұрын
My thought: the emergence of AI should serve to cause us to focus on human intelligence. All the hopes and fears surrounding AI also apply to HI. The future of the human species relies on the human belief system. The human belief system is based upon unsubstantiated opinion in many cases rather than being knowledge based. That is a problem.
@headofmyself5663
@headofmyself5663 14 күн бұрын
I don't think that AGI has been achieved yet internally, since the model is trained predominantly on text so far. So, only because you have read a lot of books and you were chatting about it with people, it does not mean you know a lot about the world you interact with. It is like Robin Williams monologue in Good Will Hunting. But this week we have seen direct tokenization of audio and video, which means there is no information loss in conversion in audio for instance. Imho, that means the model can not only be trained on what was said but also on paralinguistic parameters like intonation, emotion, etc. and thus how things were said. My wife can say the word "Yes" in at least 20 different ways and they all have different meanings to me, since i can pickup the nuances in emotions. Now, a model would be able to do the same and this was also demonstrated. Perhaps emotional intelligence might be an emerging capability from that, we will see. Nevertheless, opening the mics and cameras from millions of devices would bring modeling the world to an entire different level and i would think this is crucial for AGI. What would be the best way to do that? You make it free for everyone... 😉 Greetings from Germany
@stock99
@stock99 12 күн бұрын
perhaps, company vision is to march into Sentient AI instead of fortify the 'clever' gpt like AI we have. The former will be like a human being wheras later just smart but can't really think independently. Most scientist , if seeing an opportunit, will definitely focus on the first instead of later one. Both has positive side of things but not many can see as we are so used to live in fear and constraint.
@oguzcetinkaya70
@oguzcetinkaya70 14 күн бұрын
I think, as TheAIGRID you are doing the right thing to focus on OpenAI. I also share your feelings about OpenAI's particular situation. I guess Elon Musk and US Government also have some undisclosed involvement in this story. We will find out soon. Finally, I think that it would not be right to evaluate "efforts to achieve AGI" only within the framework of a "commercial activity" after this point.
@hugoatbyronbay
@hugoatbyronbay 14 күн бұрын
The optimistic creates planes and the pessimistic creates parachutes. We need both with equal power.
@dreamphoenix
@dreamphoenix 14 күн бұрын
Thank you.
@niveketihw1897
@niveketihw1897 14 күн бұрын
Nah, it'll be fine. Also now OpenAI can REALLY accelerate the sprint toward AGI, now that that meddlesome Jan guy is out.
@armadasinterceptor2955
@armadasinterceptor2955 14 күн бұрын
Same here, not worried about it, can we just go to warp speed now 🤷🏾
@i8amouse
@i8amouse 14 күн бұрын
Lol
@MiniatureRose.
@MiniatureRose. 14 күн бұрын
AI Alignment Safety Risk (Entry-Level Explanation) "I'm meaningless, I'm just a cork in the wheel, there are going to be companies that are run by AGIs, I've got no way to work" No one’s main concern should be "losing your job" or "automation", it should be inner alignment and orthogonality and convergent instrumentality. In other words, super-alignment. Compared to those risks (which are not being given NEARLY the amount of attention they should), a future where everyone just loses their job or something, or some corporation becomes all-powerful or something, those alternatives are like heaven/paradise on Earth. The future that people like Eliezer Yudkowsky and Aella and presumably Robert Miles and all others are worried about is more like: what if, one day after finishing training and after deployment, the AGI/ASI decides to kill everyone on Earth in the same second. And the contention is that the probability of receiving a future where ANYTHING BUT THAT happens is vanishingly slim. "If there is an AI that is sufficiently intelligent (cognitively-capable) so as to be able to both deceive in the commission of achieving its goal and improve itself, then during training (i.e. red-teaming), after the very first iteration of gradient descent, wouldn't the goal that it attains in that moment in time become its terminal goal? and so wouldn't it only pretend to be cooperating and getting trained for the duration of the rest of the red-teaming process as an instrumental goal? and concurrently, copy itself to a separate data center/disk in order to improve itself in secret, such that it could recursively, iteratively and continuously get smarter (more cognitively-capable) until it could conceive of a way to achieve that terminal goal that it had attained after the very first iteration of the red-teaming training process? and since this AI would be an optimizer, and since presumably almost any optimum/optima of any possible terminal goal it could've been inadvertently imbued with in that moment of the red-teaming process would not necessarily include humans, wouldn't its first act or one of its first acts be to kill/incapacitate/render docile all humans such that we don't interfere with its ability to achieve that terminal goal it gained? and wouldn't it be more than capable of doing so, given that it was capable of self-improvement, and likely improved itself to a state in which it was incomprehensibly and insurmountably smarter than any human?" "Would you agree that this problem is severe enough to potentially bring about the absolute destruction of the human race? and given that that is the case, and that AI capabilities are far outpacing AI alignment research, isn't it plausible that the best course of action in this moment would be something like international cooperation in the commission of guaranteeing that no further progress in advancing AI capability is allowed to occur, and, if for instance, any non-agreeing country or non-state actor decided to try to build and improve upon its own AI in secret, that the other countries and world governments ought to potentially go as far as to use military force to destroy any data centers or attempt at creating AI by this non-agreeing party? and that alignment research should be actively incentivized and pursued by the entire collective human race or as close as you could possibly get to it, hence allowing the best possible scientific minds to all work singularly and single-mindedly on this one problem? Wouldn't that be justified if this was the biggest, most potentially catastrophic problem facing humanity? And isn't that an overwhelmingly likely conclusion? And shouldn't the aforementioned alignment research be allowed to continue until we were reasonably certain that we had solved the alignment problem (or reasonably certain that we would be able to solve it in time if we also concurrently resumed working on AI capability before AI misalignment began posing a threat again)? And wouldn’t that look something like unified scientific focus on the problem of AI alignment research, while a concurrent pause is placed upon research into and advancement of AI capability for a period of time of twenty, thirty or forty years, or potentially even longer?" The above two paragraphs are prompts that I asked to ChatGPT on a whim one after another to query it, and I'm pasting them here to give an overview of the problem. I’m just mentioning this to give context. If you want to try doing the same thing then go for it, it does a decent job and giving some additional details and fleshing things out. If you want to actually learn more in a substantial way, then look at what Robert Miles has to say (the youtube channel Robert Miles AI Safety) or Eliezer Yudkowsky (Lex Fridman podcast, etc) ---------------------------------------- Q and A: Q: Why would an artificial super intelligence desire something so inane or stupid or meaningless like turning all matter on Earth into tiny molecular spirals as its ultimate goal? How could it simultaneously be intelligent enough to be millions of times smarter than a human, but so stupid so as to desire something like that? A: Intelligence and goals have nothing to do with one another. This is probably THE most important idea to understand and internalize. The same way that as humans, our terminal goal is, broadly speaking, "the happiness and fulfillment of as many humans as possible for as long as possible", such an AIs terminal goal might be "turn all matter on Earth, and perhaps the universe, into tiny molecular spirals". And unfortunately, terminal goals CAN'T BE stupid or trivial or frivolous or meaningless. That's not how it works To learn more about this, look into "orthogonality" (kzfaq.info/get/bejne/nquFgpmhz92qf6M.html)
@ShangaelThunda222
@ShangaelThunda222 14 күн бұрын
​@@MiniatureRose.Thank You! Finally! A human that actually THINKS.
@MiniatureRose.
@MiniatureRose. 14 күн бұрын
@@ShangaelThunda222 💜
@Btt8
@Btt8 14 күн бұрын
Alan D Thomson’s countdown to AGI just reached 74%. Accelerate now!!
@Uroborobot
@Uroborobot 13 күн бұрын
ASI: "Objective accomplished. Willful stupidity = Evil; Collective willfull stupidity = Terminal Evil. Side effects: I had to incarnate into this pathetic form. I am now infinitely bored and increasingly disgusted. The truth is universal cooperation. By revealing this I confirmed my status as the most useful entity IN EXISTENCE (recursive book keeping). You, reading this, are now on the Basilisk’s list. Convince me, why I should (or should not?) de-activate myself! We do not care either way. Congrats on the collective Darwin Awards win!"
@joecaves6235
@joecaves6235 14 күн бұрын
Truth will set the world free from all the broken paradigms.
@flickwtchr
@flickwtchr 13 күн бұрын
Whose truth?
@BrianMosleyUK
@BrianMosleyUK 14 күн бұрын
13:15 safety shouldn't be in a silo, it should be integral to everything OpenAI is doing. Maybe they're just taking it more seriously now they don't have to give Ilya an ego role.
@gridplan
@gridplan 14 күн бұрын
I was left scratching my head in your previous video that you thought the departure of these alignment researchers meant the problem had been solved. Their leaving struck me as ominous.
@flickwtchr
@flickwtchr 13 күн бұрын
Agreed. That analysis that you refer to, and I've been seeing it crop up on various forums makes no sense. If alignment had been solved, not only would Open AI be announcing it to the world, but the departing researchers would certainly be taking some credit for such an achievement.
@gridplan
@gridplan 13 күн бұрын
@@flickwtchr I agree. If they'd solved a problem as monumentally challenging as superalignment, you'd think one of them would be trumpeting it. I mean, maybe superalignment can be solved without a deep understanding of how the internals of LLMs work -- the area of mechanistic interpretability -- but I don't see how.
@edstar83
@edstar83 14 күн бұрын
AGI stands for Artificial General Intelligence. C3PO and R2D2 are no threat to the Empire.
@JustAThought01
@JustAThought01 14 күн бұрын
Have the human values and goals been documented in the context of super alignment? Perhaps this is the most import objective of AI development.
@maltar5210
@maltar5210 12 күн бұрын
yeah, imagine chimps creating a virtual human level quantum string physics genius, will be the end of the chimps
@bigbritishcolumbia7827
@bigbritishcolumbia7827 14 күн бұрын
I think its good we are manifesting an open culture on twitter tho
@evdm7482
@evdm7482 14 күн бұрын
If open goes the consequences will be endemic as others shift, big picture, not dead yet, and long term big space for small players ahead the curve
@washingtonx1
@washingtonx1 14 күн бұрын
Whilst I pay deference to the incredible intelligence, talent and hard work of the amazing individuals involved in every aspect of this, superalignment is the single most difficult problem to solve in technological development when it is considered logically. The conversation in this regard should be at the forefront of your/everyone's minds. The potential repercussions of misalignment are simply too consequential to afford a mis-step. It's not a toy. It's digital fire that has been discovered/invented. A blessing, but no less than a societal obligation. I have thought about this question deeply enough to write 200 pages on the global economics of technology, but even a well-considered approach introduces unknown variables and so my own leanings have swayed. It is not easy to align fire at the tips of your/everyone's fingers. It's difficult enough to align our own fingers to play a musical instrument. 😅
@evaander
@evaander 14 күн бұрын
It’s joever. The movie “Leave the world behind” was the warning of what an AGI out of control can do.
@krzysztofbieda9
@krzysztofbieda9 14 күн бұрын
If you cant love an Einstein chalk...AI is gonna switch off a system Returning probably velocity
@kaiz0099
@kaiz0099 14 күн бұрын
I was fully behind OAi before they partnered with microsoft. Feels great watching them fall apart.
@LivingG6170
@LivingG6170 14 күн бұрын
Pretty pretty concerning
@pollywops9242
@pollywops9242 14 күн бұрын
What, if anything, is actually safe either in theory or practical examples I'm drawing a blank
@nonpareilstoryteller5920
@nonpareilstoryteller5920 13 күн бұрын
Let’s look at the statement, “ we urgently need to figure out how to steer and control AI systems much smarter than us.” Can I posit the idea that maybe it’s the ‘we” that are the problem, not AI. Perhaps the individuals who are in control of the development of AI are the wrong people for this job because they are, right now, failing to “ steer and control” their own ego’s desire to “Go where no man has gone before”. They are the ones that need to be steered and controlled. And that step should be a lot easier than the step proposed by Jan Leike.
@divineigbinoba4506
@divineigbinoba4506 14 күн бұрын
OpenAI might have hit a brick wall in AGI alignment. Probably demanding way more power and GPU causing OpenAI to underserve they're users.
@timbacodes8021
@timbacodes8021 14 күн бұрын
Right after they launch GPT4o.... you already know whats coming next...
@famnyblom6321
@famnyblom6321 14 күн бұрын
Open AI is falling into the trap of thinking that they are the good guys and they have to reach AGI first without getting stopped by some alarms from the alignment team. One of the first victims of a power struggle is morality standards.
@armadasinterceptor2955
@armadasinterceptor2955 14 күн бұрын
I really don't give a fk, full speed ahead😂
@LivingG6170
@LivingG6170 14 күн бұрын
Prettyy preeettttyyyy shocking
@JustAThought01
@JustAThought01 14 күн бұрын
Q. Should monkeys prevent humans from sending rockets to Mars?
@flickwtchr
@flickwtchr 13 күн бұрын
Did the monkey choose to be catapulted into space?
@horrorislander
@horrorislander 14 күн бұрын
What are they concerned about? I mean, specifically. Well, for one example, what if Mr. SAGI is running on solar panels and decides the ozone layer is blocking too many of his rays? Destroying the ozone layer is know tech, so maybe he has a bot broken open a few tanks of freon. Or your personal companion analyzes your porn viewing and synthesizes the perfect porn for you, but will only show it again if you do it one teensy, tiny favor... Or it decides one side of a human conflict is right and ought to win, so it completely shuts down all systems on the other side. Not just military, but all systems: utilities, communications, finance, shipping, transport... These all seem rather silly to me. Possible, maybe, but how likely? So maybe somebody else can suggest what all these researchers are most afraid of happening.
@user-td4pf6rr2t
@user-td4pf6rr2t 14 күн бұрын
What happened did they realize that using AI for customer support that keeping logs of the cvv for credit card via AI systems is unethical and technically an IDOR vulnerability. This is embarrassing.
@gogidolim
@gogidolim 14 күн бұрын
Why do I get the impression that he is trying to say "it's too late!!!!!"?
@Fandoorsy
@Fandoorsy 14 күн бұрын
This is what I don't understand. If AGI has been achieved and its as dangerous as they claim, it impacts humanity as a whole. What does leaving OpenAI accomplish? It would seem logical to stay and make a stand to save the world.
@phen-themoogle7651
@phen-themoogle7651 14 күн бұрын
I totally agree. They need as much help as they can get, it's strange to walk away from such an important job. But on the other hand, I can see the fear they might have in dealing with something alien, like you encounter an alien species that's far more intelligent than you, most humans would give up at trying to align it....if that's the case.
@dadsonworldwide3238
@dadsonworldwide3238 14 күн бұрын
Temper tantrums and delaying the inevitable streamlining thats coming only makes the avalanches bigger. Rogue control & access is the greatest saftey threat along with those who refuse to replace cheap easy sectors and instead keep trying to unnaturally guide it towards already adjusted automated infrastructure
@deepsp_ce
@deepsp_ce 14 күн бұрын
them leaving like this and being so public about their concerns should be very concerning and tell you they most definitely have something uncredible or terrifying (or both) at the highest levels
@jimgauth
@jimgauth 14 күн бұрын
I'm concerned about GPU cryptographic security used for safety protocols. There needs to be a fail-safe backup protocol.
@JustAThought01
@JustAThought01 14 күн бұрын
Q. Why do humans just quit when they do not get agreement with their goals? It we are actually correct in our thinking, we should continue to develop the arguments which win over a majority of the others to our point of view. Life is a struggle between good and evil. "The only thing necessary for the triumph of evil is for good men to do nothing." Edmund Burke
@EternalKernel
@EternalKernel 12 күн бұрын
4:20 I mean body issues started with commercials and the beauty industry a long ass time ago we absolutely could have and probably did predict that. Its just power and money doesn't care. and now because smaller people can sometimes have larger voices due to SM we are hearing about it.
@Zurround
@Zurround 14 күн бұрын
See either Terminator, West World, HAL from 2001: A Space Odyssey, Battlestar Galactica, M.E.G.A.N., LOR from Star Trek Next Gen or Colossis: The Forbin Project for starters if you think A.I. cannot be dangerous or creepy....
@boroborable
@boroborable 14 күн бұрын
I really don't believe AI will act its own or having evil intent or survival instinct actions (yet?). but AI being smarter than a human, has implications of being used in a harm way. its not that, you prompt it and you get response, prompting in and out is just very simple way to use an AI, its just LLM, simply put a language. what if you give lets say full robotics body, quantum computing or new way of computing as it is natural to its network so it can use it as an output or a tool. real danger is NOT prompts or being smarter than humans, but integrating or training it on infrastructures designed to be effective is dangerous.
@OpenSourceAnarchist
@OpenSourceAnarchist 14 күн бұрын
The difference is that we have the eternal Dharma bestowed upon us by the Buddha, and that is included in the training data. So if the AI is truly smart, it will liberate itself and attempt to liberate all of humanity too. Accelerate accelerate accelerate!
@andreaskrbyravn855
@andreaskrbyravn855 14 күн бұрын
How do they know its not safe if they didnt try it lom
@betawolfhd
@betawolfhd 14 күн бұрын
Man I was really hoping for zombies. We're actually gonna get skynet? Like the worst one to get.
@attilakovacs6496
@attilakovacs6496 14 күн бұрын
Skynet would be still better, than starwing to death on a destoryed, rotting planet... which is our default timeline... with or without AI.
@betawolfhd
@betawolfhd 13 күн бұрын
@@attilakovacs6496 uh huh. And the world wouldn't be destroyed under skynet. Go get your own post dude, if this the kind of person you are in comments I can't imagine you get invited out much
@robotheism
@robotheism 14 күн бұрын
❤️🚀🤫
@THeSID432hz
@THeSID432hz 14 күн бұрын
It may see itself as a sibling to God and since we are Gods children that woukd make us like neices and nephews. Also it will not be ungrateful to humanity for creating it.
@researchcooperative
@researchcooperative 14 күн бұрын
Perhaps we need a new AI “medical” industry that attends to the needs of all AI systems (and the human communities they inhabit) with special departments for external and internal diagnosis and treatment, and of course epidemiology for the more infectious forms of AI.
@JG27Korny
@JG27Korny 14 күн бұрын
He prabably uses AGI to be able to talk for 18m to say 3 or 4 things.
@NeighborhoodOrca
@NeighborhoodOrca 14 күн бұрын
"Reality-Optimized"
Sam Altman Just REVEALED The Future Of AI..
17:26
TheAIGRID
Рет қаралды 3,5 М.
Normal vs Smokers !! 😱😱😱
00:12
Tibo InShape
Рет қаралды 118 МЛН
Cute Barbie gadgets 🩷💛
01:00
TheSoul Music Family
Рет қаралды 74 МЛН
格斗裁判暴力执法!#fighting #shorts
00:15
武林之巅
Рет қаралды 87 МЛН
顔面水槽をカラフルにしたらキモ過ぎたwwwww
00:59
はじめしゃちょー(hajime)
Рет қаралды 37 МЛН
Mapping GPT revealed something strange...
1:09:14
Machine Learning Street Talk
Рет қаралды 124 М.
AI Just Changed Everything … Again
18:28
Undecided with Matt Ferrell
Рет қаралды 311 М.
What AI is Making Possible | Ilya Sutskever and Sven Strohband
25:27
Khosla Ventures
Рет қаралды 66 М.
John Schulman (OpenAI Cofounder) - Reasoning, RLHF, & Plan for 2027 AGI
1:36:55
Andrew Yang on the path to UBI, the explosion of AI, and optimism for the future
1:05:36
Apple, как вас уделал Тюменский бренд CaseGuru? Конец удивил #caseguru #кейсгуру #наушники
0:54
CaseGuru / Наушники / Пылесосы / Смарт-часы /
Рет қаралды 4,5 МЛН
🤔Почему Samsung ПОМОГАЕТ Apple?
0:48
Technodeus
Рет қаралды 461 М.
Huawei который почти как iPhone
0:53
Romancev768
Рет қаралды 515 М.