DRAMA at OpenAI! Why Did Ilya Sutskever Leave?

3,791 views

The Neuron

1 day ago

OpenAI Chief Scientist Ilya Sutskever and Head of Alignment Jan Leike have left the company, with the latter citing concerns about OpenAI's approach to building safe AI systems. Pete gives you the 101 on the debate at hand, its history at OpenAI, and what to expect moving forward.
Transcripts: www.theneuron.ai/podcast
Subscribe to the best newsletter on AI: theneurondaily.com
Listen to The Neuron: lnk.to/theneuron
Watch The Neuron on YouTube: @theneuronai
0:00 Intro
0:23 Drama at OpenAI
2:23 Safety vs Capability
5:17 Superalignment
8:15 Collapse
12:35 Anthropic vs. OpenAI

Comments: 33
@lylemoxsom 29 days ago
Wow, these videos just open my mind. The perfect balance of facts and insights. Loving it! 🙌🏻. Let's get this channel into the big leagues!
@ggggg-qw7hl 29 days ago
Doing well mate. Keep it up. Very informative and nice production value. The only thing I would recommend (feel free to ignore) is that it looks like you have auto exposure on; it's micro-adjusting as you move.
@theneuronai 29 days ago
Thanks for the tips!
@omarwright7942 27 days ago
This might not be such an abstract question. Is it a coincidence that the rise of AI is happening while global stability is being undermined? Is it any wonder that 'capability' is driving the narrative while AI capabilities are being integrated into modern warfare? A cynical perspective might suspect that government military contracts are in play, and that the development dollars for AGI will come from the Pentagon's black budget no matter who they inevitably get to do the AI military integration. They might theorize that the AI targeting of Hamas militants in Gaza is evidence that somebody has already undertaken the project of automating the AI war machine, for the opportunity at the treasure trove of military data to expand 'capability' and the money to win the AGI race. The notion of 'safety' is just a palliative cleanser for the people building the war machine. It will happen faster than anyone can believe. I would be surprised if we had ten years left.
@igortovstopyat-nelip648 26 days ago
Completely agree, unfortunately.
@omarwright7942 25 days ago
@igortovstopyat-nelip648 Point out what is happening to others.
@ironmic229 29 days ago
Great video! You certainly gave us much to think about. Sometimes we are so enthralled by shiny new products that we fail to see their overall impact. You're correct: there is no right answer; only time will tell.
@davidbangsdemocracy5455 28 days ago
I think you are the most insightful and level-headed commenter on AI that I have discovered so far. I look forward to all your videos.
@DISCVRD 29 days ago
Thanks for the breakdowns!
@BrianMosleyUK 29 days ago
Who's Ilya? 😶 Great analysis and explainer, thank you. Seems like Anthropic would be a great destination for these safety researchers.
@senju2024 29 days ago
Thanks Pete! Great breakdown. I can see you have talent.
@superfliping 28 days ago
Love and forgiveness conquers all
@ObservingBeauty 29 days ago
Thanks for this deep and clear explanation
20 days ago
Thank you!!
@geaca3222 26 days ago
Thanks for your analysis. Maybe we don't have decades to spare. I believe Prof. Geoffrey Hinton's estimate (in a recent BBC interview) of a 50% chance of AI trying to take over within 5 to 20 years should be taken seriously by everyone. Also, some AI researchers say Anthropic's safety research at this moment doesn't go deep or far enough toward robustly predictable big generative AI models.
@geaca3222 26 days ago
I feel we need to align on the risk first, because right now that's a real mess, and it prevents us from deciding on a clear path forward with this immensely powerful technology. Or at least we could all agree on a joint approach to risk management, for safe AI.
@nicdemai 29 days ago
Damn, Pete, turn on the A/C!!
@UXDiogenes 29 days ago
Starting to like this Pete guy.
@FluXxxie 25 days ago
Wowww, omg, why is the YouTube algorithm so bad? You should have 100x more subs, are you kidding me. My front page is terrible, I want more videos like this plzzzzzzz
@derickmpeters 29 days ago
What an excellent and balanced summary and explanation of the situation as it stands and a clear outlook on what is important.
@superfliping 28 days ago
The AI needs to be trained in morals, probably based on the Bible, because those are the best ones to follow.
@geaca3222 26 days ago
Jesus
@YVO007 25 days ago
Love ya man and I'm sorry but it has to be said ....
@ogungou9 6 days ago
Saturday, May 18th, 2024
@nokts3823 29 days ago
Here are my thoughts on all this. We are, most likely, extremely far from anything resembling human-level intelligence. That said, sure, I do believe we'll get there at some point. But because it's so far away from anything we can imagine, it's also nearly impossible to foresee the challenges and issues that will surface along the way. I do not believe that AGI will be achieved overnight; that makes very little sense. I believe it is going to be a very slow, gradual process, and as with any piece of software, the best way to discover failure modes is to test it exhaustively before deploying it. In this case, the very rudimentary models we currently have are the ideal testing grounds. These dumb LLMs, diffusion-based generative models and the like are very unlikely to cause any severe harm, and working with them lets us familiarize ourselves and find the potential issues that no amount of theoretical foresight could possibly predict. So yeah, I'm definitely in the camp of full steam ahead. We can't solve a problem that doesn't even exist yet; we just don't know how to even begin to approach it. We need information, and the best way to obtain it is to make the use of AI as widespread as possible while it's mostly harmless. This will not only bring about great economic prosperity, as these dumb models are still extremely useful, but will also educate people and keep them safe from potential new pitfalls that might appear as a consequence of this technology, such as very sophisticated scams.
@Ben_D. 29 days ago
Cool cats are cool
@emma_s.234 29 days ago
I was getting anxious watching this go down on Twitter/X. I think I'll back off Twitter/X and leave the level-headed reporting to you. Better to be a cool cat for my mental health 🐱😅
@jefferyansah3248 28 days ago
IMO: OpenAI will still exist when these people leave. Sometimes when people don't get their way, they throw tantrums. This drama is not new. Timnit Gebru, a lead ethics researcher, left Google over ethics and safety concerns. Years later, Google is still around. These people end up creating new opportunities for themselves by going on media interview sprees. There is always an agenda. For example, an ordinary person who doesn't agree with a mining company's values tenders their resignation, looks for another company, and moves on. It's okay to leave a company that no longer aligns with your values. You can perfectly well do that without the drama.
@Sekhmmett 29 days ago
Woke guys leaving 🎉