Microsoft promise 'whale-size' compute for a GPT-5-tier model and say the end is not in sight for scaling the power of AI. Google ship models and a fascinating paper, while Anthropic unveil the inner workings of large language models. Meanwhile, Sam Altman is forced to repeatedly apologize, Ilya Sutskever leaves, and GPT-4o's voice mode is pushed back. My reflections on all of the above, plus details you may have missed from each paper.
AI Insiders: / aiexplained
Kevin Scott Talk: • Microsoft Build 2024: ...
Mark Chen Hint: x.com/GaryMarcus/status/17901...
Noam Comments: / 1676971506969219072
Anthropic Scaling Monosemanticity: transformer-circuits.pub/2024...
www.anthropic.com/news/mappin...
Ilya Leaves: / 1790517455628198322
Then Jan Leike: x.com/janleike/status/1791498...
And Logan Hints: x.com/OfficialLoganK/status/1...
Altman Apologizes: x.com/sama/status/17919368575...
www.forbes.com/sites/antoniop...
And Her Delayed: help.openai.com/en/articles/8...
Superalignment Starved: fortune.com/2024/05/21/openai...
openai.com/index/introducing-...
Gemini Updated Paper: storage.googleapis.com/deepmi...
And Prizes: x.com/JeffDean/status/1793026...
Google AI Studio: ai.google.dev/aistudio
Business GenAI Consulting: theinsiders.ai
Non-hype Newsletter: signaltonoise.beehiiv.com/