Lenka Zdeborová - Statistical Physics of Machine Learning (May 1, 2024)

12,404 views

Simons Foundation

27 days ago

Machine learning provides an invaluable toolbox for the natural sciences, but it also comes with many open questions that the theoretical branches of the natural sciences can investigate.
In this Presidential Lecture, Lenka Zdeborová will describe recent trends and progress in exploring questions surrounding machine learning. She will discuss how diffusion or flow-based generative models sample (or fail to sample) challenging probability distributions. She will present a toy model of dot-product attention that exhibits a phase transition between positional and semantic learning. She will also revisit some classical methods for estimating uncertainty and their status in the context of modern overparameterized neural networks. More details: www.simonsfoundation.org/even...
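For readers unfamiliar with the mechanism named in the abstract, here is a minimal NumPy sketch of standard scaled dot-product attention, contrasting purely content-driven ("semantic") queries and keys with purely position-driven ones. The function name, the shapes, and the one-hot positional encoding are illustrative assumptions of this sketch; it is not the toy model analyzed in the lecture.

```python
import numpy as np

def dot_product_attention(Q, K, V):
    """Standard softmax(Q K^T / sqrt(d)) V attention (illustrative sketch)."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                    # pairwise similarity scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V

rng = np.random.default_rng(0)
n, d = 8, 16
tokens = rng.normal(size=(n, d))   # "semantic" content of each token
positions = np.eye(n, d)           # crude one-hot positional encoding (assumption)

# Attention driven purely by content vs. purely by position; the lecture's toy
# model asks which of these regimes a trained attention layer ends up in.
semantic_out = dot_product_attention(tokens, tokens, tokens)
positional_out = dot_product_attention(positions, positions, tokens)
print(semantic_out.shape, positional_out.shape)   # both (8, 16)
```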

Comments: 9
@atabac 24 days ago
wow, if all teachers explained things the way she does, complexities would be simplified.
@kevon217 21 days ago
Excellent talk. Love the connections and insights.
@ozachar 21 days ago
As a physicist, but a non-expert in AI, viewer: Very interesting insights. Over-parameterization (size) "compensates" for a sub-optimal algorithm. Also non-trivial that it doesn't lead to getting stuck fitting the noise. Organic neural brains (human or animal) obviously don't need so much data, and are also actually not that large in number of parameters (if I am not mistaken). So for sure there is room for improvement in the algorithm and structure, which is exactly her direction of research. A success there would be very impactful.
@nias2631 17 days ago
FWIW, if you consider a brain's neurons as analogs to neurons in an ANN, then the human brain, at least, has far more complexity. Geoffrey Hinton points out that the mechanism of backprop (the chain rule) for adjusting parameters is far more efficient than biological organisms in its ability to store patterns.
@nias2631 17 days ago
That efficiency is what worries him and also points to a need for a definition of sentience arising under different learning mechanisms than our own.
@theK594 24 days ago
Fantastic lecture! Very clear and well structured! Thank you, díky (Czech for "thanks") 🇨🇿!
@shinn-tyanwu4155 22 days ago
You will be a good mother please make many babies 😊😊😊
@forcebender5079 25 days ago
To understand the black box inside machine learning, we will need yet more advanced artificial intelligence: more advanced AI that can in turn analyze the black box and crack its mechanisms. Understanding the black box's inner workings through human effort alone, as we try to now, is impossible.
@jiadong7873 24 days ago
huh?