18. Information Theory of Deep Learning. Naftali Tishby

91,408 views

Компьютерные науки

Deep Learning: Theory, Algorithms, and Applications. Berlin, June 2017
The workshop aims at bringing together leading scientists in deep learning and related areas within machine learning, artificial intelligence, mathematics, statistics, and neuroscience. No formal submission is required. Participants are invited to present their recently published work as well as work in progress, and to share their vision and perspectives for the field.
doc.ml.tu-berlin.de/dlworkshop...

Comments: 39
@confusedonkey 6 years ago
9:07 Each Layer is characterized by its Encoder & Decoder
13:45 DNN Layers in the Information Plane
14:55 Video of convergence in the Information Plane
28:54 Video of training curve and Information Plane
30:05 Normalized Mean and STD (Fig. 4 of the paper)
35:00 Benefit of Hidden Layers
40:30 Noisy Relaxation, Information Bottleneck Bound
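Several of these timestamps refer to the information-plane plots, where each hidden layer T is placed at the point (I(X;T), I(T;Y)). As a rough illustration of how such coordinates are typically estimated in this line of work, here is a minimal Python sketch (my own illustration, not the speaker's code): the layer's activations are discretized into equal-width bins and mutual information is computed from the empirical joint distributions. The function names, the binning scheme, and n_bins=30 are illustrative assumptions.

```python
import numpy as np
from collections import Counter

def mutual_information(labels_a, labels_b):
    """Empirical mutual information (in bits) between two discrete sequences."""
    n = len(labels_a)
    pa = Counter(labels_a)
    pb = Counter(labels_b)
    pab = Counter(zip(labels_a, labels_b))
    mi = 0.0
    for (a, b), count in pab.items():
        p_ab = count / n
        mi += p_ab * np.log2(p_ab / ((pa[a] / n) * (pb[b] / n)))
    return mi

def information_plane_point(activations, x_ids, y, n_bins=30):
    """Map one hidden layer's activations T to the point (I(X;T), I(T;Y)).

    activations: (n_samples, n_units) array of bounded activations (e.g. tanh outputs)
    x_ids:       discrete identifier of each input pattern (X is discrete in this setup)
    y:           discrete labels
    """
    # Discretize each unit into n_bins equal-width bins; the tuple of bin
    # indices is treated as one discrete "state" of the whole layer.
    edges = np.linspace(activations.min(), activations.max(), n_bins + 1)
    t_ids = [tuple(row) for row in np.digitize(activations, edges)]
    return mutual_information(list(x_ids), t_ids), mutual_information(t_ids, list(y))
```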
@lihan1720 5 years ago
Around 29:25: "When the error is small, you continue training. And that's when the compression is starting and generalization is improving." Mind-blowing! It makes you rethink early stopping: you may lose the whole compression and generalization stage.
@sohrabferdowsi8231 6 years ago
I'm glad some spirit of serious science is finally being injected into the deep learning area, rather than a set of tuning and tweaking recipes falsely called science, with a lot of buzz and undeserved credit for some.
@joelxart 6 years ago
One of the few DL talks with more science than guesswork.
@TheGodSaw 6 years ago
Give this man more time!
@mrvzhao 6 years ago
Awesome talk. And it's so dense that I had to pause the video about every 10 seconds just to reflect on what was said.
@tinowymann5714 6 years ago
Amazing work. Hope this gives an impetus to research in this domain. Even though I could not understand the maths behind it in this short time, the results and insights are mind-blowing.
@delaile 6 years ago
Great talk. Thanks for sharing!
@pooya97 6 years ago
Very nice talk and visualizations. Thanks for sharing.
@michalsustr 6 years ago
Great talk! Thank you for sharing!
@borispyakillya4777 7 years ago
Amazing talk
@gabiwork 25 days ago
Great video! Can I find this presentation anywhere?
@ProfessionalTycoons 6 years ago
This is a great video
@nikre 7 years ago
It would be great if you could make an English-only Yandex channel for the non-Russian-speaking audience, so that subscribing would be more meaningful.
@OjaysReel 6 years ago
Seconded.
@ProfessionalTycoons 5 years ago
Very, very good and interesting work.
@mjpc 6 years ago
In these Information Plane diagrams, presumably ResNet would start at the top and only go through the compression phase?
@mugokiberenge8818 5 years ago
Good stuff!
@joze3108 6 years ago
I was reading the paper "Opening the Black Box of DNNs via Information", which is related to this talk, with great interest! What I didn't understand is how the training data (12 binary inputs mapped to a binary output) is generated, especially what the "spherically symmetric real-valued function f(x)" stands for. I would be very grateful if someone could give me a hint on this!
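For what it's worth, here is a hedged sketch of one possible reading of that construction (my own interpretation, not the authors' released code): the 2^12 = 4096 binary patterns mark subsets of 12 points placed on the 2D sphere, and the label is obtained by passing a rotation-invariant (hence "spherically symmetric") real-valued score f(x) of the active points through a sigmoid with a threshold chosen to balance the classes. The particular score used below (norm of the vector sum of the active points), the random placement of the 12 points, and the median threshold are all assumptions for illustration.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# 12 points on the unit sphere in 3D (randomly placed here; the paper says
# "uniformly distributed", and the exact placement is not critical for the sketch).
points = rng.normal(size=(12, 3))
points /= np.linalg.norm(points, axis=1, keepdims=True)

# All 2^12 = 4096 binary input patterns; bit i marks whether point i is "active".
X = np.array(list(itertools.product([0, 1], repeat=12)))

# Illustrative spherically symmetric score: the norm of the vector sum of the
# active points, which is invariant under any global rotation of the 12 points.
f = np.linalg.norm(X @ points, axis=1)

theta = np.median(f)                       # threshold chosen to balance the classes
p_y = 1.0 / (1.0 + np.exp(-(f - theta)))   # sigmoid of f(x) - theta
Y = (p_y > 0.5).astype(int)

print(X.shape, Y.mean())                   # (4096, 12), label balance near 0.5
```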
@atabakd 6 years ago
I am having trouble connecting this work with ResNets. It seems that if we force deeper layers not to bottleneck the input, but to keep the information from the input, we get better results!
@mazenezzeddine8319 6 years ago
Thanks, I need to watch this another time (or several). Can those who disliked the video explain their point? I am sure that understanding the sophisticated scientific content of the lecture is still easier than understanding how those people think and what neural network their brains run.
@pauldacus4590 6 years ago
Speed == 0.75
@ch1caum 6 years ago
Good stuff! Too bad there was too little time and the talk had to be compressed. Great presentation though; if anything was left out, it was probably less important. I'm having trouble decoding the implications, as it's a lot of information, but once I do, I'm sure it will be a great learning experience.
@itsRAWRtime007 6 years ago
compressed talk
@pavanlulla 6 years ago
Information bottleneck is what helps learning... goodbye, long meetings!
@r1a1p1AllenPogue 6 years ago
An entire semester of grad school in an hour.
@yoloswaggins2161 5 years ago
Naftali "Essentially" Tishby
@user-ir7it4tk1t 5 years ago
Naftali "So Essentially" Tishby
@vishualee 5 years ago
Our eyes only look at what is in the immediate line of vision, while the surrounding objects are blurred. That seems to be an analogy to the information bottleneck theory.
@mridul121 6 years ago
Taking a lot to comprehend, and the accent is just not helping.
@SundaraRamanR 6 years ago
Just in case anyone else has trouble with the accent: the closed captions (CC button) give reasonably good subtitles for this talk - I'd say about 90% accurate, so you'll have to look out for the 10% of speech-to-text errors, especially when it comes to technical or mathematical terms. But overall they're a great aid to comprehension here.