Efficient Computing for Deep Learning, Robotics, and AI (Vivienne Sze) | MIT Deep Learning Series

56,625 views

Lex Fridman



Lecture by Vivienne Sze in January 2020, part of the MIT Deep Learning Lecture Series.
Website: deeplearning.mit.edu
Slides: bit.ly/2Rm7Gi1
Playlist: bit.ly/deep-learning-playlist
LECTURE LINKS:
Twitter: / eems_mit
KZfaq: / @miteemsviviennesze
MIT professional course: bit.ly/36ncGam
NeurIPS 2019 tutorial: bit.ly/2RhVleO
Tutorial and survey paper: arxiv.org/abs/1703.09039
Book coming out in Spring 2020!
OUTLINE:
0:00 - Introduction
0:43 - Talk overview
1:18 - Compute for deep learning
5:48 - Power consumption for deep learning, robotics, and AI
9:23 - Deep learning in the context of resource use
12:29 - Deep learning basics
20:28 - Hardware acceleration for deep learning
57:54 - Looking beyond the DNN accelerator for acceleration
1:03:45 - Beyond deep neural networks
CONNECT:
- If you enjoyed this video, please subscribe to this channel.
- Twitter: / lexfridman
- LinkedIn: / lexfridman
- Facebook: / lexfridman
- Instagram: / lexfridman

Comments: 46
@lexfridman 4 years ago
I really enjoyed this talk by Vivienne. Here's the outline:
0:00 - Introduction
0:43 - Talk overview
1:18 - Compute for deep learning
5:48 - Power consumption for deep learning, robotics, and AI
9:23 - Deep learning in the context of resource use
12:29 - Deep learning basics
20:28 - Hardware acceleration for deep learning
57:54 - Looking beyond the DNN accelerator for acceleration
1:03:45 - Beyond deep neural networks
@gggrow 4 years ago
Looking forward to watching this, but shouldn't the Vladimir Vapnik lecture be coming first?
@createchannel8815 4 years ago
Me too. Invite her again.
@createchannel8815 4 years ago
Great talk. The Speaker Vivienne was clear and concise. Very informative.
@gonzalochristobal 4 years ago
Thank you Lex, the amount of information you have already shared is invaluable. Eternally grateful.
@NomenNescio99 4 years ago
Thank you for sharing the lecture, this is the type of content I really enjoy.
@burkebaby 1 year ago
Impressive amount of information delivered by this lady! To watch such a densely packed, informative video I had to take more than a few breaks.
@samuelec 4 years ago
Impressive amount of information delivered by this lady! To watch such a densely packed, informative video I had to take more than a few breaks. I wonder how she managed to go through those 80 slides so fast, and whether anyone has watched it all in one go without losing focus!
@JousefM 4 years ago
Thanks for a rather "exotic" topic I need to learn about as an AI newbie, much appreciated Lex!
@summersnow7296 4 years ago
Excellent lecture 👏👏👏. Things that we don't usually think about as ML practitioners but that are highly important. Great insights.
@UglyG82 4 years ago
Great stuff Lex. Thank you!
@UglyG82 4 years ago
And thank you Vivienne for the fantastic insight.
@colouredlaundry1165 4 years ago
I am not an expert in Vivienne Sze's field; however, she is an extremely good lecturer. Every concept was extremely clear.
@pierreerbacher4864 4 years ago
The density of neurons in this channel is incredibly high.
@colouredlaundry1165 4 years ago
Vivienne is incredibly smart, it is a pleasure to listen to her.
@davidvijayramchurn1860 4 years ago
Ironically, if you call someone 'dense' in English slang, it would imply the opposite.
@jayhu6075 4 years ago
Many thanks for sharing with the people who cannot afford to study at MIT. Respect.
@colouredlaundry1165 4 years ago
Agree with you. Respect.
@nikhilpandey2364 4 years ago
I was researching this on my own, and I have been doing network pruning wrong. I wouldn't mind a hit in accuracy if my latency budget were met, but now I think I can be far more frugal with the decrease in accuracy. Thanks a lot.
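For readers who want to try this themselves, below is a minimal sketch of magnitude-based (L1) weight pruning using PyTorch's built-in torch.nn.utils.prune utilities. The toy model and the 50% sparsity target are illustrative assumptions, not taken from the lecture; and, in the spirit of the talk, note that zeroed-out weights only reduce latency or energy if the hardware can actually exploit the sparsity.

```python
# Minimal sketch: magnitude-based pruning with PyTorch's built-in utilities.
# The model and the 50% sparsity target are illustrative assumptions.
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
)

# Zero out the 50% smallest-magnitude weights in each conv layer.
for module in model.modules():
    if isinstance(module, nn.Conv2d):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")  # bake the pruning mask into the weights

# Report overall sparsity. In practice you would re-measure accuracy AND
# latency/energy, since unstructured zeros alone do not guarantee a speedup.
total = sum(p.numel() for p in model.parameters())
zeros = sum(int((p == 0).sum()) for p in model.parameters())
print(f"overall sparsity: {zeros / total:.1%}")
```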
@JonMcGill 3 years ago
I used to be a Field Apps engineer for telecom, and she's certainly correct about the power problem with respect to chip technology. Very likeable lecturer!!
@Happy-wi7ml 4 years ago
Brilliant, thank you!
@ganeshdongari7098 3 years ago
Excellent
@merlinmystique 4 years ago
Thank you, every video you post is incredibly useful. Though it is really hard to enter this field from scratch: everything you learn forces you to go learn a thousand other things, and it gets really frustrating sometimes. I hope this will get better in time.
@Soulixs 3 years ago
Thanks Lex.
@BlackHermit 4 years ago
FastDepth is really interesting. Could be useful for many people.
@XCSme 4 years ago
Great video and an interesting problem. Why stop at the architecture? What about using different materials for specialized DNN hardware? Maybe some lower-power transistors that are less accurate but good enough for inference. I don't think the brain's neurons are always 100% accurate and consistent, but the brain seems to be somewhat fault tolerant.
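A software-level cousin of that idea is reduced-precision arithmetic: inference often tolerates 8-bit integers in place of 32-bit floats. Below is a rough NumPy sketch; the random data and the naive symmetric per-tensor scaling are illustrative assumptions, not the lecture's method.

```python
# Rough sketch: 8-bit quantized matrix-vector product vs. a float32 reference.
# Random data and naive per-tensor scaling are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 128)).astype(np.float32)  # weights
x = rng.normal(size=128).astype(np.float32)        # activations

def quantize(a, bits=8):
    """Symmetric per-tensor quantization to signed integers."""
    scale = np.abs(a).max() / (2 ** (bits - 1) - 1)
    return np.round(a / scale).astype(np.int32), scale

wq, w_scale = quantize(w)
xq, x_scale = quantize(x)

y_fp32 = w @ x                             # full-precision result
y_int8 = (wq @ xq) * (w_scale * x_scale)   # integer MACs, rescaled once at the end

rel_err = np.linalg.norm(y_fp32 - y_int8) / np.linalg.norm(y_fp32)
print(f"relative error with 8-bit operands: {rel_err:.4f}")
```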
@warsin8641 4 years ago
I love this. I will rewatch everything when I'm older and hopefully understand it better and more deeply. I'm only a junior in high school 😖
@tomfillot5453 4 years ago
Maybe start by looking at Crash Course Computer Science. They give a good overview of how a computer actually works, and it should give you more context on the different types of memory, operations, and things like that. Then 3Blue1Brown has an excellent video series on neural networks. A lot of the understanding comes from calculus, but fortunately he also has an excellent series on that!
@alterna19 4 years ago
Warsin I like your avatar
@thusspokeshabistari 4 years ago
Try to watch the video slowly in segmented chunks, write down what you understand and don't understand about each segment, Google what you don't understand, and then get back to the video later.
@ayushdutta8050 4 years ago
Haha, senior year here 😅. AI has no age cutoff, thank god, haha.
@punyaslokdutta4362 4 years ago
Is there a trade-off between the number of filters in a 3D convolution and a 4D convolution? Convolution is a matrix operation (W * input map + bias), and the ReLU activation is mostly there to provide non-linearity. I feel the number of filters is needed to see more abstract features. For instance, the initial layer of a CNN understands pixel-based information, primarily edges, cuts, and depths; the layer following it understands shapes and structures; and the further layers help us understand the semantic meaning of eyes, skin, ears, nose, and face. But how would the model perform if, instead of multiple filters, we had more layers, that is, if the information in the filters were stuffed into additional CNN layers? Or is it done to ease computation while training?
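One way to reason about that trade-off concretely is to count parameters and multiply-accumulates for each option, keeping in mind (a recurring point in the talk) that such counts are only proxies for energy and latency. A small sketch with made-up layer shapes:

```python
# Toy comparison: one conv layer with many filters vs. two stacked layers with
# fewer filters each. All shapes are made-up assumptions for illustration.

def conv_cost(in_ch, out_ch, k, out_h, out_w):
    """Parameters and MACs of a k x k convolution (stride 1, bias ignored)
    producing an out_h x out_w feature map."""
    params = in_ch * out_ch * k * k
    macs = params * out_h * out_w
    return params, macs

H = W = 56  # assumed output feature-map size

# Option A: a single 3x3 layer with 256 filters.
pa, ma = conv_cost(64, 256, 3, H, W)

# Option B: two 3x3 layers with 128 filters each (one extra non-linearity and a
# larger receptive field, with the "filter information" spread across depth).
p1, m1 = conv_cost(64, 128, 3, H, W)
p2, m2 = conv_cost(128, 128, 3, H, W)

print(f"Option A: {pa:>10,} params, {ma:>14,} MACs")
print(f"Option B: {p1 + p2:>10,} params, {m1 + m2:>14,} MACs")
```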
@machinimaaquinix3178 4 years ago
This was a great talk; thank goodness KZfaq has a 0.75x speed mode. She talks fast!
@colouredlaundry1165 4 years ago
She has very high knowledge throughput: 10Gb information per second xD
@kitgary 4 years ago
Genius!!!!!
@masbro1901 2 years ago
1:10:37 it's 100x faster than an FPGA?? Wow, that blows my mind. I thought designing custom hardware for a specialized algorithm on an FPGA was the fastest way on the planet. Is it really?
@adamsimms8528 4 years ago
I'm trying to imagine how relevant this structure will remain as we move to UltraRAM, which is nearly as fast as DRAM but NOT volatile, like memory-stick-type storage. What are the implications if the data can be laid out and accessed in place, right where it is saved? Suddenly the whole structure is no longer useful.
@minhongz 4 years ago
So essentially power consumption and speed are almost equivalent for AI chips. Does anyone know what architecture Tesla chips employ?
@paulrautenbach 4 years ago
While watching this I was seeing parallels with what I know about the Tesla chips from their Autonomy Investor Day presentation. The Tesla chips were designed with an energy budget in mind and so address many of the same things. One advantage the Tesla chips have is that they do not need to be general purpose, so to a large extent they only need to support a single architecture or configuration. In some cases the Tesla chips avoid storing and retrieving intermediate data by passing outputs directly to inputs via hardware channels between successive computational stages implemented as separate hardware. A large proportion of the Tesla chip area is used for static memory to implement what she called global memory, which avoids going off-chip for most values.
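A back-of-the-envelope calculation shows why staying on-chip matters so much. The relative access energies below are rough figures often quoted in the efficient-DNN-processing literature (e.g., the tutorial paper linked in the description); they vary with process node and are assumptions here, not numbers from this talk.

```python
# Toy energy model: relative cost of fetching one operand per MAC from
# different levels of the memory hierarchy. The ratios (register file ~1x,
# on-chip buffer ~6x, off-chip DRAM ~200x the cost of a MAC) are rough
# literature figures and should be treated as assumptions.
RELATIVE_COST = {
    "register file (in PE)": 1,
    "on-chip global buffer": 6,
    "off-chip DRAM": 200,
}

MACS = 100e6  # assumed 100 million MACs for one layer

for level, fetch_cost in RELATIVE_COST.items():
    energy = MACS * (1 + fetch_cost)  # 1 unit for the MAC itself + operand fetch
    print(f"{level:>22}: {energy:.2e} energy units")
```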
@samoha0812 1 year ago
This is exactly what I wanted to hear, thank you. Drawn in by her presentation title, I expected to hear about how an AI chip is designed to minimize energy consumption; a lot of the content focuses on computing algorithms rather than hardware design, but it is a great presentation that provides a comprehensive understanding of computing and energy consumption. Thank you.
@jkobject 4 years ago
What about neuromorphic computing?
@jessenochella4309 4 years ago
Reversible computing uses less power! But you need new chip architectures and algorithms.
@fayssalelansari8584 4 years ago
not bad
@vuththiwattanathornkosithg5625 4 years ago
Tesla hardware 3.0???
@alexanderpadalka5708 4 years ago
🗽
@dapdizzy 4 years ago
I don't think the content presented by those youngsters is on par with the talks by the legends. You know what I mean.