Lecture 03 - Pruning and Sparsity (Part I) | MIT 6.S965

12,276 views

MIT HAN Lab

1 year ago

Lecture 3 introduces the basics of neural network pruning, which can reduce the parameter count of a neural network by more than 90%, shrinking storage requirements and improving computational efficiency. The lecture walks through each step of the pruning pipeline and introduces the different granularities and criteria of neural network pruning (a minimal code sketch follows the slides link below).
Keywords: Neural Network Pruning, Pruning, Magnitude-based Pruning, Channel Pruning, Fine-grained Pruning
Slides: efficientml.ai/schedule/
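Among the criteria named in the keywords is magnitude-based pruning: rank weights by absolute value and zero out the smallest ones until a target sparsity is reached. The following is a minimal NumPy sketch of that idea at fine (per-element) granularity, not code from the lecture; the function name `magnitude_prune`, the 4x4 toy layer, and the 75% sparsity target are illustrative assumptions.

```python
import numpy as np

def magnitude_prune(weight: np.ndarray, sparsity: float) -> np.ndarray:
    """Fine-grained magnitude pruning (illustrative sketch): return a
    boolean mask that zeroes roughly `sparsity` of the entries, keeping
    the largest-magnitude weights."""
    num_zeros = int(round(sparsity * weight.size))
    if num_zeros == 0:
        return np.ones(weight.shape, dtype=bool)
    # Threshold = the num_zeros-th smallest absolute value; ties at the
    # threshold may prune slightly more than requested.
    threshold = np.sort(np.abs(weight), axis=None)[num_zeros - 1]
    return np.abs(weight) > threshold

# Usage: prune a hypothetical 4x4 layer to 75% sparsity.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4))
mask = magnitude_prune(W, sparsity=0.75)
W_pruned = W * mask
print(f"kept {int(mask.sum())} of {W.size} weights")
```

Fine-grained pruning like this gives the highest compression ratio but produces irregular sparsity; channel pruning, also covered in the lecture, instead removes whole channels (e.g., ranked by their L2 norm), which keeps the weight tensors dense and hardware-friendly at some cost in accuracy.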
--------------------------------------------------------------------------------------
TinyML and Efficient Deep Learning Computing
Instructor:
Song Han: songhan.mit.edu
Have you found it difficult to deploy neural networks on mobile and IoT devices? Have you ever found it too slow to train neural networks? This course is a deep dive into efficient machine learning techniques that enable powerful deep learning applications on resource-constrained devices. Topics cover efficient inference techniques, including model compression, pruning, quantization, neural architecture search, and distillation; efficient training techniques, including gradient compression and on-device transfer learning; application-specific model optimization techniques for videos, point clouds, and NLP; and efficient quantum machine learning. Students will get hands-on experience implementing deep learning applications on microcontrollers, mobile phones, and quantum machines through an open-ended design project related to mobile AI.
Website:
efficientml.ai/
