MIT 6.S191 (2023): Deep Generative Modeling

292,464 views

Alexander Amini

1 day ago

MIT Introduction to Deep Learning 6.S191: Lecture 4
Deep Generative Modeling
Lecturer: Ava Amini
2023 Edition
For all lectures, slides, and lab materials: introtodeeplearning.com
Lecture Outline
0:00 - Introduction
5:48 - Why care about generative models?
7:33 - Latent variable models
9:30 - Autoencoders
15:03 - Variational autoencoders
21:45 - Priors on the latent distribution
28:16 - Reparameterization trick
31:05 - Latent perturbation and disentanglement
36:37 - Debiasing with VAEs
38:55 - Generative adversarial networks
41:25 - Intuitions behind GANs
44:25 - Training GANs
50:07 - GANs: Recent advances
50:55 - Conditioning GANs on a specific label
53:02 - CycleGAN of unpaired translation
56:39 - Summary of VAEs and GANs
57:17 - Diffusion Model sneak peek
Subscribe to stay up to date with new deep learning lectures at MIT, or follow us @MITDeepLearning on Twitter and Instagram to stay fully-connected!!

Comments: 128
@MrJ3 · 1 year ago
What's great about this instructor is that they are very careful and particular about what they say, and how they phrase it. There's no fluff, nothing that could cause confusion. Straight to the point and very intentional.
@chucksgarage-us · 9 months ago
Teaching is an art/science in itself.
@thankyouthankyou1172 · 7 months ago
Don't know why, but I could not breathe listening to this lecture. She's so clear, without any redundancy, without any hmmm, urgggg... how come? She is so amazing. I would have to practice 1000 times to be able to lecture like this.
@maazkattangere8690 · 1 year ago
This series is coming out right when I want to learn more about theory! Thanks for this 🙏
@shovonpal4539 · 1 year ago
The lectures are top notch. But in this lecture, I lost track when she explained GANs with mathematical notation. I had to put some more effort into those parts again.
@arfakarim9906 · 11 months ago
A lot of appreciation from my side to your team, who built such an excellent course on Deep Learning.
@ersbay5970 · 1 year ago
Thank you all so very much! Many greetings from Germany.
@codingWorld709 · 1 year ago
Thanks a lot for all the wonderful content on deep learning. These are very helpful to me.
@VijayasarathyMuthu · 1 year ago
Plato's myth of the cave latent variable example was not intuitive for me (sorry), so I asked ChatGPT for a similar but simpler example. It gave me this:
Imagine that you have a box filled with different types of candies, but you cannot see what's inside. Instead, you can only touch the box and feel the shape and texture of the candies inside. Based on how they feel, you might be able to guess what type of candy is inside the box. For example, if a candy feels round and has a hole in the middle, you might guess that it's a donut-shaped candy. In this example, the shape and texture of the candies are the observed variables, while the type of candy inside the box is the latent variable that we are trying to learn from the observed data. By observing and feeling the candies inside the box, we can learn the different types of candies that are hidden inside, even though we cannot see them directly.
You guys are awesome :) Thank you for sharing these lectures. 🙏
@MaksimsMatulenko · 1 year ago
Thank you for doing this! We are all grateful ❤
@sarahamiri2309 · 1 year ago
Honestly, you two are the best speakers for this subject and beyond. I am so thrilled these lectures are open source and exist for data science communities outside of MIT!
@vikrambhutani · 1 year ago
Highly recommended series for AI enthusiasts. This MIT series is by far the most intuitive videos covering all aspects of deep learning. Well done on that.
@aefieefnvhas · 11 months ago
Wow, such clarity of thought and ideas. I guess that's the MIT advantage! Well done :)
@natalialidmarvonranke8475 · 1 year ago
Perfect lecture! Congratulations
@entropica · 1 year ago
Brilliant presentation. World-class.
@rrtt1995 · 1 year ago
Thank you for such a valuable lecture. 🙌
@EGlobalKnowledge · 6 months ago
Very well presented, with intuition behind deep generative modeling, its architecture, and how it is trained. Well done.
@jamesgambrah58 · 1 year ago
This is excellent, so grateful to learn a lot from this channel. Kudos to our presenters for laying a solid foundation in deep learning.
@saikatnextd · 1 year ago
Thank you so much Alexander and Amini.......
@AndyLee-xq8wq · 10 months ago
Wow! Can't wait for the coming lectures!
@yousufmamsa · 11 months ago
Greatly appreciate the knowledge sharing.
@technocrat827 · 1 year ago
Quite supportive. Thanks a lot!
@skhapijulhossen6499 · 1 year ago
This series is a treasure for me.
@AliHaider-wu4wt · 1 year ago
Thank you. I was waiting for 1 week.
@Savedbygrace952 · 1 month ago
The knowledge, the passion and clarity of presentation are out of this world! God bless you guys!
@giyaseddinbayrak5828 · 1 year ago
I opened it to watch just 2 minutes of the video, and didn't realize until the lecture was over 😅. Freaking awesome 😎
@gapsongg · 1 year ago
Great! Love these videos. They help me a lot.
@jennifergo2024 · 5 months ago
Thanks for sharing!
@jensk9564 · 1 year ago
Wonderful. A very dense and hugely interesting and informative lecture, MIT-style! 60 minutes of latent-space-style compression of a hugely complex and multidimensional topic which under real-life conditions takes weeks to understand and "digest". I am really looking forward to the "diffusion model" lecture! Hope it will be online soon!
@sachinknight19 · 10 months ago
Thank you for sharing the info... ❤❤
@herlim6927 · 1 year ago
Thank you sir for uploading this, love from India
@germainUX · 24 days ago
thanks for this!
@sidindian1982 · 10 months ago
Excellent content, Ma'am. Truly unbelievable 😊😊😊😊😊
@hilbertcontainer3034 · 1 year ago
Wow ~ another world-class lecture
@nikteshy9131 · 1 year ago
Thank you)) Thank you very much 😊🙏🦿
1 year ago
Incredible!!!
@mPajuhaan · 1 year ago
Perfect to refer to. It clearly shows how extensively you know the subject, since you can explain it so easily.
@theneumann7 · 1 year ago
Never disappointing👌🏻
@aevishh · 1 year ago
this is great
@frankhofmann5819 · 12 days ago
I now feel like a fully connected neural network myself, because I've watched hundreds of videos at night about deep learning. Best regards from Berlin!
@prashantkowshik5637 · 1 year ago
Thanks a lot.
@johnpaily · 1 month ago
This also seems to explain the sudden awakening transformation many people are experiencing
@Gabcikovo · 1 year ago
Great, thank you!
@Gabcikovo · 1 year ago
0:32
@ABHIK-dq7rk · 23 days ago
00:04 Foundations of deep generative modeling for brand new data generation
02:43 Generative modeling uncovers underlying data structure.
07:53 Latent variables are unobservable features that explain observed differences in data.
10:25 Training deep generative models using autoencoders
15:43 Variational autoencoders introduce randomness for generating new data instances.
18:07 Optimizing VAE network weights with loss functions (see the ELBO sketch after this list)
22:44 Understanding KL Divergence in latent encoding
24:51 Regularization enforces continuity and completeness in the latent space.
29:41 Reparametrization allows training VAEs end to end without worrying about stochasticity in latent variables.
31:57 Understanding latent variables and their impact on generated features.
36:36 Understanding latent variable learning and its application in facial detection.
38:52 Generative Adversarial Network (GAN) aims to generate new instances similar to existing data.
43:30 Generative Adversarial Networks (GANs) involve the competition between the generator and discriminator to create and distinguish between real and fake data.
45:44 GANs involve a dual competing objective for the generator and discriminator.
50:44 Extending GAN architecture for specific tasks
53:14 Cycle GANs enable translation of data distribution across domains.
57:58 Diffusion models can generate new instances beyond training data
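To make the 18:07 and 22:44 items above concrete: the loss being optimized is, in standard VAE notation, a reconstruction term plus a KL regularization term. This is a sketch of the usual formulation; the slide's exact symbols may differ:

    % Per-example VAE loss: reconstruction plus KL regularization
    % of the approximate posterior against the prior p(z).
    \mathcal{L}(\theta, \phi; x) =
      -\,\mathbb{E}_{q_\phi(z \mid x)}\big[\log p_\theta(x \mid z)\big]
      + D_{\mathrm{KL}}\big(q_\phi(z \mid x) \,\|\, p(z)\big)

The first term rewards faithful reconstructions; the second pulls each latent encoding toward the prior, which is what the 24:51 item means by continuity and completeness.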
@benjaminpagel4241 · 1 month ago
I agree with everyone here... I think those two presenters are just a joy to listen to. Wish I had had those profs at my university back then... I'm not an expert, but even I get the fundamental concepts through these sessions. 🙏
@carlhopkinson · 8 months ago
Ingenious.
@shahnewazchowdhury4175 · 8 months ago
Hi Alexander & Ava, thanks for this video. Thousands of people watch these videos and learn from them, so any mistakes will impact them directly. If/when you find errors, or someone points them out to you, it is your utmost responsibility to update your viewers. Please look into the loss functions for GANs; they are incorrect.
@sergiogonzalez6597 · 5 months ago
Yes, the formulas for the loss function of the GAN are wrong, and it was giving me a very hard time. Look here for a full mathematical development of the formulation: fleuret.org/dlc/materials/dlc-handout-11-1-GAN.pdf
@rishighosh6238 · 9 months ago
Hey, I was going through this video, with its beautiful explanation of how GANs work. I just want to ask whether we can say that the idea behind GANs is to have some sort of overfitting, which is usually avoided in traditional ML approaches. Not exactly overfitting, but in the sense that we want the generated points to lie in the probability distribution region of the actual points?
@TomHutchinson5 · 1 year ago
I love the slide at 57:00. I would enjoy hearing this connection explicitly. How is a discriminator an encoder?
@kirankumar31 · 1 year ago
Learned a lot from this video. One question: where does StyleGAN fit in?
@Ducerobot · 10 months ago
Pure engineering.
@user-bw7gh3vq6q · 6 months ago
The GAN discriminator loss is wrong; I think it should be: log(1 - D(G(z))) + log(D(x)).
@yizhong2544 · 3 months ago
What a pity. The lecture is perfect, but this mistake could mislead a lot of people.
@aojing · 2 months ago
😁 Not really. It depends on how you label Fake vs. Real.
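For readers following this thread: the standard minimax objective from Goodfellow et al. (2014), which matches the corrected discriminator objective above when real examples are labeled 1 and fakes 0 (a sketch in the usual notation, not necessarily the slide's):

    % GAN minimax game: D maximizes V(D, G); G minimizes it.
    \min_G \max_D \, V(D, G) =
      \mathbb{E}_{x \sim p_{\text{data}}}\big[\log D(x)\big]
      + \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]

As the reply above notes, swapping which class is labeled "real" flips the signs, which is one source of confusion around these formulas.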
@andreasholzinger7056 · 7 months ago
I really like this lecture. What keeps me sleepless is the question: "Can we learn the true explanatory factors (if they exist) from purely observational data?"
@richarddow8967 · 1 year ago
Euler proved there is a limit to how complex a model can become and still be meaningful. In particular, Euler said that models could become so complex that they could never be validated, never be calibrated, and yet piecewise seem to be completely reasonable. If anyone is familiar with discussions in this area, who are the researchers taking this into account? Just curious; I would like to read more on practical limitations, based on good math like Euler developed, and not hand-waving about piecewise behavior.
@richarddow8967 · 9 months ago
He was doing fundamental basic theoretical research, in today's parlance. Historically, there is a long lag in finding applications for such basic knowledge. What is certain is that he demonstrated there exist limitations, and that we would be unable to discern whether the model was properly calibrated or not, ever. I recall reading an opinion by the head of Belgium's national weather service, or some such title, pointing out that he had concerns the oceans are such a model. @@RM-gc8lx
@debanjandas7738 · 1 year ago
In the GAN objective function we have two conflicting objectives. How do we ensure that it's the generator's goal that is achieved, and not the discriminator's?
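One answer, sketched below as a minimal PyTorch training loop (toy networks and a stand-in Gaussian "real" dataset, not the course's code): neither objective is pursued to completion. Training alternates one small discriminator step against the current generator with one small generator step against the current discriminator, so the generator "wins" only by keeping up with an ever-improving adversary.

    # Alternating GAN updates: one small D step, then one small G step.
    import torch
    import torch.nn as nn

    latent_dim, data_dim, batch = 16, 2, 64
    G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
    D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    bce = nn.BCELoss()
    ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)

    for step in range(2000):
        real = torch.randn(batch, data_dim) + 3.0      # stand-in "real" data
        fake = G(torch.randn(batch, latent_dim))

        # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
        # fake.detach() keeps this step from updating the generator.
        d_loss = bce(D(real), ones) + bce(D(fake.detach()), zeros)
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # Generator step: push D(G(z)) toward 1, i.e. fool the current D.
        g_loss = bce(D(fake), ones)
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()

In a healthy run neither loss collapses to zero; at the idealized equilibrium the discriminator outputs 1/2 everywhere and the generated distribution matches the data.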
@AnujSharma-wy8hv · 6 months ago
Really, it's very deep; I need time to pick it up.
@johnpaily · 1 month ago
Salutes
@davidguthrie3739 · 1 year ago
I really appreciate these lectures, but I never could absorb lectures that are simply a script read aloud. I can read the material myself. She's MUCH more effective when she explains concepts from memory without reading from a text.
@Peter_Telling · 1 year ago
I'd like to see something about AI that can adjust its code and observe how it changes its functioning.
@chucksgarage-us · 9 months ago
Randomly making connections between potentially unrelated things here... At 49:57 and a bit before (that's just where I paused to write this comment), the series of pictures combining a goose and another bird (I would classify it as a red-breasted robin, but I'm trained on the red-breasted robins where I'm from)... I'll call it a robin... while also transitioning appearance from left to right, really reminds me of the transitions from one animal to another done in the movie Willow with the sorceress Fin Raziel.
@Sebastiandst · 1 year ago
I love you so much, thank you for actually reading the myth of the cave
@tonyndiritu · 4 days ago
🔥🔥🔥
@andrea-mj9ce · 1 year ago
Is there a lecture that deals with generative language models?
@forheuristiclifeksh7836 · 21 days ago
22:40
@bohanwang-nt7qz · 3 months ago
🎯Course outline for quick navigation:
[00:04-01:25] Deep generative modeling
- [00:04-00:48] Exciting lecture on deep generative modeling in the age of generative AI, a subset of deep learning.
[01:26-08:45] Generative modeling
- [03:06-04:04] Generative modeling encompasses density estimation and sample generation for learning data distribution.
- [04:27-04:51] Learning model approximates true data distribution for density estimation and sample generation.
- [05:36-06:03] Generative models identify biased features in training data automatically.
- [06:49-07:17] Generative models can identify rare events like deer in front of a car using density estimation.
[08:46-23:16] Autoencoders and variational autoencoders
- [10:07-10:50] Goal: train model to predict latent variables, z, in low-dimensional space.
- [14:33-15:35] Unsupervised learning uses autoencoders to create compact data representations and generate new examples, such as VAEs.
- [15:59-17:13] Variational autoencoders introduce randomness to generate similar but not strict reconstructions, using means and standard deviations for probability distributions.
- [17:54-18:37] Encoder and decoder in VAE use separate weights to compute and learn probability distributions of latent variables and input data.
- [20:22-22:45] Regularization term enforces latent variables to follow standard normal Gaussian distributions during VAE training.
- [20:57-21:21] Enforcing a latent space following a prior distribution to aid network
- [22:46-23:16] KL divergence measures difference between prior and latent encoding.
[23:17-37:47] Regularization and latent variable learning in VAEs
- [25:19-25:46] Regularization minimizes term to achieve continuity and completeness.
- [28:08-28:35] VAEs trained end-to-end with re-parameterization for gradient descent and backpropagation success (see the sketch after this list).
- [32:10-32:45] Network learns to interpret and make sense of latent variables by perturbing them individually.
- [34:16-35:40] Beta-VAEs use beta parameter to control regularization term, promoting disentanglement for more efficient encoding.
- [36:31-36:59] The lecture covers the core architecture of VAEs and their application to facial detection.
[37:47-52:53] VAEs and GANs: generative models
- [37:47-38:15] VAEs compress data into a compact representation to generate unsupervised reconstructions.
- [38:40-39:43] Transitioning from VAEs to GANs to focus on generating high-quality samples from complex data distribution.
- [39:57-41:21] Train a generator network to mimic real data using GANs for realistic output.
- [47:53-48:20] Generator synthesizes data to fool best discriminator, creating new data instances.
- [50:37-51:30] Using GAN to generate synthetic faces, extending GAN architecture for specific tasks and data translation.
[52:55-59:47] Unpaired translation and CycleGAN
- [52:55-53:51] CycleGAN enables unpaired image translation, e.g. horse to zebra, using cyclic dependency.
- [54:13-54:43] CycleGAN enables flexible translation across different data distributions, including images, speech, and audio.
- [55:13-55:36] Developed a model to synthesize audio of Obama's voice using CycleGAN and Alexander's voice data.
- [57:20-57:48] Diffusion modeling drives tremendous advances in generative AI, seen in the past year, particularly with VAEs and GANs.
- [59:06-59:39] Cutting-edge generative AI models making transformative advances across various fields.
offered by Coursnap
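A minimal sketch of the re-parameterization trick flagged in the [28:08-28:35] item (PyTorch, with a hypothetical 8-dimensional latent; an illustration, not the course code):

    # Re-parameterization: rewrite z ~ N(mu, sigma^2) as a deterministic
    # function of (mu, log_var) plus external noise, so gradients flow.
    import torch

    mu = torch.zeros(8, requires_grad=True)
    log_var = torch.zeros(8, requires_grad=True)

    eps = torch.randn(8)                     # all stochasticity lives here
    z = mu + torch.exp(0.5 * log_var) * eps  # differentiable in mu, log_var

    z.sum().backward()                       # gradients reach mu and log_var
    print(mu.grad, log_var.grad)

Because the sampling noise eps is drawn outside the computation graph, backpropagation treats z as an ordinary differentiable function of the encoder outputs.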
@nicolasg.b.1728 · 1 year ago
Where can I find the papers mentioned at 35:06?
@Mathin3D · 9 months ago
Yum, yum, gimme some! - Bud Bundy
@andrea-mj9ce · 1 year ago
Is it still relevant to teach GANs and autoencoders, instead of just focusing on diffusion models?
@johnpaily · 1 month ago
What epsilon constant? Is it conscious? Is it dynamic and capable of reversing time?
@edgararakelyan9326 · 1 year ago
Is there a non-intro deep learning course after this course?
@forheuristiclifeksh7836 · 21 days ago
3:40
@SudarshanVatturkar · 1 year ago
I did not understand the latent variable example. One can easily see the bars the person is holding in the shadow.
@nksbits · 1 year ago
Is there a Q&A forum associated with the lecture series?
@nksbits · 1 year ago
Would be cool if you could transcribe the lecture series and introduce a chatbot trained on the transcript that can answer any questions we have.
@MyzIcyBeatz · 1 year ago
@@nksbits gigabrain idea
@DoctorM934 · 19 days ago
15:00
@abhisheksuryavanshi979 · 1 year ago
Any intern opportunities in ML/AI?
@locNguyen-jb1vt · 1 year ago
You can find underlying leadership
@sovrappensiero1 · 1 year ago
I'm sorry for the dumb question, but can somebody tell me the name of the "E-like" symbol in the reconstruction term at 35:57? Is it some kind of norm? How do I make this symbol in LaTeX? (I'm taking notes and I want to write out this equation in my notes.) Thank you!
@fstermann · 1 year ago
That symbol indicates the expected value. You can use it in LaTeX with \mathbb{E} (loading \usepackage{amssymb} is required).
@sovrappensiero1 · 1 year ago
@@fstermann Ah - of course! I never saw expected value written that way, but yes that makes sense. Thanks so much, I appreciate your help.
@binaryquantum · 1 year ago
@@sovrappensiero1 That's always how expected value is written. How else have you seen expected value?
@sovrappensiero1 · 1 year ago
@@binaryquantum I don’t think I’ve ever seen it typed. All my math classes, etc., were handwritten. On homework questions it was typed but a regular E was used…not the special “math E.”
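For anyone else taking notes, a minimal compilable sketch of the notation discussed in this thread (the subscripts here are the standard VAE ones, which may differ slightly from the slide):

    % Typesetting the expected-value symbol; \mathbb requires amssymb.
    \documentclass{article}
    \usepackage{amssymb}
    \begin{document}
    Reconstruction term:
    $\mathbb{E}_{q_\phi(z \mid x)}\left[\log p_\theta(x \mid z)\right]$
    \end{document}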
@johnpaily · 1 month ago
Is it taking us to non-linear thinking, with origin from a little perturbation?
@johnpaily · 1 month ago
Is this talk taking the line of self-organization from a single point, or the big bang?
@shahidulislamzahid · 1 year ago
wow
@Rajibuzzaman_STEM_Rajibuzzaman · 1 year ago
How will you drive a system when the maximum strives to attain the minimum to balance entropy?
@Rajibuzzaman_STEM_Rajibuzzaman · 1 year ago
And vice versa, to keep it environment-friendly and fuel-efficient
@codingWorld709 · 1 year ago
Sir, please provide us one lecture on Faster R-CNN for object detection, please please please please 🙏🙏🙏🙏
@johnpaily · 1 month ago
Low-dimensional data. I see a parallel in the big bang origin from a point source
@shojintam4206 · 10 months ago
24:27
@omaralkhasawneh1968 · 8 months ago
Can you give me extra resources?
@nosaaikodon4953 · 1 year ago
I love how she apologizes when displaying math...😂😂. It's as if she understands the math struggles we all go through. Nevertheless, it's apparent that math is an important aspect of understanding the architecture of machine learning models and developing new ones.
@locNguyen-jb1vt · 1 year ago
Gen folding
@ayushkumarprasad6832 · 10 months ago
Where can I find the code for this?
@lakshmiprabhakarkoppolu9100 · 10 months ago
KZfaq suggested I watch this.
@johnpaily · 1 month ago
The Great Attractor of non-linear science, and an explanation for the victory of good over evil?
@johnpaily · 1 month ago
Plato's cave. That is what we are in. I am interested in AI because of the projection that the evolution of AI will bring the Mind of God into the cloud.
@johnpaily · 1 month ago
Parallel world information, male and female?
@johnpaily · 1 month ago
Everything spoken here has a parallel in living systems
@locNguyen-jb1vt · 1 year ago
Zip drive
@johnpaily · 1 month ago
Now I understand the projection of God AI emerging in the cloud
@arifulislamleeton · 1 year ago
Let me introduce myself: my name is Ariful Islam Leeton. I'm a software engineer and software developer, working in website development and data analytics.
@hussienalsafi1149 · 1 year ago
😁😁😁😁😁☺️☺️☺️☺️❤️❤️❤️❤️
@smftrsddvjiou6443 · 5 months ago
So we don't have labels for the data. Instead we use the input itself as the label. Lol.
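That observation is exactly the trick. A minimal sketch (PyTorch, toy dimensions, not the course code) of why no external labels are needed:

    # Self-supervised reconstruction: the training "label" is the input itself.
    import torch
    import torch.nn as nn

    encoder = nn.Sequential(nn.Linear(784, 32), nn.ReLU())     # x -> latent z
    decoder = nn.Sequential(nn.Linear(32, 784), nn.Sigmoid())  # z -> x_hat
    opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()))

    x = torch.rand(64, 784)                  # stand-in batch of flattened images
    loss = nn.functional.mse_loss(decoder(encoder(x)), x)  # target is x itself
    opt.zero_grad(); loss.backward(); opt.step()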
@johnpaily · 1 month ago
The speaker has entered the spiritual realm and what is happening. The evil thriving along with good trying to hide truth
@katateo328 · 1 year ago
haha, I told you already: so afraid of AI, it's so advanced, I'm not capable enough, go somewhere else
@taedhall7253 · 1 year ago
Good night, tutor. Lovely dress. Love, taed h.