Building our first simple GAN

106,556 views

Aladdin Persson

In this video we build a simple generative adversarial network based on fully connected layers and train it on the MNIST dataset. It's far from perfect, but it's a start and will lead us to implement more advanced and better architectures in upcoming videos.
❤️ Support the channel ❤️
/ @aladdinpersson
Paid Courses I recommend for learning (affiliate links, no extra cost for you):
⭐ Machine Learning Specialization bit.ly/3hjTBBt
⭐ Deep Learning Specialization bit.ly/3YcUkoI
📘 MLOps Specialization bit.ly/3wibaWy
📘 GAN Specialization bit.ly/3FmnZDl
📘 NLP Specialization bit.ly/3GXoQuP
✨ Free Resources that are great:
NLP: web.stanford.edu/class/cs224n/
CV: cs231n.stanford.edu/
Deployment: fullstackdeeplearning.com/
FastAI: www.fast.ai/
💻 My Deep Learning Setup and Recording Setup:
www.amazon.com/shop/aladdinpe...
GitHub Repository:
github.com/aladdinpersson/Mac...
✅ One-Time Donations:
Paypal: bit.ly/3buoRYH
▶️ You Can Connect with me on:
Twitter - / aladdinpersson
LinkedIn - / aladdin-persson-a95384153
Github - github.com/aladdinpersson
OUTLINE:
0:00 - Introduction
0:29 - Building Discriminator
2:14 - Building Generator
4:36 - Hyperparameters, initializations, and preprocessing
10:14 - Setup training of GANs
22:09 - Training and evaluation
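The networks described in the outline above are small fully connected models. A minimal sketch of what they might look like in PyTorch (the 128/256 hidden sizes follow the video's setup, but they are just one reasonable choice, not the only one):

```python
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    # Maps a flattened 28x28 MNIST image (784 values) to a real/fake score.
    def __init__(self, img_dim=784):
        super().__init__()
        self.disc = nn.Sequential(
            nn.Linear(img_dim, 128),
            nn.LeakyReLU(0.1),
            nn.Linear(128, 1),
            nn.Sigmoid(),  # probability that the input is real
        )

    def forward(self, x):
        return self.disc(x)

class Generator(nn.Module):
    # Maps a z_dim-dimensional noise vector to a flattened fake image.
    def __init__(self, z_dim=64, img_dim=784):
        super().__init__()
        self.gen = nn.Sequential(
            nn.Linear(z_dim, 256),
            nn.LeakyReLU(0.1),
            nn.Linear(256, img_dim),
            nn.Tanh(),  # outputs in (-1, 1), matching normalized MNIST
        )

    def forward(self, x):
        return self.gen(x)
```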

Comments: 157
@AladdinPersson 3 years ago
If you're completely new to GANs I recommend you check out the GAN playlist, where there is an introduction video to how GANs work, and then watch this video where we implement the first GAN architecture from scratch. If you have recommendations on GANs that you think would make this an even better resource for people wanting to learn about GANs, let me know in the comments below and I'll try to do it :)

I learned a lot and was inspired to make these GAN videos by the GAN specialization on Coursera, which I recommend. Below you'll find both affiliate and non-affiliate links; the pricing for you is the same, but a small commission goes back to the channel if you buy through the affiliate link. Affiliate: bit.ly/2OECviQ Non-affiliate: bit.ly/3bvr9qy

Here's the outline for the video:
0:00 - Introduction
0:29 - Building Discriminator
2:14 - Building Generator
4:36 - Hyperparameters, initializations, and preprocessing
10:14 - Setup training of GANs
22:09 - Training and evaluation
@sourabharsh16 1 year ago
Thanks a lot for the video. It really helped me understand the nuances of GANs and helped me write one from scratch as well. Keep on going, buddy!
@saurabhjain507 3 years ago
Very nicely explained. Loved your clarity.
@aminasadi1040 6 months ago
Awesome video, you explain exactly what should be explained, I love it!
@philwhln 3 years ago
Nice intro to GANs, thanks!
@hackercop 2 years ago
This worked for me, thanks, am enjoying this playlist!
@kae4881 3 years ago
Dang Man! Love your videos, you're EPIC!!!
@maqboolurrahimkhan 3 years ago
Thanks! Awesome and simple implementation :)
@aadarshraj1890 3 years ago
You Are Awesome😎😎. Please Continue This Series...Thanks For Awesome Video Series
@car6647 2 years ago
Thanks a lot, now I have a better understanding of GANs.
@aymensekhri2133 2 years ago
Thank you very much! I learned lots of things.
@mohsenmehranian7571 2 years ago
Thanks, it was a very good video!
@niveyoga3242 3 years ago
Heyo, awesome vid as always! I wanted to ask you if you could do some variational autoencoders in pytorch & maybe also cover some of the mathematics of the special variants, if you are interested (i.e. as you're doing for GANs)? :)
@icanyagmur 1 year ago
Nice work!
@mohammedshehada5373 2 years ago
Thanks for the amazing content, really helpful. Can we have some GAN stuff using audio data, please? Voice cloning maybe? Thanks again.
@sardorabdirayimov 1 year ago
Great effort. Good tutorial
@vatsal_gamit 3 years ago
You're like magic 🔥
@ShahryarSalmani 3 years ago
Perfect explanation of the loss function and why we use the minimization instead of maximization of Discriminator.
@Htyagi1998 2 years ago
Doing minimization of anything is way simpler and faster in terms of computation rather than computing maxima
@bashirsadeghi2821 4 months ago
Great Tutorial.
@noamsalomon01 2 years ago
Thank you, helped me a lot.
@dvrao7489 3 years ago
Really love this series man!! Just a quick question though: why did we use fixed_noise and noise differently? In the training part, could we not have used fixed_noise as the input to the generator, because noise is noise, right? Does it matter if we start from the same point?
@generichuman_ 1 year ago
Fixed noise is used to display the images to track the progress of the GAN. Fixed means it doesn't change over time, so if you were to use this in training, you would be feeding the GAN the same vector over and over again, and the GAN would only be able to generate a single image, and the rest of the latent space would remain unexplored.
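The distinction can be sketched in a few lines (variable names mirror the video's code):

```python
import torch

torch.manual_seed(0)
z_dim, batch_size = 64, 32

# Sampled once before training and reused only when logging sample images,
# so generator progress is measured on the same latent points every epoch.
fixed_noise = torch.randn(batch_size, z_dim)
snapshot = fixed_noise.clone()

fresh_batches = []
for step in range(2):
    # Resampled every training step: the generator must learn to map the
    # whole latent distribution, not one memorized vector.
    noise = torch.randn(batch_size, z_dim)
    fresh_batches.append(noise)
```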
@shambhaviaggarwal9977 2 years ago
What changes would there be in the code if we used disc(fake).detach() instead? Would there be any changes on line 77? (at 18:34)
@rabia3746 2 years ago
Hello. Thanks for the video. I tried this code exactly, except that I used 400 epochs, but the fake images still look like noise. How did you get these results on TensorBoard? Can you please share the hyperparameters that you used?
@christianc8265 2 years ago
From experience, mixing ReLU with tanh does not work super well; this is also a point you might add to your final list of possible improvements, e.g. using tanh throughout the generator.
@taylorhawkes 29 days ago
Thanks!
@thederivo5545 26 days ago
Hello, I love your videos as they are very precise, but how do I view this in Colab instead of TensorBoard?
@pocketchamp 3 years ago
Thank you so much for the material, this is awesome! I have a small question: why would it be `disc.zero_grad()` instead of `opt_disc.zero_grad()`? In general, are these two statements interchangeable?
@leviack4396 1 year ago
Yeah, I'm confused about that too, dude.
@VuongNguyen-jr4gl 1 year ago
they're the same
@minister1005 9 months ago
Both are the same, since the optimizer holds the model's parameters.
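A quick check of the equivalence, under the assumption that the optimizer was constructed over all of the model's parameters (as in the video):

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 1)
opt = torch.optim.Adam(model.parameters(), lr=3e-4)

# First backward pass: gradients accumulate on model parameters.
model(torch.randn(2, 4)).sum().backward()
model.zero_grad()   # clears .grad on every parameter of `model`

# Second backward pass, cleared through the optimizer instead.
model(torch.randn(2, 4)).sum().backward()
opt.zero_grad()     # clears .grad on every parameter the optimizer holds

# The two calls are interchangeable here only because the optimizer was
# built over all of model.parameters(); they would differ if the optimizer
# held just a subset of them.
```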
@HungDuong-dt3lg 2 years ago
On line #66, why does the gen function only take in one argument, noise? I thought it must take in two arguments, z_dim and img_dim. Can you explain, please?
@123epsilon 3 years ago
Hi, can you explain why we would use BCE loss on the generator as well, and why we would compare it to a tensor of 1s? It makes sense to me to use it for the discriminator, as it is a classifier, but is the generator not doing some form of regression?
@ibrahimaba8966 2 years ago
The formula is log(1 - D(G(Z))). So we use it on the discriminator.
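Numerically, BCE with an all-ones target reduces to -log(D(G(z))), the non-saturating generator loss from the original GAN paper; a minimal check (the `output` values are made up for illustration):

```python
import torch
import torch.nn as nn

criterion = nn.BCELoss()
# Hypothetical discriminator outputs D(G(z)) for a batch of fakes.
output = torch.tensor([0.3, 0.8, 0.5])

# BCE(x, 1) = -[1*log(x) + 0*log(1-x)] = -log(x), so training the generator
# against a target of ones minimizes -log(D(G(z))).
lossG = criterion(output, torch.ones_like(output))
manual = -torch.log(output).mean()
```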
@yardennegri874 1 year ago
How do you get the images to show on TensorBoard?
@tmspeeches8405 2 years ago
How do we get the training accuracy at each epoch?
@deepudeepak1390 3 years ago
Awesome!! One request from me: can you make a video on text-to-image using GANs, please!!!
@mariamnaeem443 3 years ago
Nice video, thanks. Can you please make a video on RCGAN?
@cowmos9276 1 year ago
thank you~
@abhayanandtripathi9562 3 years ago
how can i include tensorboard features within the GAN ipynb file to visualize the log files?
@nitishrawat6872 3 years ago
If you're using Google Colab, just add these lines before training your model:
%load_ext tensorboard  # loads the TensorBoard notebook extension
%tensorboard --logdir logs
@samernoureddine 2 years ago
When computing lossD, what is the difference in practice between summing versus averaging lossD_real and lossD_fake? @15:20
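In practice, summing versus averaging only rescales the loss, and hence every gradient, by a constant factor of 2. A small sketch with a stand-in linear discriminator (the names and shapes are hypothetical, not the video's exact code):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
disc = nn.Linear(4, 1)                 # stand-in discriminator (logits out)
criterion = nn.BCEWithLogitsLoss()
x_real, x_fake = torch.randn(8, 4), torch.randn(8, 4)

lossD_real = criterion(disc(x_real), torch.ones(8, 1))
lossD_fake = criterion(disc(x_fake), torch.zeros(8, 1))

loss_sum = lossD_real + lossD_fake
loss_avg = (lossD_real + lossD_fake) / 2
# Averaging just halves the loss, so every gradient is scaled by 0.5; with
# plain SGD that is equivalent to halving the learning rate, and Adam's
# per-parameter normalization makes even that difference mostly vanish.
```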
@Astrovic1 1 year ago
What's the song at 20:00 called? Sounds so chill, it made me move like on the dancefloor while at my desk learning GANs with you.
@maxim2727 1 year ago
Why is your TensorBoard automatically updating with the new images? For me, I have to refresh the page in order for it to update.
@joefahy4806 2 years ago
what program do you do this in?
@sidrasafdar7325 2 years ago
Very good explanation of each and every line of code. Can you please make a video on how to optimize GANs with the Whale Optimization Algorithm? I have to do my project on GANs, and my base paper is "Automatic Screening of COVID‑19 Using an Optimized Generative Adversarial Network". I have searched a lot for how to optimize GANs this way but couldn't find any related results. Please help me, as you have detailed knowledge about GANs.
@mustafashaikh116 5 months ago
Question: why do we use zero_grad with disc and gen and not with opt_disc and opt_gen?
@ahsannadeem746 3 years ago
Is it possible to train this GAN with a .csv dataset?
@Huy-G-Le 2 years ago
The code runs great, but how did you make those images at 20:37 appear???? I've been trying to do that in Google Colab; the code works, but no images.
@ZOBAER496 2 months ago
Same question.
@Arya-cn4kk 1 month ago
@@ZOBAER496 Run %load_ext tensorboard and %tensorboard --logdir logs in a separate cell.
@wongjerry3229 2 years ago
I think at 18:22 using detach is better: for one thing, retain_graph=True costs more memory, and for another, if we don't use detach we optimize the parameters in G when we train D.
@minister1005 9 months ago
If we use detach, what is the point of disc_fake? disc_fake = disc(fake.detach()).view(-1), and if we do a backward() we get no grads out of it (because fake.detach()'s requires_grad=False), which means no update happens here.
@minister1005 9 months ago
Ah, my bad. fake.detach() won't get updated, but disc()'s parameters will.
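For reference, a sketch of the detach-based variant discussed in this thread (the module definitions are simplified stand-ins for the video's models):

```python
import torch
import torch.nn as nn

gen = nn.Sequential(nn.Linear(64, 784), nn.Tanh())
disc = nn.Sequential(nn.Linear(784, 1), nn.Sigmoid())
criterion = nn.BCELoss()
opt_disc = torch.optim.Adam(disc.parameters(), lr=3e-4)

noise = torch.randn(32, 64)
fake = gen(noise)

# fake.detach() cuts the graph at the generator boundary: backward() then
# reaches only the discriminator's parameters, and the undetached `fake`
# can still be reused afterwards for the generator's own update.
disc_fake = disc(fake.detach()).view(-1)
lossD_fake = criterion(disc_fake, torch.zeros_like(disc_fake))
opt_disc.zero_grad()
lossD_fake.backward()
opt_disc.step()
```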
@eyakaroui3718 3 years ago
How can I flip the labels, 1 for fake and 0 for real? Thanks a lot, this video is helping me a lot!!! 😍
@morganhunt803 1 month ago
Why are we using 128 nodes in the Discriminator class? Isn't that kind of a random number? And why 256 in the Generator?
@SAINIVEDH 3 years ago
Is the equation in the intro the cross-entropy loss function?!
@MorisonMs 3 years ago
Question: 18:18, code line 77. We have to compute disc(fake) twice? Can't we simply write "output = disc_fake"? (I thought we added retain_graph=True in order to avoid computing disc(fake) twice.)
@AladdinPersson 3 years ago
We do retain_graph so that we don't have to compute fake twice, so we can re-use the same image that has been generated. We send it through the discriminator again because we updated the discriminator, and the way I showed in the video is the most common setup I've seen when training GANs, although it would probably also work if you reused disc_fake from before.
@MorisonMs 3 years ago
@@AladdinPersson Got you...! Thanks a lot
@travelthetropics6190 2 years ago
how would it be different if we use AdamW instead of Adam?
@nark4837 2 years ago
Hey, so this simple GAN generates any number? What I mean is, the neural networks have not learnt the features of 0, 1, 2, 3, ... individually; they have learnt what features make up a number in general? Then when z, the random sample from a distribution, is plugged into the generator, it generates a random number because of the noise it was given? Hence, the results could be better if you created a GAN pair for each individual digit, which would obviously take a lot more training time, and the networks would be mutually exclusive and not random, so you'd have a GAN pair that generates a fake version of every digit.
@sourabhbhattacharya9133 2 years ago
I was confused about lines 68 and 70: why are we creating ones and zeros in the criterion? Please clarify this portion... great work as always...
@kdubovetskyi 2 years ago
Roughly saying, we want the discriminator to estimate the *probability that its input is real*. Therefore the desired output for disc(real) is 1, and 0 for disc(fake).
@aras9319 1 year ago
Hello. What should be different for non-square image data?
@pelodofonseca6106 1 year ago
CNNs instead of fc layers.
@gabrielyashim706 2 years ago
This video was really helpful, but what if I don't want to use the MNIST dataset and want to use my own dataset from my local machine? Please, how do I go about it?
@AladdinPersson 2 years ago
I have separate videos on how to use custom datasets, for something written I highly recommend: pytorch.org/tutorials/beginner/data_loading_tutorial.html
@imdadood5705 3 years ago
Hello! I have been subscribed to you for a long time. I haven't watched your videos other than the machine-learning-from-scratch videos. What are the prerequisites to start learning from this series? For people who are very well versed in deep learning: how did you all learn? I am not intimidated by math, but it takes some time for me to understand. Give me some helpful tips, please: in what order should I start learning deep learning? This would be a great help.
@AladdinPersson 3 years ago
Hey, I got a video How to learn deep learning that answers your questions I think:)
@Zeoytaccount 1 year ago
What notebook prompt are you using to call up that TensorBoard UI?
@Zeoytaccount 1 year ago
Figured it out, for anyone with the same question: in a separate cell run %load_ext tensorboard and then %tensorboard --logdir logs. Magic!
@Arya-cn4kk 1 month ago
@@Zeoytaccount Oh Babe it works ,such a sweety wish I could send you a thankuuuuuu
@suryagaur4363 3 years ago
Can you make a video on CycleGAN?
@AladdinPersson 3 years ago
A bit late but it's finished now. Paper walkthrough is up and implementation from scratch will be up in a few days :)
@NinjaTactiks 3 years ago
What version of CUDA are you using?
@AladdinPersson 3 years ago
The latest one always pretty much, which as of right now is cuda 11.1 I think
@Arya-cn4kk 1 month ago
!python3 -c "import tensorflow as tf; print(tf.reduce_sum(tf.random.normal([1000, 1000])))" - what is the relevance of this to the GAN you have worked on in this video
@privacywanted434 3 years ago
How did you get the TensorBoard site to pop up?
@AladdinPersson 3 years ago
Perhaps I didn't show it in the video, but you have to run it through the conda prompt (or a terminal etc.). I have more info on using TensorBoard in a separate video, so I was kind of assuming that people knew it, but I could've been clearer on that!
@privacywanted434 3 years ago
@@AladdinPersson this is new for me so I’m still learning all the tools. Please keep doing tutorials btw!! You have been helping me learn AI so much faster due to your pytorch implementations.
@AladdinPersson 3 years ago
@@privacywanted434 Thanks for saying that, I appreciate you 👊
@ZOBAER496 2 months ago
Do you have this GAN code available for downloading?
@tzachcohen9124 3 years ago
How can I transfer this code to work with RGB images? It keeps printing lines as an output after learning instead of images :(
@ibrahimaba8966 2 years ago
You need to use DCGAN instead of this GAN.
@utkarshjyani8350 1 year ago
for batch_idx, (real, _) in enumerate(loader): — for this part it's giving an error: TypeError: 'module' object is not callable
@mustafasidhpuri1368 3 years ago
In GANs, the generator loss should decrease and the discriminator loss should increase, is that so? I am a little bit confused.
@AladdinPersson 3 years ago
The losses in GANs don't really tell us anything (one will go up when the other goes down and vice versa). The only thing you want to watch out for is the discriminator loss going to 0 or something like that, which would be the case if one of them "takes over".
@EllenReborn 3 years ago
How would I edit this if I wanted to use my own dataset?
@AladdinPersson 3 years ago
If it's not a dataset included in PyTorch's torchvision, you could create a custom dataset class (it's not too difficult). I have a separate video on custom datasets in PyTorch you could take a look at. Here is also a great official tutorial from PyTorch: pytorch.org/tutorials/beginner/data_loading_tutorial.html
@lker7489 2 years ago
Wonderful intro to GANs, thank you very much! Actually, I feel a little confused about what z_dim is...
@christianc8265 2 years ago
These are the parameters you can change, drawn from a known distribution, to make the generator produce images. I guess 64 is way too high for MNIST; maybe you can use 10, so you can blend any of the digits.
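Concretely, z_dim is just the length of the noise vector the generator consumes; a tiny sketch (the generator here is a simplified stand-in):

```python
import torch
import torch.nn as nn

z_dim = 64        # length of the latent (noise) vector; a free hyperparameter
batch_size = 32

gen = nn.Sequential(
    nn.Linear(z_dim, 256), nn.LeakyReLU(0.1),
    nn.Linear(256, 28 * 28), nn.Tanh(),
)

noise = torch.randn(batch_size, z_dim)  # one z_dim-dimensional vector per image
fake = gen(noise)                       # batch of flattened 28x28 fake images
```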
@AliAhmed-mw2vc 7 months ago
Please, can someone tell me which editor he is using?
@MorisonMs 3 years ago
11:25 You forgot to put right parenthesis.. Kidding :P Thanks for the video bro
@canozturk369 3 months ago
GREAT
@Carbon-XII 3 years ago
8:00 - transforms.Normalize((0.1307,), (0.3081,)) will not work because of the following:
* The nn.Tanh() output of the Generator is (-1, 1)
* MNIST values are [0, 1]
* Normalize does the following for each channel: image = (image - mean) / std
* So transforms.Normalize((0.5,), (0.5,)) converts [0, 1] to [-1, 1], which is ALMOST correct, because the nn.Tanh() output of the Generator is the open interval (-1, 1), excluding 1 and -1.
* transforms.Normalize((0.1307,), (0.3081,)) converts [0, 1] to ≈ (-0.42, 2.82). But the Generator cannot generate values greater than 0.9999... ≈ 1, so it will never generate 2.82 for white. That is why transforms.Normalize((0.1307,), (0.3081,)) will not work.
P.S. To use transforms.Normalize((0.1307,), (0.3081,)) you would have to multiply nn.Tanh() by 2.83 ≈ nn.Tanh() * 2.83 ≈ (-2.83, 2.83)
@AladdinPersson 3 years ago
This makes total sense, thanks for clarifying!
@drishtisharma3933 2 years ago
Thank you so much for explaining this... :)
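The endpoint arithmetic in the comment above can be checked directly (plain Python mirroring what transforms.Normalize computes per channel):

```python
# Normalize applies x -> (x - mean) / std per channel.
def normalize(x, mean, std):
    return (x - mean) / std

# (0.5, 0.5) maps [0, 1] onto [-1, 1], matching the generator's Tanh range.
lo, hi = normalize(0.0, 0.5, 0.5), normalize(1.0, 0.5, 0.5)

# (0.1307, 0.3081) maps [0, 1] onto roughly (-0.42, 2.82): white pixels land
# near 2.82, which a Tanh generator bounded by (-1, 1) can never produce.
lo2, hi2 = normalize(0.0, 0.1307, 0.3081), normalize(1.0, 0.1307, 0.3081)
```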
@ethaneaston6443 1 year ago
What does the parameter z_dim mean?
@Sercil00 3 years ago
Is it normal that this easily takes 1-2 hours for 50 epochs? I first ran it on my computer which unfortunately has no nvidia GPU. Then I tried it on Google Colab, which originally had it running on its CPU too. So I changed their Hardware acceleration to GPU, aaaaand... if it's faster, then not by much. Is that normal? Does this not benefit significantly from GPUs?
@parthrangarajan3241 2 years ago
Hey, how did you overcome this error in Colab?

TypeError: 'module' object is not callable

Raised at img = self.transform(img) in torchvision/datasets/mnist.py (__getitem__), while running: for batch_idx, (real, _) in enumerate(loader):
@parthrangarajan3241 2 years ago
@@drishtisharma3933 Hey Drishti! Yes, I was able to overcome this error but I do not remember the exact changes I made to the code. I could share my colab notebook for your clarity. Honestly, I didn't try your approach. I was following the video as a code-along. Link: colab.research.google.com/drive/1l1Vt7mcoEQKFxxVbpQOeKZ-UiEHU9ggt?usp=sharing
@DIYGUY999 3 years ago
Would you mind sharing the name of the intro music? :D
@kae4881 3 years ago
SAME!
@beizhou2488 3 years ago
It is Straight Fuego by Matt Large
@DIYGUY999 3 years ago
@@beizhou2488 Thankyou mah man.
@purnamakkena9553 2 years ago
I can't see TensorBoard. I am running the same code on Colab. Please help me. Thank you.
@Arya-cn4kk 1 month ago
Run %load_ext tensorboard and %tensorboard --logdir logs in a separate cell; it works.
@ABWXII 2 years ago
Hello sir, can you tell me how to convert a GAN-generated dataset into .jpg format? Please.
@generichuman_ 1 year ago
Be careful with JPGs in your training set. JPG uses 8x8 blocks that introduce artifacts; either use very high quality JPGs or, even better, PNGs.
@judedavis92 2 years ago
Ooh the jacobian
@thirashapw 3 years ago
How can I run that TensorBoard?
@Arya-cn4kk 1 month ago
Same doubt.
@virtualecho777 3 years ago
I have no idea what is happening but its soo interesting
@madhuvarun2790 3 years ago
At the discriminator we want max log(D(real)) + log(1 - D(G(z))). Since loss functions work by minimizing error, we can minimize -(log(D(real)) + log(1 - D(G(z)))); BCE loss amounts to minimizing exactly that, so it works fine. At the generator we want to max log(D(G(z))). Could you please explain how criterion(output, torch.ones_like(output)) maximizes log(D(G(z)))? The loss function is l_n = -w_n [y_n·log(x_n) + (1 - y_n)·log(1 - x_n)]. According to your code, aren't we trying to maximize -log(D(G(z))), because there is a negative in the loss function? Shouldn't we add a negative in our training phase? Please explain, I am stuck here.
@madhuvarun2790 3 years ago
Nevermind, I understood it. Thanks
@asagar60 2 years ago
@@madhuvarun2790 Can you please elaborate? As I see it, on the discriminator side loss_real = -log(D(real)) and loss_fake = -log(1 - D(G(z))), but it's still minimizing, right? I can't understand how that's maximizing the loss; the same doubt with the generator loss.
@madhuvarun2790 2 years ago
@@asagar60 Yes, it is minimizing the loss; I was wrong. At the discriminator we are minimizing -(log(D(real)) + log(1 - D(G(z)))), and at the generator we are minimizing -log(D(G(z))).
@prakhar3134 1 year ago
Can someone explain what z_dim actually is?
@ruochenli5574 2 years ago
How do you open TensorBoard???
@Arya-cn4kk 1 month ago
I'm stuck there too, someone link a vid.
@Arya-cn4kk 1 month ago
Run %load_ext tensorboard and %tensorboard --logdir logs in a separate cell; it works.
@hunterlee9413 2 years ago
Why won't my TensorBoard open?
@m11m 3 years ago
I'm admittedly a noob at all of this, but I keep getting "TypeError: __init__() takes 1 positional argument but 2 were given" and I can't figure out how to resolve the issue; any advice would be appreciated.
@AladdinPersson 3 years ago
Difficult to say w/o code, in this case it seems like you're sending in too many arguments haha
@generichuman_ 3 years ago
If I had to guess, you might have a class method that doesn't have a "self" parameter
@deeshu3456 2 years ago
It seems like when defining the class method you originally coded a method that takes one argument, but when calling it you provided two arguments. E.g.:

def lets_solve(error):
    pass

# Instantiating an object now:
solution = lets_solve(error, ONE_EXTRA_ARGUMENT)

The extra argument is the one you shouldn't have provided, going by the original code, which takes just one. Hope this makes sense. Good luck!
@saurrav3801 3 years ago
🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥
@NamTran-cc1ml 1 year ago
Why do we have (lossD_real + lossD_fake) / 2?
@Champignon1000 2 years ago
7:45 bruh :D
@ArunKumar-sg6jf 3 years ago
nn.Linear for what, bro?
@secretgame6434 1 year ago
I don't know much about PyTorch, but I'll figure it out...
@tolotrasamuelrandriakotonj2014 6 months ago
Impressive how people use Vim to code ML.
@talha_anwar 3 years ago
Shouldn't it be optimizer.zero_grad instead of model.zero_grad?
@AladdinPersson 3 years ago
You can use both
@pantherwolfbioz13 2 years ago
Why do we maximize the generator loss? Shouldn't the generator be good at identifying the fakes generated by the discriminator?
@jamesadeke9873 1 year ago
The generator doesn't identify, it only generates. Minimizing its loss means making the generator produce samples very close to real ones, so they are not identified by the discriminator.
@flakky626 7 months ago
Not PyTorch ;-; I gotta learn PyTorch nonetheless.
@hoaanduong3869 1 year ago
Damn, my heart nearly broke when I set the wrong values for transforms.Normalize.
@ashekpc106 1 year ago
Please make a video about an anime InfoGAN.
@redhunter408 2 years ago
Re: (i just wanted to make sure that people understand that this is a joke...) | on lr = 3e-4
@wongjerry3229 2 years ago
I think
@hnhparitosh 2 years ago
sub
@xMreilly 2 years ago
Where should I start? It sounds like you are just reading a book and not even going over anything.
@AladdinPersson 2 years ago
then dont start bro
@AladdinPersson 2 years ago
watch another video you resonate with more :)
@generichuman_ 1 year ago
If you can't follow this, then you're not ready yet. Start with Python basics and work your way up; plenty of videos out there. Aladdin's videos are gold, and when you're ready, you'll appreciate them more.
@donfeto7636 1 year ago
You will need to call .detach() on the generator result to ensure that only the discriminator is updated! Line 69 should pass fake.detach() so the generator's weights get removed from the computation graph, and then there is no need to retain_graph for the discriminator, since you will not use it again, I think.
@zyctc000 1 year ago
Not really; he has two optimizers, opt_disc and opt_gen, and their .step() calls won't affect each other.
@donfeto7636 1 year ago
@@zyctc000 opt_disc's graph is connected to the generator, so it will update the generator layers' weights; opt_disc sees the discriminator and generator as one network. The opt_gen optimizer will not affect the discriminator, but not vice versa.
@jwc8963 3 years ago
A NotImplementedError was raised when running the line disc_real = disc(real).view(-1)
@brianjohnbraddock9901 2 years ago
Thanks!