
Neural Networks Pt. 4: Multiple Inputs and Outputs

  165,447 views

StatQuest with Josh Starmer

A day ago

Comments: 264
@statquest 3 years ago
The full Neural Networks playlist, from the basics to deep learning, is here: kzfaq.info/get/bejne/edd_mcxllrLKdKs.html Support StatQuest by buying my book The StatQuest Illustrated Guide to Machine Learning or a Study Guide or Merch!!! statquest.org/statquest-store/
@Tapsthequant 3 years ago
Now is the time for some shameless appreciation. Thank you Josh
@statquest 3 years ago
Hooray!!! Thank you very much! BAM! :)
@alicecandeias2188 3 years ago
A Brazilian bank supporting this StatQuest video: BAM. Me, a Brazilian watching the video: YO, DOUBLE BAM!
@statquest 3 years ago
Muito bem!!! (Muito BAM!!! :)
@jpmagalhaes6645 3 years ago
@@statquest Another Brazilian watching this amazing channel: TRIPLE BAM! This Brazilian is a teacher and recommends this channel in all classes: SUPER BAM!
@statquest 3 years ago
@@jpmagalhaes6645 Muito obrigado!!! :)
@TheKenigham 2 years ago
I even googled to confirm that Itaú is actually a Brazilian company LOL
@djiodsjio 2 months ago
@@statquest That was amazing; as a Brazilian I appreciate this joke very much
@mingli8919 3 years ago
Your videos made me sincerely interested in these subjects and wanting to learn more, not just because they're a useful skill I had to force myself to learn. Thank you, Sir! (salute)
@statquest 3 years ago
Thank you very much and thank you for your support!!! :)
@cesarbarros2931 2 years ago
Hey, Sir Josh "Bam", you deserve an Oscar for such meticulousness in conveying dense information in a paradoxically light and witty way. In my opinion, it seems to be an innovation in the process of transmitting non-trivial mathematical and related knowledge: small video masterpieces with ultra-concatenated information at an extremely well-adjusted pace. I wish you even more of the much-deserved recognition and success. Directly from Brazil, I send you my very special congratulations.
@statquest 2 years ago
Muito obrigado!!! :)
@cesarbarros2931 2 years ago
@@statquest In Portuguese, "Muito obrigado!" (= Thank you very much!) can be answered with "Eu que agradeço", which conveys that the person who received great services or had great experiences is the one who is thankful in the end. As this seems to be the case... Eu que agradeço, Mr. Josh. Abraços fortes.
@statquest 2 years ago
@@cesarbarros2931 VERY COOL! I've been to Brazil twice and hope I can return one day to learn (and speak) more Portuguese.
@gundeepdhillon9099 3 years ago
I really appreciate your attention to detail, whether it's your content or personally reading and acknowledging each and every comment on social media (be it YT or LinkedIn). You, sir, are the best teacher and a great human being... #Respect #Mentor #BestTeacher
@statquest 3 years ago
Thank you very much!!! :)
@paulakaorisiya1517 3 years ago
I was very surprised and happy to see that Itaú supports this content! I've been working at Itaú for 6 years and now I am studying neural networks to improve some processes here. I love your videos, Josh :)
@statquest 3 years ago
BAM! :)
@TeaTimeWithIroh 3 years ago
Thanks for all you do Josh! These videos help lay out the foundation for me - and help make the actual math easier to understand :-) BAM!!
@statquest 3 years ago
Thank you very much! :)
@84mchel 3 years ago
The amount of value you provide with these videos is galactic! Keep it up. I was literally looking for a visual representation of a multi-input NN and what the ReLU shape looks like. It's hard to imagine when you have 3 inputs (e.g. pixels); is it like a 4D ReLU shape?!
@statquest 3 years ago
Yes, if we have 3 inputs, then we end up with a 4D shape and it's much harder to draw! :)
@juaneshberger9567 A year ago
The quality of these videos is incredible. Thanks, Josh!
@statquest A year ago
Glad you like them!
@jennycotan7080 8 months ago
Sir... Mr. Starmer, maybe I'll give myself your book about machine learning as a gift for the Lunar New Year if I get a great result on the coming maths final exam, because your videos fit us tech kids so well!
@statquest 8 months ago
Good luck! :)
@minhtuanle9268 A year ago
For all the effort that went into making this video, you deserve my respect.
@statquest A year ago
Thank you so much 😀!
@mikaelbergman2093 3 years ago
Hooray!!! Thank you Josh & StatQuest Land for this video! What an amazing approximately-Birthday-surprise! I really do like silly songs, mathematics, statistics, machine learning and I love StatQuest. Greetings from Mikael to every soulmate out there from Svalbard 78° North. BAM!!!
@statquest 3 years ago
Hooray!!! Thank you Mikael!!! And thank you for helping me edit the first draft of this video!!! I'm looking forward to talking with you about Convolutional Neural Networks soon. :)
@gundeepdhillon9099 3 years ago
Believe it or not, these days I'm watching a lot of vids on Svalbard... what a fantastic place and history... my fav vid is the one on the seed vault... amazing kzfaq.info/get/bejne/aMV_eNaXkpfVl40.html
@sinaro93 A year ago
This is more than triple! This is a quadruple, quintuple or even sextuple BAAAM!! I love the simplicity of your explanation.
@statquest A year ago
Wow, thanks!
@tymothylim6550 2 years ago
What a great video! Thanks for all the hard work plotting those 3D points :)
@statquest 2 years ago
Hooray!!! I'm glad you appreciate the work! I spent a lot of time on this video. :)
@user-ry5zu1wo4e 3 months ago
What an amazing explanation. Thank you so much
@statquest 3 months ago
Thanks!
@bijoydey479 3 years ago
One of the best teachers in statistics....👍👍👍
@statquest 3 years ago
Thanks a lot 😊!
@ananthakrishnank3208 A year ago
Truly a master of machine learning.
@statquest A year ago
Thank you! :)
@rafael_l0321 2 years ago
Unexpected Brazilian sponsorship! A BAM from Brazil!
@statquest 2 years ago
Muito obrigado!
@rohit2761 3 years ago
Hello Josh, I cannot express my gratitude for finding your channel. I am literally binge-watching to get conceptual clarity. Like a greedy subscriber, I just want to request that you upload more *deep learning* videos #DeepLearning CNN RNN ImageCV etc. The content is magnificent. May God bless you.
@statquest 3 years ago
Thanks! There is already a CNN video kzfaq.info/get/bejne/fq2ndbt1sKzPaX0.html and I hope to have an RNN video out soon.
@samuelyang1870 A year ago
Amazing video; the quality of the explanation is way above the views you're getting. Keep it up!
@statquest A year ago
Thanks, will do!
@harishbattula2672 2 years ago
Thank you very much for the explanation. I recommended all your videos to my classmates.
@statquest 2 years ago
Awesome, thank you!
@BlasterMate 3 years ago
I would never have imagined that I could picture a neural network. Thank you.
@statquest 3 years ago
:)
@pedramporbaha 2 years ago
Triple BAM!!! After several years of ambiguity in machine learning, I found you! I love your content! In addition, I love that you're a multidimensional man (like the data in this video), which made me love the channel even more, because I am a music composer, hammered dulcimer player, and math-loving pharmacist too! You're inspiring to me! 🌸
@statquest 2 years ago
Awesome! Thank you!
@joshwang3500 A year ago
Fantastic video, Josh! These animations and the accompanying text really help explain the logic behind it all. Thank you so much for all you do!!
@statquest A year ago
Thank you very much! :)
@twandekorte2077 3 years ago
Great video! Your explanations are very intuitive as always. BAM
@statquest 3 years ago
Thank you very much and thank you for your support!! BAM! :)
@gilao A month ago
Another great one! Thanks!
@statquest A month ago
Thank you! :)
@ngusumakofu1 2 years ago
I came here for the knowledge and the BAM!
@statquest 2 years ago
Thanks! :)
@enzy7497 A year ago
Amazing work. I loved this example so much! God bless you.
@statquest A year ago
Thank you! :)
@TheGoogly70 3 years ago
Great video. You've explained the complex concepts so simply. I hope there will be a video on how to determine the weights and biases in a neural network, such as for the Iris example.
@statquest 3 years ago
We use backpropagation to determine the weights and biases, and backpropagation is covered in these videos: kzfaq.info/get/bejne/f7Rii9Bzza-wpGg.html kzfaq.info/get/bejne/n9-eZd2VprLNmWw.html kzfaq.info/get/bejne/fbGKorJ5va3HfKM.html kzfaq.info/get/bejne/rqh1m5lnu5_LiqM.html
@TheGoogly70 3 years ago
@@statquest Great! Thanks a ton.
@NoirKuro_ 6 months ago
Love this video. Compared with other explanation videos, this is the simplest and puurrrrrfffeeeecccttooo (I mean, after watching this video I was able to build my multistep, multivariate deep neural network model even without a CS background). Thank you! *sorry for my "broken English"
@statquest 6 months ago
Glad you liked it!
@andrewdouglas9559 A year ago
I can't imagine how much time it must take to make one of these videos.
@statquest A year ago
It takes a lot of time, but it's fun! :)
@lucarauchenberger628 2 years ago
So clearly explained! Stunning!
@statquest 2 years ago
Bam!
@youhadayoub9567 2 years ago
Thanks a lot, you are really a lifesaver
@statquest 2 years ago
You're welcome!
@Gautam1108 A year ago
Excellent!! Thank you so much, Josh
@statquest A year ago
Thanks! :)
@admggm 3 years ago
Question: 1. What happens with inputs of different types: discrete vs. continuous? 2. What happens if we would like to have, for example, a "predominant color" as an input along with the widths? Thanks a lot!
@Rufus1250 3 years ago
"Predominant color" is a categorical value, so you would need to do one-hot encoding first, e.g. blue = (0,1), red = (1,0) for 2 possible colors.
@statquest 3 years ago
That's correct. The inputs for a categorical input would just be 0 or 1 (rather than values between 0 and 1).
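A minimal sketch of the one-hot idea described in the replies above; the "predominant color" feature and its values are hypothetical, not from the video:

```python
# One-hot encode a hypothetical "predominant color" feature so it can
# be fed into the network as 0/1 inputs alongside the scaled widths.
colors = ["blue", "red", "red", "blue"]
categories = sorted(set(colors))  # ["blue", "red"]

# Each flower becomes a list with a 1 in its color's position.
one_hot = [[1 if c == cat else 0 for cat in categories] for c in colors]
print(one_hot)  # [[1, 0], [0, 1], [0, 1], [1, 0]]
```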
@zchasez 3 years ago
Thanks for all the videos you've made! They've been a great help! That being said, is it possible to make a video about accuracy, recall, and precision? I really can't wrap my head around these concepts. Thanks again!!
@statquest 3 years ago
Yes, I can do that.
@bessa0 2 years ago
Dude, I love you.
@statquest 2 years ago
:)
@spearchew 3 years ago
A great series on NNs. I wonder what would happen if you found an iris in the woods with features outside the normalised "zero to one" range of our training data. If it was a very small iris, I guess our input would have to be negative. If it was a freakishly big iris, our input values might be >>1.0... Perhaps this would break the squiggle machine.
@statquest 3 years ago
Probably. And that is, in general, a limitation of all machine learning methods. If new data is outside of the range of the original training data, your predictions are probably going to be pretty random.
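The concern above can be made concrete with a quick sketch; this assumes simple min-max scaling (the video doesn't specify which scaling method was used), and the widths below are made up:

```python
# Min-max scale petal widths from a (made-up) training set to [0, 1].
train_widths = [0.1, 0.5, 1.0, 2.0]
lo, hi = min(train_widths), max(train_widths)

def scale(width):
    # New observations are scaled with the TRAINING set's min and max,
    # so values outside the training range land outside [0, 1].
    return (width - lo) / (hi - lo)

print(scale(2.0))   # 1.0, the largest training flower
print(scale(3.0))   # > 1.0 for a freakishly big iris
print(scale(0.05))  # < 0.0 for a very small iris
```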
@MilanMarojevic A year ago
Really useful ! Thanks :)
@statquest A year ago
Glad to hear that!
@Ash-bc8vw 2 years ago
Awesome video!
@statquest 2 years ago
Thanks!
@nebuer54 2 years ago
Awesome video as always! Beginner here, so a couple of questions: 1) Do the outputs refer to probability values? For example, at 13:07 does the output value of 0.86 mean there's an 86% chance of the flower being a Versicolor, given this particular sepal and petal width? If so, is there a special case (distribution?) where the output probabilities sum to 1? 2) Does the number of inputs play a critical role in the choice of any key component of the architecture, like which loss function or activation function to use? 3) At what point in an n-dimensional crinkled hyperspace does the universe go n-BAM?! Just kidding, not a real question :D
@statquest 2 years ago
1) In this case, the outputs are not probabilities (you can tell because they don't add up to 1). The next video in this series shows that it is very common to add "SoftMax" to this sort of Neural Network to give "probabilities". I put the "probabilities" in quotes because their interpretation comes with some caveats. For more details, see: kzfaq.info/get/bejne/gdZ7ospesZ_alZs.html
@statquest 2 years ago
2) The inputs don't really affect the loss function or activation function. However, they might affect the number of hidden layers and the number of nodes in each layer.
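A small sketch of the SoftMax idea mentioned in the reply above: it maps the raw output values to positive numbers that sum to 1 (the raw values here are made up, not from the video):

```python
import math

def softmax(raw_outputs):
    # Exponentiate each raw value, then divide by the total, so the
    # results are all between 0 and 1 and add up to 1.
    exps = [math.exp(v) for v in raw_outputs]
    total = sum(exps)
    return [e / total for e in exps]

# Made-up raw outputs for setosa, versicolor, virginica.
probs = softmax([0.23, 1.43, -0.4])
print(sum(probs))  # ~1.0, so the values can be read as "probabilities"
```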
@thamburus7332 A year ago
Thank You
@statquest A year ago
Welcome!
@pielang5524 3 years ago
Excellent explanation! Thank you
@statquest 3 years ago
Thank you!
@ADHAM840 4 months ago
What an amazing illustration of this specific topic. What I didn't get or follow is why the y-axis scale for each iris type was different (0 and 2 for Setosa, -6 and 6 for Versicolor, -6 and 6 for Virginica). Where did these numbers come from? Thanks again for your style of explaining hard stuff that most people take for granted :)
@statquest 4 months ago
To learn why the y-axis values are different, see: kzfaq.info/get/bejne/edd_mcxllrLKdKs.html
@abdullahalmussawi5291 8 months ago
Hey Josh, amazing video as always. Can you answer my dumb question please :) In the previous video you applied the ReLU function after adding the final bias; why didn't we do that in this video? Does adding more than one input or output affect this? Thanks again for the amazing content.
@statquest 8 months ago
I didn't add a final ReLU because I didn't need it. There are no rules for building neural networks and you can just build them the way that works best with whatever data you have.
@revatinanda6318 8 months ago
@@statquest Always a fan of your content; I have suggested that others learn the basics of ML through your videos. Really appreciate your quick response to @abdullahalmussawi5291's query... God bless you, brother... :)
@naomyduarteg A year ago
Love the comments such as "I thought they were all petals!" 🤣🤣🤣 Great series!
@statquest A year ago
Thank you so much! BAM! :)
@lilaberkani4376 3 years ago
You should consider singing with Phoebe Buffay sometime, hahaha! I love your videos
@statquest 3 years ago
Thanks!
@beakmann 3 years ago
It's not a squiggle anymore :/
@statquest 3 years ago
Nope, it's a crinkled surface.
@dikaixue3050 2 years ago
Thank you
@statquest 2 years ago
Bam!
@s25412 3 years ago
Could you please confirm: at 10:51 and 12:00, when you say "change the scale for the y-axis," does that simply mean zooming in on that y-axis range and looking at values in that range only? Or does it involve mathematical manipulation of the y values to fit that range?
@statquest 3 years ago
Changing the scale on the y-axis just means zooming in on the part we are interested in. The values remain the same; we just zoom in on them.
@robert-dr8569 A year ago
I love your simple and clear explanations!
@statquest A year ago
Thank you!
@user-go7lu8hq5l 3 years ago
Very useful
@statquest 3 years ago
Thanks!
@Fan-vk9gx 2 years ago
You are a genius! And a very kind one! Thank you for all these things you've made. I was just wondering, can items in your store be shipped to Canada? You deserve more fans and more success in the near future!
@statquest 2 years ago
Thanks! I'm pretty sure that items in my store can be shipped pretty much anywhere in the world, including Canada.
@a.lex. 3 months ago
Hi StatQuest, you said that scaling the inputs between 0 and 1 makes the math easier, but what would change if the inputs were not scaled? Also, great series of videos :))
@statquest 3 months ago
Not much. The numbers would be larger and wouldn't fit so nicely in the small boxes I created.
@WALID0306 9 months ago
Thanks!!
@statquest 9 months ago
Bam! :)
@BaerFlorian 3 years ago
Thanks for the amazing video! Maybe you'll find some time to also make a video on how to apply backpropagation to a neural network with multiple inputs and outputs.
@statquest 3 years ago
That's coming up in a few weeks. We'll do an example using cross entropy and softmax.
@RomaineGangaram 3 months ago
Bruh, you make this easy. How?! I made a new LLM because of you! Shameless congratulations
@statquest 3 months ago
BAM! :)
@hoaxuan7074 3 years ago
If you understand ReLU as a switch, you can work out by hand the 3 dot products the net collapses to for each particular input. If I had a laptop instead of a phone, I'd do it for you.
@statquest 3 years ago
Noted
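The "ReLU as a switch" observation above can be sketched with a toy network (all weights here are made up, not the ones from the video): for any fixed input, each ReLU is either "on" (it passes its input through) or "off" (it outputs zero), so the network collapses to an ordinary linear function on that region of inputs.

```python
def relu(x):
    return max(0.0, x)

# Toy 1-input, 2-hidden-node, 1-output network with made-up weights.
w1, b1 = -2.5, 1.6              # input -> hidden node 1
w2, b2 = -1.5, 0.7              # input -> hidden node 2
v1, v2, b_out = 0.4, -0.3, 0.1  # hidden -> output

x = 0.5
h1, h2 = w1 * x + b1, w2 * x + b2
y = v1 * relu(h1) + v2 * relu(h2) + b_out

# Treat each ReLU as a switch: if "on", its slope and intercept pass
# through to the output; if "off", it contributes nothing.
slope = (v1 * w1 if h1 > 0 else 0.0) + (v2 * w2 if h2 > 0 else 0.0)
intercept = (v1 * b1 if h1 > 0 else 0.0) + (v2 * b2 if h2 > 0 else 0.0) + b_out

# For this particular x, the whole network is just slope * x + intercept.
assert abs(y - (slope * x + intercept)) < 1e-9
print(slope, intercept)
```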
@wesleysbr 7 months ago
Another fantastic class, Josh! Can I ask you something? In the case of classifying flowers into versicolor, setosa and virginica, to estimate the network parameters you needed to train the model with a response variable, right?
@wesleysbr 7 months ago
Josh, I found the answer: "I started out by creating a neural network with 3 outputs, one for setosa, one for versicolor and one for virginica. I then trained the neural network with backpropagation to get the full neural network used in this video. I then ran the same training data through the network to see which output gave high values for setosa and I then labeled that output "setosa"." Thanks
@statquest 7 months ago
yep
@vigneshvicky6720 3 years ago
Only in this video did I finally understand how the final sum is done.
@statquest 3 years ago
:)
@AlberthAndrade 3 months ago
Hey Josh, good evening! First, thanks for sharing your knowledge with us! Could you please help me with the Virginica score? You added +1 after the sum and, unfortunately, I was not able to understand why. Was this value set randomly? And why didn't you set new values for the other plants? Thank you!
@statquest 3 months ago
All of the weights and biases in a neural network are always determined using something called Backpropagation. To learn more, see: kzfaq.info/get/bejne/f7Rii9Bzza-wpGg.html
@sameerrao20 9 months ago
Thanks a lot, this is amazing. I have been following your book alongside the lectures (it is equally amazing as well), but these topics are not present there... Desperately waiting for the next book; is there a release date in hand? Kindly suggest any revision alternatives until the 2nd edition of the book comes out!!
@statquest 9 months ago
I'm working on a new book focused just on neural networks that covers the theory (like this video) but also has worked out code examples. However, it's still at least a year away. :(
@libalele3460 A year ago
Great video once again! Is optimizing the weights and biases in a NN with several inputs the same process as in a NN with just 1 input?
@statquest A year ago
Yes. However, if you'd like to look at examples of how the derivatives are calculated, see: kzfaq.info/get/bejne/gdZ7ospesZ_alZs.html kzfaq.info/get/bejne/g5tpfaidqrbLeZs.html kzfaq.info/get/bejne/bKeihtykmtescYk.html kzfaq.info/get/bejne/rqh1m5lnu5_LiqM.html
@andersk 2 years ago
Hi Josh, thanks again for an awesome video. At 8:13 you mention that these values for width are obviously scaled, so you would do the same with a validation set or a prediction set. Is there no potential issue with a scaled new observation being a negative number? Really shooting in the dark here, but I'm thinking there could be a situation somewhere in the neural network where subtracting a very small width gives a number close to zero, but if you now have scaled negative values, the two minus signs would become a plus, and the network might incorrectly classify a flower with smaller petals than any in the training set as one with bigger petals because it went past the training limits?
@andersk 2 years ago
I'm going on a real tangent here, so if there's nothing to worry about, a simple 'no' would be a fine answer :D Thanks again!
@statquest 2 years ago
I'm not really sure. My guess is that if you think you might run into this sort of problem, then you need to be careful with how you scale your data.
@andersk 2 years ago
@@statquest Will do; I was only asking in case it was a known pitfall too rare to put into the video. Thanks for your reply and all these videos. I'm sure you get a message every hour about this, but you're really the best educator I've ever come across 🙏
@NimN0ms 3 years ago
Hello Josh, this might be completely out of left field, but if you take requests, could you explain Latent Class Analysis?
@statquest 3 years ago
I'll keep that in mind.
@NimN0ms 3 years ago
@@statquest Thank you! I have watched your videos through my undergrad and still watch a lot of them as I am getting my PhD (Epidemiology)!!! Thanks for all you do!
@rodrigoamoedo8523 3 years ago
Great video, but you labeled the sepals and petals backwards. Keep up the good work, love your content
@statquest 3 years ago
Can you provide me with a link that shows that I got the petals and sepals backwards? Pretty much very single page I found is consistent with what I present here. For example, see: images.app.goo.gl/iAbv954ML8dExUru9 images.app.goo.gl/ugN6JPWs6of1FBWj6 images.app.goo.gl/ZbKVgCkdnC5hdgBA9
@krrsh A year ago
How are you selecting the weights to multiply by and the bias to add to the y value for the different outputs?
@statquest A year ago
The weights and biases are all optimized via backpropagation, just like they are for every other neural network. For details about backpropagation, see: kzfaq.info/get/bejne/f7Rii9Bzza-wpGg.html kzfaq.info/get/bejne/n9-eZd2VprLNmWw.html and kzfaq.info/get/bejne/fbGKorJ5va3HfKM.html
@arhanahmed8123 2 months ago
Well explained, but how do you visualize the function we are optimizing when we have more than 2 inputs? We cannot visualize more than 3 dimensions! Please explain
@statquest 2 months ago
I have no idea.
@DanielRico08 16 days ago
One question: I understand that we can still use backpropagation to find the W parameters, as they are independent for each input, but what about the bias (e.g. b1)? It is shared between both inputs; should we calculate its derivative with the combination of both inputs? Thanks
@statquest 15 days ago
The input for the activation function ends up being: input_1 * w_1 + input_2 * w_2 + ... * input_n * w_n + bias_i. So the derivative, with respect to the bias = 0 + 0 + ... + 0 + 1 = 1 regardless of the number of inputs.
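The reply above can be checked numerically; the inputs and weights below are made up for illustration:

```python
# The pre-activation value is z = w1*x1 + ... + wn*xn + b, so the
# derivative of z with respect to the bias is 1, no matter how many
# inputs there are. A quick finite-difference check:
inputs = [0.04, 0.42, 0.87]
weights = [-2.5, 0.6, 1.1]
bias = 1.6

def z(b):
    return sum(w * x for w, x in zip(weights, inputs)) + b

eps = 1e-6
dz_db = (z(bias + eps) - z(bias - eps)) / (2 * eps)
print(dz_db)  # ~1.0, independent of the number of inputs
```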
@iReaperYo 4 months ago
Hi StatQuest, something I don't understand is why you put examples through the neural network to 'fit' the curve to the training set. Wouldn't applying a neural network with initialized weights inherently fit the training set? Is this just for illustrative purposes, to show that the curves can be formed by putting examples into our function and getting a prediction back? Is this essentially the simpler way to explain neural nets without explicitly showing us the equations each activation would represent? Or are you essentially plugging in the examples one would compare to the ground-truth values in the test set? You're essentially showing us visually what curve the *current* parameters approximate to match the underlying function, but doing it step by step? Are these curves fits of the test set?
@statquest 4 months ago
The main idea of neural networks is that they are functions that fit shapes to your data, and by running values through a "trained" neural network, I can illustrate both the shape and how that shape came to be. If you'd like to learn about training a neural network, see: kzfaq.info/get/bejne/f7Rii9Bzza-wpGg.html
@iReaperYo 3 months ago
@@statquest This makes sense, thank you for your great work. I will give the video a watch; I'm going through your series to learn NLP!
@statquest 3 months ago
@@iReaperYo Here's the whole playlist: kzfaq.info/get/bejne/edd_mcxllrLKdKs.html
@Mohamm-ed 3 years ago
I love this channel because of the songs. Thanks very much. Hooray!
@statquest 3 years ago
Thank you! :)
@MandeepKaur-ks6lk 2 months ago
We understood the calculation of the weights and biases, but how would I know about the nodes? How do I understand the logic for connecting all the inputs to the activation functions and to the outputs? And how many hidden layers do we need? There is also no example with more than one hidden layer. Could you please help me here?
@statquest 2 months ago
Designing neural networks is more of an art than a science: there are general guidelines, but generally speaking you find something that works on a related dataset and then train it with your own data. In other words, you rarely build your own neural network from scratch. However, if you are determined to build your own, the trade-off is this: the more hidden layers and nodes within the hidden layers, the better your model will be able to fit any kind of data, no matter how complicated, but at the same time, you will increase the computation and training will be slow.
@JuanCamiloAcostaArango 7 months ago
Why didn't you use the ReLU function on the outputs like in the previous example with the doses? 🤔
@statquest 7 months ago
Because I didn't need to. There are no rules for building neural networks. You simply design something that works.
@4wanys 3 years ago
Great video, thank you. Are you going to apply this series in Python?
@statquest 3 years ago
Yes, soon
@victorreloadedCHANEL 3 years ago
Good morning! I've tried to buy some study guides, but there is no "pay with credit card" option after selecting "pay with PayPal" and going to the login screen. How can we solve this? Thank you!
@statquest 3 years ago
If you scroll down on the login screen you should see a button that says Pay With Debit or Credit Card. It is possible you did not scroll down far enough, though; it's hard to see. However, I just tried it and it worked. Let me know if you are still having trouble - you can contact me via: statquest.org/contact/
@user-to4zj9tg8s 11 months ago
Thanks for your great videos. I have enjoyed all the previous videos, but I have to admit I got a bit lost with this one. From what I understood, here we first train the neural network to give a perfect fit for Setosa, so we arrive at optimal values for the weights, say w1, b1, w2, b2, etc. After this we train for Versicolor. Won't this change the previous weight values that we already optimized for Setosa?
@statquest 11 months ago
We actually train all 3 outputs at the same time - so those weights work with all 3.
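A tiny sketch of what "train all 3 outputs at the same time" means in practice: each training flower has a target for every output node, and one combined loss (here a simple sum of squared residuals; the numbers are made up) drives backpropagation for all the weights at once:

```python
# One training flower: the network produces a value for each output
# node, and the known species gives a target for each output node.
predicted = [0.23, 0.86, 0.10]  # setosa, versicolor, virginica outputs
observed = [0.0, 1.0, 0.0]      # this flower is a versicolor

# Summing the squared residuals over all three outputs gives a single
# loss, so every weight and bias is updated with respect to all three
# outputs simultaneously (rather than one species at a time).
total_loss = sum((o - p) ** 2 for p, o in zip(predicted, observed))
print(round(total_loss, 4))  # 0.0825
```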
@user-to4zj9tg8s 11 months ago
@@statquest Thank you for the quick reply and for clearing up my confusion!!!
@user-bz8nm6eb6g 3 years ago
Love it!
@statquest 3 years ago
Thanks!
@michaelfreeman4460 3 years ago
Looking forward to seeing your take on LSTMs and backpropagation through time! Interesting stuff there ^_^
@statquest 3 years ago
I'll keep that in mind!
@alexroberts6416 9 months ago
I don't understand how you already started with the weights and biases. Did you already use backpropagation etc. with known data?
@statquest 9 months ago
All the weights and biases for all neural networks come from backpropagation applied to a training dataset, so that's what I used here. If you'd like to learn more about backpropagation, see: kzfaq.info/get/bejne/f7Rii9Bzza-wpGg.html
@Marcelscho 3 years ago
Hey! Please make a video on Expectation Maximization. Thanks!
@statquest 3 years ago
I'll keep that in mind.
@paulbrown5839 3 years ago
At 08:42, why did you pick Setosa first? How did you know the output neuron should be Setosa? Is it because your sample for this forward pass is labelled as Setosa?
@statquest 3 years ago
I started out by creating a neural network with 3 outputs, one for setosa, one for versicolor and one for virginica. I then trained the neural network with backpropagation to get the full neural network used in this video. I then ran the same training data through the network to see which output gave high values for setosa and I then labeled that output "setosa".
@r0cketRacoon 5 months ago
How does backpropagation work with multiple outputs? Could you do another video on that?
@statquest 5 months ago
See: kzfaq.info/get/bejne/rqh1m5lnu5_LiqM.html
@r0cketRacoon 5 months ago
@@statquest Oh, really helpful, thanks
@NoNonsense_01 2 years ago
At 6:17, when the output value is multiplied by negative 0.1 and the new y value is negative 0.16, shouldn't it be plotted below 0 on the Setosa axis? Or am I missing something?
@statquest 2 years ago
It is plotted below zero. But since the number is close to zero, it may not be easy to see.
@NoNonsense_01 2 years ago
@@statquest Noted! Thanks for the response, Mr. Starmer. Know that you are an incredible teacher and greatly appreciated by us!
@thenkindler001 A year ago
Still not sure how the weights and biases are being initialised. Are you setting them at random, or are they determined by the data and, if so, how?
@statquest A year ago
In a neural network, weights and biases start out as random number, but are then optimized using Gradient Descent and Backpropagation. For details, see: kzfaq.info/get/bejne/qaqmZ8ll2Ji3cmw.html and kzfaq.info/get/bejne/f7Rii9Bzza-wpGg.html
@thenkindler001 A year ago
@@statquest BAM
@fuzzywuzzy318 3 years ago
My gods! Such a complicated multi-layer, multi-node neural network, but you teach it in such an easy-to-follow and understandable way! Many profs at universities get paid well but don't teach as well as you, and only make students feel stupid!!!!!! BAM!!
@statquest 3 years ago
Thank you! :)
@shivanshjayara6372 3 years ago
Here we have taken the last biases as 0, 0 and 1. Is that just for this example? Because if we had different optimized bias values, the output values would also change... wouldn't they?
@statquest 3 years ago
I'm not sure I understand your question. In this example, the neural network is trained given the Iris dataset. If we trained it with different data (or even just a different random seed for the same data), we would probably get different values for the biases.
@rs9130 2 years ago
Hello author, I want to train a model to predict a heatmap (mean squared error loss) and a binary segmentation (binary cross-entropy loss). I tried to train the model using a multi-branch setup (2 branches, one per output type), but the final result only favors one type of output. For example, when I train using model.fit with equal loss weights, the output is good for the heatmap, but the binary mask output is wrong and gives pixels of 1 in regions similar to the heatmaps. And when I train using a GradientTape loop, the output is good for the segmentation mask, but the heatmaps are wrong and look like masks. How can I solve this? Please give me your suggestions. Thank you
@statquest (2 years ago)
Unfortunately I have no idea.
@wendy6792 (2 years ago)
Thank you for your nice explanation. Could you please let me know how you derived the values for those weights (e.g. x -2.5, x -1.5, etc.)? Many thanks in advance.
@statquest (2 years ago)
The weights and biases were derived using backpropagation. For details, see: kzfaq.info/get/bejne/f7Rii9Bzza-wpGg.html kzfaq.info/get/bejne/n9-eZd2VprLNmWw.html and kzfaq.info/get/bejne/fbGKorJ5va3HfKM.html
@wendy6792 (2 years ago)
@@statquest Thank you Josh! Will have a good look at them!
@isratara3933 (2 years ago)
Hi, do you have the code for this please?
@statquest (2 years ago)
I'm currently working on a series of videos that show everything you need to know to create neural networks in PyTorch-Lightning.
@starkarabil9260 (3 years ago)
Thanks for this great video. I have a dumb question: how do we know in this example that the bias we add for the Setosa output is ZERO? 7:35
@statquest (3 years ago)
That bias term, 0, is the result of backpropagation. For details, see: kzfaq.info/get/bejne/f7Rii9Bzza-wpGg.html
@ganavimadduri7834 (3 years ago)
Hi. Please make a video on LightGBM as well as on CatBoost.. 🙏🙏
@statquest (3 years ago)
I'll keep that in mind.
@sallu.mandya1995 (3 years ago)
It would be great if you taught SQL, AI, DL, high school maths, and history toooooooooo
@statquest (3 years ago)
Maybe one day I will. :)
@arifproklamasi8120 (3 years ago)
You get high-quality content for free. Bam!
@statquest (3 years ago)
Thanks!
@MrCracou (3 years ago)
Fisher was here. Iris forever!
@statquest (3 years ago)
Bam!
@MrCracou (3 years ago)
@@statquest This is really excellent. Would you allow me to share these videos with my students?
@statquest (3 years ago)
@@MrCracou Of course. Please share the links with your students.
@user-ik8my9kb5h (3 years ago)
So if I get the theory right (in a vague sense), a neural network is just giving the computer a set of functions (the nodes); the computer transforms them and then fuses them to create new functions, one for each category, so that each function has a greater value than the others when the input is from that category. Did I get it right?
@statquest (3 years ago)
Yes, that pretty much sums it up.
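As a rough sketch of that "transform, then fuse into one new function per category" idea (with made-up weights for illustration, not fitted values from the video):

```python
def relu(x):
    # a node's activation function: a simple bent line
    return max(0.0, x)

def class_scores(x):
    # two shared nodes: the "set of functions" the network transforms
    node1 = relu(x - 1.0)
    node2 = relu(-x + 1.0)
    # each category fuses the SAME nodes with its own weights, so its
    # score is largest when the input comes from that category
    class_a = 2.0 * node2 - 1.0 * node1      # big when x is small
    class_b = 2.0 * node1 - 1.0 * node2      # big when x is large
    return class_a, class_b

print(class_scores(0.2))   # class_a wins for a small input
print(class_scores(1.8))   # class_b wins for a large input
```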
@user-ik8my9kb5h (3 years ago)
@@statquest You are officially a life saver.
@minakshimathpal8698 (3 years ago)
Hi Josh... Your videos on neural networks are just awesome, but please help me understand how you (or the NN) decide the scale of the y coordinate. For Setosa it was 0 to 1, and for the other two species it is different again.
@statquest (3 years ago)
What time point, minutes and seconds, are you asking about?
@minakshimathpal8698 (3 years ago)
@@statquest (9.44 to 9.57) and (3.44) and (12.1 to 12.9)
@abirh7161 (1 year ago)
How do the weights get updated for multiple inputs and outputs?
@statquest (1 year ago)
Backpropagation, just like all other neural networks. However, now we have 3 terms (one for each output) that we have to add together for each input value. That said, there are more elaborate ways to do this, and they are described in these videos: kzfaq.info/get/bejne/bKeihtykmtescYk.html and kzfaq.info/get/bejne/rqh1m5lnu5_LiqM.html
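A tiny numeric sketch of that summing (my own toy numbers, not values from the video):

```python
# Toy sketch: with 3 outputs, the derivative of the total squared error
# with respect to an input weight is the SUM of one chain-rule term per
# output. All numbers here are made up for illustration.
x = 0.5                                  # a single input value
w = 0.8                                  # input-to-hidden weight to update
v = [1.0, -2.0, 0.5]                     # hidden-to-output weights
targets = [0.4, -0.7, 0.1]               # desired output values

h = w * x                                # hidden node (identity activation)
outputs = [vk * h for vk in v]           # the 3 predicted outputs
# one chain-rule term per output, added together (0 + 0.2 + 0.05 = 0.25):
grad_w = sum(2 * (o - t) * vk * x for o, t, vk in zip(outputs, targets, v))
w_new = w - 0.1 * grad_w                 # one gradient descent step
print(round(grad_w, 4), round(w_new, 4))
```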
@hoaxuan7074 (3 years ago)
It's well worth studying all the math of the dot product, including the statistical and DSP filtering aspects. If you look into the basement of the NN castle, you are left a little shocked by how weak and crumbly its foundation is, because even top researchers started with NN books that begin with the term "weighted sum" and work forward from there, never going back to look at the details. And as I said before, ReLU is a sad, misunderstood switch.
@statquest (3 years ago)
Noted
@hoaxuan7074 (3 years ago)
@@statquest As an example, if you want to make the output of a dot product a specific value (say 1) for a specific input vector, you can make the angle to the 'weight' vector zero. You may even get error correction in that case (a reduction in variance for noise in (across) the input vector.) If you make the angle close to 90 degrees, then the magnitude of the weight vector has to be large to get 1 out, and the noise will be greatly magnified. The variance equation for linear combinations of random variables applies to the dot product. Understanding such things, you may construct, say, a general associative memory out of the dot product. E.g., vector-to-vector random projection, bipolar (+1,-1) binarization, then the dot product. To train, find the recall error, divide by the number of dimensions, then add or subtract that to each weight to make the error zero, as indicated by the +1 or -1 binarization. If you look into the matter, you will see that you have added a little Gaussian noise to all the prior associations (CLT.) The RP+binarization is a locality-sensitive hash: close inputs only give a few bits different in the output. To understand the system, you could consider the case of a full hash, where the slightest change in the input produces a totally random change.
@mohamadsamaei555 (1 year ago)
Wow. I didn't know there were living creatures in Svalbard other than polar bears (((:
@statquest (1 year ago)
:)