Neural Networks Part 5: ArgMax and SoftMax

149,000 views

StatQuest with Josh Starmer

A day ago

When your neural network has more than one output, it is very common to train with SoftMax and, once trained, swap SoftMax out for ArgMax. This video gives you all the details on these two methods so that you'll know when and why to use ArgMax or SoftMax.
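For reference, here is a minimal sketch of the two functions (my own illustration in Python with NumPy, not code from the video), using the raw output values that come up in the comments below (1.43, -0.4, and 0.23):

import numpy as np

# Raw output values from the neural network, one per class
# (e.g., Setosa, Versicolor, Virginica)
raw = np.array([1.43, -0.40, 0.23])

# ArgMax: 1 for the largest raw value, 0 for everything else.
# Easy to interpret, but its derivative is 0, so it is no good for training.
argmax_out = (raw == raw.max()).astype(float)    # [1., 0., 0.]

# SoftMax: exponentiate, then normalize, so the outputs are between 0 and 1
# and add up to 1 (subtracting the max first is a common numerical-stability trick).
softmax_out = np.exp(raw - raw.max()) / np.exp(raw - raw.max()).sum()

print(argmax_out)    # [1. 0. 0.]
print(softmax_out)   # roughly [0.68, 0.11, 0.21]; small differences from the on-screen values come from rounding the raw scores

In practice you train with the SoftMax outputs (so the derivatives are useful for backpropagation) and then switch to ArgMax, or simply pick the largest SoftMax output, to classify new observations.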
NOTE: This StatQuest assumes that you already understand:
The main ideas behind Neural Networks: • The Essential Main Ide...
How Neural Networks work with multiple inputs and outputs: • Neural Networks Pt. 4:...
For a complete index of all the StatQuest videos, check out:
statquest.org/video-index/
If you'd like to support StatQuest, please consider...
Buying my book, The StatQuest Illustrated Guide to Machine Learning:
PDF - statquest.gumroad.com/l/wvtmc
Paperback - www.amazon.com/dp/B09ZCKR4H6
Kindle eBook - www.amazon.com/dp/B09ZG79HXC
Patreon: / statquest
...or...
KZfaq Membership: / @statquest
...a cool StatQuest t-shirt or sweatshirt:
shop.spreadshirt.com/statques...
...buying one or two of my songs (or go large and get a whole album!)
joshuastarmer.bandcamp.com/
...or just donating to StatQuest!
www.paypal.me/statquest
Lastly, if you want to keep up with me as I research and create new StatQuests, follow me on twitter:
/ joshuastarmer
0:00 Awesome song and introduction
2:02 ArgMax
4:21 SoftMax
6:36 SoftMax properties
9:31 SoftMax general equation
10:20 SoftMax derivatives
#StatQuest #NeuralNetworks #ArgMax #SoftMax

Comments: 229
@statquest
@statquest 2 жыл бұрын
The full Neural Networks playlist, from the basics to deep learning, is here: kzfaq.info/get/bejne/edd_mcxllrLKdKs.html Support StatQuest by buying my book The StatQuest Illustrated Guide to Machine Learning or a Study Guide or Merch!!! statquest.org/statquest-store/
@mrglootie101
@mrglootie101 3 жыл бұрын
Can't wait for "cross entropy clearly explained!" BAM!
@statquest
@statquest 3 жыл бұрын
It's coming soon.
@AlbertHerrandoMoraira
@AlbertHerrandoMoraira 3 жыл бұрын
Your videos are awesome! Thank you for doing them and continue with the great work! 👍
@statquest
@statquest 3 жыл бұрын
Thank you very much! :)
@cara1362
@cara1362 3 жыл бұрын
The video is so impressive especially when you explain why we can't treat the output of softmax as a simple probability. Best tutorial ever for all the explanations in ML!!!
@statquest
@statquest 3 жыл бұрын
Thanks a lot!
@Aman-uk6fw
@Aman-uk6fw 3 жыл бұрын
No words for you man, you are doing a great job, and I have totally fallen in love with your music and the way you teach. Love from India ❤️
@statquest
@statquest 3 жыл бұрын
Thank you very much! :)
@bryan6aero
@bryan6aero 2 жыл бұрын
Thank you! This is by far the clearest explanation of SoftMax I've found. I finally get it!
@statquest
@statquest 2 жыл бұрын
Thank you! :)
@201pulse
@201pulse 3 жыл бұрын
I just want to say that YOU are awesome. Best educational content on the web hands down.
@statquest
@statquest 3 жыл бұрын
Thank you very much! :)
@factsfigures2740
@factsfigures2740 3 жыл бұрын
Sir, the way you teach is exceptionally creative. Thanks to you, my deep learning exam went well.
@statquest
@statquest 3 жыл бұрын
TRIPLE BAM!!! Congratulations!!
@karansaxena96
@karansaxena96 2 жыл бұрын
Your way of explaining things made me subscribe to you. I love to see topics explained in a simple yet funny way. Keep up the great work. And also.... *BAM*
@statquest
@statquest 2 жыл бұрын
Thank you very much! BAM! :)
@iReaperYo
@iReaperYo 2 ай бұрын
nice touch at the end. I didn't realise the use for ArgMax until you said it's nice for classifying new observations
@statquest
@statquest 2 ай бұрын
:)
@ishanbuddhika4317
@ishanbuddhika4317 3 жыл бұрын
Hi Josh, your explanations are super awesome!!! You tear down barriers to statistics!!! They are also super creative :). Many thanks! Please keep it up. Thanks again. BAM!!!
@statquest
@statquest 3 жыл бұрын
Thank you! BAM! :)
@menchenkenner
@menchenkenner 3 жыл бұрын
Hey Josh, needless to say, your videos and tutorials are amazingly fun! Can you please create a video series on Shapley values? They are widely used in practice.
@statquest
@statquest 3 жыл бұрын
Thanks for your support and I'll keep that topic in mind! :)
@aswink112
@aswink112 3 жыл бұрын
Thanks Josh for the crystal clear explanation.
@statquest
@statquest 3 жыл бұрын
Glad it was helpful!
@NicholasHeeralal
@NicholasHeeralal 2 жыл бұрын
Your videos have been extremely helpful, thank you so much!!
@statquest
@statquest 2 жыл бұрын
I'm so glad!
@AndruXa
@AndruXa Жыл бұрын
universities offering AI/ML programs should just hire a program manager to sort and prioritize Josh Starmer's YT videos and organize exams
@statquest
@statquest Жыл бұрын
That would be awesome!
@RomaineGangaram
@RomaineGangaram Ай бұрын
You need a brilliant channel Josh
@patriciachang5079
@patriciachang5079 3 жыл бұрын
A thousand thanks for the explanation! Your explanation is much easier to understand compared to my lecturers'! Could you make some videos about cost functions? :)
@statquest
@statquest 3 жыл бұрын
Thank you! :)
@lucarauchenberger628
@lucarauchenberger628 2 жыл бұрын
this is all so well explained! just wow!
@statquest
@statquest 2 жыл бұрын
:)
@haadialiaqat4590
@haadialiaqat4590 2 жыл бұрын
Excellent video. Thank you for explaining it so well.
@statquest
@statquest 2 жыл бұрын
Glad it was helpful!
@drccccccccc
@drccccccccc 2 жыл бұрын
You deserve a professor title!!! Fantastic
@statquest
@statquest 2 жыл бұрын
Thanks!
@srishylesh2935
@srishylesh2935 Жыл бұрын
Josh. Hands down genius. I'm crying.
@statquest
@statquest Жыл бұрын
Thanks!
@ilkinhamid1072
@ilkinhamid1072 3 жыл бұрын
Thank you for the awesome explanation
@statquest
@statquest 3 жыл бұрын
Glad it was helpful!
@user-se8ld5nn7o
@user-se8ld5nn7o 2 жыл бұрын
Hi! First of all, absolutely amazing video!
@statquest
@statquest 2 жыл бұрын
Hey, thanks!
@coralkuta7804
@coralkuta7804 Жыл бұрын
Just bought your book! It's AMAZING!!! Your videos too :)
@statquest
@statquest Жыл бұрын
Thank you so much! :)
@coralkuta7804
@coralkuta7804 Жыл бұрын
@@statquest I'm spreading word of your existence to all of my student friends ✌️
@palsshin
@palsshin 2 жыл бұрын
amazing as always!!
@statquest
@statquest 2 жыл бұрын
Thank you!
@faycalzaidi6459
@faycalzaidi6459 3 жыл бұрын
bonjour JOSH merci beaucoup pour cette belle explication.
@statquest
@statquest 3 жыл бұрын
Merci! BAM! :)
@jijie133
@jijie133 3 жыл бұрын
predicted probabilities, probabilities calibration. Great video.
@statquest
@statquest 3 жыл бұрын
Thank you! :)
@gurns681
@gurns681 2 жыл бұрын
Fantastic vid!
@statquest
@statquest 2 жыл бұрын
Thanks!
@amiryo8936
@amiryo8936 Жыл бұрын
Lovely video 👌
@statquest
@statquest Жыл бұрын
Thank you!
@junaidbutt3000
@junaidbutt3000 3 жыл бұрын
Great video as always Josh! Just to clarify something about the discussion around the 9:38 timestamp, you're taking i =1 (Setosa) as an example right? When updating all of the parameter values via backpropagation, we would need to compute the softmax derivatives for all i and with respect to all output values - is that correct? So we would also require the derivative for the softmax value Virginica with respect to raw values for setosa, versicolor and virginica and also the derivative for the softmax value Versicolor with respect to raw values for setosa, versicolor and virginica?
@statquest
@statquest 3 жыл бұрын
Yes, that is correct.
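For anyone who wants to see all of those derivatives at once, here is a small sketch (my own illustration in Python/NumPy, not from the video) that builds the full 3x3 matrix of derivatives of each SoftMax output with respect to each raw output value:

import numpy as np

def softmax(raw):
    e = np.exp(raw - raw.max())     # subtract the max for numerical stability
    return e / e.sum()

def softmax_jacobian(raw):
    # Entry (i, j) is the derivative of SoftMax output i with respect to raw value j:
    #   p_i * (1 - p_i) on the diagonal, and -p_i * p_j off the diagonal.
    p = softmax(raw)
    return np.diag(p) - np.outer(p, p)

raw = np.array([1.43, -0.40, 0.23])   # Setosa, Versicolor, Virginica
print(softmax_jacobian(raw))          # a 3x3 matrix; backpropagation uses all nine entries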
@hangchen
@hangchen Жыл бұрын
11:06 The best word of the century.
@statquest
@statquest Жыл бұрын
Haha! :)
@weisionglee360
@weisionglee360 Жыл бұрын
First, thank you for your amazingly well-planned and prepared course videos! They are invaluable! A question about the SoftMax function: it seems to me that, for a single output, Softmax() will always return the value "1", so it can't be used for backpropagation, no?
@statquest
@statquest Жыл бұрын
If you only have a single output from your NN, then you wouldn't use Softmax to begin with. However, when you have more than one output, then the derivative works out. For details, see kzfaq.info/get/bejne/g5tpfaidqrbLeZs.html kzfaq.info/get/bejne/bKeihtykmtescYk.html and kzfaq.info/get/bejne/rqh1m5lnu5_LiqM.html
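(As a quick check of the premise: with a single output x, SoftMax(x) = e^x / e^x = 1 for every x, so its derivative with respect to x is indeed 0, which is another way of seeing why SoftMax is only used when there is more than one output.)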
@BillHaug
@BillHaug Жыл бұрын
I saw the thumbnail and the pirate flag and immediately knew where you were going haha.
@statquest
@statquest Жыл бұрын
bam! :)
@qingfenglin
@qingfenglin 6 ай бұрын
Thanks!
@statquest
@statquest 6 ай бұрын
Thank you so much for supporting StatQuest! TRIPLE BAM!!! :)
@naughtrussel5787
@naughtrussel5787 9 ай бұрын
Cute bear next to formulae is the best way to explain math to me.
@statquest
@statquest 9 ай бұрын
bam!
@AnujFalcon
@AnujFalcon 2 жыл бұрын
Thanks.
@statquest
@statquest 2 жыл бұрын
Any time!
@shivamkumar-rn2ve
@shivamkumar-rn2ve 2 жыл бұрын
BAM, you cleared all my doubts
@statquest
@statquest 2 жыл бұрын
:)
@francismikaelmagueflor1749
@francismikaelmagueflor1749 Жыл бұрын
low key kinda proud that I did the derivative before you even asked where it came from xd
@statquest
@statquest Жыл бұрын
bam! :)
@martynasvenckus423
@martynasvenckus423 2 жыл бұрын
Hi Josh, thanks for great video as always. The only thing I wanted to ask is about argmax function. The way you describe it works implies that argmax returns a vector of 0s (having 1 in the position of maximum value) which is of the same length as the input vector. However, the way argmax works in numpy or pytorch libraries is by returning a scalar value indicating the position instead of a vector. Given this difference, what is the true behaviour of argmax? Thanks
@statquest
@statquest 2 жыл бұрын
In both cases, argmax identifies the element with the largest value.
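A quick illustration of that difference (my own sketch, assuming NumPy): np.argmax returns the index of the largest value, and turning that index into the 0/1 vector shown in the video is just one extra step.

import numpy as np

raw = np.array([1.43, -0.40, 0.23])

idx = np.argmax(raw)            # index of the largest value: 0
one_hot = np.zeros_like(raw)
one_hot[idx] = 1.0              # the 0/1 style of output shown in the video: [1., 0., 0.]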
@travel6142
@travel6142 2 жыл бұрын
Thank you for this video. I understood the logic behind softmax. While backpropagating from the loss to softmax and then from softmax to the raw input, for example for setosa we have 3 derivatives (as you mentioned in the video). After calculating them (the derivatives for setosa with respect to the 3 classes), what do we do? Sum them up? Multiply them? Or ...?
@statquest
@statquest 2 жыл бұрын
See: kzfaq.info/get/bejne/rqh1m5lnu5_LiqM.html
@travel6142
@travel6142 2 жыл бұрын
@@statquest I will check it, thank you!
@abhishekm4996
@abhishekm4996 3 жыл бұрын
Thanks..🥳
@statquest
@statquest 3 жыл бұрын
:)
@breakingBro325
@breakingBro325 8 ай бұрын
Hello Josh, really nice video, could I ask you what software you used to create the video? I want to take notes by using the same thing you used and learn some presentation skills from it.
@statquest
@statquest 8 ай бұрын
I give away all of my secrets in this video: kzfaq.info/get/bejne/mdh8i614kqulmJ8.html
@zhenhuahuang291
@zhenhuahuang291 3 жыл бұрын
Could you do some videos in R or SAS for neural networks using ReLU and Softmax activation functions?
@statquest
@statquest 3 жыл бұрын
I plan on doing one in R soon.
@elemenohpi8510
@elemenohpi8510 5 ай бұрын
Thank you for the video. Quick question: as far as I understood, argmax and softmax are applied to the outputs of the last layer. Couldn't we use ArgMax but train the network with backpropagation on the outputs from before ArgMax is applied?
@statquest
@statquest 5 ай бұрын
Yes, and that is often the case.
@anshulbisht4130
@anshulbisht4130 Жыл бұрын
Hey Josh, Q1) If we are classifying N classes, does our NN give us N-1 decision surfaces? Q2) When we get our query point Xq, do we pass it through all the decision surfaces and get the value predicted by each surface?
@statquest
@statquest Жыл бұрын
A1) See: kzfaq.info/get/bejne/bpl8jLVelq_HmnU.html A2) See A1.
@hunterswartz6389
@hunterswartz6389 2 жыл бұрын
Nice
@statquest
@statquest 2 жыл бұрын
Thanks
@joaoperin8313
@joaoperin8313 Жыл бұрын
We minimize SSR for regression problems with a neural network -> when we have a quantitative response. We use SoftMax, ArgMax and Cross Entropy for classification problems with a neural network -> when we have a qualitative response. I think it's something along those lines...
@statquest
@statquest Жыл бұрын
Yep, that's pretty much the idea.
@dianaayt
@dianaayt 9 ай бұрын
Hi! Does softmax have any limitations? It seems too good to be true, and when that happens it usually isn't good haha. I've seen some mentioned, like being sensitive to outliers, but I don't quite understand why. Is it when the raw numbers contain an outlier?
@statquest
@statquest 9 ай бұрын
What do you mean by "too good to be true"? What seems too good to be true about the softmax function?
@alonsomartinez9588
@alonsomartinez9588 Жыл бұрын
It would be good to remind people what 'e' is in this vid, as well as what its value is! People could confuse it with the error of the network or with entropy?
@statquest
@statquest Жыл бұрын
Ok. Thanks for the tip!
@fndpires
@fndpires 2 жыл бұрын
Come on people, buy his songs, subscribe to the channel, thumbs UP, give him some money! Look what he's doing. HUGE DAMN!
@statquest
@statquest 2 жыл бұрын
Thanks for the support!!! :)
@janeli2487
@janeli2487 Жыл бұрын
Hey @StatQuest, I am a bit confused about the ArgMax function and why its derivative is 0. The argmax function that I used in Python returns the index of the max value, which I would assume is different from the ArgMax function you mentioned here. What is the explicit form of the ArgMax function in your video?
@statquest
@statquest Жыл бұрын
Regardless of whether your function sets the largest output to 1 and everything else to 0, or just returns the index of the largest output and ignores everything else, the output is constant until the threshold is met, then switches at that point (it is discontinuous) and is then constant again. Thus, either way, the derivative is 0.
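A tiny numerical sketch of that point (my own example, not from the video): nudging a raw value does not change the ArgMax output at all until the ranking flips, so the slope is 0 everywhere it is defined.

import numpy as np

raw = np.array([1.43, -0.40, 0.23])
for bump in [0.0, 0.5, 1.0, 1.3]:              # keep increasing Virginica's raw value
    nudged = raw + np.array([0.0, 0.0, bump])
    print(bump, np.argmax(nudged))             # stays 0, then jumps to 2 once 0.23 + bump > 1.43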
@beshosamir8978
@beshosamir8978 Жыл бұрын
Hi Josh, I have some doubts here. Why did we need to use softmax at all in training? Why didn't we continue to use SSR, as in the main backpropagation idea? Is there a problem with SSR that forced us to transform the output into something else to work with?
@statquest
@statquest Жыл бұрын
SoftMax allows us to use Cross Entropy as a loss function, which I believe makes training easier when there are multiple classifications.
@jennycotan7080
@jennycotan7080 6 ай бұрын
That pirate joke! Moving on in the fields of Maths...
@statquest
@statquest 6 ай бұрын
:)
@Kagmajn
@Kagmajn Жыл бұрын
nice
@statquest
@statquest Жыл бұрын
Thanks!
@tianchengsun3767
@tianchengsun3767 2 жыл бұрын
It looks like softmax is very similar to logistic regression? Correct me if I am wrong. Could you give a brief explanation? Thank you so much
@statquest
@statquest 2 жыл бұрын
It's quite different. Logistic regression doesn't just take a bunch of random values and convert them into "probabilities". For details, see: kzfaq.info/sun/PLblh5JKOoLUKxzEP5HA2d-Li7IJkHfXSe
@rachelcyr4306
@rachelcyr4306 3 жыл бұрын
Do you have anything on soft max logistic regression????
@statquest
@statquest 3 жыл бұрын
Not yet.
@deniz.7200
@deniz.7200 18 сағат бұрын
Bro, just create a course out of these videos with some additional content (like a bootcamp). I think that would sell very well :)
@statquest
@statquest 17 сағат бұрын
Thanks! I'm currently putting it all (plus a few bonus things) in a book right now and I hope to have it out by the end of the year.
@pranjalpatil9659
@pranjalpatil9659 2 жыл бұрын
I wish Josh taught me all the maths I've ever learned
@statquest
@statquest 2 жыл бұрын
:)
@bingochipspass08
@bingochipspass08 2 жыл бұрын
Not all heroes wear capes!
@statquest
@statquest 2 жыл бұрын
bam!
@EEBADUGANIVANJARIAKANKSH
@EEBADUGANIVANJARIAKANKSH 3 жыл бұрын
let say i have the chance to increase ur subscriber, I will make it to 1M (small BAM!), {10^0} no no I will change it to 10M (BAM!) {10^1} but I guess ur channel should have at least 100M subs (Double BAM) {10^2} 0, 1, 2 denotes the Standard of BAM! Jokes apart, I really think this is one of the most useful channel I have ever seen, I like the way he structures his videos for explaining the concept. Sometimes even my Professors look at these videos for reference. That's how good the channel is!!!!!
@statquest
@statquest 3 жыл бұрын
Thank you! :)
@averagegamer9513
@averagegamer9513 Жыл бұрын
I have a question. Why is the softmax function necessary? It seems like you could directly calculate probabilities between 0 and 1 summing to 1 without the exponential function, so why do we use it?
@statquest
@statquest Жыл бұрын
Sure, there are other ways you could solve this problem. However, the SoftMax function has a derivative that is relatively easy to compute, and that makes it relatively easy to work with in terms of using Backpropagation.
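For anyone curious, that derivative is a standard result; writing $p_i$ for the SoftMax output of class $i$ and $x_j$ for the raw output values:

$$\frac{\partial p_i}{\partial x_j} = \begin{cases} p_i\,(1 - p_i) & \text{if } i = j,\\[4pt] -\,p_i\,p_j & \text{if } i \neq j.\end{cases}$$

Both cases only involve the $p$ values already computed in the forward pass, which is part of what makes backpropagation with SoftMax (and Cross Entropy) so convenient.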
@averagegamer9513
@averagegamer9513 Жыл бұрын
@@statquest Thanks for the explanation, and great video!
@ritwikpm
@ritwikpm 8 ай бұрын
We minimise cross entropy (= - log likelihood) to fit both Neural Networks and Logistic Regression. Logistic regression can also theoretically converge to different parameter estimates based on initial weights - just like neural networks. But we still consider their output to be a representation of probability - specifically because they are fit to maximise log likelihood. Why can't similar logic be applied to Neural Network classification. The parameter estimates might vary, but as long as we are maximising log likelihood (and minimising the most common loss cross entropy), are we not predicting probabilities...?
@statquest
@statquest 8 ай бұрын
To be honest, I don't really know. But if I had to guess, it might have something to do with the fact that Logistic Regression fits a relatively simple and easy to understand shape to the data that doesn't allow non-linearities in the sense that the predicted probabilities don't start low, then go up and then go low again. In contrast, neural networks have no limit on the shape they can fit to the data and allow all kinds of non-linearities.
@mountaindrew_
@mountaindrew_ Жыл бұрын
Is SSR used mainly for single output neural networks?
@statquest
@statquest Жыл бұрын
it depends on what you are predicting.
@porkypig7170
@porkypig7170 Жыл бұрын
I'm getting 0.11 (rounded), not 0.10, as the softmax value for versicolor using this calculation: e^-0.4 / (e^1.43 + e^-0.4 + e^0.23). Is it correct? Just double checking to make sure I'm making the right calculations
@statquest
@statquest Жыл бұрын
0.11 is correct.
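For anyone double checking the arithmetic: e^1.43 ≈ 4.18, e^-0.4 ≈ 0.67, and e^0.23 ≈ 1.26, so the denominator is ≈ 6.11 and the Versicolor value is 0.67 / 6.11 ≈ 0.11; the 0.10 shown in the video presumably just reflects rounding of the on-screen raw values.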
@MADaniel717
@MADaniel717 3 жыл бұрын
How do I tune the other weights and biases altogether?
@statquest
@statquest 3 жыл бұрын
Like this: kzfaq.info/get/bejne/f7Rii9Bzza-wpGg.html kzfaq.info/get/bejne/n9-eZd2VprLNmWw.html kzfaq.info/get/bejne/fbGKorJ5va3HfKM.html kzfaq.info/get/bejne/rqh1m5lnu5_LiqM.html
@AdrianDolinay
@AdrianDolinay 2 жыл бұрын
Great thumbnail lol
@statquest
@statquest 2 жыл бұрын
:)
@Anujkumar-my1wi
@Anujkumar-my1wi 3 жыл бұрын
I know that a feedforward neural net with 1 hidden layer is a universal approximator, but can you tell me why we use a nonlinear activation function in the 2nd hidden layer of a neural net with 2 hidden layers? The neurons in the 1st hidden layer have learned nonlinear functions with respect to the inputs, and the 2nd hidden layer is just doing a linear combination; a linear combination of nonlinear functions of the inputs is still a nonlinear function, so why do we use an activation function in the 2nd layer of a 2-layer neural net?
@statquest
@statquest 3 жыл бұрын
I think the more activation functions we have, the more flexibility we have in the model.
@Anujkumar-my1wi
@Anujkumar-my1wi 3 жыл бұрын
@@statquest Meaning we could use the second layer just for a linear combination of the nonlinear functions (learned by the previous layer's neurons) in order to learn a more complex nonlinear function, but this won't provide more flexibility than if we had used an activation function with the linear combination.
@brahimmatougui1195
@brahimmatougui1195 10 ай бұрын
But sometimes we need to give probabilities along with the model prediction, especially for multiclass prediction. If we cannot trust the probabilities (8:11) given by the model, what should we do? In other words, if I want to assign probabilities to each class provided in the output, how would I go about doing it?
@statquest
@statquest 10 ай бұрын
These "probabilities" follow the definition of "probability" (they are between 0 and 1 and add up to 1) - so if that is good enough, then you are good to go. However, if you want to use them in a setting where you can interpret them as "given these input values, 95% of the time the species is X", then you should use a different model. Possibly logistic regression would be a better fit.
@brahimmatougui1195
@brahimmatougui1195 10 ай бұрын
@@statquest Thank you for your prompt answer
@Anujkumar-my1wi
@Anujkumar-my1wi 3 жыл бұрын
I want to know, in pure mathematics, do neurons learn functions with certain superpositions, widths, heights, and slopes (controlled by the neurons through weights and biases) such that when we combine them we get an approximation of the function we're trying to approximate?
@statquest
@statquest 3 жыл бұрын
Neural Networks are considered "universal function approximators".
@Anujkumar-my1wi
@Anujkumar-my1wi 3 жыл бұрын
@@statquest I mean, do they approximate a function by learning certain simpler functions with certain superpositions, slopes, heights, and widths (controlled by weights and biases) so that when we combine them we get an approximation of the function we're trying to approximate?
@statquest
@statquest 3 жыл бұрын
@@Anujkumar-my1wi To be honest, I'm probably the worst person to ask about these sorts of things. I know that, through weights and biases, we create a wide variety of non-linear functions that are added together to create a complicated function that approximates the training data. However, I'm not sure that's what you're looking for.
@Anujkumar-my1wi
@Anujkumar-my1wi 3 жыл бұрын
@@statquest No, I just wanted to ask whether that's the way a neural net works mathematically.
@statquest
@statquest 3 жыл бұрын
@@Anujkumar-my1wi I'm still a little confused, because mathematically, Neural Networks do exactly what I describe in these videos. I'm not dumbing down the math, this is the real deal, so what you see here is what Neural Networks do mathematically.
@tuananhvt1997
@tuananhvt1997 Жыл бұрын
>Setosa, Versicolor, Virginica I notice that reference 🤔
@statquest
@statquest Жыл бұрын
I'm not sure I understand what you are getting at.
@csmatyi
@csmatyi 2 жыл бұрын
what happens when you run the NN with softmax and 2 outputs have the same value?
@statquest
@statquest 2 жыл бұрын
Then they'll have the same softmax output.
@yourfutureself4327
@yourfutureself4327 Жыл бұрын
💚
@statquest
@statquest Жыл бұрын
:)
@julescesar4779
@julescesar4779 2 жыл бұрын
@statquest
@statquest 2 жыл бұрын
double bam! :)
@shubhamtalks9718
@shubhamtalks9718 3 жыл бұрын
Why not just normalize the raw output values? What is the benefit of doing exponentiation first and then normalizing?
@statquest
@statquest 3 жыл бұрын
I believe that the exponentiation ensures that the SoftMax function will be continuous for all input values.
@shubhamtalks9718
@shubhamtalks9718 3 жыл бұрын
@@statquest Will it be discontinuous if we just normalize the raw output values?
@statquest
@statquest 3 жыл бұрын
@@shubhamtalks9718 If two of the 3 outputs are 0, then we'll get ArgMax, and that's no good.
@shubhamtalks9718
@shubhamtalks9718 3 жыл бұрын
@@statquest BAM!!! Got it. Thanks.
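A small sketch of that point (my own example in Python/NumPy, not the one from the video): normalizing the raw values directly breaks down when they are zero or negative, whereas exponentiating first keeps every term strictly positive before normalizing.

import numpy as np

raw = np.array([2.0, 0.0, -1.0])

# Plain normalization: negative raw values give negative "probabilities",
# and if the raw values happen to sum to 0 we divide by 0.
plain = raw / raw.sum()                      # [ 2.,  0., -1.]  -- not usable as probabilities

# SoftMax: exponentiation makes every term positive before normalizing.
soft = np.exp(raw) / np.exp(raw).sum()       # roughly [0.84, 0.11, 0.04] -- between 0 and 1, sums to 1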
@andrewdunbar828
@andrewdunbar828 Жыл бұрын
Does the output range depend on the activation function? Looks like ReLU but I think it can't happen with sigmoids.
@statquest
@statquest Жыл бұрын
The output range of what?
@andrewdunbar828
@andrewdunbar828 Жыл бұрын
@@statquest The output nodes. Right at the start around 1:40
@statquest
@statquest Жыл бұрын
@@andrewdunbar828 Because the activation functions are in the middle, and after them we multiply those values by weights and add biases that, in theory, could be anything, we could definitely end up with numbers > 1 and < 0 even if the activation functions were sigmoids. For example, if the last bias term before the output for setosa was +100, then we could easily end up with output values > 100.
@andrewdunbar828
@andrewdunbar828 Жыл бұрын
@@statquest Hmm I have much to learn (-:
@Itachi-uchihaeterno
@Itachi-uchihaeterno 5 ай бұрын
More videos , Autoencoders and GANs
@statquest
@statquest 5 ай бұрын
I'll keep those topics in mind.
@CreativePuppyYT
@CreativePuppyYT 3 жыл бұрын
You forgot to add this video to the machine learning playlist
@statquest
@statquest 3 жыл бұрын
Thanks! I'm still in the middle of the neural network series of videos. Hopefully when they are done (in a few weeks) I'll get the playlists organized properly.
@alrzhr
@alrzhr 11 ай бұрын
This guy is different :)))
@statquest
@statquest 11 ай бұрын
:)
@YuriPedan
@YuriPedan 3 жыл бұрын
Somehow "Part 4 Multiple inputs and outputs" video is not available for me :(
@statquest
@statquest 3 жыл бұрын
Thanks for pointing that out. I've fixed the link: kzfaq.info/get/bejne/bpl8jLVelq_HmnU.html
@YuriPedan
@YuriPedan 3 жыл бұрын
@@statquest Thank you very much!
@luciferpyro4057
@luciferpyro4057 3 жыл бұрын
What does e stand for in the softmax equation? Did I miss something? Is "e" supposed to represent Euler's number = 2.7182818284590452353602874713527... ?
@statquest
@statquest 3 жыл бұрын
'e' is Euler's number. 'e', and the natural log (log base 'e'), are used throughout machine learning (and statistics) because their derivatives are so easy to work with.
@luciferpyro4057
@luciferpyro4057 3 жыл бұрын
@@statquest Thanks
@Rictoo
@Rictoo 5 ай бұрын
I have a question! At 3:35 you say "ArgMax will output 1 for any other value greater than 0.23" - but shouldn't it be "greater than 1.43", because ArgMax points to the value that is the highest in the set of outputs? Other related question: And then is the intuition that if we know the true value of Virginica (e.g., if the training sample was truly Virginica), then if the ArgMax is 0 for Virginica on that training example (because we predicted it wrong), then we essentially "Wouldn't know how to get to the right answer", because we have no slope pointing towards the right answer? We're just told "You're wrong. Not telling you _how_ wrong, just wrong." which isn't helpful for learning.
@statquest
@statquest 5 ай бұрын
At 3:34 I say "> 0.23", because 0.23 is the second largest number, and any number larger than it, will be the one selected by argmax. If, instead, I had said "> 1.43", then nothing would be selected, since 1.43 is the largest number and nothing is larger. And your intuition for the second part is correct.
@Rictoo
@Rictoo 4 ай бұрын
Ohhh, thanks. Now I understand that the Argmax function you're plotting there is the Argmax of the Setosa class, not Versicolor (I think?). I was initially under the impression it was for the Versicolor class.@@statquest
@srewashilahiri2567
@srewashilahiri2567 2 жыл бұрын
If we start with different values for weights and biases then why will the optimum values be different if we have a global minimum for each through gradient descent? What am I missing?
@statquest
@statquest 2 жыл бұрын
There are lots of local minimums that we can get stuck in, and there may be several that are almost as good as the global minimum.
@srewashilahiri2567
@srewashilahiri2567 2 жыл бұрын
@@statquest Did some reading and got your point completely....thanks for the videos...not sure if learning ML could get any easier or better!
@statquest
@statquest 2 жыл бұрын
@@srewashilahiri2567 bam!
@yashikajain5997
@yashikajain5997 2 жыл бұрын
@@statquest Whether we get stuck in a local minimum would depend on the cost function? If we use cross-entropy as the loss function, then because it is a convex function, it will definitely converge to the global minimum. And in this case, can we trust the accuracy of these 'probabilities'? This is what I am thinking; please correct me if I am wrong. Thank you
@statquest
@statquest 2 жыл бұрын
@@yashikajain5997 Unfortunately it's not that simple. Cross-Entropy, like SSR, is convex in very simple situations, but the entire Neural Network is non-linear with respect to the parameters so regardless of the loss function, we can end up with a strange shape that has local minima that we can get stuck in.
@gummybear8883
@gummybear8883 2 жыл бұрын
Does anybody know what the equivalent of argmax is in TensorFlow's activation arguments? They only have softmax in there.
@statquest
@statquest 2 жыл бұрын
There's probably a base "max" function in Python or numpy you could use.
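For what it's worth, a minimal sketch of the usual pattern (my own illustration, assuming NumPy and TensorFlow are available): argmax is not normally used as a layer activation; instead the model outputs softmax values (or raw logits), and argmax is applied to the predictions afterwards.

import numpy as np
# import tensorflow as tf    # if you are working in TensorFlow

probs = np.array([[0.68, 0.11, 0.21]])        # softmax output for one sample
predicted_class = np.argmax(probs, axis=-1)   # array([0]), i.e. the first class
# In TensorFlow, tf.argmax(probs, axis=-1) or model.predict(x).argmax(axis=-1) does the same job.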
@gummybear8883
@gummybear8883 2 жыл бұрын
@@statquest Thanks for the suggestion Josh. I bought your new sketch book and I think it is very clever. I thought it would have been much better, if the book cover was hard bound. Overall, thank you for making these videos.
@statquest
@statquest 2 жыл бұрын
@@gummybear8883 Thanks! I would have loved to have made a hardback edition, but I'm self-publishing and it was not an option.
@Janeilliams
@Janeilliams 2 жыл бұрын
Can you show or share the Python implementation?
@statquest
@statquest 2 жыл бұрын
I'm working on one.
@howardkennedy4540
@howardkennedy4540 3 жыл бұрын
Why is the versicolor softmax value +0.10 vs -0.10? The math indicates a negative value.
@statquest
@statquest 3 жыл бұрын
SoftMax values are always positive and between 0 and 1. Can you explain how you got a negative value?
@howardkennedy4540
@howardkennedy4540 3 жыл бұрын
@@statquest I misunderstood your notation and missed your comment on e raised to the power. My apologies.
@ayushupadhyay9501
@ayushupadhyay9501 2 жыл бұрын
Bam bam bam
@statquest
@statquest 2 жыл бұрын
:)
@nelsonmcnamara
@nelsonmcnamara 5 ай бұрын
Hello comment section. Would anyone know, or could anyone point me in the right direction, if I actually want the probability (no quotes) instead of the "probability"? Imagine I am predicting the probability of the Red Sox winning or Kim winning the presidential election; how would I approach that?
@statquest
@statquest 5 ай бұрын
If you want real probabilities, then you don't want to use a neural network. Instead, consider using something like linear regression kzfaq.info/get/bejne/pNFidrR6udPDlaM.html or logistic regression kzfaq.info/get/bejne/r6-JfrVl2M3eeWw.html
@austinoquinn815
@austinoquinn815 Жыл бұрын
Why do we bother applying either of these? Can't we just train with the raw outputs rather than using softmax, and just take the highest-valued node as the answer rather than using argmax?
@statquest
@statquest Жыл бұрын
That's a valid question, and the answer has to do with how softmax feeds into Cross Entropy, and cross entropy is easier to train with than the raw output values. For details on all of this, see: kzfaq.info/get/bejne/bKeihtykmtescYk.html
@felipe_marra
@felipe_marra 7 ай бұрын
up
@statquest
@statquest 7 ай бұрын
:)
@phoenixado9708
@phoenixado9708 2 жыл бұрын
So where's hardmax and hardplus
@statquest
@statquest 2 жыл бұрын
Great questions! ;)
@Xayuap
@Xayuap Жыл бұрын
¡ B A M ! 😳
@statquest
@statquest Жыл бұрын
Gracias!
@BlackHermit
@BlackHermit 2 жыл бұрын
Arrrrrrrg! .)
@statquest
@statquest 2 жыл бұрын
bam!
@alternativepotato
@alternativepotato 3 жыл бұрын
heh, setosas value after softmax is 0.69
@statquest
@statquest 3 жыл бұрын
:)
@Alchemist10241
@Alchemist10241 2 жыл бұрын
6:33 This teddy bear eats raw outputs, digests them using Vitamin e (not E) and then sh*ts them between flag zero and flag one. 😁
@statquest
@statquest 2 жыл бұрын
dang
@Anonymous-tm7jp
@Anonymous-tm7jp 10 ай бұрын
AAAARRRRRGGGG!!! mAx😂😂
@statquest
@statquest 10 ай бұрын
:)
@charansahitlenka6446
@charansahitlenka6446 Жыл бұрын
at 6:51 softmax takes 1.43 and gives out 0.69, heavy sus
@statquest
@statquest Жыл бұрын
?
@terjeoseberg990
@terjeoseberg990 7 ай бұрын
Nobody likes derivatives that are totally lame. Especially gradient descent.
@statquest
@statquest 7 ай бұрын
bam! :)
@jijie133
@jijie133 3 жыл бұрын
toilet paper. so funny.
@statquest
@statquest 3 жыл бұрын
:)
@allyourcode
@allyourcode 3 жыл бұрын
ArgMax and SoftMax seem rather pointless since you can already tell which classification the NN is predicting from its raw output; just look for the greatest output. SoftMax is just going to lull people into the false sense that the outputs are probabilities. In reality, there is nothing super special about its choice of the exp function to force everything to be positive (plus a normalization factor to force everything to add up to 1). Any (differentiable) function f where f(x) >= 0 would have worked just as well as exp.
@allyourcode
@allyourcode 3 жыл бұрын
Ah. The next video explains.
@statquest
@statquest 3 жыл бұрын
yep
@Salmanul_
@Salmanul_ 7 ай бұрын
Thanks!
@statquest
@statquest 7 ай бұрын
Bam! :)