As usual, Josh provides the best explanations available. I've done a bunch of online courses - and I always end up referring back to StatQuest to truly understand. I purchased the StatQuest Illustrated PDF and use it all the time. Highly recommended for a) learning these concepts and b) having an index of all of the key topics to know for work and interviews (at end of PDF).
@statquest · 2 hours ago
Thank you very much! :)
@peteradah1045 · 5 hours ago
God bless you, KZfaqrs... really, I can't explain how much I've learned.
@statquest · 2 hours ago
Thanks!
@faridfouad9638 · 6 hours ago
I have to say, thank you so much, infinite BAM!!
@statquest · 2 hours ago
You're welcome!
@gabrielcrone6753 · 8 hours ago
Hi, Josh. Excellent video! So helpful and clear! 😄 I am using a new version of randomForest, and I cannot seem to locate the err.rate vector within my model object. When I write "model$err.rate", it returns nothing. Do you know if there are equivalent objects inside the model now for extracting the error rate info? Thanks!
@statquest · 2 hours ago
What is the exact version you are using? 4.7-1.1 has err.rate. You can see it in the documentation here: cran.r-project.org/web/packages/randomForest/randomForest.pdf
@supahotfire8886 · 15 hours ago
So there's a 6% correlation between sniffing rocks and a mouse's weight? Lol
@statquest · 9 hours ago
:)
@Nodgelol1 · 18 hours ago
Great explanation with outstanding graphical representations and a very funny presentation style ;) Thank you very much!
@statquest · 16 hours ago
Thanks!
@mousquetaire86 · 19 hours ago
The book "Weapons of Math Destruction" by Cathy O'Neil is worth reading to understand how data analytics can be misused, with malign consequences.
@statquest · 16 hours ago
noted!
@mardavpatel1951 · 19 hours ago
TRIPLE BAM ❤❤
@statquest · 16 hours ago
:)
@zongyili4629 · 22 hours ago
Thank you for the informative video. I have a question about estimating parameters in statistical analysis. Could you explain the difference between using Maximum Likelihood Estimation (MLE) to estimate the parameters of a normal distribution, where the variance formula has n in the denominator, and calculating an unbiased estimate of the population variance from a sample, where n−1 is used in the denominator? Both methods use measurements to estimate the parameters of the population from which the measurements are derived, yet they approach the calculation of variance differently. This can be confusing. Could you clarify?
@statquest · 16 hours ago
I have a video about why we use n-1 when we estimate variance from a small set of data here: kzfaq.info/get/bejne/qa6Cdcpnp86vmn0.html
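(Editor's note: the linked video covers the intuition; as a purely illustrative sketch, not from the video, the bias of the n-denominator estimator can be shown exhaustively with a tiny made-up population in Python:)

```python
from itertools import product

# A tiny "population" whose mean and variance we know exactly.
population = [1, 2, 3]
mu = sum(population) / len(population)
true_var = sum((x - mu) ** 2 for x in population) / len(population)  # 2/3

def var_n(sample):
    """MLE-style estimator: divide the sum of squares by n."""
    m = sum(sample) / len(sample)
    return sum((x - m) ** 2 for x in sample) / len(sample)

def var_n_minus_1(sample):
    """Unbiased estimator: divide the sum of squares by n - 1."""
    m = sum(sample) / len(sample)
    return sum((x - m) ** 2 for x in sample) / (len(sample) - 1)

# Average each estimator over EVERY possible sample of size 2
# (drawn with replacement) - i.e. compute its expected value exactly.
samples = list(product(population, repeat=2))
avg_n = sum(var_n(s) for s in samples) / len(samples)
avg_n1 = sum(var_n_minus_1(s) for s in samples) / len(samples)

print(true_var)  # 0.666... the value we are trying to estimate
print(avg_n)     # 0.333... dividing by n underestimates the variance
print(avg_n1)    # 0.666... dividing by n - 1 is correct on average
```

Both estimators use the same measurements; dividing by n is what maximizes the likelihood, but on average it comes out too small, and dividing by n − 1 exactly undoes that shrinkage.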
@user-ry5zu1wo4e · 22 hours ago
What an amazing explanation. Thank you so much!
@statquest · 16 hours ago
Thanks!
@martinotanasini3716 · 23 hours ago
Thanks for the great video! Do I understand correctly that the sample size determined from the power analysis depends on the means and variances estimated from the first experiment? How do I deal with the fact that this first experiment could result in estimates of the means and variances that are far from the population's?
@statquest · 16 hours ago
You have to start with some general sense of the variation in the data. It doesn't have to be perfect, but it's the best you can do.
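(Editor's note: to make that concrete, here is a hypothetical sketch, not from the video, of the standard normal-approximation formula for the sample size per group in a two-sample comparison of means, n = 2(z_{1-α/2} + z_{1-β})² σ² / δ². It shows how strongly the answer depends on the guessed standard deviation:)

```python
import math

# Standard normal quantiles for the usual choices:
# alpha = 0.05 (two-sided) and power = 0.80.
Z_975 = 1.959964   # z for 1 - alpha/2
Z_80 = 0.841621    # z for the desired power

def n_per_group(delta, sigma, z_alpha=Z_975, z_power=Z_80):
    """Normal-approximation sample size per group:
    n = 2 * (z_alpha + z_power)^2 * sigma^2 / delta^2, rounded up."""
    return math.ceil(2 * (z_alpha + z_power) ** 2 * sigma ** 2 / delta ** 2)

# If the pilot experiment suggests sigma = 2 and we care about a
# difference in means of delta = 1 (an effect size of 0.5)...
print(n_per_group(delta=1, sigma=2))  # -> 63 per group

# ...but if the pilot underestimated the variation and sigma is
# really 3, the same study needs far more subjects:
print(n_per_group(delta=1, sigma=3))  # -> 142 per group
```

This is why a rough but honest guess at the variation matters: the required sample size grows with the square of the standard deviation.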
@legendary_gameron6760 · 1 day ago
Can you or anyone help me: how can I use this method to simultaneously adjust all the parameters of a neural network with 2 hidden layers? Also, do I need to calculate the values manually and then plug them into my network? You may need a video to explain 😅😅...
@statquest · 17 hours ago
I've got some code examples for how to use cross entropy in PyTorch here: lightning.ai/lightning-ai/studios/statquest-build-and-train-a-neural-network-with-multiple-inputs-and-outputs
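(Editor's note: for anyone who wants the arithmetic rather than the framework, cross entropy is just the negative log of the softmax probability assigned to the correct class. A small from-scratch sketch with made-up scores; PyTorch's nn.CrossEntropyLoss applies the same computation to raw network outputs:)

```python
import math

def softmax(scores):
    """Convert raw network outputs into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def cross_entropy(scores, target_index):
    """Negative log of the probability assigned to the true class."""
    probs = softmax(scores)
    return -math.log(probs[target_index])

raw_scores = [1.0, 2.0, 3.0]  # made-up outputs for a 3-class problem

# Small loss when the correct class got the biggest raw score...
print(round(cross_entropy(raw_scores, target_index=2), 4))  # -> 0.4076
# ...big loss when the correct class got the smallest raw score.
print(round(cross_entropy(raw_scores, target_index=0), 4))  # -> 2.4076
```

During training, backpropagation pushes this loss downward for every parameter at once, which is what "adjusting all the parameters simultaneously" means in practice.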
@RahulVerma-Jordan · 1 day ago
This should also reach the AI/ML scientists behind these algorithms.
@statquest · 17 hours ago
:)
@fisicaparalavida108 · 1 day ago
Those graphs are excellent. So much work must have gone into them. Thank you so much!
@statquest · 17 hours ago
Thanks! Lots of work goes into these.
@esmeluo6574 · 1 day ago
This is such a great video! I do have one question: it looks like the encoder calculates self-attention for all words in the input (regardless of ordering), but the decoder only computes self-attention for the words that appeared previously. Is this the correct understanding? It also looks like the work for the encoder can be parallelized, but the decoder is sequential (since the second token uses the first token as an input). Is this the correct understanding?
@statquest · 17 hours ago
Those are the main ideas. The encoder can see all of the input at the same time, and the decoder can only see the output as it is generated. During inference, the encoder can process the input in parallel and the decoder does its work sequentially. However, during training (which is the hard, time-consuming part), the decoder can do its work in parallel using something called "masked self-attention", which I describe in the "decoder-only transformer" video: kzfaq.info/get/bejne/mLdlddKg0b6dcZs.html
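(Editor's note: a toy illustration of the masking trick in that reply, not code from the video: during training, the decoder computes attention scores over the whole output sequence at once, but sets every score for a "future" token to negative infinity before the softmax, so each position can only attend to itself and earlier positions:)

```python
import math

def masked_attention_weights(scores):
    """Apply a causal mask to a square matrix of raw attention scores,
    then softmax each row, so row i only attends to positions <= i."""
    n = len(scores)
    weights = []
    for i in range(n):
        # Future positions (j > i) get -infinity before the softmax.
        masked = [scores[i][j] if j <= i else -math.inf for j in range(n)]
        exps = [math.exp(s) for s in masked]  # exp(-inf) == 0.0
        total = sum(exps)
        weights.append([e / total for e in exps])
    return weights

# Made-up raw scores for a 3-token output sequence.
scores = [[1.0, 2.0, 3.0],
          [1.0, 2.0, 3.0],
          [1.0, 2.0, 3.0]]

for row in masked_attention_weights(scores):
    print([round(w, 3) for w in row])
# Row 0 attends only to token 0: [1.0, 0.0, 0.0]
# Row 1 splits its attention between tokens 0 and 1
# Row 2 can attend to all three tokens
```

Because the mask, rather than sequential generation, enforces the "no peeking at the future" rule, all the rows can be computed in one parallel pass during training.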
@RahulVerma-Jordan · 1 day ago
If I had watched your videos during college, my career trajectory would be totally different. BIG BAM!!!!
@statquest · 17 hours ago
Thanks!
@larissacury7714 · 1 day ago
❤
@statquest · 17 hours ago
:)
@birdost8448 · 1 day ago
You are the best!!!!! ❤🎉😊
@statquest · 17 hours ago
Thanks! :)
@alexandradragonstone6015 · 1 day ago
When calculating the average, why is there n-1 in the denominator?
@statquest · 1 day ago
For PCA, this is just convention. However, it has roots in how variance is estimated in general, which I try to explain here: kzfaq.info/get/bejne/qa6Cdcpnp86vmn0.html
@jacobamarjan2325 · 1 day ago
I'd been struggling to understand neural networks until I stumbled upon this video. This is the best explanation with the best presentation (I fully agree on using easy-to-understand visualizations instead of fancy ones). I don't usually write comments, but I feel the need to thank you for this. Thank you so much!
@statquest · 1 day ago
Thank you very much! :)
@user-wf2co1fq6s · 1 day ago
I found your channel today through my Dad's recommendation.
@statquest · 1 day ago
bam!
@mortyk182 · 1 day ago
Woah, those were some amazing teaching skills, sir. You're totally gifted.
@statquest · 1 day ago
Thanks! 😃
@RoyalYoutube_PRO · 1 day ago
Has the mystery of 'n-1' been resolved yet?
@statquest · 1 day ago
Not yet. The best I can do is give you a link: online.stat.psu.edu/stat415/lesson/1/1.3
@ukkyukang · 1 day ago
Thanks!
@statquest · 1 day ago
bam! :)
@user-rf8jf1ot3t · 1 day ago
I love this video. Simple and clear.
@statquest · 1 day ago
Thanks!
@dy8576 · 1 day ago
I keep coming back to this video every time I forget about the inner workings, and it's always easy to regather everything. What content!
@statquest · 1 day ago
Glad it's helpful! :)
@PuneetMehra · 1 day ago
This video specifically was too difficult for me to understand. :(
@statquest · 1 day ago
Sorry to hear that. It might be helpful if you watched this one first: kzfaq.info/get/bejne/fqpzgriJqpmsfYE.html
@sidasmad2389 · 1 day ago
You've an amazing way of breaking things down, and I can't believe how entertaining you made it.
@statquest · 1 day ago
Thank you very much!
@sidasmad2389 · 1 day ago
Just found your channel a few minutes ago, and boy am I already questing on and on. Thank you for the amazing lessons!
@statquest · 1 day ago
Bam! :)
@sandeeppatra4577 · 1 day ago
Hey, the lecture was great; I completely understood the concept of ChIP-Seq. I have one doubt: let's say the DNA-binding protein is unknown, for example, if it's a novel transcription factor and we don't have much information about it. How can we raise antibodies against that protein if it's completely new, and how can we subsequently identify the DNA sequence?
@statquest · 1 day ago
There may be methods that can just determine protein-bound regions, in a general sense.
@ertugruledits3349 · 1 day ago
BAM! :) You are our exam saviour 😅
@statquest · 1 day ago
Good luck!
@koustubhmuktibodh4901 · 1 day ago
Great explanation.
@statquest · 1 day ago
Thanks!
@enum4794 · 1 day ago
I wonder how the LSTM would work with 20 units in this case; I hope you can explain it to me, thank you. BTW, thanks for the great content!
@statquest · 1 day ago
I talk about how to stack and use multiple LSTMs in my video on encoder-decoder networks: kzfaq.info/get/bejne/gp54ftqWv6-znZs.html
@TheWayOfNaN · 1 day ago
kzfaq.info/get/bejne/fZmVo69ettCwlZc.html
@statquest · 1 day ago
bam! :)
@khanghuy7384 · 2 days ago
You saved my life!
@statquest · 1 day ago
bam! :)
@amirammar6687 · 2 days ago
You exemplify what a lecturer should be.
@statquest · 1 day ago
Thanks!
@yuvalalmog6000 · 2 days ago
Will you ever make videos on the subjects of reinforcement learning, NLP, or generative models?
@statquest · 1 day ago
I think you could argue that this video is about NLP and is also a generative model, and I'll keep the other topic in mind.
@yuvalalmog6000 · 1 day ago
@@statquest I'll explain myself better, as I admit I phrased it poorly. For deep learning and machine learning, you made amazing videos that covered the subjects from basic aspects to advanced ones, essentially teaching the whole subject in a fun, creative & enjoyable sequence of videos that can help beginners learn it from top to bottom. However, for NLP, for example, you did talk about specific subjects like word embedding or auto-translation, but there are other topics (mostly older things) in that field that are important to learn, such as n-grams & HMMs. So my question was not only about specific advanced topics that connect to others, but rather about a full course that covers the basics of the subject as well. Sorry for my bad phrasing, and thank you both for your quick answer and for the amazing videos! 😄
@statquest · 1 day ago
@@yuvalalmog6000 I hope to one day cover HMMs.
@AccessKelly · 2 days ago
My cat liked this song.
@statquest · 1 day ago
bam! :)
@studentgaming3107 · 2 days ago
Dislike for the shitty intros, but good video though.
@statquest · 1 day ago
noted
@sweetlemon4625 · 2 days ago
I watched it at 2x speed to save time... except I had to repeat it 3 times.
@statquest · 1 day ago
That's pretty funny. Bam?
@meets8 · 2 days ago
THANK YOU SOOOOOOOO MUCH!!!! So, so, so grateful I found you!
@statquest · 1 day ago
Glad I could help!
@ID10T_6B · 2 days ago
This is the only video on YouTube that explains how such a complicated thing works so simply.
@statquest · 2 days ago
bam! :)
@PuneetMehra · 2 days ago
This is your 9th video after the intro, and you jumped directly to "large p-value" without explaining what a p-value is, what a "large p-value" is, a "t-test", etc., in any of the previous 8 videos!
@statquest · 2 days ago
Sorry about that. You'll start learning about those in the next video.
@andile3982 · 2 days ago
I've been studying for my exams and have really struggled with this section. Thanks, Josh. QUADRUPLE BAM!
@statquest · 2 days ago
Hahaha! BAM! :)
@halkerable · 2 days ago
Thank you Josh - these videos are so helpful. Is there a statistical test for this question: "Is the slope coefficient of a linear regression model equal to 1?" The context is a quantitative test, comparing it to a series of analytes of known quantity. I'd like to know if the test has a bias, e.g. it over-quantitates higher concentrations (or under-quantitates lower-concentration samples). Thanks again!
@statquest · 2 days ago
I don't believe that there is one, but you can plot the residuals and see if you see a bias there (the residuals should be normally distributed - and you can test that with something called a K-S Test).
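(Editor's note: a hypothetical sketch of the residual check from that reply, with made-up measurements: fit the calibration line by ordinary least squares and look at the residuals; scipy.stats.kstest could then be applied to them to test normality:)

```python
# Fit y = intercept + slope * x by ordinary least squares and
# inspect the residuals for bias (all numbers are made up).
x = [1.0, 2.0, 3.0, 4.0, 5.0]   # known analyte quantities
y = [1.1, 1.9, 3.2, 3.8, 5.0]   # what the assay reported

x_bar = sum(x) / len(x)
y_bar = sum(y) / len(y)

slope = (sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))
         / sum((xi - x_bar) ** 2 for xi in x))
intercept = y_bar - slope * x_bar

residuals = [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]

# A slope close to 1 suggests no proportional bias; a trend in the
# residuals (e.g. consistently negative at high x) would suggest the
# assay over- or under-quantitates at one end of the range.
print(round(slope, 3))
print([round(r, 2) for r in residuals])
```

In practice you would plot the residuals against the known quantities rather than just print them.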
@jorenmaes498 · 2 days ago
I just noticed that when you said "please subscribe" at the end of the video, the subscribe button lit up :)