Tutorial 12 - Stochastic Gradient Descent vs Gradient Descent

210,973 views

Krish Naik

5 years ago

Below are the various playlists created on ML, Data Science, and Deep Learning. Please subscribe and support the channel. Happy Learning!
Deep Learning Playlist: • Tutorial 1- Introducti...
Data Science Projects playlist: • Generative Adversarial...
NLP playlist: • Natural Language Proce...
Statistics Playlist: • Population vs Sample i...
Feature Engineering playlist: • Feature Engineering in...
Computer Vision playlist: • OpenCV Installation | ...
Data Science Interview Question playlist: • Complete Life Cycle of...
You can buy my book on Finance with Machine Learning and Deep Learning from the URL below.
Amazon URL: www.amazon.in/Hands-Python-Fi...
🙏🙏🙏🙏🙏🙏🙏🙏
YOU JUST NEED TO DO
3 THINGS to support my channel
LIKE
SHARE
&
SUBSCRIBE
TO MY YOUTUBE CHANNEL

Comments: 96
@ravindrav1895
@ravindrav1895 2 years ago
Whenever I am confused about some topic, I come back to this channel and watch your videos, and it helps me a lot, sir. Thank you, sir, for an amazing explanation.
@nagesh866
@nagesh866 3 years ago
What an amazing teacher you are. Crystal clear.
@BalaguruGupta
@BalaguruGupta 3 years ago
Amazing explanation, sir! You'll always be the hero for AI enthusiasts. Thanks a lot!
@shashanktripathi3034
@shashanktripathi3034 3 years ago
Krish sir, your YouTube channel is just like the GITA for me: as one gets all the answers to life in the GITA, I get all my doubts cleared on your channel. Thank you, sir.
@kartikdave659
@kartikdave659 3 years ago
After becoming a member, how can I get the data science material? Can you please tell me?
@saurabhnigudkar6115
@saurabhnigudkar6115 4 years ago
Best deep learning playlist on YouTube.
@lakshminarasimhanvenkatakr3754
@lakshminarasimhanvenkatakr3754 4 years ago
This is an excellent explanation, with enough granular detail that anyone can understand it.
@ajithtolroy5441
@ajithtolroy5441 4 years ago
I saw many videos, but this one is quite comprehensible and informative.
@fedisalhi6320
@fedisalhi6320 4 years ago
Excellent explanation, it was really helpful. Thank you.
@nitayg1326
@nitayg1326 4 years ago
My God! Finally I am clear about GD, SGD, and mini-batch SGD!
@archanamaurya89
@archanamaurya89 3 years ago
This video is such a light bulb moment for me :D Thank you so very much!!
@VVV-wx3ui
@VVV-wx3ui 4 years ago
Superb... simply superb. Understood the concept now from the loss function. Well done, Krish.
@Skandawin78
@Skandawin78 4 years ago
Your videos are an excellent reference for brushing up on these concepts.
@severnsevern1445
@severnsevern1445 3 years ago
Great explanation. Very clear. Thanks!
@allaboutdata2050
@allaboutdata2050 4 years ago
What an explanation 🧡. Great!! Awesome!!
@taranilakshmi9680
@taranilakshmi9680 4 years ago
Explained very well. Thank you.
@khuloodnasher1606
@khuloodnasher1606 4 years ago
Really, this is the best video I've ever seen explaining this concept, better than a famous school.
@gayathrijpl
@gayathrijpl 1 year ago
Such a clean way of explaining.
@tonyzhang2501
@tonyzhang2501 3 years ago
Thank you, it is a clear explanation. I got it!
@sandipansarkar9211
@sandipansarkar9211 4 years ago
Thanks, Krish. Good video. I want to use all this knowledge in my next batch of deep learning by ineuron.
@gauravsingh2425
@gauravsingh2425 4 years ago
Thanks, Krish!!! Very nice explanation.
@chinmaybhat9636
@chinmaybhat9636 4 years ago
Awesome @KrishNaik Sir.
@rabidub733
@rabidub733 3 months ago
Thanks for this! Great explanation.
@Kurtmind
@Kurtmind 2 years ago
Excellent explanation, sir!
@ArthurCor-ts2bg
@ArthurCor-ts2bg 4 years ago
Krish, you condense the subject most meaningfully.
@koustavdutta5317
@koustavdutta5317 3 years ago
Hi Krish, one request: like this playlist, please make long videos for the ML playlist covering the loss functions and optimizers used in various ML algorithms, mainly classification algorithms.
@uttamchoudhary5229
@uttamchoudhary5229 5 years ago
Great video, man 👍👍. Please keep it up. I am waiting for the next videos.
@guytonedhai
@guytonedhai 1 year ago
How are you so good at explaining? 😭😭😭😭😭 Thanks a lot ♥♥♥
@vinuvarshith6412
@vinuvarshith6412 1 year ago
Top-notch explanation!
@ashwanikumar-zh1mq
@ashwanikumar-zh1mq 3 years ago
Good, clearly explained; nobody can explain it like this.
@bhavanapurohit2627
@bhavanapurohit2627 3 years ago
Hi, is it completely theoretical or will you code in further sessions?
@nikkitha92
@nikkitha92 4 years ago
Sir, your videos are amazing. Can you please explain the latest methodologies such as BERT and ELMo?
@akfvc8712
@akfvc8712 3 years ago
Great video, excellent effort. Appreciated!!
@aditisrivastava7079
@aditisrivastava7079 4 years ago
Just wanted to ask if you could also suggest some good online resources that we can read for more clarity...
@rdf1616
@rdf1616 4 years ago
Good explanation! Thanks!
@rameshthamizhselvan2458
@rameshthamizhselvan2458 4 years ago
Excellent!
@nansonspunk
@nansonspunk 1 year ago
Yes, I really liked this explanation. Thanks!
@sreejus8218
@sreejus8218 3 years ago
If we use a sample of the output to find the loss, will we use its derivative to change all the weights, or only the weights of the respective output?
@alsabtilaila1923
@alsabtilaila1923 3 years ago
Great one!
@syedsaqlainabatool3399
@syedsaqlainabatool3399 3 years ago
This is what I was looking for.
@response2u
@response2u 2 years ago
Thank you, sir!
@ting-yuhsu4229
@ting-yuhsu4229 4 years ago
You are AWESOME! :)
@aminuabdulsalami4325
@aminuabdulsalami4325 4 years ago
Great guy.
@Anand-uw2uc
@Anand-uw2uc 4 years ago
Good explanation! But you did not speak much about when to use SGD, although you clarified GD and mini-batch SGD better.
@vishaldas6346
@vishaldas6346 3 years ago
There is not much to explain about SGD: it is just taking 1 data point at a time, versus considering all 1000 data points of the dataset at once.
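A minimal sketch of the contrast in this thread, assuming a small linear model with squared-error loss (data, names, and sizes are illustrative, not from the video):

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))                       # 1000 data points, 3 features
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.1, size=1000)
w, lr = np.zeros(3), 0.01

# Batch gradient descent: ONE update per epoch, computed from all 1000 points.
def gd_epoch(w):
    grad = (2 / len(X)) * X.T @ (X @ w - y)          # gradient of the mean squared error
    return w - lr * grad

# Stochastic gradient descent: 1000 updates per epoch, one data point each.
def sgd_epoch(w):
    for i in rng.permutation(len(X)):
        grad = 2 * X[i] * (X[i] @ w - y[i])          # gradient from a single point
        w = w - lr * grad
    return w

Each SGD update is cheap but noisy; each batch-GD update is expensive but exact, which is the trade-off the video draws.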
@RaviRanjan_ssj4
@RaviRanjan_ssj4 4 years ago
Great video!!
@praneethcj6544
@praneethcj6544 4 years ago
Perfect!!!
@achrafkmout9398
@achrafkmout9398 3 years ago
Very good explanation.
@vishaljhaveri7565
@vishaljhaveri7565 2 years ago
Thank you, sir.
@SandeepKashyap-ek2hx
@SandeepKashyap-ek2hx 2 years ago
You are a HERO, sir.
@vineetagarwal18
@vineetagarwal18 1 year ago
Great, sir.
@ruchikalalit1304
@ruchikalalit1304 4 years ago
Have you made videos on the practical implementation of all this work? If so, please share the links.
@siddharthachatterjee9959
@siddharthachatterjee9959 4 years ago
Good attempt 👍. Please record with the camera on manual focus.
@jiayuzhou6051
@jiayuzhou6051 1 month ago
The only video that explains it.
@rababmaroc3354
@rababmaroc3354 4 years ago
Thank you very much for your efforts. How can we solve a portfolio allocation problem using this algorithm? Please answer me.
@goodnewsdaily-tamil1990
@goodnewsdaily-tamil1990 1 year ago
1000 likes for you, man 👏👍
@percyjardine5724
@percyjardine5724 3 years ago
Thanks, Krish.
@louerleseigneur4532
@louerleseigneur4532 3 years ago
Thanks, buddy.
@phaneendra3700
@phaneendra3700 3 years ago
Hats off, man.
@sathvikambati3464
@sathvikambati3464 1 year ago
Thanks
@AjanUnderscore
@AjanUnderscore 2 years ago
Thank you, sir 🙏🙏🙌🧠🐈
@thanicssubakar6303
@thanicssubakar6303 5 years ago
Nice, bro.
@rohitsaini8480
@rohitsaini8480 1 year ago
Sir, please solve my problem. In my view, we do gradient descent to find the best value of m (the slope, in the case of linear regression with b = 0). If we use all the points, then we must come to know where the loss is smallest, so why do we have to use a learning rate to update the weight when we already know the best value?
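A worked sketch of why the learning rate is still needed, assuming the y = m*x, b = 0 setup from the question (numbers invented): the gradient over all points only gives the downhill direction at the current m, not the location of the minimum, so the descent proceeds in repeated small steps.

import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([2.1, 3.9, 6.2])              # roughly y = 2x
m, lr = 0.0, 0.05

for step in range(100):
    grad = -2 * np.mean(x * (y - m * x))   # dL/dm at the CURRENT m only
    m -= lr * grad                         # small step in the downhill direction
print(m)                                   # creeps toward ~2.0 over many steps

(Plain linear regression does have a closed-form solution, but gradient descent is taught because the closed form does not carry over to deep networks.)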
@muhammedsahalot8683
@muhammedsahalot8683 1 month ago
Which converges faster, SGD or GD?
@muralimohan6974
@muralimohan6974 3 years ago
How can we take k inputs at the same time?
@r7918
@r7918 3 years ago
I have one question regarding this topic. This concept is applicable to linear regression, right?
@pareesepathak7348
@pareesepathak7348 3 years ago
Can you share the paper for reference, and can you also share resources on deep learning for image processing?
@manojsalunke2842
@manojsalunke2842 4 years ago
At 9:28 you said SGD will take more time to converge than GD, so which is faster, SGD or GD????
@abhrapuitandy3327
@abhrapuitandy3327 4 years ago
Please do tell about stochastic gradient ascent also.
@bijaynayak6473
@bijaynayak6473 4 years ago
Hello sir, could you share the link to the code you explained? This video series is very nice; within a short period we can cover so many concepts. :)
@_JoyshreeMozumder
@_JoyshreeMozumder 3 years ago
What is the source of a data point?
@ankitbiswas8380
@ankitbiswas8380 2 years ago
When you mentioned that SGD takes place in linear regression, I didn't understand that comment. Even in your linear regression videos, for the mean squared error we have the sum of squares over all data points. So how did SGD get linked to linear regression?
@shubhangiagrawal336
@shubhangiagrawal336 3 years ago
Good video.
@yukeshnepal4885
@yukeshnepal4885 4 years ago
8:58, using GD it converges quickly, while using mini-batch SGD it follows a zigzag path. How??
@kannanparthipan7907
@kannanparthipan7907 4 years ago
In the case of mini-batch SGD we consider only some points, so there will be some deviation in the calculation compared to usual gradient descent, where we consider all values. A simple analogy: GD is like the total population and mini-batch SGD is like a sample of the population; they will never be equal, and the sample's distribution will always deviate somewhat from the total population's distribution. We can't use GD everywhere, due to computation time; using mini-batch SGD gives an approximately correct result.
@bhargavpotluri5147
@bhargavpotluri5147 4 years ago
@@kannanparthipan7907 Deviation will be there in the final output, or in the final converged result. The question is why we have it during the process of convergence. Also, if we consider different samples for every epoch, then I understand there can be zigzag results during convergence. But if only one sample of k records is considered, then why the zigzag during convergence?
@bhargavpotluri5147
@bhargavpotluri5147 4 years ago
OK, now I got it. For every iteration, samples are picked at random, hence the zigzag. Just went through other articles.
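A minimal sketch of the behaviour settled in this thread, assuming mean-squared-error loss (names and sizes illustrative): each iteration draws a fresh random mini-batch, so successive gradients are noisy estimates of the full gradient and the loss curve zigzags downward instead of descending smoothly.

import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 2))
y = X @ np.array([1.5, -0.7]) + rng.normal(scale=0.1, size=1000)
w, lr, k = np.zeros(2), 0.05, 32                       # k = mini-batch size

for it in range(200):
    idx = rng.choice(len(X), size=k, replace=False)    # fresh random batch each iteration
    grad = (2 / k) * X[idx].T @ (X[idx] @ w - y[idx])  # noisy estimate of the full gradient
    w -= lr * grad
    full_loss = np.mean((X @ w - y) ** 2)              # wobbles downward, not monotonically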
@jsverma143
@jsverma143 4 years ago
Negative and positive gradients are best explained as follows: since the angle of the tangent is more than 90 degrees on the left side of the curve, the slope there is negative, and on the other side it is less than 90 degrees, so the slope is positive.
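A small numeric check of that tangent picture, assuming the toy convex loss L(w) = (w - 3)**2 (not from the video): left of the minimum the slope is negative, so the update w -= lr * slope moves w right; right of it the slope is positive, moving w left.

def dL(w):            # derivative of L(w) = (w - 3)**2, minimum at w = 3
    return 2 * (w - 3)

print(dL(1.0))        # -4.0: negative slope left of the minimum -> update increases w
print(dL(5.0))        #  4.0: positive slope right of the minimum -> update decreases w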
@a.sharan8876
@a.sharan8876 1 year ago
py:28: RuntimeWarning: overflow encountered in scalar power, at: cost = (1/n)*sum([value**2 for value in (y - y_predicted)]). Hey bro, I am stuck here with this error and could not understand the error itself. Could you suggest a solution? I just started practicing ML algorithms.
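Not the poster's actual script, but a common cause and fix for this exact warning, assuming a hand-rolled gradient-descent loop (data and rate invented for illustration): with unscaled features, a too-large learning rate makes m overshoot further every step until squaring the residuals overflows; shrinking the rate, or standardizing x first, keeps the cost finite.

import numpy as np

x = np.array([600.0, 800.0, 1000.0])   # large-scale feature values
y = np.array([150.0, 200.0, 250.0])
m, b, n = 0.0, 0.0, len(x)

# With lr = 0.01 these updates diverge: m grows without bound and
# value**2 eventually overflows, producing exactly this RuntimeWarning.
lr = 1e-7                              # small enough for this feature scale

for _ in range(1000):
    y_predicted = m * x + b
    cost = (1 / n) * sum(v ** 2 for v in (y - y_predicted))
    m -= lr * (-2 / n) * np.sum(x * (y - y_predicted))
    b -= lr * (-2 / n) * np.sum(y - y_predicted)
print(m, b, cost)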
@minakshiboruah1356
@minakshiboruah1356 3 years ago
@12:02 Sir, it should be mini-batch stochastic G.D.
@khushboosoni2788
@khushboosoni2788 1 year ago
Sir, can you explain the SPGD algorithm to me, please?
@samiabidah4197
@samiabidah4197 3 years ago
Please, what is the difference between GD and batch GD?
@soheljagirdar8830
@soheljagirdar8830 3 years ago
4:17 SGD needs a minimum of 256 records to find the error/minima; you said it's 1 record at a time.
@pramodyadav4422
@pramodyadav4422 3 years ago
I read a few articles which say that in SGD one data point is picked at random from the whole dataset at each iteration. The 256 records you're talking about may be mini-batch SGD: "It is also common to sample a small number of data points instead of just one point at each step, and that is called 'mini-batch' gradient descent."
@tejasvigupta07
@tejasvigupta07 3 years ago
@@pramodyadav4422 Yeah, even I have read that in SGD only one data point is selected and used for the update in each iteration, instead of all.
@funpoint3966
@funpoint3966 4 months ago
Please sort out your camera issue; it seems to be set to autofocus, resulting in a little disturbance.
@shekharkumar1902
@shekharkumar1902 4 years ago
A confusing one!
@atchutram9894
@atchutram9894 4 years ago
Switch off the autofocus feature on your camera. It is distracting.
@devaryan2201
@devaryan2201 2 years ago
Do change your method of teaching; it seems like someone has read a book and is just trying to copy its content secondhand... use your own ideas for it :)
@chalapathinagavarmabhupath8432
@chalapathinagavarmabhupath8432 4 years ago
Your videos are good, but the camera was bad.
@KKKK-jr1nm
@KKKK-jr1nm 4 years ago
Why don't you buy him a new one?
@chalapathinagavarmabhupath8432
@chalapathinagavarmabhupath8432 4 years ago
Pora eri poka