Soft Margin SVM : Data Science Concepts

49,157 views

ritvikmath

1 day ago

Comments: 110
@paulbrown5839 3 years ago
This guy deserves to be paid for this stuff. It's brilliant.
@ritvikmath 3 years ago
Haha, glad you think so!
@nitishnitish9172 1 year ago
Absolutely, I have the same thing in mind.
@caiocfp 3 years ago
You are a great teacher, hope this channel thrives!
@ritvikmath 3 years ago
I hope so too!
@maurosobreira8695 2 years ago
Third video on SVM from this guy and I'm now a subscriber. Best explanation so far, and I watched a bunch before getting to these videos! Two thumbs up!
@user-db2gu5wi4p 14 days ago
It's great to be in times like these, with wonderful learning resources available on the internet for free. Some of my favourite learning resources: 1) 3Blue1Brown 2) MIT Courses, and the latest entrant: 3) Ritvik Math. Thanks for posting these videos!
@santiagolicea3814 9 months ago
You're the absolute best at explaining complex things in such an easy way; it's even relaxing.
@harshithg5455 3 years ago
Came here after Andrew Ng's videos. Found yours to be way more intuitive. Brilliant.
@akshiwakoti7851 2 years ago
Thanks for making SVM easy. You're a great communicator.
@ian1955 1 month ago
I can't believe how good of an explanation this is. Great job! Keep it up!
@random_uploads97 2 years ago
Loved both the hard margin and soft margin videos; everything is clear in 25 minutes collectively. Thanks a lot Ritvik! May your channel thrive more; I will share a word for you.
@blairt8101 2 months ago
You saved my life. I will watch all your videos before my exam on machine learning.
@giovannibianco5996 4 months ago
Definitely the best video about SVM I've found online; better than my university lectures (sadly). Great job!
@stanlukash33 3 years ago
You deserve more subs and likes. Thank you for this!
@ritvikmath 3 years ago
I appreciate that!
@Pazurrr1501 2 years ago
These videos are real hidden gems, and they deserve not to be hidden any more.
@josephgill8674 3 years ago
Thank you from an MSc Data Science student at Exeter University in exam season.
@thankyouthankyou1172 9 months ago
This teacher deserves a Nobel prize!
@ashhabkhan 2 years ago
Explaining complex concepts in a simple manner. That is how these topics must be taught. Wow!
@yaadwinder300 2 years ago
The search to find a good YouTube video on SVM has finally ended; gotta watch the other topics too.
@58_hananirfan45 1 year ago
This man has single-handedly saved my life.
@xxshogunflames 3 years ago
Awesome video, thank you for clarifying these topics for us. The format is pristine, and I get a lot from the different ways you present information, because by the second or third video I have a good foundation for the tougher parts to chew on. Again, thank you!
@jackli8603 2 years ago
Thank you so much!!!! You are a life saver!!! I had been troubled by the soft margin SVM for a week until your video explained it to me very clearly. What I didn't understand was the lambda part, but now I do!!! THANKS!
@MrGhost-do1rw 1 year ago
I came here to understand lambda and I am not disappointed. Thank you.
@ritvikmath 1 year ago
Of course!
@Rohit-fr2ky 1 year ago
Thanks a lot. I might not have been able to understand SVM without this.
@bytesizedbraincog 1 year ago
You are a gem to the data science community!
@gdivadnosdivad6185 9 months ago
You are the best! Please consider teaching at a university!
@johnmosugu 1 year ago
Thank you very much, Ritvik, for simplifying this topic and even ML. God bless you more and more.
@rohit2761 2 years ago
What an amazing video. Absolutely gold. Please make more videos. Never stop making them.
@maheshsonawane8737 11 months ago
🌟Magnificent🌟 Very nice, thanks; this helps with interview questions.
@xt.7933 6 months ago
This is clearly explained!! Love your teaching. One question here: how do you choose lambda? What is the impact of a higher or lower lambda?
@caseyglick5957 3 years ago
Your board work is great! Why are you using an L2 loss for w, rather than L1 based on what showed up in the previous video?
@vldanl 3 years ago
I guess it's because the L2 loss is much easier to differentiate than L1. Also, L1 is not differentiable at w=0.
@caseyglick5957 3 years ago
Thanks! Having smooth derivatives does help a lot.
@DerIntergalaktische 2 years ago
@@vldanl Isn't the hinge loss part already pretty hard to differentiate? Compared to ||w||?
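On the differentiability point raised in this thread: the hinge loss max(0, 1 − y(wᵀx + b)) has a kink at margin = 1, but it admits a simple subgradient everywhere, which is why (sub)gradient methods still work on it. A minimal sketch (the function names and the convention of picking 0 at the kink are my own, not from the video):

```python
import numpy as np

def hinge_loss(w, b, X, y):
    """Average hinge loss max(0, 1 - y*(w.x + b)) over the dataset."""
    margins = y * (X @ w + b)
    return np.maximum(0.0, 1.0 - margins).mean()

def hinge_subgradient(w, b, X, y):
    """A subgradient of the average hinge loss w.r.t. (w, b).
    At the kink (margin exactly 1) we pick the subgradient 0."""
    margins = y * (X @ w + b)
    active = margins < 1.0  # points violating the margin contribute gradient
    gw = -(y[active, None] * X[active]).sum(axis=0) / len(y)
    gb = -y[active].sum() / len(y)
    return gw, gb
```

By contrast, ||w|| (the L1-style norm term) is non-differentiable exactly at w = 0, which is where optimization often passes through; ||w||² avoids that entirely.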
@mikeyu6347 10 months ago
Best teacher, very articulate. Looking forward to more videos.
@xviktorxx 3 years ago
Great videos! Will you also be talking about the kernel trick?
@ritvikmath 3 years ago
Yes I will! It is on the agenda.
@aminr23 5 months ago
Greatest teacher ever.
@ritvikmath 5 months ago
wow thanks!
@kankersan1466 3 years ago
Underrated channel.
@ritvikmath 3 years ago
Hopefully not for long :D
@bztomato3131 1 month ago
I have a clear vision of SVM now, thanks a lot, appreciate you. Won't you talk about how to minimize those things?
@ledinhanhtan 8 months ago
Brilliant explanation! Thank you!
@yifanzhao9942 3 years ago
Shoutout to my previous TA!! Also, do you mind uploading pictures of the whiteboard only for future videos, as it might be easier for us to check notes? Thank you!
@ritvikmath 3 years ago
Hi Yifan! Hope you're doing well. Yes, for the newer videos I am remembering to show the final whiteboard only.
@aashishprasad9491 3 years ago
You are a great teacher; I don't know why YouTube doesn't recommend your videos. Also, please try some social media marketing.
@chunqingshi2726 1 year ago
Crystal clear, thanks a lot.
@stevenconradellis 1 year ago
These explanations are so brilliantly and intuitively given, making daunting-looking equations and concepts understandable. Thank you @ritvikmath, you are truly a gift to data science.
@RiteshSingh-ru1sk 3 years ago
Gem of lectures!
@cyanider069 11 months ago
You are really good at this, man.
@nukagvilia5215 2 years ago
Your videos are the best!
@adithyagiri7933 3 years ago
Great job man... keep bringing us these kinds of amazing stuff.
@xintang7741 9 months ago
Well explained! Very helpful!
@e555t66 1 year ago
Really well explained. If you want the theoretical concepts, one could try doing the MIT MicroMasters. It's rigorous and demands 10 to 15 hours a week.
@danalex2991 2 years ago
AMAZING VIDEO! You are so awesome.
@vantongerent 2 years ago
So good.
@gabeguo6222 1 year ago
GOAT!
@tule3835 10 months ago
Question about lambda: does that mean that when lambda is LARGE, we care more about the misclassification error, and when lambda is SMALL, we care about minimizing the weight vector and maximizing the margin???
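If lambda multiplies the ‖w‖² term, as in the video's objective (average hinge loss + λ‖w‖²), the roles are the other way around: a LARGE lambda prioritizes shrinking ‖w‖ (a wider margin, tolerating more violations), while a SMALL lambda prioritizes the hinge (misclassification) term. A rough numeric sketch of this, using a simple subgradient-descent fit; the data, step size, and lambda values here are arbitrary choices of mine, not from the video:

```python
import numpy as np

def fit(X, y, lam, lr=0.02, epochs=3000):
    """Subgradient descent on: average hinge loss + lam * ||w||^2."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        active = margins < 1.0          # margin violators
        gw = 2 * lam * w - (y[active, None] * X[active]).sum(axis=0) / len(y)
        gb = -y[active].sum() / len(y)
        w, b = w - lr * gw, b - lr * gb
    return w, b

X = np.array([[2.0, 1.0], [3.0, 2.5], [-2.0, -1.5], [-3.0, -2.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])

norms = {}
for lam in [0.001, 10.0]:
    w, _ = fit(X, y, lam)
    norms[lam] = np.linalg.norm(w)
    print(f"lambda={lam}: ||w|| = {norms[lam]:.3f}")
# a much larger lambda shrinks ||w|| (widens the margin), at the cost
# of letting points fall inside it
```

In practice lambda is usually chosen by cross-validation rather than by hand.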
@huyvuquang2041 1 year ago
Thanks so much for your amazing work. Keep it up.
@Cerivitus 2 years ago
Why are we minimizing ||w|| to the power of 2 for soft SVM but only ||w|| for hard SVM?
@venkat5230 3 years ago
Wow, great lecture, clear explanation... thank you Rit.
@axadify 3 years ago
Such a brilliant explanation!
@Greatasfather 2 years ago
I love this. Thank you so much. Helped me a lot.
@xiaoranlin8918 2 years ago
Great clarification video.
@user-ug8uy2cv3s 1 year ago
Great explanation, thank you.
@matthewcarnahan1 4 months ago
The margin for a hard margin SVM is pretty intuitive, but not with soft margin SVM. With hard margin, it's a rule that both margin lines must lie on at least one of their respective points. I think with soft margin, there's a rule that for any value of lambda, at least one of the margin lines must lie on at least one of their respective points, but it's not mandatory that both do. Do you concur?
@honeyBadger582 3 years ago
Great video! I have a question. The optimization formula for soft-margin SVM that I usually see in textbooks is min ||w|| + C * (sum over the slack variables). How does the equation in your video relate to this one? Is it pretty much the same, just with different symbols, or is it different? Thanks!
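On how the two forms relate: at the optimum, each slack variable equals the hinge loss of its point, so the textbook objective ‖w‖² + C·Σᵢ hingeᵢ divided by C·n becomes (1/(C·n))‖w‖² + average hinge loss, i.e. the video's form with λ = 1/(C·n) (up to the ‖w‖ vs ‖w‖² detail discussed elsewhere in the comments). Positive scaling doesn't change the minimizer, so they are the same problem with relabeled constants. A quick numeric check of that scaling identity (all values here are arbitrary):

```python
import numpy as np

def objective_C(w, b, X, y, C):
    """Textbook form: ||w||^2 + C * sum of hinge losses (slacks at optimum)."""
    hinge = np.maximum(0.0, 1.0 - y * (X @ w + b))
    return w @ w + C * hinge.sum()

def objective_lambda(w, b, X, y, lam):
    """Video's form: average hinge loss + lambda * ||w||^2."""
    hinge = np.maximum(0.0, 1.0 - y * (X @ w + b))
    return hinge.mean() + lam * (w @ w)

# the two objectives are proportional when lam = 1 / (C * n)
rng = np.random.default_rng(1)
X = rng.normal(size=(8, 3))
y = np.sign(rng.normal(size=8))
w = rng.normal(size=3)
b = 0.3
C, n = 2.5, len(y)
lhs = objective_C(w, b, X, y, C) / (C * n)
rhs = objective_lambda(w, b, X, y, 1.0 / (C * n))
print(np.isclose(lhs, rhs))  # True
```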
@mv829 2 years ago
Thank you for this video, very helpful!
@moatasem444 1 year ago
Wonderful explanation ❤❤❤
@sergecliverkana4694 3 years ago
Awesome, thank you very much.
@ritvikmath 3 years ago
You are very welcome.
@jaivratsingh9966 11 months ago
Excellent
@504036465 3 years ago
Nice video... thank you!
@mashakozlovtseva4378 3 years ago
Very detailed explanation! I'd like to know, how are we going to find the w and b parameters? Using gradient descent or another technique?
@stanlukash33 3 years ago
I had the same question.
@COMIRecords 3 years ago
I think you can find the optimal parameters in two ways: the first consists of minimizing the primal formulation with respect to w and b, and the second consists of maximizing the dual formulation with respect to a certain alpha (a Lagrange multiplier). In the second case, once you have computed the optimal alpha, you can substitute it into the equation for w (written as a function of alpha) and you will find the optimal w. To find the best b you have to rearrange some conditions, but I am not sure about that.
@eltonlobo8697 2 years ago
You can use gradient descent and update the weights and bias for every example, like shown in this video: kzfaq.info/get/bejne/i75gmZxzs6jHo40.html
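To make the primal route in this thread concrete: the unconstrained hinge-loss objective can be minimized directly by full-batch subgradient descent (per-example updates, as the reply above suggests, work too). A minimal sketch; the step size, iteration count, lambda, and toy data are arbitrary choices of mine, not from the video:

```python
import numpy as np

def train_soft_margin_svm(X, y, lam=0.01, lr=0.1, epochs=500):
    """Minimize avg hinge loss + lam * ||w||^2 by full-batch subgradient
    descent. Labels y must be in {-1, +1}."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        active = margins < 1.0                      # margin violators
        gw = 2 * lam * w - (y[active, None] * X[active]).sum(axis=0) / n
        gb = -y[active].sum() / n
        w -= lr * gw
        b -= lr * gb
    return w, b

# tiny separable example
X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w, b = train_soft_margin_svm(X, y)
print(np.sign(X @ w + b))   # predictions should match y
```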
@lilianaaa98 3 months ago
Thanks a lot!
@user-ws8jm8uq4c 2 years ago
How can we still have some data between the margins even after rescaling the w vector so that min |w^T x + b| = 1? Doesn't that mean we find the closest possible data points to the hyperplane and rescale w so that the distance from the closest data points to the hyperplane becomes 1? That way, there shouldn't be any points between the margins... could you help correct me?
@houyao2147 3 years ago
Perfect!
@ahmetcihan8025 3 years ago
Just perfect, mate.
@shriqam 2 years ago
Hi Ritvikmath, many thanks for the wonderful video. I really love the simple notation you have used for the equations, which makes them very easy to understand. Can you suggest any books or courses that follow notation similar to yours, or share the sources that helped you create this content? Thanks in advance.
@user-wr4yl7tx3w 2 years ago
Awesome.
@achams123 3 years ago
What was Vapnik on when he invented this?
@codeschool3964 4 months ago
Explained a 3-hour lecture in less than 1 hour.
@dawitabdisa7262 1 year ago
Hello, thank you for the tutorials. How would one apply an SVM model to classify alpha data, to detect a driver's sleepiness? Very much looking forward to your reply.
@FEchtyy 2 years ago
Great explanation!
@vantongerent 2 years ago
How do you choose your support vectors if they are no longer the closest vectors to the decision boundary? Does the value "1" get generated automatically when you plug in the values of X and Y, or is there some scaling that takes place to set one of the vectors' values to "1"?
@amankushwaha8927 2 years ago
Thanks
@DerIntergalaktische 2 years ago
The margin is taken into account twice in a weird way. The obvious one is the lambda ||w|| term. But the hinge loss also has the margin as its unit of measurement, so if a data point is at distance five from the support vector, the hinge loss can change drastically depending on the size of the margin. Is this double counting of the margin intended? Should there be a normalization for it? I believe dividing the hinge loss by ||w|| should work.
@arundas7760 3 years ago
Very good, thanks.
@adilmuhammad6078 1 year ago
Very nice!!!
@ritvikmath 1 year ago
Thank you! Cheers!
@loveen3186 1 year ago
amazing
@ritvikmath 1 year ago
Thank you! Cheers!
@user-wr4yl7tx3w 2 years ago
What if you made observations based upon latent variables? Could that remove the need for the lambda parameter as a prior?
@user-wr4yl7tx3w 2 years ago
Where does the kernel come in?
@PF-vn4qz 3 years ago
So can we mathematically solve the soft-margin SVM optimization problem for the vector w and the value b? And if so, can anyone point to where to read up on this?
@muralikrishna2691 2 years ago
Is the hinge loss differentiable?
@Ranshin077 3 years ago
I love your board work, but you should really have an image of the board without you in it, or just delay your walk into the picture by a second or two at the beginning, so I can snag a shot for my notes a bit more easily, lol.
@ritvikmath 3 years ago
Noted! I'm starting to remember this for my new videos. Thanks!
@redherring0077 2 years ago
Haha. I have dedicated a whole hard disk to ritvik's data science videos. I just hope he is going to write a book, or even better, do an end-to-end data science course on Coursera 😍😍
@juanguang5633 1 year ago
It would be nicer if you talked about slack variables.
@sushantpatil2566 1 year ago
EUREKA!
@junli1865 2 years ago
Thank you!
SVM (The Math) : Data Science Concepts
10:19
ritvikmath
99K views
SVM Dual : Data Science Concepts
15:32
ritvikmath
47K views
Support Vector Machines Part 1 (of 3): Main Ideas!!!
20:32
StatQuest with Josh Starmer
1.3M views
Loss Functions : Data Science Basics
16:40
ritvikmath
32K views
Stanford's FREE data science book and course are the best yet
4:52
Python Programmer
693K views
Support Vector Machines: All you need to know!
14:58
Intuitive Machine Learning
142K views
EM Algorithm : Data Science Concepts
24:08
ritvikmath
68K views
Dual formulation for Soft Margin SVM
21:37
IIT Madras - B.S. Degree Programme
9K views
SVM Kernels : Data Science Concepts
12:02
ritvikmath
71K views
Support Vector Machines (SVMs): A friendly introduction
30:58
Serrano.Academy
89K views
The Kernel Trick in Support Vector Machine (SVM)
3:18
Visually Explained
253K views
Support Vector Machines (SVM) - the basics | simply explained
28:44