XGBoost Classification In-Depth Maths Intuition - Machine Learning Algorithms🔥🔥🔥🔥

  190,738 views

Krish Naik

3 years ago

XGBoost is a decision-tree-based ensemble Machine Learning algorithm that uses a gradient boosting framework. In prediction problems involving unstructured data (images, text, etc.), artificial neural networks tend to outperform all other algorithms or frameworks.
All Playlist In My channel
Complete ML Playlist : • Complete Machine Learn...
Complete NLP Playlist: • Natural Language Proce...
Docker End To End Implementation: • Docker End to End Impl...
Live stream Playlist: • Pytorch
Machine Learning Pipelines: • Docker End to End Impl...
Pytorch Playlist: • Pytorch
Feature Engineering : • Feature Engineering
Live Projects : • Live Projects
Kaggle competition : • Kaggle Competitions
Mongodb with Python : • MongoDb with Python
MySQL With Python : • MYSQL Database With Py...
Deployment Architectures: • Deployment Architectur...
Amazon sagemaker : • Amazon SageMaker
Please donate if you want to support the channel, through GPay UPI ID:
Gpay: krishnaik06@okicici
Discord Server Link: / discord
Telegram link: t.me/joinchat/N77M7xRvYUd403D...
Please join my channel as a member to get additional benefits like Data Science materials, members-only live streams, and many more
/ @krishnaik06
Please subscribe to my other channel too
/ @krishnaikhindi
Connect with me here:
Twitter: / krishnaik06
Facebook: / krishnaik06
instagram: / krishnaik06
#xgboostclassifier
#xgboost

Comments: 166
@krishnaik06 3 years ago
We are near 250k. Please subscribe to my channel and share it with all your friends. :)
@_curiosity...8731 3 years ago
Krish Naik, please make a video on decision tree pruning with mathematical details.
@ArunKumar-sg6jf 3 years ago
LightGBM is missing.
@yashkhandelwal3877 3 years ago
@@tamildramaclips8548 Depends on your college. Which college with these branches are you talking about?
@yashkhandelwal3877 3 years ago
@@tamildramaclips8548 You should definitely go with ECE. Since AI & DS is a very new branch, there is no surety how your college would groom students in it. Also, your college is not a national-level college, so you shouldn't take any risk. That's my suggestion.
@hirdhaymodi 3 years ago
Sir, could you make a video on a roadmap for becoming a machine learning engineer?
@animeshsharma7332 3 years ago
Man, this guy is now coming into my dreams. Who else has been binge-watching his channel for months?
@gauravpatil2926 3 years ago
😂😂
@thepresistence5935 2 years ago
I am learning data science from him.
@geekyprogrammer4831 2 years ago
Same here 😂😂😂 But this man should be given a Nobel Prize for inspiring the present and future generations!
@gandhalijoshi9242 2 years ago
I have started following his machine learning series, and it's very nice. I am also doing a data science course simultaneously; his videos are helping a lot.
@shaelanderchauhan1963 2 years ago
HAHAHAHA! You are being haunted by Ghost Naik.
@bhavikdudhrejiya852 3 years ago
Great video, understood in depth. I have jotted down the processing steps from this video:
1. We have the data.
2. Construct the base learner.
3. The base learner takes probability 0.5; compute the residuals.
4. Construct the decision tree as below.
Computing similarity weight: (∑Residual)² / (∑P(1−P) + λ)
- similarity weight of the root node
- similarity weight of the left decision node and its leaf nodes
- similarity weight of the right decision node and its leaf nodes
Computing Gain = Leaf1 similarity weight + Leaf2 similarity weight − root node similarity weight
- gain of the root node with the left decision node and its leaf nodes
- gain of the root node with the right decision node and its leaf nodes
- gain of the other combinations of features as decision and leaf nodes
- select as root, decision, and leaf nodes the splits with the highest gain
5. Predicted probability = Sigmoid(log(odds) of the base learner's prediction + learning rate × the decision tree's output)
6. New residual = actual value − new predicted probability
7. Run steps 2 to 6 iteratively; by the end of the iterations the residuals are minimal.
8. Predict on test data with the model from the iteration with minimal residuals.
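The similarity-weight and gain computations in step 4 can be sketched in a few lines of pure Python. This is a toy sketch, assuming λ = 1 and a previous probability of 0.5 for every row, as in the video:

```python
def similarity_weight(residuals, prev_probs, lam=1.0):
    """(sum of residuals)^2 / (sum of p*(1-p) + lambda)."""
    numerator = sum(residuals) ** 2
    denominator = sum(p * (1 - p) for p in prev_probs) + lam
    return numerator / denominator

def gain(left_res, right_res, left_probs, right_probs, lam=1.0):
    """Gain of a split = left SW + right SW - root SW."""
    root = similarity_weight(left_res + right_res, left_probs + right_probs, lam)
    return (similarity_weight(left_res, left_probs, lam)
            + similarity_weight(right_res, right_probs, lam)
            - root)

# Toy data: labels 1,1,0,0 against the 0.5 base learner give
# residuals +0.5,+0.5,-0.5,-0.5; a perfect split separates them.
left, right = [0.5, 0.5], [-0.5, -0.5]
p_left, p_right = [0.5, 0.5], [0.5, 0.5]
print(gain(left, right, p_left, p_right))
```

The split with the highest gain across all candidate features becomes the node, exactly as in step 4 above.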
@manojsamal7248 2 years ago
What if there are multiple classes in the output (0, 1, 2, 3)? The average would be 1.5, but that is more than 1, so it can't be the probability given to the base learner the way 0.5 is. What should we do in that case?
@pawanthakur-df2yk 2 years ago
Thank you🙏
@manojrangera5955 2 years ago
@@manojsamal7248 Yes bro, same question... did you get the answer to this? Please let me know.
@manojsamal7248 2 years ago
@@manojrangera5955 Not yet, bro.
@manojrangera5955 2 years ago
@@manojsamal7248 I was thinking that if there are 4 classes then the probability will be 1/4 = 0.25, and if there are 5 then 1/5 = 0.20, because we are calculating a probability. I will confirm this, but I think it is right.
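That intuition can be written down directly: a uniform prior over K classes is the multiclass analogue of starting at 0.5 for binary. Note this is only the starting point; in practice multiclass XGBoost fits one tree per class per boosting round against softmax probabilities:

```python
def initial_probabilities(n_classes):
    # Uniform prior: every class starts with probability 1/K,
    # the multiclass analogue of the 0.5 used for binary classification.
    return [1 / n_classes] * n_classes

print(initial_probabilities(2))  # binary: [0.5, 0.5]
print(initial_probabilities(4))  # four classes: [0.25, 0.25, 0.25, 0.25]
```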
@johnnyfry2 3 years ago
Great work Krish. Don't ever lose your passion for teaching; you're a natural. I appreciate how you simplify the details.
@yashkhandelwal3877 3 years ago
Hats off to you Krish for doing so much hard work so that we can learn each and every concept of ML and Data Science!
@mrzaidivlogs 3 years ago
How do you stay so focused and strong, and learn everything in such an efficient way?
@yasharya8228 10 months ago
The nation wants to know🙃
@nareshjadhav4962 3 years ago
I was desperately waiting for this for the last 7 months... now I will complete the machine learning playlist💥 Thank you Krish... god bless you😀
@moindalvs 2 years ago
Thanks a lot for everything you do. You turned off the fan so it wouldn't interrupt the audio; you were sweating and breathing heavily through all that trouble and hardship. You deserve more. I wish you success and a healthy, prosperous life.
@yashkhant5874 3 years ago
Great explanation sir... keep contributing to the community. We love your videos, and most importantly, your sharing of your experience is the best thing.
@gulzarahmedbutt7213 3 years ago
I've learned a lot from Mr. Krish. You're doing great; keep up the good work. You make people love Machine Learning. Hats off to you! Love from Pakistan.
@felixzhao9070 2 years ago
This is pure gold! Thanks for the tutorial!
@dhruvenkalpeshkumarparvati4874 3 years ago
Just what I was waiting for 🔥
@mohitjoshi4209 3 years ago
So much to learn from a single video, hats off to you sir.
@sandipansarkar9211 3 years ago
Very important for cracking product-based companies. Great explanation too. Thanks.
@marijatosic217 3 years ago
This was amazing, I literally feel like I'm sitting in your class at a university.
@nukulkhadse5253 3 years ago
Hey Krish, you should also have a video about Similarity Based Modelling (SBM) and the Multivariate State Estimation Technique (MSET). They have actually been widely used in industry since the 90s; there are many research papers to validate that. They also calculate similarity weights and residuals.
@amitsahoo1989 3 years ago
Hi Krish, I have been watching your videos for the last few months and they have helped me a lot in my interviews. A special thanks from my end. In this video, at 10:54, 0.33 − 0.14 should be 0.19.
@gshan994 3 years ago
Yes indeed. By the way, were you a fresher when you went for the interview?
@abhishek_maity 3 years ago
Great... clear explanation!! Thanks a lot 😄
@davidd2702 2 years ago
Thank you for your fabulous video! I enjoyed it and understood it well! Could you tell me whether the output from the XGB classifier gives 'confidence' in a specific output (allowing you to assign a class), or is it functionally equivalent to the statistical probability of an event occurring?
@annusrivastava4425 10 months ago
Hi, I have one doubt: for p(1−p) + lambda in the denominator of the similarity weight, if the residual is −0.5, should it be 0.5(1 − (−0.5)) = 0.75? Or does the negative sign not matter?
@navyamokmod1317 2 months ago
In the denominator we are not taking residuals for the calculation; p = probability, which is 0.5.
@sajidchoudhary1165 3 years ago
I am the happiest person to see these videos, thank you.
@frozen1860 3 years ago
Sir, the way you teach us is better than any varsity classes. Please do a practical implementation of XGBoost, sir; it will be very helpful for us.
@ishitachakraborty1362 3 years ago
Please do an in-depth maths intuition video on CatBoost.
@BatBallBites 3 years ago
Agree
@thisismuchbetter2194 3 years ago
I don't know why people don't talk about CatBoost and LightGBM much...
@stabgan 3 years ago
Congratulations on your new job at E&Y. Checked you out on LinkedIn; very impressive profile.
@mohamedgaal5340 1 year ago
Thank you, Krish. Well explained!
@joeljoseph26 3 years ago
Guys, please watch out for the mistake at 16:10: for credit >50, (G,B) = {−0.5, 0.5}, so there are only two residuals, not three. The information gain for the right side is 0.67; however, the right node is still chosen. By the way, your teaching is very simple and understandable. Keep making more videos; love your content.
@mihirjha1486 1 year ago
Loved it. Thank you!
@modhua4497 3 years ago
Good! Could you make a video explaining the difference between XGBoost and gradient boosting? Thanks
@muhammadsaqib2961 3 years ago
Quite an amazing and clear explanation.
@amitupadhyay6511 2 years ago
It's tough to understand on the first attempt, but thanks for laying out the outline so clearly. I will watch it until I understand it and can implement it from scratch.
@narendradamodardasmodi3286 3 years ago
Thanks, Krish, for building the nation on its AI journey.
@ajayrana4296 3 years ago
At least get us jobs too.
@ajiths1689 3 years ago
What new probability value should we consider when constructing the second decision tree?
@vishnukv6537 3 years ago
Sir, you are so pleasant and amazing at teaching.
@antonym9744 3 years ago
Amazing !!!
@RahulKumar-hb8cl 3 years ago
Sir, how will the probability value (0.5 for the base tree) be updated in each tree?
@nitinahlawat2479 3 years ago
Truly the Bhishma Pitamah of data science🙏 Respect you a lot👍
@accentureprep1092 2 years ago
Hi @krish, first of all kudos to you, great video. Can you tell me how XGBoost differs from the Apriori algorithm? Does it cover every combination as Apriori does (i.e., cover all combinations while creating the tree, as Apriori would for the same problem statement)? Thanks, love your work. Keep rocking.
@shashwattiwari4346 3 years ago
"Day 1 or one day, your choice." Thanks a lot Krish!
@islamicinterestofficial 3 years ago
What does this mean?
@nothing8919 3 years ago
Thank you a lot sir, you are my best teacher.
@alokranjanthakur5746 3 years ago
Sir, can you suggest some NLP projects using Python? I mean with live implementation.
@datakube3053 3 years ago
Thank you so much
@gardeninglessons3949 3 years ago
Sir, please make a video on the differences between all the boosting techniques; they are elaborate and I couldn't find out the exact differences.
@mohittahilramani9956 1 year ago
Seriously, thank you so much.
@Amansingh-tr1cf 3 years ago
The most awaited video
@ManoharKumar-cw3ed 3 years ago
Thank you sir! I have a question: how do we predict the probability value, between 0 and 1, at the beginning?
@sohinimitra7559 3 years ago
Can you please do a video on feature selection approaches? Especially the use of Mutual Information. Thanks. Great videos!!
@ShahnawazKhan-xl6ij 3 years ago
Great
@ArunKumar-sg6jf 3 years ago
How do you determine the value of Pr in the base model?
@brunojosebertora7935 2 years ago
Krish, I have a question: when you compute the output value you are taking the similarity weight. I think that is incorrect for classification, isn't it? To compute the output you shouldn't square the residuals. THANKS for the video!!
@jainitafulwadwa8181 3 years ago
The similarity score is not the output value; there is a different formula for calculating the output based on the residuals. You just have to remove the square in the numerator of the similarity score function.
@raneshmitra8156 3 years ago
Super explanation
@bayazjafarli3867 2 years ago
Hi, thank you very much for this explanation! Great video! But I have one question: at 19:39 you first wrote 0, which is the prediction for the first row, then you added learning rate × similarity weight. Instead of 0, shouldn't we write 0.5, the average probability of the first (base) model, i.e. 0.5 + learning rate × similarity? Please correct me if I am wrong.
@rutvikvatsa767 1 year ago
The base model's contribution enters after we put the first probability (0.5) through log(odds) (bottom-right corner of the board), and log(0.5/0.5) = 0. Hence it is 0.
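The two points made in the replies above (the leaf output drops the square from the similarity-score numerator, and the 0.5 prior enters as log(odds) = 0) can be sketched in a few lines. λ = 1 and a learning rate of 0.3 are assumed here for illustration:

```python
import math

def leaf_output(residuals, prev_probs, lam=1.0):
    # Output value of a leaf: like the similarity weight, but WITHOUT
    # squaring the summed residuals in the numerator.
    return sum(residuals) / (sum(p * (1 - p) for p in prev_probs) + lam)

def updated_probability(prev_prob, leaf_out, lr=0.3):
    # New log(odds) = old log(odds) + learning_rate * leaf output;
    # log(0.5 / 0.5) = 0, which is why the base term starts at 0.
    log_odds = math.log(prev_prob / (1 - prev_prob)) + lr * leaf_out
    return 1 / (1 + math.exp(-log_odds))

out = leaf_output([0.5], [0.5])       # single-residual leaf: 0.5 / 1.25 = 0.4
print(updated_probability(0.5, out))  # sigmoid(0 + 0.3 * 0.4)
```

Running this for a positive residual pushes the probability slightly above 0.5, and repeating the update over more rounds drives the residuals down.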
@nandangupta727 3 years ago
Thank you so much for the step-by-step explanation. A quick question: what would we do if we had continuous variables rather than categorical ones? Would we proceed as we do in a decision tree for continuous features, or is XGBoost not recommended for continuous features?
@thepresistence5935 2 years ago
I think we use all the models and take the result by comparing them; I think that would be better.
@subratakar4392 2 years ago
For continuous data like salary, the column is first sorted in ascending order; then for each consecutive pair of values an average is computed, and each average is taken as a splitting condition. The one where the gain is highest is used for the split. For example, with 5 salaries 10, 20, 30, 40, 50, the first candidate split would be on salary
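The sort-and-take-midpoints procedure described above, as a small sketch:

```python
def candidate_thresholds(values):
    # Sort the unique values; the midpoint of each adjacent pair
    # becomes a candidate "feature < threshold" split.
    vals = sorted(set(values))
    return [(lo + hi) / 2 for lo, hi in zip(vals, vals[1:])]

print(candidate_thresholds([10, 20, 30, 40, 50]))  # [15.0, 25.0, 35.0, 45.0]
```

Each candidate threshold is then scored with the gain formula from the video, and the highest-gain threshold wins the split.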
@dulangikanchana8237 3 years ago
Can you do a video on the difference between statistical models and machine learning models?
@arshaachu6351 5 months ago
Are there any detailed videos about the AdaBoost regressor and the gradient boosting classifier? Please help me.
@vishaldas6346 3 years ago
Hi Krish, I have a doubt. Can you please confirm whether XGBoost is part of the ensemble techniques or not, since we import it from a separate library rather than from sklearn?
@krishnaik06 3 years ago
It is a separate library
@vishaldas6346 3 years ago
@@krishnaik06 But is it an ensemble technique?
@gshan994 3 years ago
@@vishaldas6346 What is XGBoost and where does it fit in the world of ML? Gradient boosting machines fit into a category of ML called ensemble learning, a branch of ML methods that train and predict with many models at once to produce a single superior output.
@ayanmullick9202 2 years ago
You are a legend, sir.
@sheikhshah2593 8 months ago
Great sir🔥🔥
@ppersia18 3 years ago
1st view, 1st like. Krish sir OP.
@mainakray6452 3 years ago
Is the max_depth in XGBoost 2 for each tree? Please answer.
@titangamezone4379 3 years ago
Sir, please make a video on gradient boosting for classification problems.
@tarabalam9962 6 months ago
Please upload a video on LightGBM.
@satwikram2479 3 years ago
Finally❤
@ashwanikumar-zh1mq 3 years ago
When training, we first calculate the residuals and create a decision tree, but here we cannot see how it classifies the points; it only says what happens when a new data point comes. I am confused by this.
@edwinokwaro9944 1 year ago
Is the formula for the similarity score of the root node correct, given that this is a classification problem?
@durjoybhattacharya250 1 year ago
How do you decide on the learning rate parameter?
@saimanohar3363 2 years ago
Great teacher. Just a doubt: can't we take credit as the first node?
@amitshende5161 3 years ago
It's lambda as the hyperparameter, which you mentioned as alpha...
@saptarshisanyal4869 2 years ago
StatQuest Light!!!! Fantastic effort though.
@ashwinkrishnan4285 3 years ago
Hi Krish, I have a doubt here. All the input features (salary, credit) are categorical, so we build the decision tree easily from the categories. Suppose the salary feature were continuous, like 30k, 50k, rather than categories like 50k: how would the decision tree split be done?
@shubhambavishi5982 3 years ago
Check out the decision tree algorithm video in the ML playlist. In it, he has explained how to handle numerical features.
@vishaldas6346 3 years ago
Hi Ashwin, for numerical features you have to set a threshold for each value by taking the average of adjacent values. For example, for 30k and 40k you take (30 + 40)/2 = 35k, and create a decision tree split on values less than 35k, i.e.
@SRAVANAM_KEERTHANAM_SMARANAM 3 years ago
Dear Krish, we have a course on machine learning. Around 40,000 people subscribe to this course, but since they don't understand it, many of them drop out in the middle. Why don't you create videos parallel to what is taught in the class and make a playlist for it? That way you could easily get many views in one shot. Are you interested in this?
@jamalnuman 5 months ago
great
@govind1706 3 years ago
Finally !!!!
@hemantsharma7986 3 years ago
Aren't gradient boosting and XGBoost the same, with minor differences?
@seniorprog9144 3 years ago
Sir Krish, do you have code that deals with more than one target (y with 2 or 3 columns, i.e. two or three targets)?
@VinodRS01 3 years ago
Sir, how does the model choose which similarity weight should be multiplied by the learning rate? Thank you sir, you are doing great by helping us🙂
@vishaldas6346 3 years ago
It's not the similarity weight that is multiplied; it's the output of the leaf node. The similarity weight is used to calculate the gain for splitting the nodes of the decision tree.
@KOTESWARARAOMAKKENAPHD 1 year ago
Is there any value other than 0 for that hyperparameter in the XGBoost algorithm?
@subhodipgiri2924 3 years ago
How can we subtract the probability of a value from that value? Suppose I record approvals as Y and N; their probability remains 0.5, but we cannot subtract 0.5 from Y or N. I did not get the concept of subtracting the probability from the value.
@Acumentutorial 2 years ago
What is the role of lambda in the similarity weight here?
@adityarajora7219 2 years ago
How is Pr going to change? Please explain!
@ajayrana4296 3 years ago
What is the similarity weight? Why do we use it, what is its advantage, and what is the intuition behind it?
@hackernova1532 3 years ago
What is lambda in the similarity weight formula? Please, someone answer.
@swethanandyala 1 year ago
Hi sir @Krish Naik, what will be the initial probability when there are multiple classes? If anyone knows the answer, please share.
@IamGaneshSingh 2 years ago
This video is "pretty much important!"
@shivanshsingh5555 3 years ago
Can anyone tell me whether 'Pr' and 'Prob' in the denominator are the same thing?
@KOTESWARARAOMAKKENAPHD 1 year ago
What is the need for the log(odds) function?
@adireddy694 3 years ago
How have you calculated the probability? How did you get 0.5?
@belxismarquez4447 1 year ago
Please subtitle the videos in Spanish. There is a community that speaks Spanish and listens to your videos.
@REHAN-ANSARI- 1 year ago
XGBoost is the secret of my energy
@mayurpardeshi395 3 years ago
How is Krish calculating the gain?
@dheerendrasinghbhadauria9798 3 years ago
How is he taking probability = 0.5 throughout the whole process? What is the calculation behind that probability?
@pratikbhansali4086 3 years ago
You didn't upload the gradient boosting classification videos, i.e. parts 3 and 4 of gradient boosting.
@deepsarkar2003 3 years ago
Can anyone explain the video at 21:38? (0 − 0.6) = −0.6, not 0.4, right? Or did I get it wrong? Please advise.
@sudiptodas6272 3 years ago
I have the same question.
@mohana4179 2 years ago
Please put up a LightGBM mathematical explanation, sir.
@datakube3053 3 years ago
250k coming soon
@naveenvinayak1088 3 years ago
Krish, how do you stay so focused?
@chiranjivikumar3690 2 years ago
What is the use?
@cynthiac2174 1 year ago
Can someone please help me clear this up: why has the negative sign not been considered while calculating the similarity weight?
@doofsCat 1 year ago
The sign is handled in the numerator, where the summed residuals are squared, and in the denominator we multiply p by (1 − p), so a negative or positive residual leaves the result unchanged.
@suridianpratama7855 1 year ago
How does XGBoost work for multiclass?
@olfchandan 3 years ago
Why does it work?
@yourvibe2844 3 years ago
You've not applied the user-defined gamma subtraction after calculating the gain in order to prune.