Maths behind XGBoost | XGBoost algorithm explained with data, step by step

44,228 views

Unfold Data Science

A day ago

Comments: 173
@prateeksachdeva1611 · a year ago
This channel has become one of my favorite platforms to learn ML, owing to Aman's crisp explanations.
@oluwafemiolasupo4018 · 22 days ago
Nice one here. Thank you for the simplicity employed in explaining the core concepts.
@cenxuneff · 2 days ago
Excellent explanation
@abiramimuthu6199 · 3 years ago
Thanks a lot Aman. Great video. Teaching is an art, and you do it justice every time by breaking the concept into small steps and explaining it in a way that reaches everyone. Keep up the good work. I am expecting more videos in your NLP playlist.
@UnfoldDataScience · 3 years ago
Thanks a ton Abirami. Hope you and your family are staying safe and well.
@nikhilpawar7876 · 2 years ago
Looking at the content and the number of subscribers, this channel is highly underrated.
@UnfoldDataScience · 2 years ago
Kindly share it within your groups Nikhil, that may help. Thank you.
@chdoculus · a year ago
Listened to this video 3 times; lots of insights. Thank you.
@vivekkumaryadav8802 · 2 years ago
YOU ARE TRUE KNOWLEDGE
@UnfoldDataScience · 2 years ago
Thanks Vivek, cheers.
@asafjerbi1867 · 2 years ago
Hi, excellent explanation, but some points are not clear to me yet. 1. How do you choose the criterion to split the XGBoost tree by? For instance, you chose 'age
@santoshvjadhav · 10 months ago
Great video Sir. You have explained it clearly and in a very simple way. Thanks a lot 🙏
@UnfoldDataScience · 10 months ago
So nice of you Santosh. Please share with friends.
@animeshbagchi7881 · 3 years ago
One of the best explanations of the complex intuition behind XGBoost.
@UnfoldDataScience · 3 years ago
Thanks Animesh.
@user-px3ux9we6e · 7 months ago
Thanks a lot for this excellent video! I am still curious about how XGBoost achieves parallelization and how it handles missing values, as you mentioned before. Looking forward to your new videos!
@maruthiprasad8184 · 8 months ago
Superb, simple explanation. Thank you very much.
@nikhilpawar7876 · 2 years ago
Definitely the best and most understandable explanation of XGB 🔥
@UnfoldDataScience · 2 years ago
Cheers Nikhil.
@ruchitagarg4871 · 3 years ago
Very nicely explained, thanks Sir. One of the best videos I have seen on YouTube.
@UnfoldDataScience · 3 years ago
Thanks Ruchita.
@Krishna-pm8ty · 2 years ago
Very nice explanation!
@UnfoldDataScience · 2 years ago
Thanks Krishna.
@Krishna-pm8ty · 2 years ago
@@UnfoldDataScience Simplifying the concept without losing the complexity. One of the best explanations on YouTube. Your channel really deserves more visibility. All the very best, Aman.
@samarkhan2509 · 3 years ago
Very nice video: brief, concise, to the point. I agree with the others; probably the best explanation so far on YouTube. Way to go, bro.
@UnfoldDataScience · 3 years ago
Your comments are my motivation, Samar. Thanks for motivating.
@killeraudiofile8094 · 3 years ago
Thanks a lot for this. Very helpful for me as I am brushing up on ML theory for interviewing. Awesome work!
@UnfoldDataScience · 3 years ago
Glad it was helpful!
@sachin29596 · 2 years ago
Sir, in the formula new prediction = old prediction + learning rate * output, I didn't understand how we get the output value of 6 for the second record. Could you explain once again?
@srk5702 · 7 months ago
formula = sum of residuals / no. of residuals
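Putting the reply above into code: a minimal sketch of the leaf-output and prediction-update formulas discussed here. The residuals 4 and 8 come from the video's example; the old prediction of 30 and learning rate of 0.3 are illustrative assumptions, not the video's exact numbers.

```python
# Leaf output value in XGBoost regression: sum of residuals / (count + lambda).
def leaf_output(residuals, lam=0.0):
    return sum(residuals) / (len(residuals) + lam)

# Prediction update: new prediction = old prediction + learning rate * output.
def new_prediction(old_pred, learning_rate, output):
    return old_pred + learning_rate * output

out = leaf_output([4, 8], lam=0.0)     # (4 + 8) / (2 + 0) = 6.0
print(out)
print(new_prediction(30.0, 0.3, out))  # 30 + 0.3 * 6 = 31.8
```

With lambda set to 0, the output is simply the average of the residuals in the leaf, which is where the 6 comes from.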
@miroslavstimac4384 · 3 years ago
Excellent explanation.
@UnfoldDataScience · 3 years ago
Thanks a lot for watching, Miroslav.
@sadhnarai8757 · 2 years ago
Very good Aman
@UnfoldDataScience · 2 years ago
Thank you.
@subhz1 · 3 years ago
Nice explanation! Can you please make a video on XGBoost / Gradient Boost where the dependent variable is binary/categorical in nature, say Good/Bad (0, 1)?
@UnfoldDataScience · 3 years ago
Great suggestion Subhadip. Noted.
@ranajaydas8906 · 3 years ago
Sir, can you please tell why you didn't square the SR at 12:08? And can you tell how the output at 14:02 is 6? What does the output actually mean?
@avneshdarsh9880 · 3 years ago
The output value is calculated as the average of the residuals, in our case (4+8)/2 = 6.
@akashkewar · 3 years ago
1) We square the sum of residuals when we compute the similarity score, not when we make a prediction. 2) As we are making a prediction, and assuming lambda is 0, the prediction is just the average of all the values (residuals) in a particular leaf. 3) The output means residuals: we predict the residuals (because it is a boosting algorithm) such that the weighted sum of all the residuals is as close to the target variable as possible (the final prediction by our model).
@utkarshsaboo45 · 2 years ago
@@akashkewar Are you sure we "square the sum" and not "sum the squares"? The "square of the sum" in the video doesn't make sense!
@avinashajmera2775 · a year ago
Hi Aman, please clear this up.
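For readers landing on this thread: in the standard XGBoost formulation, the similarity score does use the square of the sum of the residuals, while the leaf output (used for prediction) uses the plain sum, which is why both 72 and 6 appear in the video. A sketch with lambda = 0, as in the video's example:

```python
def similarity_score(residuals, lam=0.0):
    # (sum of residuals)^2 / (count + lambda): used to score and choose splits.
    return sum(residuals) ** 2 / (len(residuals) + lam)

def leaf_output(residuals, lam=0.0):
    # sum of residuals / (count + lambda): no square; used for prediction.
    return sum(residuals) / (len(residuals) + lam)

print(similarity_score([4, 8]))  # 144 / 2 = 72.0
print(leaf_output([4, 8]))       # 12 / 2 = 6.0
```

So it is "square the sum" for the score, and the two formulas serve different purposes: 72 ranks the split, 6 updates the prediction.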
@preranatiwary7690 · 4 years ago
Hey, good one again! Continue your good work. Thanks.
@UnfoldDataScience · 4 years ago
Thanks for the feedback.
@drsivalavishnumurthy34 · 4 years ago
Good explanation sir. Kindly make a video on SVM and alternating decision trees.
@datadriven597 · 2 years ago
Awesome in-depth explanation, keep up the good work man!
@UnfoldDataScience · 2 years ago
Glad you liked it!
@sachink9102 · 6 months ago
Q1. How to interpret the similarity score? Q2. What is the meaning of a high similarity score vs. a low similarity score?
@sandipansarkar9211 · 2 years ago
finished watching
@HrisavBhowmick · 3 years ago
12:01 why not the square of the sum of residuals, as you said in the formula?
@ahmedidris305 · a year ago
At 10:27 I don't understand why the similarity score after the split is affected by changing the lambda value before the split ("Why will the similarity score after the split go down?"). As I understood from the video, the split rule has nothing to do with the lambda value, so if lambda changes, the split remains the same; the only thing that changes is the gain. When lambda goes up, the similarity score before the split decreases and the gain increases, because the deducted value (the similarity score before the split) decreases as lambda gets higher.
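The gain the comment refers to can be written down directly. A sketch of how lambda enters (the residual values below are made up for illustration): gain is the children's similarity scores minus the root's, and a split is pruned when gain falls below gamma. Because lambda sits in every denominator, raising it shrinks all similarity scores (small leaves most of all), which lowers gain and prunes more aggressively; that is its regularizing effect.

```python
def similarity_score(residuals, lam=0.0):
    return sum(residuals) ** 2 / (len(residuals) + lam)

def gain(left, right, lam=0.0):
    # Gain = SS(left leaf) + SS(right leaf) - SS(root); prune if gain < gamma.
    root = left + right
    return (similarity_score(left, lam) + similarity_score(right, lam)
            - similarity_score(root, lam))

# Hypothetical residuals on the two sides of a candidate split:
left, right = [-10.5], [6.5, 7.5]
print(round(gain(left, right, lam=0.0), 2))  # larger gain without regularization
print(round(gain(left, right, lam=1.0), 2))  # smaller gain once lambda > 0
```

Note the single-residual left leaf loses the most score when lambda rises, which is also how lambda damps the influence of isolated extreme points.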
@JoshDenesly · 3 years ago
This is the best video on XGBoost.
@UnfoldDataScience · 3 years ago
Thanks Vish.
@pradeeppaladi8513 · a year ago
Thanks a lot for the lecture. Can you please clarify what happens in the case of a classification problem? I mean, what about the residuals in a classification problem, as there will be no residuals there? How do we interpret these learnings for a classification problem?
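Since this question recurs in the thread and is not covered in the video, a brief sketch of the standard XGBoost binary-classification case: residuals do exist there; they are the observed label minus the predicted probability (the initial prediction is typically 0.5), and the denominator in the score and output formulas changes from the leaf count to the sum of p*(1-p). The toy labels below are purely illustrative.

```python
def leaf_output_cls(residuals, probs, lam=0.0):
    # Classification leaf output: sum of residuals / (sum of p*(1-p) + lambda).
    denom = sum(p * (1 - p) for p in probs) + lam
    return sum(residuals) / denom

# Toy leaf: labels 1, 0, 1 with an initial predicted probability of 0.5 each.
labels = [1, 0, 1]
probs = [0.5, 0.5, 0.5]
residuals = [y - p for y, p in zip(labels, probs)]  # 0.5, -0.5, 0.5
print(leaf_output_cls(residuals, probs))  # 0.5 / 0.75
```

The leaf output then updates the running log-odds, and a sigmoid converts the log-odds back to a probability for the next round.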
@ahteshaikh1193 · a year ago
Thanks for the excellent work!!
@rafibasha4145 · a year ago
Hi Aman, thanks for the video. Please explain how lambda controls overfitting.
@Madhuram_Qualityoflife · 4 years ago
I like the way you explain complex concepts in a simple way. Thanks.
@UnfoldDataScience · 4 years ago
Thanks Vibhaas, your comments motivate me :)
@Vipulghadi · a year ago
Sir, you take only one feature for prediction. What if the data has more than one feature? On which criterion does the model select the feature? Is an information-gain-like approach used, or some other approach? Please explain, sir.
@logeshr4923 · 27 days ago
Can you do one for XGBoost classification?
@sumitkumardash119 · 3 years ago
Can you describe the loss function for it?
@babusivaprakasam9846 · 3 years ago
Straight to the point. Thanks.
@UnfoldDataScience · 3 years ago
Welcome Babu.
@parvsharma8767 · 3 years ago
Thanks a lot brother. God bless you for the information.
@UnfoldDataScience · 3 years ago
Always welcome Parv.
@chdoculus · a year ago
One question: what is the output in the last formula for the new prediction? Which output is it?
@omarsalam7586 · 11 months ago
Thank you. Could you explain how to compute feature importance using XGBoost?
@SunilKumar-mz6kr · 3 years ago
Great explanation
@UnfoldDataScience · 3 years ago
Glad it was helpful, Sunil. If possible, please share the link within data science groups. Thanks again.
@sandipansarkar9211 · 3 years ago
Great explanation.
@UnfoldDataScience · 3 years ago
Thanks a lot Sandipan.
@cgqqqq · 3 years ago
you are a god...
@UnfoldDataScience · 3 years ago
IT'S TOO MUCH :)
@dhineshmathiyalagan6415 · 3 years ago
Very informative, thanks for explaining the concept so that it is easily understood. I just want to understand the effect of an outlier on the base value (Model 0). Since the mean (which is high in the presence of an outlier) is used initially to calculate the residuals and for prediction, wouldn't it have a greater impact? Please share your insights.
@UnfoldDataScience · 3 years ago
Yes, exactly Dhinesh, there will be an outlier impact, hence it is better to take care of it before starting training.
@akshatagrawal6701 · a year ago
Dear Aman ji, one question please: the SS value is SR squared, but when you calculated for the value 11 you only took the sum of the residuals without squaring them, so please explain how it comes to 6. If we square the SR, the value would be different.
@UnfoldDataScience · a year ago
I will check - I may have made a mistake; did you check the previous comments?
@akshatagrawal6701 · a year ago
@@UnfoldDataScience Thanks, Aman ji, for reading your viewers' comments and respecting their doubts. I think in one comment you gave the full paper link and one more link for more detail, so I will check from there... thanks.
@atomicbreath4360 · 3 years ago
Sir, what exactly is the difference between the base-model trees created in gradient boosting and in XGBoost? Does gradient boosting also use the formula you have shown in the video?
@surajprusty6904 · 2 years ago
If we take the mean as the criterion, then the sum of the residuals will always be zero if the values are taken as they are (with signs).
@ambarkumar7805 · 2 years ago
Is the procedure the same for classification?
@debojitmandal8670 · a year ago
Hello sir, I didn't follow how it handles outliers. You said it handles outliers, but you have not explained how: as lambda increases, the similarity score decreases, but how does that take care of outliers? I couldn't understand the relationship between them. Second, say a new data point comes, e.g. 11, so it goes to the branch greater than 10. Will a new similarity score then be calculated, since you now have a third data point, i.e. (4+8+11)^2/(3+0)?
@jatin7836 · 3 years ago
Very explanatory video, great work bro. I just need to ask one thing about that output at the end: how did we get 6 as the output? Because (4+8)^2 / (2+0) = 72; only if we do not square do we get 6. But the formula is with the square, right? So how did we get 6 as the output? It must be something else (maybe 72, I think). Please explain.
@UnfoldDataScience · 3 years ago
Thanks Jatin, will check that. Thanks for pointing it out.
@himanshuarora6822 · 3 years ago
Hi Aman, Jatin is correct. It should be 72 instead of just 6. If we take 72, the value of the residual is (34 - 51.6) = -17.6. Please see and suggest if I am correct. Also, the value of the residual is decreasing in this case from 4 to -17.6; how do we further reduce it so that it is closer to 0?
@r.h.5172 · 3 years ago
@@himanshuarora6822 I have the same doubt. Is this cleared up somewhere? Aman, could you please explain?
@avneshdarsh9880 · 3 years ago
The output value is calculated as the average of the residuals, in our case (4+8)/2 = 6.
@ganeshkharad · 11 months ago
Too good...!!
@UnfoldDataScience · 11 months ago
Thanks Ganesh
@tempura_edward4330 · 3 years ago
Very clear! Thank you! ✨🙏
@UnfoldDataScience · 3 years ago
You're welcome 😊. Please share my videos in the various data science groups you are part of; that will motivate me to create more content :)
@awanishkumar6308 · 3 years ago
Can we apply L1 and L2 regularization techniques to any algorithm, whether it's linear regression, XGBoost, GBoost, random forest, etc.?
@UnfoldDataScience · 3 years ago
Not directly; there are different regularization parameters we can tune in the various algorithms.
@giridharreddy8113 · 4 years ago
Probably the best on YouTube. It would be really great if you could make a video on the books you have learnt from, and if possible provide Amazon links to the books.
@UnfoldDataScience · 3 years ago
Thanks Giridhar. On books, please find my recommendation below; you will find links to buy in the description of the same video: kzfaq.info/get/bejne/oKqnpM2evJeqk5s.html
@jayitabhattacharyya4313 · 3 years ago
Where is the output 6 coming from? The similarity score for the 2nd branch was 72 according to your formula. I fail to understand; please help.
@harshagarwal8170 · 3 years ago
You can see from the previous tree: (4+8)/(2+0) = 6; lambda is 0, as he said.
@tempura_edward4330 · 3 years ago
I think he means that calculating the new prediction is just (4+8)/#R, not calculating the similarity score. I got confused too. 😁
@ppsheth91 · 4 years ago
Hello Sir, a really nice explanation of such a complicated algorithm. There is hardly any video that describes the in-depth intuition of XGBoost. Thanks a lot Sir. One doubt: can you explain how the classification of a new record from the test data set will take place? And can you create such videos for CatBoost and LightGBM?
@RameshYadav-fx5vn · 4 years ago
Really very nice
@UnfoldDataScience · 4 years ago
Thanks Prayag, I will add videos on CatBoost and LightGBM as well.
@UnfoldDataScience · 4 years ago
Thank you.
@akhilnooney534 · 3 years ago
Do we calculate IG and entropy for the splitting criteria?
@UnfoldDataScience · 3 years ago
No, Python does it for us.
@AMVSAGOs · 3 years ago
Hi Aman, can you please tell us why the data should be normally distributed, and how it affects ML models?
@UnfoldDataScience · 3 years ago
The model gets a wider range to learn from, to keep it simple.
@rajeev264u · 3 years ago
Thanks Aman for sharing your knowledge. Great learning. Can you please explain the relation between min_child_weight and gamma? Do we still need to tune min_child_weight if we are using gamma values for tuning, as the tree is already getting pruned by a higher gamma?
@UnfoldDataScience · 3 years ago
Hi Rajeev, regarding tuning your hyperparameters, you should try different combinations to see what works well for your model. We cannot take a generic approach for all data.
@nishidutta3484 · 3 years ago
Hey Aman, you talked about missing-value treatment in XGBoost in your previous video. How does XGBoost treat missing values?
@UnfoldDataScience · 3 years ago
Hi Nishi, sorry for the late reply. That needs a slightly longer explanation; please check the link below to understand more: datascience.stackexchange.com/questions/15305/how-does-xgboost-learn-what-are-the-inputs-for-missing-values
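In short, per the linked answer, XGBoost is sparsity-aware: at each split it tries sending the rows with missing feature values down both branches and keeps the "default direction" that yields the better score. A toy sketch of that idea (the residual values are made up for illustration):

```python
def similarity_score(residuals, lam=0.0):
    return sum(residuals) ** 2 / (len(residuals) + lam)

def default_direction(left, right, missing, lam=0.0):
    # Route the missing-value residuals left, then right; keep the better option.
    send_left = similarity_score(left + missing, lam) + similarity_score(right, lam)
    send_right = similarity_score(left, lam) + similarity_score(right + missing, lam)
    return "left" if send_left >= send_right else "right"

print(default_direction([-7.0, -6.0], [5.0, 8.0], [-5.5]))
```

At prediction time, a record with a missing value for that feature simply follows the learned default direction, so no imputation is needed beforehand.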
@vishnukv6537 · 3 years ago
Good explanation :)
@UnfoldDataScience · 3 years ago
Glad you liked it Vishnu.
@gauravverma365 · 2 years ago
Can we generate the mathematical equations between the adopted input and output parameters after a successful implementation of XGBoost?
@pacsSaanihaamariyam · a year ago
In the end, the new prediction value is subtracted from the IQ to find the new residual value. When new predictions are done, why is the residual calculated only for 34 and not for 20 and 38?
@souravbiswas6892 · 4 years ago
Awesome explanation 👍 although it was a bit complicated. Can you create videos on Poisson regression and survival analysis?
@UnfoldDataScience · 4 years ago
Thanks Sourav. Yes, I will put that on my list.
@shanmukhchandrayama3903 · 2 years ago
Sir, can you please explain how XGBoost works for logistic regression?
@UnfoldDataScience · 2 years ago
Do you mean classification?
@shanmukhchandrayama3903 · 2 years ago
@@UnfoldDataScience Yeah, my bad, yes. Thanks for taking the time to reply, sir.
@vikram5970 · 2 years ago
Hi Sir, I could not find the link to the theoretical explanation of 'how gradient boost works'. I found the one that explains why XGBoost is fast and has high performance. Can you please give me the link to how XGBoost works?
@JoshDenesly · 3 years ago
Please make a video on the "pipeline" of building a model and how it is implemented in production.
@UnfoldDataScience · 3 years ago
Noted.
@letsplay0711 · a year ago
14:22 - I think the output is 12 squared: 144/(2+0) = 72. Please correct me if I'm wrong...
@UnfoldDataScience · a year ago
Need to check
@drsivalavishnumurthy34 · 3 years ago
Sir, nice video. Please make a video where the dependent variable is categorical, that is yes or no.
@UnfoldDataScience · 3 years ago
Ok Vishnu.
@TarashankarSenapati-yz8rv · a year ago
Sir, how is 6 coming? You missed squaring the sum of 4 and 8; please tell me.
@mayanksriv00 · 3 years ago
Sir, please do cover LightGBM and its advantages over XGBoost.
@UnfoldDataScience · 3 years ago
Thanks Mayank. Noted.
@rafsunahmad4855 · 3 years ago
Is knowing the math behind an algorithm a must, or is knowing how the algorithm works enough? Please, please, please reply.
@UnfoldDataScience · 3 years ago
Knowing the math is a must.
@rafsunahmad4855 · 3 years ago
I'm confused, because others told me that if I want a research-related job (improving machine learning or creating new algorithms), then I must learn the math behind an algorithm, but for a normal data science job it is enough to know how an algorithm works, and knowing the math behind it is not a must. Please reply.
@lifeisbeautiful1111 · 9 months ago
Hi, can you please explain how the output is 6 for the second observation in the table?
@ujjwalgoel6359 · 7 months ago
Yes, I was looking at the same question, because the output for the right node was 72.
@ujjwalgoel6359 · 7 months ago
Do you know the answer?
@Amit-dl4vd · 2 years ago
Where is gradient descent happening in the algorithm?
@rahuljaiswal141 · 4 years ago
Can you make an end-to-end clustering video? How to select variables, the number of clusters, and then final deployment.
@UnfoldDataScience · 4 years ago
Hello, thanks for the feedback. I will note this topic and create a video in the coming weeks for sure.
@shivanshjayara6372 · 3 years ago
I don't understand how the tree decides which feature should be the root node. If it depends on the information gain, then I get it. Second, it would be better if you took more than 3 records in that example, like 5-6, because I am not able to tell whether each row goes into the operation one at a time or all rows at once.
@dvp1678 · 2 years ago
At 7:19, should it not be Age < 10 instead of Age > 10?
@Hu.aventuras · 2 years ago
Hi Aman!!! I have a question: how can I predict gender from mobile phone data with an XGBoost algorithm?
@UnfoldDataScience · 2 years ago
You need to create the data such that your target column is gender, and then you can run the XGBoost classifier.
@shivanshjayara6372 · 3 years ago
I don't understand the formula: first you used the (sum of residuals) squared, and second you used only the sum of residuals. What is the reason?
@harivgl · 3 years ago
Are all the models M1, M2, etc. the same model, data, tree and features?
@UnfoldDataScience · 3 years ago
Depends on what your M1, M2 are; usually the same.
@hiteshyerekar2204 · 3 years ago
Hi Aman, it only changes one residual, i.e. 2.2. What about the rest? How do we get the remaining residuals?
@UnfoldDataScience · 3 years ago
In a similar way; I just gave one example, Hitesh.
@ajaybhatt6820 · 4 years ago
Sir, please make videos on RNN and LSTM.
@UnfoldDataScience · 4 years ago
Hi Ajay, it will come for sure.
@saurabhdeokar3791 · 2 years ago
In the new prediction, which value do you take as the output?
@UnfoldDataScience · 2 years ago
Which part of the video, Saurabh?
@saurabhdeokar3791 · 2 years ago
@@UnfoldDataScience The maths part...
@71shubham · 3 years ago
How did we decide the age splitting criterion?
@UnfoldDataScience · 3 years ago
I just took it as an example here.
@januaralamien9421 · 3 years ago
XGBoost: algorithm or framework? Please explain.
@UnfoldDataScience · 3 years ago
Internally a framework; however, the implementation is available in Python, hence we call it an algorithm.
@AhmadHassan-on6zq · 3 years ago
🙇‍♂️
@UnfoldDataScience · 3 years ago
:)
@tapaspal8623 · 4 years ago
Hi Aman Sir, can you please explain how parallelism happens, since it runs sequentially? The next model requires the previous model's output. Thanks, Tapas
@UnfoldDataScience · 4 years ago
Hi Tapas, the parallelism is not in model training. I was talking about parallelism in terms of hardware, for example using multiple cores of the processor; not to be confused with model training.
@YashSharma-xb2os · 4 years ago
Hi Aman, please make a Telegram or WhatsApp group where we can connect with you and ask queries.
@UnfoldDataScience · 4 years ago
I'll create one, Yash, for sure.
@geetisudhaparida2523 · 3 years ago
Where has the value 6 come from??
@UnfoldDataScience · 3 years ago
Where is the 6? Can you point me to the time in the video?
@chanpreetsingh93 · 4 years ago
How do we set or calculate the gamma value?
@UnfoldDataScience · 4 years ago
Good question. It's subjective, based on how the model behaves with the data; we can give a range and decide to tune it.
@navneetgupta4669 · 3 years ago
The learning rate was 72 (144/2). How did it change to 6?
@UnfoldDataScience · 3 years ago
Hi Navneet, can you let me know the time in the video? I will play and check that part.
@navneetgupta4669 · 3 years ago
@@UnfoldDataScience After 12:00, when you added another input (11). You took lambda as zero but forgot to square the numerator.
@avneshdarsh9880 · 3 years ago
The output value is calculated as the average of the residuals, in our case (4+8)/2 = 6.
@bestcakesdesign · 3 years ago
Where are you working, sir?
@UnfoldDataScience · 3 years ago
Hi Nikul, please ask queries related to data science only.
@rohitkaushik2172 · 2 years ago
Please don't copy examples; please present new examples.
@UnfoldDataScience · 2 years ago
Feedback taken. Happy New Year.