
Gradient Boosting and XGBoost in Machine Learning: Easy Explanation for Data Science Interviews

34,805 views

Emma Ding

Days ago

Comments: 29
@anand3064 7 months ago
Beautifully written notes
@PhucHoang-ng4vh 9 months ago
just read out loud, no explanation at all
@jet3111 1 year ago
Thank you for the very informative video. It came up in my interview yesterday. I also got a question on time series forecasting and preventing data leakage. I think it would be great to have a video about it.
@emma_ding 1 year ago
Many of you have asked me to share my presentation notes, and now… I have them for you! Download all the PDFs of my Notion pages at www.emmading.com/get-all-my-free-resources. Enjoy!
@SanuSatyam 1 year ago
Thanks a lot. Can you please make a video on Time Series Analysis? Thanks in advance!
@zhenwang5872 1 year ago
I usually watch Emma's videos when I'm doing revision.
@jennyhuang7603 1 year ago
For 5:10, why is the MSE gradient r_i equal to Y - F(X) instead of 2*(Y - F(X))? Or does the coefficient not matter?
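For context on the factor of 2: it depends on the loss convention. A short derivation, assuming the half-MSE convention common in the gradient boosting literature (not necessarily the exact convention used in the video):

\[
L\bigl(Y, F(X)\bigr) = \tfrac{1}{2}\bigl(Y - F(X)\bigr)^2,
\qquad
r = -\frac{\partial L}{\partial F(X)} = Y - F(X).
\]

Without the 1/2, the gradient is 2(Y - F(X)); the constant factor only rescales the step and is absorbed into the learning rate, so it does not change which model is learned.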
@emmafan713 1 year ago
I am confused about the notation: h_i is a function trained to predict r_i, and r_i is the gradient of the loss function w.r.t. the last prediction F(X). So h_i should be similar to r_i; why is h_i similar to the gradient of r_i?
@Heinz3792 5 months ago
I believe there is an error in this video. r_i is the gradient of the loss function w.r.t. the CURRENT F(X), i.e. F_i(X). The NEXT weak model h_{i+1} is then trained to predict r_i, the PREVIOUS residual. Alternatively, all of this could be written with i-1 in place of i, and i in place of i+1. TL;DR: Emma should have called the first step "compute residual r_{i-1}", not r_i, and in the gradient formula she should have written r_{i-1}.
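A minimal sketch of the squared-error boosting loop this thread is discussing, with the residual computed from the current ensemble before the next weak learner is fit; names and hyperparameters are illustrative, not taken from the video:

    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    def gradient_boost(X, y, n_rounds=100, learning_rate=0.1):
        # F_0: a constant model, the mean of the targets
        prediction = np.full(len(y), y.mean())
        trees = []
        for _ in range(n_rounds):
            # Residual = negative gradient of (1/2)(y - F)^2 w.r.t. the CURRENT F
            residual = y - prediction
            # The NEXT weak learner is fit to that residual, as the comment says
            tree = DecisionTreeRegressor(max_depth=3).fit(X, residual)
            prediction = prediction + learning_rate * tree.predict(X)
            trees.append(tree)
        return trees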
@kandiahchandrakumaran8521 3 months ago
Excellent video, many thanks. Could you kindly make a video on time-to-event analysis with survival SVM, RSF, or XGBLC?
@Leo-xd9et 1 year ago
Really like the way you use Notion!
@emma_ding 1 year ago
Thanks for the feedback, Leo! I tried out a bunch of different presentation methods before this one, so I'm glad to hear you're finding this platform useful! 😊
@user-hq4ge6no3p 3 months ago
An excellent video
@annialevko5771 11 months ago
Hi! I have a question: how does the parallel tree building work? Since gradient boosting needs to calculate the error from the previous model in order to create the new one, I don't really understand in what way this is parallelized.
@shashizanje 5 months ago
It's parallelized within the construction of each tree: it can work on multiple independent features in parallel to reduce computation time. For example, to pick the root node it has to check the information gain of every single feature and then decide which one is best, so instead of calculating information gain one feature at a time, it calculates the IG of multiple features in parallel.
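A toy illustration of that idea: the split score for each feature is computed independently, so features can be scanned concurrently. This is a simplification, assuming variance reduction as the split score; real XGBoost does this in multithreaded C++ with histogram-based candidate splits:

    import numpy as np
    from concurrent.futures import ThreadPoolExecutor

    def best_split_for_feature(X, y, feature):
        # Score every candidate threshold of ONE feature; this work touches no
        # other feature, which is what makes the per-feature parallelism safe.
        best_score, best_threshold = np.inf, None
        for threshold in np.unique(X[:, feature])[:-1]:
            left = X[:, feature] <= threshold
            right = ~left
            # Weighted child variance: lower means a more informative split
            score = left.sum() * y[left].var() + right.sum() * y[right].var()
            if score < best_score:
                best_score, best_threshold = score, threshold
        return best_score, best_threshold, feature

    def find_best_split(X, y):
        # Farm the per-feature scans out to a thread pool, then keep the best
        with ThreadPoolExecutor() as pool:
            results = pool.map(lambda f: best_split_for_feature(X, y, f),
                               range(X.shape[1]))
        return min(results, key=lambda r: r[0])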
@aaronsayeb6566 2 months ago
There is a mistake in the representation of the algorithm: the equations for r_i, L(Y, F(X)), and grad r_i = Y - F(X) can't all hold true at the same time. I think r_i = Y - F(X), and grad r_i should be something else (right?)
@elvykamunyokomanunebo1441 1 year ago
Hi Emma, I'm struggling to understand how to build a model on residuals:
1) Do I predict the residuals and then get the MSE of the residuals? What would be the point/use of that?
2) Do I somehow re-run the model considering some factor that accounts for more of the variability, e.g. adding more (important) features that reduce the MSE/residual, then re-run the model adding a new feature to account for the remaining residual until there is no further reduction in the MSE/residual?
@poshsims4016 1 year ago
Ask ChatGPT every question you just typed. Preferably GPT-4.
@Heinz3792 5 months ago
It's important to understand what the residual is. The residual is a vector giving the magnitude of the prediction error AND the direction, i.e. the gradient. Thus, regarding your questions: 1) we predict the residual with a weak model, h, in order to know in which direction to move the prediction of the overall model F_i(X) so that the loss is reduced. We assume h makes a decent prediction, and thus we treat it like the gradient. 2) we then calculate alpha, the regularization parameter, in order to know HOW FAR to move in the direction of the gradient which h provides, i.e. how much weight to give model h. Minimizing the loss function gives us this value and keeps us from over- or undershooting the step size.
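In symbols, the step-size search described above is the usual line search from the gradient boosting literature (notation assumed here, not quoted from the video):

\[
\alpha_i = \arg\min_{\alpha} \sum_{j} L\bigl(y_j,\; F_i(x_j) + \alpha\, h(x_j)\bigr),
\qquad
F_{i+1}(X) = F_i(X) + \alpha_i\, h(X).
\]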
@objectobjectobject4707 4 months ago
Okay, subscribed!
@nihalnetha96 3 months ago
Is there a way to get the Notion notes?
@wallords 9 months ago
How do you add L1 regularization to a tree???
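For what it's worth, XGBoost's L1 term applies to the leaf weights rather than to the split structure: it enters the gain formula and shrinks leaf scores toward zero. A minimal sketch with the scikit-learn wrapper; the hyperparameter values are arbitrary, and reg_alpha is the relevant knob:

    import numpy as np
    from xgboost import XGBRegressor

    # reg_alpha is the L1 penalty on leaf weights (reg_lambda is the L2 term).
    # Larger values push leaf scores toward zero; L1 can zero some out entirely.
    model = XGBRegressor(n_estimators=200, max_depth=4,
                         learning_rate=0.1, reg_alpha=1.0)

    X, y = np.random.rand(200, 5), np.random.rand(200)  # toy data
    model.fit(X, y)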
@riswandaayu5930 10 months ago
Hello Miss, thank you for the knowledge. Miss, can I request the file for this presentation?
@ermiaazarkhalili5586 1 year ago
Any chance of having the slides?
@NguyenSon-ew9wn 1 year ago
Agree. Hope to have those notes.
@emma_ding 1 year ago
Yes! Download all the PDFs of my Notion pages at emmading.com/resources by navigating to the individual posts. Enjoy!
@faisalsal1 6 months ago
She just read the text with zero knowledge of the content. Not good.