
Kaggle's 30 Days Of ML (Competition Part-4): Hyperparameter tuning using Optuna

11,715 views

Abhishek Thakur


1 day ago

Comments: 34
@abhishekkrthakur · 3 years ago
Notebook-1: www.kaggle.com/abhishek/competition-part-4-hyperparameter-tuning-optuna Notebook-2: www.kaggle.com/abhishek/competition-part-4-optimized-xgboost Please subscribe and like the video to help keep me motivated to make awesome videos like this one. :)
@channelname9332 · 3 years ago
🔥👍
@geekyprogrammer4831 · 3 years ago
Thank God you are back 😭😭😭 Once I get a job related to DS, I'll definitely pay you back, brother!
@madhu1987ful · 3 years ago
Whoa!!! After this Optuna tuning I jumped from rank 1500 to the top 300! :-)
@abhishekkrthakur · 3 years ago
Nice work! Watch the next one and jump to 30!
@vivekchowdhury4225 · 2 years ago
Beautifully explained, GradMaster 🙌 I was searching for a tutorial like this for a long time. Also, your book is amazing.
@soumyasubhrabhowmik2209 · 3 years ago
Thank you so much for your clear explanations of what I believe are pretty complex concepts. It has been a great experience learning from you over the past few weeks.
@agamenon1953 · 3 years ago
Abhishek, you're amazing! Thank you so much for sharing this valuable knowledge!
@t-m5678 · 3 years ago
Thanks for the videos. These are very helpful.
@floopybits8037 · 2 years ago
Nice explanation of Optuna. I giggled when you said n_estimators=7000 is small, since I have an 8 GB PC 😁
@loguansiang2300 · 3 years ago
Thanks for the videos. Hope you can do more series on other tutorials and competitions.
@pandasaspd6588 · 2 years ago
Thank you very much, I learned a lot from this video.
@Yu-nd1kr · 1 year ago
Hi, if I use a Conv1D model, what code would I use with Optuna to optimize the number of filters and the kernel size?
@priyanksharma1200 · 3 years ago
What does log=True do in suggest_float? How is it different from suggest_loguniform or suggest_discrete_uniform?
@eyalbaum1254 · 3 years ago
Love the video! I was wondering: why not incorporate cross_val_score within the study? Wouldn't it deliver better results in terms of model selection? I tried my best to incorporate it but couldn't find an elegant solution (maybe I don't even have to).
@abhishekkrthakur · 3 years ago
Because we created our own folds in part-1 and we have been using them. cross_val_score creates its own folds, but we want to use our own folds :)
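For readers who want the pattern the reply describes: a minimal, self-contained sketch of evaluating over a precomputed `kfold` column instead of letting `cross_val_score` split the data. The data, the `Ridge` model, and the column name are placeholders, not the series' actual part-1 setup:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import KFold

# Toy data with a precomputed "kfold" column, mimicking part-1 of the series
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(200, 4)), columns=["f0", "f1", "f2", "target"])
df["kfold"] = -1
kf = KFold(n_splits=5, shuffle=True, random_state=42)
for fold, (_, valid_idx) in enumerate(kf.split(df)):
    df.loc[valid_idx, "kfold"] = fold

# Evaluate by looping over the stored folds, so every experiment
# (and every Optuna trial) sees exactly the same splits
features = ["f0", "f1", "f2"]
scores = []
for fold in range(5):
    train = df[df.kfold != fold]
    valid = df[df.kfold == fold]
    model = Ridge()
    model.fit(train[features], train.target)
    preds = model.predict(valid[features])
    scores.append(mean_squared_error(valid.target, preds) ** 0.5)

print(np.mean(scores))
```

The point of storing the folds in a column is reproducibility: every trial in the study is scored on identical splits, so scores are directly comparable.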
@kiranchowdary8100 · 3 years ago
What is the reason for models giving high accuracy with CPU and not so high, or a bit less, with GPU? Are there any technical reasons to know about?
@abhishekkrthakur · 3 years ago
Check out the default value of the tree_method parameter on GPU and CPU.
@AI-Kawser · 3 years ago
Sir, can you please share the link to the other video you mentioned at the beginning? Thank you.
@abhishekkrthakur · 3 years ago
kzfaq.info/get/bejne/a9SJpK5ercfTe40.html&ab_channel=AbhishekThakur
@madhu1987ful · 3 years ago
Hi Abhishek, in the Optuna code I observed that you have commented out the GPU params in XGBRegressor while submitting predictions, but the same were enabled during hyperparameter tuning. Is there a reason?
@abhishekkrthakur · 3 years ago
The answer to the question is already in my video :) GPU uses `tree_method="gpu_hist"` but CPU uses `tree_method="exact"`. Apparently, the latter is better on this dataset. So we fine-tune on GPU, so that it is fast, and submit with CPU.
@mohamadosman9616 · 3 years ago
This is great, but up to now I am stuck in 100th place :( Now I am trying this technique for 1000 trials to find the best parameters.
@abhishekkrthakur · 3 years ago
See part 5!
@mohamadosman9616 · 3 years ago
@@abhishekkrthakur ❤️❤️❤️❤️