
Complete Guide to Cross Validation

53,795 views

Rob Mulla

A day ago

Comments: 84
@casualgamer91 2 years ago
Hi Rob, thanks for the nice explanation of cross-validation. After you run the CV method, what do you use as your final model to predict new data? My first thought is to use the best-performing fold, but that seems to defeat the purpose of cross-validation and would be prone to overfitting. Would you use all 5 folds and then average the predictions, similar to how you calculated the out-of-fold AUC score? Or perhaps just train on all your data, since you have an idea that your model will perform at around 0.83 AUC on unseen data?
@robmulla 2 years ago
So glad you liked it. Your intuition is exactly correct. You would not want to use only the model from the best fold, because that model is overfit to just that one split. People often average across folds, which works especially well when stacking. However, most GMs (Kaggle Grandmasters) have perfected the art of re-training the final model on all the data. This sometimes requires additional parameter tuning and is done much more on intuition; for instance, training for fewer epochs because you have more data to train on. Hope that helps!
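A minimal sketch of the two options described above (hypothetical model and data; averaging the fold models vs. retraining one final model on everything):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold

X, y = make_classification(n_samples=1000, random_state=529)
kf = KFold(n_splits=5, shuffle=True, random_state=529)

# Option 1: keep every fold's model and average their predicted probabilities
fold_models = []
for train_idx, val_idx in kf.split(X):
    clf = RandomForestClassifier(random_state=0)
    clf.fit(X[train_idx], y[train_idx])
    fold_models.append(clf)

def predict_averaged(models, X_new):
    return np.mean([m.predict_proba(X_new)[:, 1] for m in models], axis=0)

# Option 2: once CV has told you the setup generalizes, retrain a single
# final model on ALL the data (possibly with re-tuned parameters)
final_model = RandomForestClassifier(random_state=0)
final_model.fit(X, y)
```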
@casualgamer91 2 years ago
Thanks for the reply! It'd be great if you could make a future video on retraining the final model on all the data, with hyperparameter tuning and perhaps a concrete example. Appreciate your hard work on educating us!
@user-nu2vd6qf6y 8 months ago
Can you please shed more light on this, Rob? I now understand that CV is a robust method of validating model performance, but inside the CV loop you fit the model 5 times (well, according to the fold number) using 5 different train-val datasets. Does that mean you're training 1 model for 5 consecutive rounds, so that each time the model has knowledge of more data? Or do you actually make 5 different models using those 5 different datasets?
@gunasekharvenkatachennaiah1033 6 months ago
@@casualgamer91 Here is some sample code for that (it trains one separate model per fold and averages their predictions):

```python
from sklearn.model_selection import KFold
from sklearn.metrics import accuracy_score
from sklearn.pipeline import Pipeline
from sklearn.tree import DecisionTreeClassifier
import numpy as np
import pandas as pd  # needed for the prediction helper below

# Assumes `df` is a DataFrame with a 'target' column and `preprocessing`
# is a previously defined transformer (e.g. a ColumnTransformer).
models = []
kf = KFold(n_splits=5, shuffle=True, random_state=0)

x = df.drop(columns=['target'], axis=1)
y = df['target']

tr_acc = []
ts_acc = []

# One fresh pipeline per fold; all five fitted models are kept
for train_index, test_index in kf.split(x, y):
    lr = Pipeline(steps=[
        ('preprocessing', preprocessing),
        ('classification model', DecisionTreeClassifier())
    ])
    x_train, x_test = x.iloc[train_index], x.iloc[test_index]
    y_train, y_test = y.iloc[train_index], y.iloc[test_index]
    lr.fit(x_train, y_train)
    y_train_preds = lr.predict(x_train)
    y_test_preds = lr.predict(x_test)
    tr_acc.append(accuracy_score(y_train, y_train_preds))
    ts_acc.append(accuracy_score(y_test, y_test_preds))
    models.append(lr)

# Average the five fold models' predictions for a new observation
def predict_avg(models, new_data):
    new_data_df = pd.DataFrame([new_data], columns=x.columns)
    predictions = []
    for model in models:
        predictions.append(model.predict(new_data_df))
    return np.mean(predictions)

new_data = [5.9, 3.0, 5.1, 1.8]
average_prediction = predict_avg(models, new_data)
print(average_prediction)
```

Ground Truth: 2.0
Predicted: 2.0
@MOAMA82 9 months ago
The best explanation of CV on YouTube. Rob is an ML beast, thank you.
@gauravmalik3911 2 years ago
This is the only video I've found on the net which explains not only the cross-validation part but also how to separate and divide our data into different sets. In most of the tutorials and articles I've found, they divide the data into just two parts and perform evaluation on the test set alone, which is wrong, but here I've found a proper explanation of the whole process. Awesome video. For the future, I would love to know how we can apply different classification metrics when we need high recall (for example, in cancer prediction) or high precision, etc. Again, thank you for the detailed explanation.
@robmulla 2 years ago
Thanks so much for this thoughtful comment. Even though this is one of my less popular videos I think the topic is really important. Cross validation is super crucial if you want your model to be robust and work on unseen data. A video about classification metrics is a great idea. Thanks again for watching.
@davidm6624 2 years ago
Wow, thanks. I'm going through ML on the theoretical side right now and it's refreshing to see such applied content! It's a long road ahead, but I believe that if you keep posting videos of such a) relevance and b) quality with a good c) frequency, in a couple of months you will be much bigger. Thanks again!
@robmulla 2 years ago
Thanks so much David! Glad you found it helpful.
@kalianeeboodoo4750 a year ago
Really great tutorial, so thorough and simple to understand. You're a natural tutor.
@robmulla a year ago
I appreciate that!
@ye-ym5jo a year ago
This is the best CV explanation I've ever watched, and it finally cleared up my confusion. Thanks a lot, sir!
@robinsonrios3199 10 months ago
Man, I really love your coding performance and all your explanations. You helped me a lot.
@eduardomanotas7403 a year ago
Rob, you are so great! I can come back to your videos many times and never get bored. I need more teachers like you, lol. You are a guy who really improves every day. Thanks for supporting the community!
@srimantamukherjee7090 2 months ago
Excellent and elegant flow of concepts and implementation.
@ronbzalen a year ago
This is pure gold! Thanks for this awesome content!!
@robmulla a year ago
Glad you enjoyed it!
@user-es3wr6uf2l a year ago
This is amazing. It was so helpful. Thank you Rob!
@robmulla a year ago
Glad you found it helpful! Cross validation is one of the most important things to master.
@berkguney8992 2 years ago
Great content as always Rob.
@robmulla 2 years ago
Thanks Berk! Glad you liked it. Tell your friends. :)
@raheemnasirudeen6394 a year ago
This is superb, I wish to be like Rob one day.
@robmulla a year ago
Thanks. I’m nothing special. Just sharing what I’ve learned.
@user-es2np7gb4f a year ago
❤❤❤ You are one of my best professors.
@robmulla a year ago
Thanks! Not a professor. Just a guy.
@SuhasKM-tl1rg 7 months ago
My God Rob, you are a blessing man
@oilgas1016 2 years ago
State of the art for cross validation.
@robmulla 2 years ago
It definitely is important to do cross validation correctly!
@vivekpadman5248 a year ago
Thanks for this detailed walkthrough of cross validation, I really learned a lot. Could you, if possible, make a video on ensembling techniques for machine learning model optimization, and other general score-increasing techniques?
@robmulla a year ago
Glad you liked it. Optimization for blending and ensembling is a good video idea. I’ll keep it in mind for future videos.
@ashraf_isb 4 months ago
lol truly a master class, thanks Rob xD
@poojagoyal3647 2 years ago
Hey Rob, great explanation of the k-fold technique! Can you do a further video on how to apply these techniques to a deep learning model for an image classification problem? It would be super helpful.
@robmulla 2 years ago
Thanks for the suggestion I’ll add it to the list of ideas for future videos! Thanks for watching.
@hussainsalih3520 a year ago
Amazing, keep doing awesome videos!
@robmulla a year ago
Thank you! Will do! Cross validation is really important to understand in ML!
@KingJadi 2 years ago
Great video as always!
@robmulla 2 years ago
Thanks again! I appreciate the feedback.
@JordiRosell a year ago
GroupTimeSeriesSplit! It's implemented in the mlxtend and sklego libraries and can be seen in some Kaggle notebooks and Stack Overflow answers.
@robmulla a year ago
Interesting. I’ll have to give it a look.
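For reference, a minimal sketch of scikit-learn's built-in (ungrouped) TimeSeriesSplit, which the grouped variants mentioned above extend:

```python
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

X = np.arange(12).reshape(-1, 1)  # pretend each row is one time step, in order

tss = TimeSeriesSplit(n_splits=3)
for fold, (train_idx, val_idx) in enumerate(tss.split(X)):
    # Each training window strictly precedes its validation window,
    # so there is no look-ahead leakage
    print(f"Fold {fold}: train={train_idx}, val={val_idx}")
```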
@thespace7371 7 months ago
Finally found a video series on data science that I can understand. Thank you!
@koleshjr 2 years ago
Amazing video. You should do a time series one next.
@robmulla 2 years ago
Glad you liked it. I'll put time series on the list for future videos. If you haven't already, check out Konrad's videos on Abhishek's channel: kzfaq.info/sun/PL98nY_tJQXZmT9ZB59T0lsx0ZzzLrYdX4
@FilippoGronchi 2 years ago
Always awesome: the clarity, the depth of explanation, the examples. Just one question: what is .sample(frac=1) for when you prepare the HoldOut set at the beginning? Thanks a lot.
@robmulla 2 years ago
Thanks for the feedback! `.sample(frac=1)` is just a way of randomly shuffling a dataframe.
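A minimal sketch of that shuffle (toy DataFrame; the random_state is arbitrary):

```python
import pandas as pd

df = pd.DataFrame({'a': [1, 2, 3, 4, 5]})
# frac=1 samples 100% of the rows without replacement, i.e. a full random
# shuffle; reset_index(drop=True) discards the old row order
shuffled = df.sample(frac=1, random_state=529).reset_index(drop=True)
print(shuffled)
```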
@hasanovmaqsud 2 years ago
Hello Rob! Thank you very much for such a valuable tutorial! Can you please elaborate more on cross validation used for Time Series forecasting? Thank you very much!
@robmulla 2 years ago
Thanks Maqsud. I’m thinking about making a video that goes into detail about it following up on my forecasting video. Thanks for the encouraging comment.
@chrismiles9019 2 years ago
Thank you very much. The visualizations are really great. At 22:30 you say that there is an even distribution of each class in each of the folds, but it appears to me that all the positive samples are in fold 3, maybe because the class was only positive in the first group of that dataset? Am I missing something? Thanks again, I’m really loving the videos.
@robmulla 2 years ago
You are exactly right. That is because it prioritizes not having overlapping groups. The very next thing I do in the video is shuffle the target class to give a better example of how StratifiedGroupKFold works in practice.
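A minimal sketch of the splitter being discussed, on toy arrays (requires scikit-learn >= 1.0; the values are made up):

```python
import numpy as np
from sklearn.model_selection import StratifiedGroupKFold

X = np.arange(12).reshape(-1, 1)
y = np.array([0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1])       # class labels to stratify on
groups = np.array([1, 1, 1, 2, 2, 3, 3, 3, 4, 4, 5, 5])  # groups that must not be split

sgkf = StratifiedGroupKFold(n_splits=3, shuffle=True, random_state=529)
for train_idx, val_idx in sgkf.split(X, y, groups):
    # No group appears on both sides; class balance is only approximated
    assert set(groups[train_idx]).isdisjoint(groups[val_idx])
```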
@maxscheijen 2 years ago
Great video Rob! Question: when dealing with regression problems, is the best approach simply to use KFold cross-validation? Or are there other CV methods that are also useful for regression problems?
@robmulla 2 years ago
Great question! If your data size is large, then typically KFold with shuffle=True is enough. If you are concerned about it you can always create a binary feature and use StratifiedKFold on that feature - or there are some other more advanced stratification packages out there: pypi.org/project/iterative-stratification/ - I could go over those in a future video if you think it might be helpful.
@maxscheijen 2 years ago
@@robmulla Thanks for the response! I usually use KFold with a shuffle. At work I train a lot of regression models, and I was just curious whether there are general best cross-validation practices for those kinds of problems, so a future video on that would definitely be helpful! Maybe another idea for a future video: how to identify underperformance of a model on slices/subsections of the data. However, no hurry. I really like the videos and live streams, keep it up!
@amanthinks374 10 months ago
@@maxscheijen I would use pd.cut() to bin my y variable and use StratifiedKFold with the binned variable as a category (a temporary categorical y variable), so that each validation fold has the different levels of the y variable represented in it. Obviously, discard this binned (cut) version of the y variable before you run fit().
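A minimal sketch of that binning trick (hypothetical data; pd.qcut, the equal-frequency cousin of pd.cut, is used here so every bin is evenly populated):

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import StratifiedKFold

rng = np.random.default_rng(0)
X = pd.DataFrame({'feature': rng.normal(size=100)})
y = pd.Series(rng.normal(size=100))  # continuous target

# Bin the continuous target purely to drive stratification
y_binned = pd.qcut(y, q=5, labels=False)

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=529)
for train_idx, val_idx in skf.split(X, y_binned):
    # Fit on the original continuous y; the binned version is discarded here
    y_train, y_val = y.iloc[train_idx], y.iloc[val_idx]
```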
@JaswinderSingh-wn6lc a year ago
Hey Rob! Awesome video as always! Can you please explain why you picked the probabilities for the positive class only? I have been driving myself crazy over this ..xD
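Not an answer from the thread, but the usual reason, in sketch form: for a binary classifier, predict_proba returns two columns that sum to one, so the positive-class column carries all the information a ranking metric like ROC AUC needs:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)  # shape (n, 2); each row sums to 1
pos = proba[:, 1]                # column 0 is just 1 - column 1
print(roc_auc_score(y_te, pos))
```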
@minister1005 10 months ago
Hi, thanks for such a thorough video explaining cross validation! I really enjoyed it ^^ I have a question though. While defining the get_prep_data function, isn't '.sample(frac=1, random_state=529)' redundant since you already did 'holdout_ids = data.sample(500, random_state=529).index'?
@devnull711 a year ago
Very good content.
@robmulla a year ago
Thanks for watching. Glad you like it.
@risabb a year ago
Thanks a lot Rob for making this video. It's really insightful. I have a quick question: at 11:39 you created a baseline (which is really helpful!) for binary classification. Would you recommend techniques for imbalanced multiclass classification as well? Thank you in advance :)
@robmulla a year ago
Great question. There are a lot of resources out there with regard to imbalanced datasets. It's a really common problem, but there isn't a one-size-fits-all solution. Generally, just having more training data or simpler models is best; otherwise you will find yourself overfitting to the few positive samples you have.
@user-kl5nx9qy8o 2 months ago
What about using these cross-validation objects within GridSearchCV? How would you treat the creation of new features for each fold? Would you have to sacrifice GridSearchCV to be able to apply feature engineering with aggregate data on each fold? Your approach allows feature engineering on each fold, but it does not give the benefit of hyperparameter tuning as with GridSearchCV. Would the best advice be to use your cross-validation approach, applying feature engineering in each fold loop, with the drawback of relying on manual hyperparameter tuning instead of methods like GridSearchCV or RandomizedSearchCV?
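One standard resolution (a sketch, not from the video): put the per-fold feature engineering inside a Pipeline, and GridSearchCV will re-fit that step on each fold's training split, giving leakage-free features and automated tuning at the same time. StandardScaler stands in here for any stateful feature step, such as a custom aggregate-feature transformer:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=500, random_state=0)

pipe = Pipeline([
    ('features', StandardScaler()),  # re-fit per fold, on the train split only
    ('model', RandomForestClassifier(random_state=0)),
])

grid = GridSearchCV(
    pipe,
    param_grid={'model__max_depth': [3, 5, None]},
    cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=529),
    scoring='roc_auc',
)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)
```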
@ifeanyinwobodo8530 a year ago
Thanks for this video! Really insightful! Is it possible to use an ensemble of cross-validated models for a single problem?
@robmulla a year ago
Thanks for commenting. I’ve never heard of doing ensembles of cross validation but I don’t see why you couldn’t.
@ifeanyinwobodo8530 a year ago
@@robmulla ok thanks. I'd experiment with it
@MrMeap12345 a year ago
Thank you 🙌🏻 this is great. What steps might be next in regard to producing a model you would deploy in a production environment?
@robmulla a year ago
Thanks for watching, feature engineering is #1. Then parameter tuning and pre/post processing. Good luck!
@maxidiazbattan 2 years ago
Amazing tutorial Rob. I usually split the data into folds at the beginning of the whole process; do you think that approach is fine? I don't see that very often on Kaggle.
@robmulla 2 years ago
It's funny you say that. I typically do the splits at the beginning and when working in teams we use the same splits so we can share results for the exact same folds. I think this is actually really common on kaggle. With really small data you eventually want to change your random seed and try new folds at some point to make sure you aren't overfitting to a specific seed of folds. Hope that helps!
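A minimal sketch of that shared-splits pattern (hypothetical data and file name): assign a fold column once up front, save it, and everyone trains against identical splits:

```python
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold

X, y = make_classification(n_samples=100, random_state=0)
df = pd.DataFrame(X)
df['target'] = y

# Assign every row to a fold exactly once; share this file with the team
df['fold'] = -1
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=529)
for fold, (_, val_idx) in enumerate(skf.split(df, df['target'])):
    df.loc[val_idx, 'fold'] = fold
df.to_csv('train_folds.csv', index=False)

# Later, anyone reproduces the exact same split:
train = df[df['fold'] != 0]
valid = df[df['fold'] == 0]
```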
@maxidiazbattan 2 years ago
@@robmulla Thank you very much for the clarification Rob, yes, it helped me a lot.
@ramoda13 a year ago
Thanks, great video.
@ismailonurylmaz192 8 months ago
Hi Rob, thanks for this great video. It is highly didactic! One thing I can't clarify: at the end of any cross-validation process, do we have to report only one metric, i.e. the average over the 5 different classifiers as in this video?
@user-sv7tr4bt8l 6 months ago
Hey! I just noticed you didn't encode the categorical values or specify them in the model training. Is that possible with this algorithm?
@qpellidomombre a year ago
Why don't you keep track of the error metrics on the training sets for each fold iteration? I was under the impression that it is necessary to do so to notice whether there's overfitting. For example, if you have the following AUC results for K=3:
K=1: validation 0.9 (training 0.95)
K=2: validation 0.92 (training 0.99)
K=3: validation 0.88 (training 0.92)
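For what it's worth, a sketch of tracking both scores per fold to spot that gap (toy data):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import KFold

X, y = make_classification(n_samples=1000, random_state=0)
kf = KFold(n_splits=3, shuffle=True, random_state=529)

for k, (tr, va) in enumerate(kf.split(X), start=1):
    clf = RandomForestClassifier(random_state=0).fit(X[tr], y[tr])
    train_auc = roc_auc_score(y[tr], clf.predict_proba(X[tr])[:, 1])
    val_auc = roc_auc_score(y[va], clf.predict_proba(X[va])[:, 1])
    # A training score far above the validation score signals overfitting
    print(f"K={k}: validation {val_auc:.2f} (training {train_auc:.2f})")
```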
@mohammadamiri-lc3ic a month ago
Thanks, it was great!
@robmulla 29 days ago
You're welcome!
@TheKekko16 a year ago
Hi, thank you for the video. The concept of cross validation is explained really well. I'm writing my bachelor's thesis on support vector machines for classification. To implement my models I used the Python docplex library, but now I can't perform cross validation because I don't know how to apply the scikit-learn methods (for example the fit method) to customized models. Do you know how I should do this?
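Not from the thread, but the usual route (a sketch with a placeholder solver): wrap the custom model in a class that follows the scikit-learn estimator contract (fit/predict), and any sklearn CV utility can then drive it:

```python
import numpy as np
from sklearn.base import BaseEstimator, ClassifierMixin
from sklearn.model_selection import cross_val_score

class MyCustomSVM(BaseEstimator, ClassifierMixin):
    """Adapter for a custom solver (e.g. one built on docplex)."""

    def fit(self, X, y):
        # ... call your docplex optimization here to set self.w_ and self.b_ ...
        self.w_ = np.zeros(X.shape[1])  # placeholder solution
        self.b_ = 0.0
        return self

    def predict(self, X):
        return (X @ self.w_ + self.b_ >= 0).astype(int)

X = np.random.rand(60, 4)
y = np.random.randint(0, 2, 60)
print(cross_val_score(MyCustomSVM(), X, y, cv=5))  # uses accuracy by default
```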
@TylerMacClane a year ago
Hello Rob! What about the model you train at the 9-minute mark; shouldn't it be retrained? Because you predict on the same data that you train on, you had a leak, and therefore the predictions are very accurate. I hadn't watched more than 10 minutes at that point; now I continue 👾 ... Understood at the 11-minute mark, everything fell into place))
@robmulla a year ago
Awesome! Glad the video was able to answer your question before I could.
@kalianeeboodoo4750 a year ago
Hi Rob, when there is highly imbalanced data (e.g. 920 data points for class 0 and 80 data points for class 1), do we not have to balance the dataset using techniques such as SMOTE to generate synthetic data and equalize both classes, or does CV do the job here?
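Not an answer from Rob, but a common caution worth a sketch (assumes the imbalanced-learn package): if you do use SMOTE, resample only the training portion of each fold, e.g. via imblearn's Pipeline, so synthetic points never leak into validation:

```python
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

X, y = make_classification(n_samples=1000, weights=[0.92, 0.08], random_state=0)

# imblearn's Pipeline applies SMOTE ONLY to each fold's training split;
# validation folds keep their true, imbalanced distribution
pipe = Pipeline([
    ('smote', SMOTE(random_state=0)),
    ('model', LogisticRegression(max_iter=1000)),
])

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=529)
scores = cross_val_score(pipe, X, y, cv=cv, scoring='roc_auc')
print(scores.mean())
```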
@islamibrahim4382 a year ago
Hi Rob, really great content. I am wondering if you could make something for retail e-commerce, as I want to predict sessions, orders, revenue, and conversion rate based on previous data history, which is going to help with spending wisely and getting the most out of it.
@parvinkolahkaj3656 29 days ago
👍
@watcher8582 6 months ago
Good video, but a bit sus that you say "I'm a dada scientist" 3 times (literally) in the first 70 seconds.
@robmulla 6 months ago
😒
@watcher8582 6 months ago
@@robmulla Ungrateful viewers eh
@muchammadfahd-a1985 a year ago
What happens if you use 'from sklearn.model_selection import *'?