
Shuffle your dataset when using cross_val_score

  7,088 views

Data School

1 day ago

If you use cross-validation and your samples are NOT in an arbitrary order, shuffling may be required to get meaningful results.
Use KFold or StratifiedKFold in order to shuffle!
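As a minimal sketch of the tip above (the dataset and model here are illustrative): pass a KFold or StratifiedKFold object with shuffle=True as the cv parameter, instead of the default integer, which does NOT shuffle.

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

X, y = load_iris(return_X_y=True)  # iris samples are ordered by class
model = LogisticRegression(max_iter=1000)

# cv=5 would use (Stratified)KFold WITHOUT shuffling;
# passing a splitter object lets you turn shuffling on
kf = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)
scores = cross_val_score(model, X, y, cv=kf)
print(scores.mean())
```

Setting random_state makes the shuffled splits reproducible across runs.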
👉 New tips every TUESDAY and THURSDAY! 👈
🎥 Watch all tips: • scikit-learn tips
🗒️ Code for all tips: github.com/jus...
💌 Get tips via email: scikit-learn.tips
=== WANT TO GET BETTER AT MACHINE LEARNING? ===
1) LEARN THE FUNDAMENTALS in my intro course (free!): courses.datasc...
2) BUILD YOUR ML CONFIDENCE in my intermediate course: courses.datasc...
3) LET'S CONNECT!
- Newsletter: www.dataschool...
- Twitter: / justmarkham
- Facebook: / datascienceschool
- LinkedIn: / justmarkham

Comments: 22
@dataschool
@dataschool 3 years ago
What do you find hard or confusing about scikit-learn? Let me know in the comments, and maybe I can help! 🙌
@jamieleenathan
@jamieleenathan 3 years ago
Extracting the feature names after a pipeline that contains column transformations. I can see the feature importances but no way of understanding what those features are! :)
@dataschool
@dataschool 3 years ago
Thanks for sharing! Try slicing the Pipeline to select the ColumnTransformer step, and then use the get_feature_names method. That method has some limitations, but it may work in your case! Here are relevant links: nbviewer.jupyter.org/github/justmarkham/scikit-learn-tips/blob/master/notebooks/30_examine_pipeline_steps.ipynb nbviewer.jupyter.org/github/justmarkham/scikit-learn-tips/blob/master/notebooks/38_get_feature_names.ipynb Does that help?
@jamieleenathan
@jamieleenathan 3 years ago
@@dataschool Thanks for responding, Kevin. I'll take a look at these links now, but the problem I ran into is that several transformers don't provide the get_feature_names attribute, which stops you from slicing the entire pipe in the way you describe, as it just throws an error. KBinsDiscretizer was the one I ran into. It seems strangely difficult to get feature names out of pipes, when it's incredibly important for inspecting the model with Shapley values, feature importances and so on. Will report back if I have any luck :)
@dataschool
@dataschool 3 years ago
I totally hear you! The scikit-learn core developers are very aware of this, and are actively working on a solution to this problem. I've been following the discussions and turns out it's quite a big project with tons of implications. That's all to say that a solution will come at some point and they are doing their best to get there quickly!
@agarwalyashhh
@agarwalyashhh 5 months ago
I had searched for all of these questions before: does GridSearch apply shuffling, why are there different classes like KFold, StratifiedKFold, and cross_val_score, and when to use which. Five minutes covered everything. Great!
@dataschool
@dataschool 5 months ago
Glad to hear it was helpful!!
@amarsharma336
@amarsharma336 2 years ago
Explained very well and concisely 😄
@dataschool
@dataschool 2 years ago
Thank you!
@sohana8265
@sohana8265 3 years ago
Hi, I am Sohana. I just finished reading the blog post on your Data School website. I'm basically obsessed with data science, but right now I'm occupied with my academic work in dentistry, and your post was inspiring to me. I would be very glad if you could share what my first step should be to start a journey into data science. Thank you in advance.
@dataschool
@dataschool 3 years ago
Hi Sohana, thanks for your comment! It's hard to give personalized advice, but this post might provide a helpful path for you: www.dataschool.io/launch-your-data-science-career-with-python/ Hope that helps!
@beltraomartins
@beltraomartins 3 years ago
Pretty nice! Thank you!
@dataschool
@dataschool 3 years ago
You're very welcome!
@scottterry2606
@scottterry2606 3 years ago
Randomizing input is a best practice (whether the algorithmic approach requires it or not). Also, after randomizing the first time, if you have to re-randomize to make model development work, that's usually a big red flag. Re-randomizing should be looked at differently. Instead of using it to make a model work, use it to help ensure your model's working if your data is limited.
@dataschool
@dataschool 3 years ago
Thanks for your comment! To be clear, the goal of the shuffling was to ensure that the model evaluation procedure outputs reliable results, not to make the model work. The model itself will work regardless of whether the samples are shuffled. Could you elaborate on this part of your comment: "use it to help ensure your model's working if your data is limited"? I'd love to understand better. Thank you!
@crush3dices
@crush3dices 1 year ago
4:40 I do not understand why stratification does not make sense. Say I have multiple numerical features and I do a linear regression. Furthermore, say that I have very few high values for the numerical label I want to predict. Wouldn't it then make sense to use a histogram with stratification to make sure that all training and test sets have a few very high values in the labels, instead of having one set contain most of these high values simply by chance?
@knowledgeji6449
@knowledgeji6449 2 years ago
Hi, I was applying ordinary linear regression and its r2_score came out as 0.758965, but when I take the mean of cross_val_score, it gives a negative value. I am not able to understand how that is possible? cross_val_score also reports the r2 score, just computed on different samples.
@atulsingh-uy2he
@atulsingh-uy2he 3 years ago
Helpful..!!
@dataschool
@dataschool 3 years ago
Glad to hear! 🙌
@IamMoreno
@IamMoreno 3 years ago
I cannot think of a scenario where your data is not arbitrarily (randomly) ordered, since CV is done on the training set, which was obtained by randomly splitting the whole dataset into train and test. Can you elaborate more on this case?
@dataschool
@dataschool 3 years ago
Excellent question, thank you so much for asking! I disagree with your assertion that "CV is done on the training set." That is certainly one way of doing model evaluation, but there are many other valid ways, including cross-validation on the entire dataset. It's true that holding out an independent test set gives you a more reliable estimate of out-of-sample performance, but the test set is unnecessary if your only goal of cross-validation is model selection (including hyperparameters). In addition, holding out an independent test set is not recommended if you have a small dataset, because the reduced size of the training set makes it harder for cross-validation (when done as part of a grid search) to locate the optimal model. Thus, there are valid reasons not to hold out an independent test set. All of that is to say that the premise of this video is that you are performing cross-validation on the entire dataset, and as such, your data may not be arbitrarily ordered. Hope that answers your question!
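A minimal sketch of the scenario described in this reply (the dataset and parameter grid are illustrative): cross-validation run on the ENTIRE dataset, with no held-out test set, used purely for model selection. Shuffling matters here precisely because the full dataset may not be arbitrarily ordered.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)  # full dataset, ordered by class

# shuffle before splitting, since the samples are not in arbitrary order
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)

# cross-validated grid search over the whole dataset for model selection
grid = GridSearchCV(KNeighborsClassifier(),
                    {'n_neighbors': [3, 5, 7]}, cv=cv)
grid.fit(X, y)
print(grid.best_params_)
```

The selected hyperparameters are trustworthy for model selection, even though grid.best_score_ is not an independent estimate of out-of-sample performance.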
@muriloaraujosouza462
@muriloaraujosouza462 2 years ago
Very nice post! Could I ask you a question? I am working on a binary time series classification problem (given a time series, I want to detect whether a consumer has committed fraud in residential electricity consumption). In my input data, the rows represent a consumer ID (I have around 42,000 consumers), and the columns are daily electricity consumption measurements for each of those consumers between 01/01/2014 and 12/12/2016. My output is a binary array indicating whether a consumer has committed fraud or not. The database given to me is arranged so that all fraudulent consumers are in the first N rows, and the rest of the data are non-fraudulent users. I know I shouldn't shuffle my columns (since each column represents a single day's measurement in the time series, and shuffling those could mess things up); I need to shuffle only my rows, is that correct? But since I am working with time series, I am using TimeSeriesSplit as the cv parameter inside cross_val_score, and I don't think TimeSeriesSplit has a shuffle parameter (like KFold or StratifiedKFold here in your video). Any tips on how I could shuffle my rows at each cross-validation fold? Sorry for the long post, and thanks again for the awesome video!