Credit Card Fraud Detection - Dealing with Imbalanced Datasets in Machine Learning

  27,130 views

Greg Hogg

2 years ago

Error: The neural net predictions function is using shallow_nn every time instead of the model passed in, sorry about that! This changes the results a bit, but the main point is choosing and creating a model, which this doesn't impact.
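For anyone following along, a minimal sketch of what the fix might look like (the function and variable names are assumptions based on the video, not the exact notebook code):

```python
# Hypothetical corrected helper: use the model that is passed in,
# not a hard-coded shallow_nn.
def neural_net_predictions(model, x, threshold=0.5):
    # Keras model.predict returns probabilities for the positive class;
    # convert them to 0/1 labels at the chosen threshold.
    return (model.predict(x).flatten() > threshold).astype(int)

# e.g. neural_net_predictions(shallow_nn_b, x_val_b)
```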
The Code: colab.research.google.com/dri...
Kaggle dataset (ensure you make an account!): www.kaggle.com/mlg-ulb/credit...
Learn Python, SQL, & Data Science for free at mlnow.ai/ :)
Subscribe if you enjoyed the video!
Best Courses for Analytics:
---------------------------------------------------------------------------------------------------------
+ IBM Data Science (Python): bit.ly/3Rn00ZA
+ Google Analytics (R): bit.ly/3cPikLQ
+ SQL Basics: bit.ly/3Bd9nFu
Best Courses for Programming:
---------------------------------------------------------------------------------------------------------
+ Data Science in R: bit.ly/3RhvfFp
+ Python for Everybody: bit.ly/3ARQ1Ei
+ Data Structures & Algorithms: bit.ly/3CYR6wR
Best Courses for Machine Learning:
---------------------------------------------------------------------------------------------------------
+ Math Prerequisites: bit.ly/3ASUtTi
+ Machine Learning: bit.ly/3d1QATT
+ Deep Learning: bit.ly/3KPfint
+ ML Ops: bit.ly/3AWRrxE
Best Courses for Statistics:
---------------------------------------------------------------------------------------------------------
+ Introduction to Statistics: bit.ly/3QkEgvM
+ Statistics with Python: bit.ly/3BfwejF
+ Statistics with R: bit.ly/3QkicBJ
Best Courses for Big Data:
---------------------------------------------------------------------------------------------------------
+ Google Cloud Data Engineering: bit.ly/3RjHJw6
+ AWS Data Science: bit.ly/3TKnoBS
+ Big Data Specialization: bit.ly/3ANqSut
More Courses:
---------------------------------------------------------------------------------------------------------
+ Tableau: bit.ly/3q966AN
+ Excel: bit.ly/3RBxind
+ Computer Vision: bit.ly/3esxVS5
+ Natural Language Processing: bit.ly/3edXAgW
+ IBM Dev Ops: bit.ly/3RlVKt2
+ IBM Full Stack Cloud: bit.ly/3x0pOm6
+ Object Oriented Programming (Java): bit.ly/3Bfjn0K
+ TensorFlow Advanced Techniques: bit.ly/3BePQV2
+ TensorFlow Data and Deployment: bit.ly/3BbC5Xb
+ Generative Adversarial Networks / GANs (PyTorch): bit.ly/3RHQiRj

Comments: 49
@GregHogg 10 months ago
Take my courses at mlnow.ai/!
@prathameshmore1402 2 years ago
Thank you for your amazing efforts! I don't have much experience in building different models, so this video helped me a lot! Btw, I tried increasing max_depth to 6 in the random forest model, and it improved the model's performance more than I expected. Thanks again!
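For reference, a rough sketch of that tweak with scikit-learn (the x_train/y_train/x_val/y_val names follow the video and are assumptions; other parameter values are illustrative):

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report

# Deeper trees can capture more structure; watch for overfitting.
rf = RandomForestClassifier(max_depth=6, random_state=42)
rf.fit(x_train, y_train)
print(classification_report(y_val, rf.predict(x_val)))
```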
@GregHogg 2 years ago
Interesting! Yeah it's surprisingly easy to mess around with models. That's great about the max_depth! And you're very welcome :)
@petarganev4256 2 years ago
Great video on classification. Good luck with the channel!
@GregHogg 2 years ago
Thanks so much Petar! I appreciate that 😊
@machinelearning3602 2 years ago
Hope to see more of this kind in the coming days!!
@GregHogg 2 years ago
With an account name of "Machine Learning" I would expect nothing less! 😂 And absolutely ☺️
@vishnusunil9610 7 months ago
Stunning, bro. A clear-cut explanation that doesn't waste a single minute; it's a gold mine of information and the best step-by-step project video.
@GregHogg 7 months ago
Thank you for the very kind words! Glad it was helpful 😀
@sivanujansivakumar5907 2 years ago
Thanks man. I'm going to try this one. It's really helpful. 🙏😍
@GregHogg 2 years ago
Enjoy! You're very welcome 😊
@mellowftw 2 years ago
I'll be trying this soon, thanks Greg
@GregHogg 2 years ago
No problem Krish! 😊😊
@aguspe532 1 year ago
Great video and explanation! Thanks!
@GregHogg 1 year ago
You're very welcome!
@saitejatangudu6320 2 years ago
Great video ❤❤ Looking forward to more videos like this.
@GregHogg 2 years ago
Thank you!! Absolutely 😊
@somechad3682 11 months ago
One thing worth mentioning would be the data wrangling part. It's often a good idea to check feature relevance and feature importance. Funnily enough, the transaction amount and time were not among the features that had a substantial impact on the general outcome of the model when deciding if a transaction was fraudulent or not. Dropping them not only reduces bias in our data frame, it can also substantially increase the computation speed of the model (mine had a 36% boost in speed while losing only 0.01 points in F1 score and 0.02 in precision). Another thing would be to write a function that fits the training and validation data to each of the models automatically; it substantially helps with the cleanliness and readability of the project. I would also consider hyperparameter tuning and pipelining everything together to make it a robust project. Still, great video and a great demonstration of how to check each model and measure its suitability for the problem at hand.
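A rough sketch of both suggestions, checking feature importances and using one reusable fit-and-report helper (names like x_train, y_val, and feature_names are assumptions, not the notebook's exact variables):

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report

def fit_and_report(model, x_train, y_train, x_val, y_val):
    """Fit any sklearn-style model and print its validation metrics."""
    model.fit(x_train, y_train)
    print(type(model).__name__)
    print(classification_report(y_val, model.predict(x_val)))
    return model

rf = fit_and_report(RandomForestClassifier(max_depth=6), x_train, y_train, x_val, y_val)

# See which columns ('Time', 'Amount', V1..V28) actually matter.
importances = pd.Series(rf.feature_importances_, index=feature_names)
print(importances.sort_values(ascending=False))
```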
@kimchi6284 3 months ago
Please, I have a project on this topic. Could you pleeeease help me? I don't know what to do.
@ArtistrystoriesUnleashed45 9 months ago
Can I use the train_test_split function from sklearn to split the data into train and test sets?
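That's a common approach; a minimal sketch (df is assumed to be the loaded Kaggle data, whose label column is 'Class', and stratify keeps the tiny fraud proportion the same in both splits):

```python
from sklearn.model_selection import train_test_split

x = df.drop(columns=['Class'])
y = df['Class']
x_train, x_test, y_train, y_test = train_test_split(
    x, y, test_size=0.2, stratify=y, random_state=42)
```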
@unlucky-777 6 months ago
Hey Greg, thank you for the video, but I have a question. At first we had a dataset with 280,000 rows and 30 columns, but towards the end of the video we reduced it to only 984 rows. Doesn't this make the model worse because it's trained on less data? Or was the real problem that we were getting bad results at first because we had so much not_fraud data compared to fraud?
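For context, the 984 rows come from undersampling: this dataset contains 492 fraud transactions, and pairing them with an equal random sample of non-fraud rows gives 984. A minimal sketch of that step (df and the 'Class' column follow the Kaggle data; the variable names are assumptions):

```python
import pandas as pd

fraud = df[df['Class'] == 1]                                    # 492 rows
not_fraud = df[df['Class'] == 0].sample(len(fraud), random_state=42)
balanced = pd.concat([fraud, not_fraud]).sample(frac=1, random_state=42)  # shuffle
print(balanced.shape)                                           # about (984, 31)
```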
@vinsanargeese4384 9 months ago
I just want to know whether it only reports accuracy details or actually detects whether a card transaction is fraudulent or not.
@srijanshovit844 2 years ago
That's amaaazzzing!!
@sakshirathi7950 2 years ago
Thanks Greg!! Is it okay to do projects by following tutorial videos? When should we do them on our own?
@GregHogg 2 years ago
Absolutely! Go ahead. You can do it on your own when you feel like you've got the general hang of things, if that makes sense.
@mahelvson 1 year ago
Great video. I was just wondering if taking a slice from the original dataset to use as a test set is a more consistent way to evaluate the resampling procedure, because in production the model still has to deal with imbalanced data.
@Hash9211 4 months ago
Yes, I agree. I've tried a slice of the original data for the test set and the results look completely different.
@KeKuHauPiOx 10 months ago
I'm getting errors on the test, train and val runs for the numpy part.
@devjain7076 2 years ago
12:51 Shouldn't the shape of y_train be (240000, 1), since it consists of exactly one column?
@GregHogg 2 years ago
(240000,) and (240000,1) are very close to the same thing. I'm not sure if they both work or not
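For anyone curious, the two shapes carry the same values and are easy to convert between; a quick numpy sketch:

```python
import numpy as np

y_vec = np.zeros(240000)       # shape (240000,)  - 1-D vector
y_col = y_vec.reshape(-1, 1)   # shape (240000, 1) - single-column 2-D array
y_back = y_col.ravel()         # back to (240000,)

# scikit-learn estimators generally expect the 1-D form for y (they warn
# and ravel a column vector); Keras accepts either for a single output.
print(y_vec.shape, y_col.shape, y_back.shape)
```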
@motilalmeher7666 4 months ago
After training the model on the balanced population, please check the model's performance on the original population, the imbalanced one.
@arsheyajain7055 2 years ago
Awesome 👏🥳
@GregHogg 2 years ago
Thank you! 😊
@mubshali7489 2 years ago
Sweet. This is going on my GitHub!!
@GregHogg 2 years ago
I sure hope so!
@joxa6119 3 months ago
What is your opinion on doing oversampling (SMOTE) on the minority class?
@GregHogg 2 months ago
Definitely a solid option.
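If you want to try it, a minimal sketch with the imbalanced-learn package; note it should be applied to the training split only, so the validation/test data keeps real (non-synthetic) rows:

```python
import numpy as np
from imblearn.over_sampling import SMOTE

smote = SMOTE(random_state=42)
x_train_res, y_train_res = smote.fit_resample(x_train, y_train)
print(np.bincount(y_train_res))   # both classes now have the same count
```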
@MatTheBene 1 year ago
Are you not leaking targets if you normalize before splitting the data?
@GregHogg 1 year ago
If I am, it isn't really a big deal
@MatTheBene 1 year ago
@@GregHogg it isn't a big deal in most cases probably, but with time series data you are leaking future information that the model will not have during inference, such as changes in trend 📈 in future data points
@GregHogg 1 year ago
@@MatTheBene For time series it would be more concerning yes
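A small sketch of the leak-free ordering being discussed: split first, then fit the scaler on the training portion only and reuse it everywhere else (variable names are assumptions):

```python
from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
x_train_scaled = scaler.fit_transform(x_train)   # statistics learned from train only
x_val_scaled = scaler.transform(x_val)           # reuse the train statistics
x_test_scaled = scaler.transform(x_test)
```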
@emrecoban3895 9 months ago
Aren't we supposed to test on the original data instead of the balanced one?
@Mwme2000 8 months ago
Well, I have the same question, but every notebook I saw for this dataset with a high F1 score did it like him, and after a lot of research I found that if you have highly imbalanced data like this it is okay to test on the undersampled data. If you know anything else, please share it.
@allaboardthegravytrain5987 3 months ago
thanks
@ottomaggio2725 1 year ago
Nice video; however, it is not completely clear to me how the undersampling relates to the overall problem. In the end, you have to provide the client (the bank) with a model capable of detecting fraud. Let's suppose we give them the model trained on the rebalanced dataset. Since fraud is imbalanced by nature, they will end up using a model trained on a balanced dataset on data that is actually imbalanced. Isn't this causing issues? Isn't the prediction biased toward fraud? Aren't we predicting way too many frauds?
@ottomaggio2725 1 year ago
To be more specific, I think you can try balancing the training set, but you cannot balance the test set because, in the end, in the real scenario, the new data to be predicted will always be unbalanced.
@luqmanhrizal 1 year ago
It's not practical to evaluate the model on a balanced evaluation/test set since that ignores the real fraud representation. Data representation is sacred.
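One way to act on this thread: train on the balanced (undersampled) data, but keep an untouched slice of the original data, with its real fraud rate of roughly 0.17%, as the final test set. A rough sketch, with rf_b and the held-out split assumed:

```python
from sklearn.metrics import classification_report

# rf_b was fit on the balanced training data; x_test/y_test are held out
# from the original, imbalanced dataset.
print(classification_report(y_test, rf_b.predict(x_test), digits=4))

# Focus on precision and recall for the fraud class; plain accuracy will
# look deceptively high when ~99.8% of rows are non-fraud.
```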
@j_ckitchai 6 months ago
Hi, thank you a lot for making this video, I learned a lot from it. I have a question about @52:05: the line print(rf.predict(x_val_b)), shouldn't that be rf_b.predict(x_val_b) instead? And for the GBC later on too, it should use gbc_b.predict, right?
@jeremyklauber7535 26 days ago
I thought that as well. Not entirely sure why he hadn't changed those, given that in the neural_net_predictions call he did use shallow_nn_b.