How to use Feature Engineering for Machine Learning, Equations

16,249 views

Jeff Heaton

A day ago

Feature engineering is the process of modifying/preprocessing the input to a model, such as a neural network, to make it easier for that model to produce an accurate result. In this video, I discuss the technique that I use to build my own features.
Link to my paper that I referenced:
arxiv.org/pdf/1701.07852.pdf
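A small sketch of the kinds of engineered features discussed in the video (ratios and power combinations, in the spirit of the paper's BMI example). The data and column names here are hypothetical, purely for illustration:

```python
import pandas as pd

# Hypothetical input data; these columns are illustrative, not from the video.
df = pd.DataFrame({
    "income": [40000.0, 85000.0, 62000.0],
    "debt": [10000.0, 30000.0, 5000.0],
    "height_m": [1.70, 1.85, 1.60],
    "weight_kg": [70.0, 95.0, 55.0],
})

# Ratio feature: debt relative to income, easier for a model to use
# than the two raw columns separately.
df["debt_to_income"] = df["debt"] / df["income"]

# Ratio-with-power feature, in the spirit of BMI = weight / height^2.
df["bmi"] = df["weight_kg"] / df["height_m"] ** 2

print(df[["debt_to_income", "bmi"]])
```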
** Follow Me on Social Media!
GitHub: github.com/jeffheaton
Twitter: / jeffheaton
Instagram: / jeffheatondotcom
Discord: / discord
Patreon: / jeffheaton

Comments: 72
@leonardsmith9870 · 3 years ago
Hi Jeff. I've recently subscribed and I honestly have to say you have the most comprehensive and easy-to-understand guides out there. Not to mention the fact that whenever there is an update to something, you make a new video explaining how to work with it. I tried getting into machine learning just over a year ago, and nobody at the time was able to actually explain anything apart from "download this, download that, if it doesn't work oh well" and would just go through the official tutorials without actually explaining how to do anything on your own. Your channel alone has given me the motivation to get started again, and thank you so much for doing what you're doing!
@HeatonResearch · 3 years ago
Hello Leonard, thank you for the kind words. Glad the content is helpful, and yes, it is a lot of work keeping everything up to date.
@HarrysKavan · 2 years ago
Just wanted to leave a thank-you, Mr. Heaton. I'm currently working on my bachelor's thesis and your videos are a great help. Much appreciated.
@HeatonResearch · 2 years ago
Happy to help! Thank you for the note.
@ShashankData · 3 years ago
I've been following you for months, thank you for the free, well explained content!
@HeatonResearch · 3 years ago
Thanks!!
@amineleking9898 · 3 years ago
Such a practical and helpful video, many thanks professor.
@user-qy4jn1cg5p · 5 months ago
This is incredibly intuitive! Thanks
@germplus · 2 years ago
Fabulous explanation. I'm in the early stages of my course (MSc AI & Data Science) and I find your channel very helpful. Thank you.
@yongkangchia1993 · 3 years ago
Really valuable content that is clearly explained! keep up the great work sir!
@khaledsrrr · 11 months ago
Feature Engineering Explained! 😍 This is likely the best explanation on YT. Thx 🙏
@daymaker_bybit · 9 months ago
This video and presentation are amazing. Thank you SO MUCH!! All the best!
@lakeguy65616 · 1 year ago
excellent video of real practical use!
@korhashamo · 1 year ago
Awesome. Great explanation. Thank you 🙏
@akramsystems · 3 years ago
This looks really fun to do!
@StevenSolomon-jb3zi · 1 year ago
Very insightful. Thank you.
@felixlucien7375 · 1 year ago
Awesome video, thank you!
@sheikhakbar2067 · 3 years ago
I like Jeff's approach of giving us the big picture of what he is talking about!
@HeatonResearch · 3 years ago
Thanks!
@jameswilliamson1726 · 10 months ago
I read over your thesis comparing types of feature engineering vs machine learning models. Great stuff! Thx.
@HeatonResearch · 10 months ago
Thanks!
@jameswilliamson1726 · 10 months ago
@@HeatonResearch Would standardizing or normalizing the input features give you better results? That one ratio had such a wide range.
@HeatonResearch · 10 months ago
@@jameswilliamson1726 I will often standardize/normalize after applying these techniques. The techniques I use here are really to capture the interaction between underlying features. Then standardization/normalization on top solves range concerns.
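That ordering can be sketched as follows. This is a minimal illustration of the idea, not Jeff's exact pipeline, and the column names are made up: first build the interaction feature from the raw columns while the units still mean something, then standardize everything to address range concerns.

```python
import pandas as pd

# Hypothetical raw features with very different ranges.
df = pd.DataFrame({
    "food_spend": [200.0, 450.0, 300.0, 800.0],
    "income": [2000.0, 3000.0, 2500.0, 10000.0],
})

# 1) Capture the interaction first, on the raw values.
df["food_share"] = df["food_spend"] / df["income"]

# 2) Then standardize every column to zero mean, unit variance.
standardized = (df - df.mean()) / df.std(ddof=0)

print(standardized.round(3))
```

Doing the division after standardization would not work: a standardized column contains zeros and negatives, so the ratio would no longer mean "share of income".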
@sandeepmandrawadkar9133 · 6 months ago
Thanks for this great information
@liquidinnovation · 3 years ago
Thanks, great video! Any examples on using the shap package to additively decompose regression r^2 using shapley values?
@SAAARC · 3 years ago
I found this video useful. Thanks!
@MLOps · 3 years ago
Super helpful! much appreciated!
@HeatonResearch · 3 years ago
Glad it helped!
@jonnywright8155 · 3 years ago
Love the energy!!!
@HeatonResearch · 3 years ago
Thanks! I also went a little crazy on video editing too. lol
@nicolaslpf · 1 year ago
Amazing video, Jeff! The only thing you didn't tell us is whether you then drop the source features to avoid collinearity, or just leave them along with the new features you created... Or do you perform PCA, VIF, or Lasso afterwards to choose what to do? I loved the video, concise and super useful!
@heysoymarvin · 10 months ago
this is amazing!
@mohammed333suliman · 1 year ago
Great, thank you.
@HeatonResearch · 1 year ago
You are welcome!
@gauravmalik3911 · 1 year ago
very informative
@sumitchandak6131 · 3 years ago
This is really great and something out of the box. Can you please provide similar techniques for NLP as well?
@ali_adeeb · 3 years ago
thank you so much!!
@jamalnuman · 3 months ago
Very useful
@hannes7218 · 1 year ago
great job!
@HeatonResearch · 1 year ago
Thanks!
@jifanz8282 · 3 years ago
Informative video as always. +1 like for my professor 👏
@HeatonResearch · 3 years ago
Thanks Jifan!
@juggergabro · 2 years ago
At last, not another Data Science hijacker trying to prove themselves on YT... Thank you.
@jhonnyespinozabryson8241 · 3 years ago
Many thanks for sharing
@HeatonResearch · 3 years ago
My pleasure
@SuperHddf · 1 year ago
thank you! :)
@Jeffben24 · 3 years ago
Thank you :)
@Shkvarka · 3 years ago
Awesome explanation! Thank you very much! Best regards from Ukraine!:)
@bingzexu7259 · 3 years ago
When we do feature engineering, are we expecting that the new feature has a high correlation with the predicted values?
@HeatonResearch · 3 years ago
Yes for sure, so you must keep that in mind when evaluating feature importance. Generally, I leave the existing features in and let the model account for that (though some model types perform better with correlating fields removed).
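The expectation in this exchange can be sanity-checked with a quick sketch on synthetic data. Everything here is hypothetical (the target is constructed to depend on a ratio of two raw features), just to show that a well-chosen engineered feature correlates with the target far more strongly than the raw features it was built from:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: the target truly depends on the ratio a/b, plus noise.
a = rng.uniform(1, 10, 500)
b = rng.uniform(1, 10, 500)
y = a / b + rng.normal(0, 0.05, 500)

ratio = a / b  # the engineered feature

def corr(x, y):
    """Pearson correlation between two 1-D arrays."""
    return float(np.corrcoef(x, y)[0, 1])

# The engineered ratio correlates with the target far more strongly
# than either raw feature on its own.
print(corr(a, y), corr(b, y), corr(ratio, y))
```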
@DeebzFromThe90s · 1 year ago
Hi Jeff, what concepts should I look into to understand "Weighting" better? For instance at 9:41, you mention that if one values food more they might square it. Someone might cube it, someone might multiply it or add a coefficient of 2 or 5. These are all subjective. For weighting when it comes to features in the stock market or econometrics (my specific application), one might have a feature that is GDP or inflation. I know for a fact that change in GDP (slope) and change in the change in GDP (slope of slope i.e., acceleration) are pretty important. My first problem, is that I found these two (change in GDP and GDP acceleration) simply through guess and check, and research papers. Is there a better method to this? Or should I focus on automating 'guess and check'? Secondly, sometimes the GDP features or inflation related features vary in importance to participants in the stock market. Perhaps right now (as of Oct 2022) investors might place more emphasis on inflation related features and so I might multiply inflation features by coefficient of 2 or square it. How would one deal with dynamic weighting? Or a simpler problem might be, how do you objectively select for weighting? EDIT: I have come up with an idea, to add a coefficient to GDP or inflation based on social media mentions (sentiment), for instance. Thoughts on this and weighting in general? Thanks so much! Love the video by the way!
@programming_hut · 3 years ago
💛✌️ Thanks
@HeatonResearch · 3 years ago
You're welcome 😊
@ramiismael7502 · 3 years ago
Can you try all the different possible methods to do this?
@Oliver-cn5xx · 3 years ago
Hi Jeff, would you have a link to your paper and the kaggle notebook that you showed?
@HeatonResearch · 3 years ago
Oh yeah, I should have linked that. I added it to the description, here it is too: arxiv.org/pdf/1701.07852.pdf
@Oliver-cn5xx · 3 years ago
@@HeatonResearch Thanks a lot!
@youngjoopark4221 · 1 year ago
I am a novice. If the model would figure out that relationship on its own, is creating a new feature by dividing or multiplying still worth doing?
@lehaipython9242 · 1 year ago
How should I perform feature engineering on anonymous variables? I can't apply my domain knowledge to them.
@avithaker · 3 years ago
Would love to see a link to your paper?
@HeatonResearch · 3 years ago
Sure! Should have linked in the description. arxiv.org/abs/1701.07852
@avithaker · 3 years ago
Thank you!
@johncaling6150 · 3 years ago
I don't remember if I asked this already; if I did, sorry. But it would be great if you could do a tutorial about mxnet/gluon. It is an advanced library that is good for advanced things.
@HeatonResearch · 3 years ago
Currently researching Gluon for such a video.
@johncaling6150 · 3 years ago
@@HeatonResearch Nice.
@johncaling6150 · 3 years ago
@@HeatonResearch I always have a hard time getting it installed. Your install guides are the best!!!!
@taktouk17 · 3 years ago
Please show us how to customize StyleGAN2 to, for example, generate a baby face or change the gender of someone in an image.
@HeatonResearch · 3 years ago
Yes, thinking about how to do something with that.
@Knud451 · 1 year ago
Thanks! Why would you e.g. square variables to make them more dominant in the model? Wouldn't the model just put more weight on them by itself? Unless it's because you want a nonlinear scaling of that variable. On a side note, isn't BMI a good example of poor feature design... 😀
@brandonheaton6197 · 3 years ago
Can you address Sutton's Bitter Lesson as it applies here?
@HeatonResearch · 3 years ago
The limit of the Bitter Lesson, as time approaches infinity, is that any program can be written by a random number generator, if we have enough compute time and a way to verify correctness. I think the clever algorithms are always filling in the gap before massive compute is able to perform this operation on its own. However, I still see Kaggles won on feature engineering, so I tend to assume that it is still a needed skill. At least for now.
@Yifzmagarki · 3 years ago
Cunning man, he does not fully say what really works and what is used by professionals.