Machine Learning Lecture 29 "Decision Trees / Regression Trees" -Cornell CS4780 SP17

  42,711 views

Kilian Weinberger


Lecture Notes:
www.cs.cornell.edu/courses/cs4...

Comments: 42
@prattzencodes7221 • 4 years ago
With all due respect to Professor Andrew Ng for the absolute legend he is, Kilian, you sir, are every ML enthusiast's dream come true. 🔥🔥🔥🔥🔥
@AnoNymous-wn3fz • 3 years ago
15:13 introducing Gini impurity; 23:50 KL algorithm; 46:00 bias-variance discussion
@abhishekkdas7331 • 3 years ago
Thanks Professor Kilian Weinberger. I was looking for a refresher on the topic after almost 5 years and you have made it as easy as possible :) !
@orkuntahiraran • 3 years ago
This is perfect. I am coming from a non-technical, non-math background, and this presentation really made me understand decision trees easily. Thank you very much.
@khonghengo • 3 years ago
Thank you very much, Prof. Weinberger. I was reading The Elements of Statistical Learning for my reading course when I found your channel. I truly appreciate your lectures and your notes; I print all of your notes and watch almost all of your videos, and they are extremely helpful. Thank you, I really appreciate that you let us have access to your wonderful lectures.
@jalilsharafi • 2 years ago
I'm watching this at the end of December 2021. The demos at the end, starting roughly at 45 minutes into the video, were very informative about the capabilities and limitations of a decision tree. Thanks.
@silent_traveller7 • 3 years ago
Hats off to you, sir. This series, coupled with the lecture notes, is pure gold. I have watched several lecture series on YouTube to the end, but wow, this lecture series has the most retentive audience.
@varunjindal1520 • 3 years ago
Thanks Professor Kilian Weinberger. The examples at the end were really helpful for visualizing what the trees can actually look like.
@cacagentil • 2 years ago
Thank you for sharing your content. It is very interesting, especially the discussion about why we do this (computational problems, NP-hardness, people tried many splits and found out this was the best in practice), the interactive examples at the end (very useful for learning), and all your work on trying to make it clear and simple. I like the point of view of minimizing the entropy by maximizing the KL divergence between two probability distributions. In fact, it is also easy to see the Gini impurity loss as an optimization problem in 3D: you get a concave function (check by computing the Hessian with two free parameters, since the third is just 1 - p_1 - p_2), and you have to optimize it over a constrained space (conditions on the p_i), which you can actually draw. You get the maximum at p_1 = p_2 = p_3 = 1/3 (what we don't want), and it decreases as you move away from this point, with the best case being one p_k equal to 1 and the others 0.
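A quick numerical check of that picture (a sketch of my own in Python, not from the lecture; the grid resolution and function names are illustrative): sample the 3-class probability simplex and confirm that the Gini impurity peaks at the uniform distribution (1/3, 1/3, 1/3) and drops to zero at a pure node.

```python
import numpy as np

def gini_impurity(p):
    """Gini impurity G(p) = 1 - sum_k p_k^2 of a class-probability vector p."""
    p = np.asarray(p, dtype=float)
    return 1.0 - np.sum(p ** 2)

# Walk a grid over the 3-class simplex (p3 is determined by p1 and p2).
grid = np.linspace(0.0, 1.0, 101)
best_p, best_g = None, -np.inf
for p1 in grid:
    for p2 in grid:
        p3 = 1.0 - p1 - p2
        if p3 < -1e-12:                      # outside the simplex
            continue
        g = gini_impurity([p1, p2, max(p3, 0.0)])
        if g > best_g:
            best_g, best_p = g, (p1, p2, p3)

print(best_p, best_g)                     # ~ (1/3, 1/3, 1/3) with G ~ 2/3: the most impure node
print(gini_impurity([1.0, 0.0, 0.0]))     # 0.0: a pure node, the best case
```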
@TrentTube • 4 years ago
Kilian, is there some way I can contribute to you for your efforts in creating this series? It has been fantastically entertaining and has helped my understanding of these topics profoundly.
@geethasaikrishna8286 • 4 years ago
Thanks for the awesome lecture, and thanks to your university for making it available online.
@nicolasrover8803 • 4 years ago
Thank you very much. Your teaching is incredible.
@mohajeramir • 3 years ago
This was amazing. Thank you very much.
@mathedelic5778 • 5 years ago
Very good!
@Charby0SS • 4 years ago
Would it be possible to split using something similar to Gaussian processes instead of the brute force method? Great lecture btw :)
@KulvinderSingh-pm7cr • 5 years ago
"No man left behind" ... wait, that's decision trees, right? Thanks, prof. Enjoyed it and learnt a lot!
@utkarshtrehan9128 • 3 years ago
Machine Learning ~ Compression 💡
@dominikmazur4196 • 9 months ago
Thank you 🙏
@KW-md1bq • 4 years ago
You should probably have mentioned that the log used in information gain is base 2.
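For what it's worth, a small check in that spirit (a sketch with made-up counts, not the lecture's example); with base-2 logs the entropies come out in bits:

```python
import numpy as np

def entropy_bits(counts):
    """Shannon entropy, in bits (log base 2), of a label-count vector."""
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()
    p = p[p > 0]                    # by convention 0 * log(0) = 0
    return -np.sum(p * np.log2(p))

# Parent node with 8 positives / 8 negatives, split into (7, 1) and (1, 7):
parent = entropy_bits([8, 8])                                    # 1.0 bit
children = 0.5 * entropy_bits([7, 1]) + 0.5 * entropy_bits([1, 7])
print(parent - children)                                         # information gain ~ 0.456 bits
```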
@yunpengtai2595 • 3 years ago
I have some questions about regression. I wonder if I can discuss them with you.
@zaidamvs4905 • 5 months ago
I have a question: how do we know the best sequence of features to use at each depth? If we have to try each one and optimize with 30 to 40 features, it will take forever. How can we do this for m features? I can't really visualize how this works.
@michaelmellinger2324 • 2 years ago
@34:28 You can view all of machine learning as compression.
@shaywilliams629 • 3 years ago
Forgive me if I'm wrong, but for a pure leaf node with 3 classes where p_1 = 1, p_2 = 0, p_3 = 0, the sum of p_k*log(p_k) would be 0, so the idea is to minimize the (positive) entropy?
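A quick check of those numbers (my own arithmetic, using the standard convention 0·log 0 = 0):

```latex
H(p) = -\sum_{k=1}^{3} p_k \log_2 p_k, \qquad
H(1,0,0) = -(1 \cdot \log_2 1) = 0, \qquad
H\!\left(\tfrac{1}{3},\tfrac{1}{3},\tfrac{1}{3}\right) = \log_2 3 \approx 1.585 .
```

A pure leaf attains the minimum (zero) and a maximally confused node the maximum, so yes, the splitting criterion minimizes this non-negative entropy.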
@prabhatkumarsingh8668 • 4 years ago
The formula shown for Gini impurity is applied to the leaf node, right? And the Gini impurity for the attribute is the weighted value?
@kilianweinberger698 • 4 years ago
Essentially you compute the weighted Gini impurity for each attribute, for each possible split.
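A minimal sketch of what that search looks like (my own illustration, not the lecture's code; it assumes a numeric feature matrix X and label vector y, with candidate thresholds at midpoints between sorted feature values):

```python
import numpy as np

def gini(y):
    """Gini impurity of a label vector y."""
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def best_split(X, y):
    """For every feature and every candidate threshold, score the split by the
    weighted Gini impurity of the two children; return the best one found."""
    n, d = X.shape
    best = (np.inf, None, None)              # (score, feature index, threshold)
    for j in range(d):
        values = np.unique(X[:, j])
        thresholds = (values[:-1] + values[1:]) / 2.0
        for t in thresholds:
            left = y[X[:, j] <= t]
            right = y[X[:, j] > t]
            score = (len(left) / n) * gini(left) + (len(right) / n) * gini(right)
            if score < best[0]:
                best = (score, j, t)
    return best

X = np.array([[1.0, 5.0], [2.0, 4.0], [3.0, 7.0], [4.0, 6.0]])
y = np.array([0, 0, 1, 1])
print(best_split(X, y))   # -> (0.0, 0, 2.5): split feature 0 at 2.5, both children pure
```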
@hohinng8644 • 1 year ago
28:24 This sounds like a horror movie, lol.
@gregmakov2680 • 2 years ago
Teaching the lesson while weaving in all sorts of things :D:D:D I'm worn out from hiding in the bushes, I'd get caught and locked up by now :D:D:D Professor, you're too naughty!
@rahulseetharaman4525 • 1 year ago
Why do we take a weighted sum of the entropies? What is the intuition behind weighting them rather than simply adding the entropies of the splits?
@kilianweinberger698 • 1 year ago
Good question. If you add them, you implicitly give them both equal weight. Imagine you make a split where on one side you only have a single example (e.g. labeled positive), and on the other side you have all n-1 remaining data points. This is a pretty terrible split, because you learn very little from it. However, on the one side with a single example you have zero impurity (all samples, namely only that single one, trivially share the same label). If you give that side as much weight as the other side, you will conclude that this is a great split. In fact, this is what will happen if you simply add them up, the decision tree will one by one split off single data points and create highly pathological "trees". So instead we weigh them by how many points are in the split. This way, in our pathological case, the single example would only receive a weight of 1/n, and not contribute much to the overall impurity of the split. I hope this answers your question.
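A concrete version of that argument, with illustrative numbers of my own (a parent node of 50 positives and 50 negatives; entropies in bits):

```latex
\begin{align*}
\text{Split A (peel off one positive): } & H_L = 0, \quad H_R = H\!\left(\tfrac{49}{99}\right) \approx 1.00 \\
\text{unweighted: } & H_L + H_R \approx 1.00, \qquad
\text{weighted: } \tfrac{1}{100}H_L + \tfrac{99}{100}H_R \approx 0.99 \\[4pt]
\text{Split B (40{+}/10{-} vs. 10{+}/40{-}): } & H_L = H_R = H(0.8) \approx 0.72 \\
\text{unweighted: } & H_L + H_R \approx 1.44, \qquad
\text{weighted: } \tfrac{1}{2}H_L + \tfrac{1}{2}H_R \approx 0.72
\end{align*}
```

The unweighted sum prefers the nearly useless Split A (1.00 < 1.44), while the weighted sum correctly prefers Split B (0.72 < 0.99).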
@yashwardhanchaudhuri6966 • 2 years ago
Hi, can anyone please explain why equally likely events are a problem in decision trees? What I understood is that the model would need to be very comprehensive to tackle such cases, but I am unsure of my interpretation.
@yashwardhanchaudhuri6966 • 2 years ago
Okay, so what I understood is that a leaf node cannot have confusion. If a node is a leaf, then it should contain all positives or all negatives, not a mix of both, which would happen if we stopped growing the tree early, right?
@michaelmellinger2324 • 2 years ago
Decision trees are horrible. However, once you address the variance with bagging and the bias with boosting, they become amazing. @12:50
@usamajaved7055 • 8 months ago
Please share past ML exam papers.
@elsmith1237 • 4 years ago
What's a Katie tree?
@kilianweinberger698 • 4 years ago
Actually, it is called KD-Tree. A description is here: en.wikipedia.org/wiki/K-d_tree Essentially you recursively split the data set along a single feature to speed up nearest neighbor search. Here is also a link to the notes on KD-Trees: www.cs.cornell.edu/courses/cs4780/2018fa/lectures/lecturenote16.html
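A minimal sketch of that idea (my own illustration, not the notes' implementation): each node splits the points on one coordinate at the median, cycling through the features with depth.

```python
import numpy as np

def build_kdtree(points, depth=0):
    """Minimal KD-tree: recursively split the points on one coordinate at a time.

    points is an (n, d) array; each node stores the median point, the splitting
    dimension, and left/right subtrees for the two halves.
    """
    n = len(points)
    if n == 0:
        return None
    axis = depth % points.shape[1]          # cycle through the features with depth
    points = points[np.argsort(points[:, axis])]
    mid = n // 2
    return {
        "point": points[mid],
        "axis": axis,
        "left": build_kdtree(points[:mid], depth + 1),
        "right": build_kdtree(points[mid + 1:], depth + 1),
    }

rng = np.random.default_rng(0)
tree = build_kdtree(rng.random((100, 2)))
print(tree["axis"], tree["point"])          # the root splits on feature 0 at its median point
```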
@KaushalKishoreTiwari • 3 years ago
p_k being zero means k is infinity; how is that possible? (Question at 39:00)
@kilianweinberger698 • 3 years ago
Oh, no. p_k is not 1/k. We are computing the divergence between p_k and 1/k. p_k is the fraction of elements of class k in that particular node, so p_k=0 if there are no elements of class k in that node.
@SanjaySingh-ce6mp • 3 years ago
Isn't log(a/b) = log(a) - log(b)? (at 30:35)
@kilianweinberger698 • 3 years ago
Yes, but here we have log(a/(1/b)) = log(a) - log(1/b) = log(a) + log(b).
@SanjaySingh-ce6mp • 3 years ago
@kilianweinberger698 Thank you, I got it now 🙏
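Putting the two replies above together, the step in question is just the following algebra (a sketch, with K classes, p_k the fraction of class k in the node, and u the uniform distribution u_k = 1/K):

```latex
\begin{align*}
\mathrm{KL}(p \,\|\, u)
  &= \sum_{k=1}^{K} p_k \log\frac{p_k}{1/K}
   = \sum_{k=1}^{K} p_k \log p_k + \sum_{k=1}^{K} p_k \log K \\
  &= \log K - H(p),
  \qquad \text{where } H(p) = -\sum_{k=1}^{K} p_k \log p_k .
\end{align*}
```

Since log K is a constant, maximizing the distance to the uniform distribution is exactly minimizing the entropy H(p); terms with p_k = 0 contribute nothing, by the convention 0·log 0 = 0.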
Machine Learning Lecture 30 "Bagging" -Cornell CS4780 SP17
49:43
Kilian Weinberger
24K views
Machine Learning Lecture 26 "Gaussian Processes" -Cornell CS4780 SP17
52:41
Regression Trees, Clearly Explained!!!
22:33
StatQuest with Josh Starmer
623K views
Machine Learning Lecture 32 "Boosting" -Cornell CS4780 SP17
48:27
Kilian Weinberger
33K views
Visual Guide to Decision Trees
6:26
Econoscent
32K views
Decision and Classification Trees, Clearly Explained!!!
18:08
StatQuest with Josh Starmer
711K views
Decision trees - A friendly introduction
22:23
Serrano.Academy
11K views