Random forests, aka decision forests, and ensemble methods. Slides available at: www.cs.ubc.ca/~nando/540-2013/... Course taught in 2013 at UBC by Nando de Freitas
Comments: 78
@lwoltersyt · 7 years ago
I simply love all the explanatory videos of Nando de Freitas; clear, effective, and straight to the point.
@siddarthjay3787 · 8 years ago
Excellent video. Professors like him are the reason learning is fun. They throw exciting ideas at you as if they were nothing.
@meganmaloney192 · 8 years ago
Incredibly helpful! Thank you for posting.
@comadano · 11 years ago
Thanks for making these great lectures publicly available. I hope the remaining lecture videos get posted soon!
@ApiolJoe · 7 years ago
The resource that helped me the most in the least amount of time. Thanks for sharing.
@LuisFelipeZeni · 7 years ago
Excellent professor, thanks for sharing your knowledge with us.
@GoBlue7171 · 7 years ago
Wow this is amazing. Such a clear and informative lecture.
@alefranc100 · 10 years ago
Thanks Nando ... this lecture is really good and useful!
@antonosipov100 · 9 years ago
Very good introduction to Random Forest. Thank you!
@kellyli1920 · 9 years ago
This is so great, clear, and easy to understand!!! Thank you so much!!!
@junfu8695 · 8 years ago
+Kelly Li It is!
@jiansenxmu · 9 years ago
A perfect and amazing lecture on random forests, thank you very much!
@EmreOzanAlkan · 9 years ago
Thank you! It's really nice and easy to understand!
@ravimadhavan1984 · 8 years ago
Great lecture!
@BangsterDK · 8 years ago
Amazing. Thank you so much!
@dinofranceschelli3969 · 7 years ago
Excellent! Really easy to understand, thanks for sharing!
@rodrigo100kk · 4 years ago
This course is amazing!
@aadilraf · 6 years ago
Amazing lecture!
@drbhojrajghimire3908 · 8 years ago
A very good classroom video: simple, interesting, and clearly explained.
@BoredFOMOape · 11 years ago
Thanks for posting! Great lecture
@agnerraphael · 7 years ago
Thanks for publishing; a great help for my project.
@naiden100 · 7 years ago
Thanks for a great lecture!
@aidaelkouri7050 · 7 years ago
Hi! Loved the video. I was wondering how the problem would work if you chose more than 2 features to look at? Would you plot them in 3D then? Thank you.
@d0msch · 7 years ago
Thanks for publishing, helped me a lot :)
@fatemehsaki9020 · 11 years ago
Really great lecture and teacher!!! Thanks so much!
@helenlundeberg · 8 years ago
Around 1:08:25, the professor talks about Bayesian optimization using a GP prior or something. What I'm not clear on is: what is the prior placed on?
@AbhijeetSachdev · 9 years ago
Brilliant lecture :)
@PravinMaske1 · 8 years ago
Excellent video. Very helpful and easily understood.
@ronthomas8331 · 8 years ago
Excellent lecture!!!!
@PrakashMatthew · 6 years ago
Thank you @Nando de Freitas. Is the second part of the lecture, the one to be taken by the grad student, available online?
@jihoonkim6819 · 7 years ago
Thank you for the nice lecture.
@tonyperez8878 · 9 years ago
How do you compute the information gain? What is its relation to entropy and mutual information?
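For reference, the information gain of a split is the entropy of the parent node minus the size-weighted entropy of the children, which is exactly the mutual information between the split indicator and the class label. A small NumPy sketch with made-up labels:

```python
import numpy as np

def entropy(labels):
    """Shannon entropy (in bits) of a 1-D array of class labels."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(parent_labels, left_labels, right_labels):
    """Entropy of the parent minus the size-weighted entropy of the children.
    This equals the mutual information between the split indicator and the class."""
    n = len(parent_labels)
    weighted_child_entropy = (
        (len(left_labels) / n) * entropy(left_labels)
        + (len(right_labels) / n) * entropy(right_labels)
    )
    return entropy(parent_labels) - weighted_child_entropy

# Toy example: a split that separates the classes fairly well.
parent = np.array([0, 0, 0, 0, 1, 1, 1, 1])
left   = np.array([0, 0, 0, 1])   # mostly class 0
right  = np.array([0, 1, 1, 1])   # mostly class 1
print(information_gain(parent, left, right))  # ~0.19 bits
```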
@BudiMulyo · 7 years ago
Thank you! The posted video is all very clear!
@andreiherasimau7800 · 6 years ago
Excellent video!
@sameenatasneemshaikh8349 · 7 years ago
Thank you sir, excellent lecture. Please share the other videos; like you said, please also share the Bayesian optimization lecture.
@satter87henne · 5 years ago
I looked up the resource (Criminisi) and I wonder: is there an R package or a Python package where this specific algorithm is implemented? There are many R packages, and I know there is scikit-learn in Python. However, I want to make sure I use the more general model as outlined by Criminisi.
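I'm not aware of a package that follows Criminisi's general framework exactly; scikit-learn's forests follow Breiman's formulation, but for standard classification the usage is essentially the same. A minimal sketch (dataset and hyperparameters are arbitrary placeholders):

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# n_estimators = number of trees; max_features = size of the random feature
# subset considered at each split (the "m out of p" from the lecture).
forest = RandomForestClassifier(n_estimators=100, max_features="sqrt",
                                max_depth=None, random_state=0)
forest.fit(X_train, y_train)
print(forest.score(X_test, y_test))       # accuracy on held-out data
print(forest.predict_proba(X_test[:3]))   # averaged per-tree class histograms
```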
@hyperzoanoid · 11 years ago
Do random forests need classification trees with more than 1 node to be effective?
@rezayousufi4200 · 4 years ago
Super explanation!!!!
@sheikhsaqib6323 · 9 years ago
Can anyone here please help me out? I am trying to classify accelerometer and gyroscope values of a robot using random forests. The classification labels are walking and under perturbation using different forces. From what I understand, this is data that evolves over time as the robot walks. My question is: can I use the technique described in the video to do the classification, or do I need to do it a different way? (If so, please point me to the right material where I can read about how to classify such data.)
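Not an authoritative answer, but a common baseline for this kind of sensor stream is to cut it into fixed-length windows, summarize each window with a few statistics, and classify the windows with a random forest. A rough sketch with synthetic data standing in for the robot recordings:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def window_features(signal, window=50):
    """Split a (time, channels) signal into windows and summarize each
    window with simple statistics (mean and std per channel)."""
    n_windows = len(signal) // window
    feats = []
    for i in range(n_windows):
        chunk = signal[i * window:(i + 1) * window]
        feats.append(np.concatenate([chunk.mean(axis=0), chunk.std(axis=0)]))
    return np.array(feats)

# Synthetic stand-ins: 6 channels (3 accelerometer + 3 gyroscope axes).
walking   = rng.normal(0.0, 1.0, size=(5000, 6))
perturbed = rng.normal(0.5, 2.0, size=(5000, 6))

X = np.vstack([window_features(walking), window_features(perturbed)])
y = np.array([0] * 100 + [1] * 100)   # 0 = walking, 1 = under perturbation

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print(clf.score(X, y))   # training accuracy only; use a proper split in practice
```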
@kaleeswaranm2679 · 7 years ago
Very good lecture, but I would suggest watching at 1.5x speed.
@aaronbrinker2613 · 6 years ago
Nando for President
@jennifermew8386 · 7 years ago
Thank you
@shubhamjha1 · 8 years ago
The next lecture (the one about Kinect and stuff) seems pretty interesting. Could I have a link to that?
@zenzafine4500 · 5 years ago
Did you get the link?
@zenzafine4500 · 5 years ago
I think it is this one: kzfaq.info/get/bejne/l76hfKaXrZq-nHU.html
@reneveloso · 9 years ago
Thank you! Great lecture! You have a very Brazilian name... :-)
@tpinto9 · 6 years ago
Portuguese?
@abhijeet24patil · 11 years ago
In step 2 of the random forest algorithm: "until the minimum node size n_min is reached." I didn't understand this line. When should we stop selecting m variables at random from the p variables?
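A rough answer (my own paraphrase, not the slide's exact pseudocode): the m-of-p random feature draw happens at every split, and you stop splitting a node, and hence stop drawing features for it, once it holds at most n_min points or is pure. A minimal sketch of that recursion, with a simplified Gini-based split search standing in for whatever criterion the slides use:

```python
import numpy as np

def gini(y):
    """Gini impurity of a set of integer class labels."""
    p = np.bincount(y) / len(y)
    return 1.0 - np.sum(p ** 2)

def grow_tree(X, y, n_min, m, rng):
    """Grow one tree; stop splitting a node once it reaches the minimum node size."""
    # This is the stopping rule the slide refers to: the node becomes a leaf when
    # it holds at most n_min points (or is already pure); only then do we stop
    # drawing m random variables out of the p.
    if len(y) <= n_min or len(np.unique(y)) == 1:
        return ("leaf", np.bincount(y).argmax())

    features = rng.choice(X.shape[1], size=m, replace=False)  # m of the p variables
    best = None
    for f in features:
        for t in np.unique(X[:, f]):
            left = X[:, f] <= t
            if left.all() or not left.any():
                continue
            score = left.mean() * gini(y[left]) + (~left).mean() * gini(y[~left])
            if best is None or score < best[0]:
                best = (score, f, t, left)
    if best is None:                          # no useful split found on this draw
        return ("leaf", np.bincount(y).argmax())

    _, f, t, left = best
    return ("split", f, t,
            grow_tree(X[left], y[left], n_min, m, rng),
            grow_tree(X[~left], y[~left], n_min, m, rng))

# Tiny toy usage (made-up data).
X = np.array([[2.0, 1.0], [1.0, 3.0], [3.0, 0.5], [6.0, 4.0], [7.0, 5.0], [8.0, 6.0]])
y = np.array([0, 0, 0, 1, 1, 1])
print(grow_tree(X, y, n_min=1, m=1, rng=np.random.default_rng(0)))
```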
@tadinglesby8997 · 7 years ago
Can one have a greater tree depth in their model (say D=10) with only 8 features? Or does it have to be a maximum depth of 8 corresponding to 8 features?
@ecemilgun9867 · 7 years ago
You can actually use a feature in more than one split. It sometimes causes overfitting but is sometimes needed: for instance, both deficiency and abundance of glucose in the blood indicate a disease.
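To make that concrete with a toy sketch (the data and thresholds are invented, not from the lecture): a tree can be deeper than the number of features exactly because it can reuse a feature at different thresholds, e.g. flagging both very low and very high glucose:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# One feature (blood glucose); both very low and very high values mean "disease".
glucose = np.array([[40], [50], [60], [90], [100], [110], [180], [200], [220]])
disease = np.array([1, 1, 1, 0, 0, 0, 1, 1, 1])

tree = DecisionTreeClassifier(random_state=0).fit(glucose, disease)
print(export_text(tree, feature_names=["glucose"]))
# The printout shows glucose split on at two different thresholds, so the tree
# has depth 2 even though there is only 1 feature.
```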
@user-qt5mw5xd2e · 7 years ago
Great!
@fathiafaraj479 · 6 years ago
Hello sir: I am a graduate student in computer science. My research is on object detection using random forests and local binary patterns. I hope you can help with the RF code; I am using MATLAB.
@cfelixcfelix6568 · 9 years ago
I did not get the thing with the 3 trees and the histograms. Doesn't each tree give a definite answer?
@AbhijeetSachdev · 9 years ago
+Cfelix Cfelix Of course it will, but in the case of classification you just take the sign of the average value, which is exactly the same as the majority vote. In the case of regression, "averaging" actually comes into play.
@nicooteiza · 8 years ago
+Cfelix Cfelix Actually, not necessarily. In many real-life cases, data is not separable on the features, so even a very big tree would not be able to give definite answers. Moreover, each individual tree could be "pruned" to a certain depth, so the answers would not be definite classes, but "probabilities" for each class. For example, in a 3 class problem, for each tree you would have a [p1, p2, p3] result, where p1+p2+p3 = 1, and the final prediction can be obtained by adding all the tree's vectors element-wise and dividing by the number of trees.
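A tiny numeric illustration of that element-wise averaging of per-tree class histograms (all numbers invented):

```python
import numpy as np

# Per-tree class "histograms" for one test point in a 3-class problem.
tree_outputs = np.array([
    [0.7, 0.2, 0.1],   # tree 1
    [0.5, 0.4, 0.1],   # tree 2
    [0.1, 0.8, 0.1],   # tree 3
])

forest_posterior = tree_outputs.mean(axis=0)   # element-wise average over the trees
print(forest_posterior)            # [0.433..., 0.466..., 0.1]
print(forest_posterior.argmax())   # predicted class: 1
```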
@alkodjdjd · 8 years ago
I sent you an email and did not get a reply. Thank you.
@fangliren · 11 years ago
I'm not sure I understand your question; with only one node (which would have to be the "parent" node, by definition), the tree wouldn't have any "children" nodes into which to sort the data based on the criterion in the parent node. The tree wouldn't do anything at all... Perhaps you are asking a different question?
@R1CH4RDGZA · 5 years ago
Loquacious, now that's one good obscure word just off the cuff
@KrishnaDN · 4 years ago
My dream is to work with a professor like him.
@leicaandrei · 8 years ago
Why is the bias very low at 40:20?
@shervin0 · 8 years ago
I'm not completely sure, but it could be because of the way the trees are created. They try to maximize information gain with simple decisions, and due to those simple decisions and the use of information gain, the splits tend not to produce very biased divisions. (As I said, I'm not sure if this is the reason! :) )
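One hedged way to check the "low bias, high variance" intuition empirically: a fully grown tree fits its training set almost perfectly (low bias) but generalizes erratically, and averaging many such trees mostly removes the variance. A rough sketch on synthetic data, not from the lecture:

```python
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=400, n_features=20, flip_y=0.1, random_state=0)

deep_tree = DecisionTreeClassifier(random_state=0)
forest = RandomForestClassifier(n_estimators=200, random_state=0)

print(deep_tree.fit(X, y).score(X, y))          # ~1.0 training accuracy: low bias
print(cross_val_score(deep_tree, X, y).mean())  # noticeably lower: variance hurts
print(cross_val_score(forest, X, y).mean())     # averaging recovers much of the gap
```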
@deepakk1944 · 6 years ago
What is bagging?
@sourceschaudes9587 · 5 years ago
boosting
@theamazingjonad9716 · 5 years ago
Bagging is an ensemble technique whereby you build k models from k random subsets of the original data. The subsets are drawn using bootstrap resampling.
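A bare-bones sketch of that bootstrap-then-aggregate idea (the number of models, the base learner, and the data are arbitrary choices here):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=10, random_state=0)
rng = np.random.default_rng(0)

k = 25
models = []
for _ in range(k):
    # Bootstrap resample: draw n samples *with replacement* from the original data.
    idx = rng.integers(0, len(X), size=len(X))
    models.append(DecisionTreeClassifier().fit(X[idx], y[idx]))

# Aggregate: majority vote over the k models' predictions.
votes = np.array([m.predict(X) for m in models])           # shape (k, n_samples)
bagged_prediction = (votes.mean(axis=0) > 0.5).astype(int)
print((bagged_prediction == y).mean())                      # training accuracy of the ensemble
```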
@yuzhou1 · 6 years ago
Great video; watch at 1.5x speed.
@melaxxl · 9 years ago
walla shaza.
@donnasara70 · 8 years ago
Thank you for speaking without an accent.
@kikokimo2 · 6 years ago
Play at 2x speed!
@muneebaadil1898 · 8 years ago
Although I appreciate the effort to give away the knowledge, I'm sorry, but it is a VERY slow lecture, leading to a boring class.
@prathameshaware7433 · 7 years ago
Yeah, it was slow, but anyway there is an option in KZfaq to increase the speed, so you can use it next time :)