13. Classification

131,682 views

MIT OpenCourseWare

7 years ago

MIT 6.0002 Introduction to Computational Thinking and Data Science, Fall 2016
View the complete course: ocw.mit.edu/6-0002F16
Instructor: John Guttag
Prof. Guttag introduces supervised learning with nearest neighbor classification using feature scaling and decision trees.
License: Creative Commons BY-NC-SA
More information at ocw.mit.edu/terms
More courses at ocw.mit.edu

Comments: 48
@MrCatandMe
@MrCatandMe 6 years ago
Watching MIT OpenCourseWare videos shows how completely lacking in substance my college education really was.
@leixun
@leixun 3 years ago
*My takeaways:* 1. Nearest neighbours 4:18 2. K-nearest neighbours 8:11 3. Performance metrics 16:50 4. Logistic regression 30:20
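For anyone skimming the takeaways above, here is an illustrative sketch of the k-nearest-neighbours idea from 8:11 (generic code, not the lecture's implementation):

```python
# k-nearest neighbours: classify a point by majority vote among the
# k closest training examples (illustrative, not the lecture's code).
from collections import Counter
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_classify(train, point, k=3):
    # train is a list of (feature_vector, label) pairs
    nearest = sorted(train, key=lambda ex: euclidean(ex[0], point))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

examples = [((0, 0), 'A'), ((0, 1), 'A'), ((5, 5), 'B'), ((6, 5), 'B')]
print(knn_classify(examples, (1, 0)))  # -> 'A'
```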
@shivayshakti6575
@shivayshakti6575 2 years ago
Thanks, buddy!
@iLoveTurtlesHaha
@iLoveTurtlesHaha 6 years ago
I LOVE this man. I found this video from a search and didn't see the other 12 videos in the series, and I am picking up everything he is saying. Also, it's so cool how he encourages class participation. Great teachers are amazing and a gift to humanity.
@sololife9403
@sololife9403 1 year ago
Agree with you. And he is very calm.
@avelmira
@avelmira 4 years ago
An unintended consequence of learning the difference between linear and logistic regression from Prof. Guttag in this video: the scene from The Princess Bride intrusively popped into my head where Miracle Max says: "There is a big difference between mostly dead and all dead. Mostly dead is still alive." Then I spent a few minutes giggling before I could focus again.
@naheliegend5222
@naheliegend5222 5 years ago
Every time I see something like this, I wonder how brilliant a human must be to break down complexity as simply as that.
@kingofgods898
@kingofgods898 3 years ago
Listening to my professor lecture on classification makes me nauseous and makes me hate my life. Listening to this guy lecture on classification, I'm actually enjoying it and understanding it. People are not equal.
@saveryd
@saveryd 7 years ago
Profs. Guttag and Grimson are really great! I wish I had professors like them when I was in college!
@adiflorense1477
@adiflorense1477 3 years ago
Same here.
@nomad_manhattan
@nomad_manhattan 6 years ago
Absolutely the best ML course I have encountered, and I have tried many. This is the only one that keeps me focused and intrigued :) Do get Prof. Guttag's book! It's a good companion for this class.
@w1d3r75
@w1d3r75 2 years ago
If only it weren't so expensive. All of the MIT books are expensive (the ones on the MIT publications page).
@haneulkim4902
@haneulkim4902 3 years ago
Amazing lecture as always! Thanks for the great resources 👏
@sandeepgill2693
@sandeepgill2693 3 years ago
Hats off to you, sir, for the way you share your knowledge.
@aravindsankaran3778
@aravindsankaran3778 6 years ago
Precision is positive predictive value, not specificity! 19:20
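For reference, the standard confusion-matrix definitions the commenter is pointing at (a generic sketch; tp, fp, tn, fn are hypothetical counts, not the lecture's variables):

```python
def precision(tp, fp):
    # Positive predictive value: of the predicted positives, the fraction truly positive.
    return tp / (tp + fp)

def specificity(tn, fp):
    # True negative rate: of the actual negatives, the fraction correctly rejected.
    return tn / (tn + fp)

def recall(tp, fn):
    # Sensitivity / true positive rate: of the actual positives, the fraction found.
    return tp / (tp + fn)
```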
@creponnekarim2865
@creponnekarim2865 2 years ago
This man seems like a very wise old man who spent most of his time either on research or with his grandchildren. Plus, he's a good teacher.
@isbestlizard
@isbestlizard 3 years ago
YES, this is what I need to do the Titanic challenge on Kaggle.
@mohanraj7697
@mohanraj7697 2 years ago
I came here for the same. Your comment assures me I can watch this, thank you.
@henrikmanukyan3152
@henrikmanukyan3152 3 months ago
Stopped at the most important point 😀 it makes you go to the next lesson. Anyway, I am glad he mentioned it.
@guilhermeaguilar6477
@guilhermeaguilar6477 7 years ago
These videos about machine learning are very nice.
@Speed001
@Speed001 1 year ago
34:24 Fitting linear regression into a range: logistic regression, the machine learning model that's always visualized.
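The squashing the commenter mentions, as a generic sketch (illustrative names, not the lecture's code): logistic regression passes a linear combination through the sigmoid so the output always lands in (0, 1) and can be read as a probability.

```python
import math

def sigmoid(z):
    # Maps any real number into the open interval (0, 1).
    return 1 / (1 + math.exp(-z))

def logistic_predict(weights, bias, x):
    # Linear combination squashed into a probability (illustrative names).
    z = sum(w * xi for w, xi in zip(weights, x)) + bias
    return sigmoid(z)

print(logistic_predict([2.0, -1.0], 0.5, [1.0, 3.0]))  # ~0.378
```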
@littlenarwhal3914
@littlenarwhal3914 6 years ago
This is complicated, but the prof explains things well. Now I just need to learn more Python to be able to understand it fully...
@Jcastellanoss123
@Jcastellanoss123 3 years ago
Thanks a lot for these classes. I learned not only about computational thinking, but also the reason Leonardo DiCaprio doesn't survive in the movie.
@markk6594
@markk6594 5 years ago
42:46 In the line "for i in range(len(probs)):" you just need i as an index into testSet and probs, so you could zip these lists, e.g. "for p_i, ts_i in zip(probs, testSet):"; then you can use p_i and ts_i instead of probs[i] and testSet[i]. All in all a really good lecture, thank you very much!
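The commenter's refactor, sketched with hypothetical stand-in data (probs and testSet are the variable names from the lecture's code; the values here are made up):

```python
probs = [0.9, 0.2, 0.7]          # hypothetical predicted probabilities
testSet = ['ex1', 'ex2', 'ex3']  # hypothetical test examples

# Index-based loop, as in the lecture:
for i in range(len(probs)):
    print(testSet[i], probs[i])

# Equivalent zip-based loop, as the commenter suggests:
for p_i, ts_i in zip(probs, testSet):
    print(ts_i, p_i)
```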
@jongcheulkim7284
@jongcheulkim7284 2 years ago
Thank you.
@sandipdey2033
@sandipdey2033 5 years ago
Can anyone tell me where I can find the video on "Regression" from the same set of MIT videos? Under what name is it listed among the MIT lecture videos at the above-mentioned link?
@mitocw
@mitocw 5 years ago
Linear regression is covered in lecture 9: ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-0002-introduction-to-computational-thinking-and-data-science-fall-2016/lecture-videos/lecture-9-understanding-experimental-data/. Best wishes on your studies!
@pierreehibertcortezcortez5547
@pierreehibertcortezcortez5547 6 months ago
The best!
@ChrisAdvena
@ChrisAdvena 3 years ago
Prof. Guttag talks about problems finding the k nearest neighbors for large data sets due to the number of distances that need to be calculated. Good old-fashioned relational databases have had a solution to this for decades. They use, for example, partitioning, multi-level indexing, and calculated columns. The calculated columns can be stored or cached. In fact, we want our database to live in a disk/cache balance that optimizes our multitude of parameters, which boil down to preprocessing time and real-time processing time as constrained by money. This makes finding the nearest neighbor, or any other math-based comparison, faster by multiple orders of magnitude for large data sets. Recognizing that much of this can be done in memory, my question is: at which key places in machine learning do we most apply what we have learned in other data science fields about quick data access? In other words, where can we largely mitigate these costs, and how do we decide whether it is preferable to maximize the performance of a function as opposed to utilizing a different ML approach?
@batatambor
@batatambor 4 years ago
Why didn't the professor fall into the 'dummy variable' trap? He used classes C1, C2 and C3, but he shouldn't have used all 3 to create the regression model, since C1 = 1 - C2 - C3, which means the variables are dependent on each other. Does anyone know the answer?
@TheJustinmulli
@TheJustinmulli 4 years ago
27:30 Wouldn't it be better to set label and k as keyword arguments instead of creating a separate knn function via lambda abstraction? He talks about using this to build much more general programs, yet he created two functions when you could just create one that does both, which would be more general than creating two.
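Both idioms side by side, with a hypothetical stand-in for the classifier (the lecture's actual signature may differ):

```python
from functools import partial

def knn_classify(training, test_point, label, k):
    # Hypothetical stand-in for the lecture's kNN classifier.
    return label  # placeholder; real code would do distances + majority vote

# Lambda abstraction, roughly the lecture's approach:
knn3 = lambda training, test_point: knn_classify(training, test_point, 'reptile', 3)

# The commenter's alternative: one general function with keyword defaults.
def knn_classify_kw(training, test_point, label='reptile', k=3):
    return knn_classify(training, test_point, label, k)

# functools.partial is a third way to express the same specialization:
knn3_partial = partial(knn_classify, label='reptile', k=3)
```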
@annakh9543
@annakh9543 5 years ago
I'm already sad that I'm going to finish this series of lectures soon :/
@fuzzyip
@fuzzyip 4 years ago
Wow, I wish you were my professor.
@adiflorense1477
@adiflorense1477 3 years ago
12:01 Why is the k-nearest-neighbors data separated into training and testing sets again?
@Trazynn
@Trazynn 3 years ago
"The more legs an animal has, the less likely it is to be a reptile."
@landrynoulawe1565
@landrynoulawe1565 1 year ago
An animal with 4 legs has a better chance of being a reptile than an animal with 2 legs.
@adiflorense1477
@adiflorense1477 3 years ago
It turns out that linear regression and logistic regression both use the term 'coeff' to denote weights. That's interesting.
@fabianusmonepatimonepati6721
@fabianusmonepatimonepati6721 2 years ago
Wow, I find it interesting!
@danielmelendrez1616
@danielmelendrez1616 1 year ago
3:12 I believe this statement is wrong. He is ACTUALLY using the full representation, including the number of legs. If you do the math using the binary rep only, then the distance matrix shown is incorrect. CORRECTION: They are NOT using the number of legs; however, they erroneously put a 2 in the last element of the binary data for the chicken when it should be 0. I tested my own algorithm with this number and I get the same result as shown in the video. Additionally, the last binary feature should be 'reptile', correct? In the Python data set the last element is zero in several of the reptile cases. Please let me know if I am missing something obvious...
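For anyone who wants to redo the commenter's check, a minimal distance computation (the feature vectors below are placeholders, not the lecture's actual data; substitute the vectors from the lecture's slides or code):

```python
import math

def euclidean(a, b):
    # Euclidean distance between two equal-length feature vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Placeholder binary representations, NOT the lecture's actual data:
animal_a = [1, 1, 1, 0]
animal_b = [1, 0, 1, 2]  # a stray 2 like the one the commenter flags
print(euclidean(animal_a, animal_b))  # sqrt(5) ~ 2.236
```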
@amishsethi1799
@amishsethi1799 5 years ago
Is there any way to get access to the posted code?
@mitocw
@mitocw 5 years ago
The full course site on OCW has the lecture notes and code files: ocw.mit.edu/6-0002F16. Good luck with your studies!
@rsd2dcc
@rsd2dcc 1 year ago
Finally got applause for something 😂😂😂
@TheRelul
@TheRelul 4 years ago
Tough crowd here...
@hannukoistinen5329
@hannukoistinen5329 2 years ago
If this is the level of MIT, forget it!! There are much more useful courses, for example Professor Gilbert Strang's.