7. Eckart-Young: The Closest Rank k Matrix to A

87,095 views

MIT OpenCourseWare

A day ago

MIT 18.065 Matrix Methods in Data Analysis, Signal Processing, and Machine Learning, Spring 2018
Instructor: Gilbert Strang
View the complete course: ocw.mit.edu/18-065S18
KZfaq Playlist: • MIT 18.065 Matrix Meth...
In this lecture, Professor Strang reviews Principal Component Analysis (PCA), which is a major tool in understanding a matrix of data. In particular, he focuses on the Eckart-Young low rank approximation theorem.
License: Creative Commons BY-NC-SA
More information at ocw.mit.edu/terms
More courses at ocw.mit.edu
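
A minimal NumPy sketch of the theorem in the title (not part of the course materials; the matrix below is made up): the closest rank-k matrix to A in the Frobenius norm is obtained by truncating the SVD after k singular values.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 5))    # any data matrix (hypothetical example data)
k = 2                              # target rank

# Truncated SVD: keep only the k largest singular values.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# The Frobenius-norm error is sqrt(sigma_{k+1}^2 + ... + sigma_r^2),
# and Eckart-Young says no other rank-k matrix can beat it.
err = np.linalg.norm(A - A_k, 'fro')
print(err, np.sqrt(np.sum(s[k:]**2)))          # these two numbers agree

# Sanity check against an arbitrary rank-k competitor.
B = rng.standard_normal((6, k)) @ rng.standard_normal((k, 5))
print(np.linalg.norm(A - B, 'fro') >= err)     # True
```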

Comments: 83
@adolfocarrillo248
@adolfocarrillo248 5 years ago
I love Prof. Gilbert Strang; he is a man dedicated to teaching mathematics. Please receive a huge hug on my behalf.
@yupm1
@yupm1 4 years ago
What a wonderful lecture! I wish Prof. Gilbert a very long life.
@kirinkirin9593
@kirinkirin9593 5 years ago
10 years ago I took OCW for the first time and I am still taking it. Thank you, Professor Gilbert Strang.
@amirkhan355
@amirkhan355 5 years ago
Thank you for being who you are and touching our lives!!! I am VERY VERY grateful.
@neoblackcyptron
@neoblackcyptron 2 years ago
Really deep lectures; I learn something new every time I watch them again and again. These lectures are gold.
@tempaccount3933
@tempaccount3933 2 years ago
Gil at 3:30. Eckart & Young were both at the University of Chicago in 1936. The paper was published in the (relatively new?) journal Psychometrika. Eckart had already worked on the foundations of QM with some of its founders, and went on to work in Fermi's section on the Manhattan Project. If I recall correctly, Eckart married the widow of von Neumann and ended up at UCSD. He was very renowned in applied physics, including oceanography/geophysics. Gale Young was a grad student at Chicago. He also had a successful career, taking his Master's from Chicago to positions in academia and the US nuclear power industry.
@xXxBladeStormxXx
@xXxBladeStormxXx 3 years ago
It's funny that this video (lecture 7) has vastly fewer views than both lectures 6 and 8, but if the title of this video were PCA instead of Eckart-Young, it would easily be the most viewed video in the series. That's why, kids, you should do the entire course instead of just watching 15 minutes of popular concepts.
@prajwalchoudhary4824
@prajwalchoudhary4824 3 years ago
Well said.
@neoblackcyptron
@neoblackcyptron 3 years ago
He has not explained anything about PCA in this lecture. He had barely started on it at the end when the lecture wrapped up.
@oscarlu9919
@oscarlu9919 3 years ago
That's exactly what I was thinking. I just followed the sequence of videos and was surprised to notice that this video is about PCA, which is closely connected to the previous videos. But viewing the previous videos makes the understanding of PCA far deeper!
@georgesadler7830
@georgesadler7830 2 years ago
Professor Strang, thank you for a great lecture involving norms, ranks, and least squares. All three topics are very important for solid linear algebra development.
@eljesus788
@eljesus788 3 years ago
Gil has been my math professor for the last 12 years. These online courses are so amazing.
@mitocw
@mitocw 5 years ago
Fixed the audio sync problem in the first minute of the video.
@turdferguson3400
@turdferguson3400 5 years ago
You guys are the best!!
@darkmythos4457
@darkmythos4457 5 years ago
Thank you!
@nikre
@nikre 2 years ago
A privilege to take part in such a distilled lecture. No confusion at all.
@dmitriykhvan2035
@dmitriykhvan2035 3 years ago
You have changed my life, Dr. Strang!
@troychavez
@troychavez 3 years ago
His passion, knowledge, and unique style. He's such a treasure, an amazing professor, and a wonderful mathematician.
@SalarKalantari
@SalarKalantari 1 year ago
33:54 "Oh, that was a brilliant notation!" LOL!
@JulieIsMe824
@JulieIsMe824 3 years ago
The most interesting linear algebra lecture ever!! It's very easy to understand, even for us chemistry students.
@JuanVargas-kw4di
@JuanVargas-kw4di 2 years ago
In the least-squares vs. PCA discussion that starts at 37:44, he's comparing minimizing the sum of squares of vertical distances to minimizing the sum of squares of perpendicular distances. However, each vertical error is related to each perpendicular error by the same multiplicative constant (the cosine of the angle made by the estimated line), so in a way, minimizing one is tantamount to minimizing the other. Where the two methods do seem to differ is that least squares allows for an intercept term, while the PCA line goes through the origin. However, when we look at the estimate of the intercept term ( b_0 = mean(y) - b_hat*mean(x) ), least squares appears to be performing a de-meaning similar to the first step in PCA. In summary, I think we would need a more thorough discussion than we see in the video in order to conclude that least squares and the first principal component of PCA are different.
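
A quick numerical check of this discussion (my own sketch on synthetic data, not from the lecture): because the angle of the candidate line, and with it the proportionality constant, changes during the minimization, least squares and the first principal component generally pick different lines even after centering.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(200)
y = 0.5 * x + rng.standard_normal(200)   # noisy linear relation (made-up data)

# Center both variables so the intercept question drops out.
x = x - x.mean()
y = y - y.mean()

# Least squares: minimize the sum of squared *vertical* distances.
slope_ls = (x @ y) / (x @ x)

# PCA: the top eigenvector of the 2x2 covariance matrix minimizes
# the sum of squared *perpendicular* distances.
C = np.cov(np.vstack([x, y]))            # uses the N-1 convention
eigvals, eigvecs = np.linalg.eigh(C)     # eigenvalues in ascending order
v = eigvecs[:, -1]                       # direction of largest variance
slope_pca = v[1] / v[0]

print(slope_ls, slope_pca)               # the two slopes differ
```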
@k.christopher
@k.christopher 5 years ago
Thank you, Prof. Gilbert.
@mathsmaths3127
@mathsmaths3127 4 years ago
Sir, you are a wonderful and beautiful mathematician. Thank you so much for teaching us and for being with us.
@Nestorghh
@Nestorghh 4 years ago
He's the best.
@tusharganguli
@tusharganguli 2 years ago
Protect this man at all costs! Now we know what an angel looks like!
@lavalley9487
@lavalley9487 1 year ago
Thanks, Pr... Very helpful!
@KipIngram
@KipIngram 3 years ago
44:40 - No, it's not making the mean zero that creates the need to use N-1 in the denominator. That's done because you are estimating the population mean via the sample mean, and because of that you will underestimate the population variance. It turns out that N-1 instead of N is an exact correction, but it's not hard to see that you need to do *something* to push your estimate up a bit.
@obarquero
@obarquero 3 years ago
Well, indeed, I guess both are saying more or less the same thing. This is called Bessel's correction. I prefer to think that dividing by N-1 yields an unbiased estimator, so that on average the sample covariance matrix is the same as the covariance matrix of the underlying distribution.
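
A tiny simulation of the point made in the two comments above (my own sketch, with made-up numbers): dividing by N-1 matches the true variance on average, while dividing by N underestimates it.

```python
import numpy as np

rng = np.random.default_rng(2)
true_var = 4.0
N = 5

# Draw many small samples and average the two variance estimates.
samples = rng.normal(0.0, np.sqrt(true_var), size=(100_000, N))
div_by_n   = samples.var(axis=1, ddof=0).mean()   # divide by N
div_by_nm1 = samples.var(axis=1, ddof=1).mean()   # divide by N-1 (Bessel's correction)

print(div_by_n, div_by_nm1)   # roughly 3.2 vs 4.0 when N = 5
```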
@haideralishuvo4781
@haideralishuvo4781 3 years ago
Can anyone explain what the relation is between the Eckart-Young theorem and PCA?
@micahdelaurentis6551
@micahdelaurentis6551 3 years ago
I just have one question not addressed in this lecture... what actual color is the blackboard?
@GeggaMoia
@GeggaMoia 3 years ago
Anyone else think he talks with the same passion for math as Walter White does for chemistry? Love this guy.
@prajwalchoudhary4824
@prajwalchoudhary4824 3 years ago
lol
@jayadrathas169
@jayadrathas169 4 years ago
Where is the follow-up lecture on PCA? It seems to be missing from the subsequent lectures.
@philippe177
@philippe177 4 years ago
Did you find it anywhere? I am dying to find it.
@krakenmetzger
@krakenmetzger 4 years ago
@@philippe177 The best explanation I've found is in a book called "Data Mining: The Textbook" by Charu Aggarwal. The tl;dr: imagine you have a bunch of data points in R^n, and you just list them as the rows of a matrix. First assume the "center of mass" (mean value of the rows) is 0. Then PCA = SVD. The biggest eigenvalue/eigenvector points in the direction of largest variance, and so on for the second, third, fourth, etc. In the case where the center of mass is not zero, SVD gives you the same data as PCA; it just takes into account that the center of mass has moved.
@vasilijerakcevic861
@vasilijerakcevic861 4 years ago
It's this lecture.
@justpaulo
@justpaulo 4 years ago
kzfaq.info/get/bejne/m99ig6hm3c-dXXU.html
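
Following up on the PCA = SVD summary above, a minimal sketch (my own, assuming the data points sit in the rows of X): the right singular vectors of the centered data matrix are exactly the eigenvectors of the sample covariance matrix, i.e. the principal directions.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((100, 3)) @ rng.standard_normal((3, 3))  # 100 points in R^3

# Step 1 of PCA: move the "center of mass" to the origin.
Xc = X - X.mean(axis=0)

# Route 1: eigenvectors of the sample covariance matrix (N-1 convention).
S = Xc.T @ Xc / (len(X) - 1)
eigvals, eigvecs = np.linalg.eigh(S)      # eigenvalues in ascending order

# Route 2: SVD of the centered data matrix.
U, sing, Vt = np.linalg.svd(Xc, full_matrices=False)

# Same directions (up to sign), and the singular values encode the variances.
print(np.allclose(np.abs(Vt[0]), np.abs(eigvecs[:, -1])))        # True
print(np.allclose(sing**2 / (len(X) - 1), eigvals[::-1]))        # True
```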
@xc2530
@xc2530 1 year ago
27:00 Multiplying matrix A by an orthogonal matrix doesn't change the norm of A.
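
A quick check of that 27:00 point (a sketch with a randomly generated A and Q): multiplying by an orthogonal Q preserves both the Frobenius norm and the 2-norm, because Q preserves lengths.

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((4, 3))
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))   # a random orthogonal matrix

for p in ('fro', 2):                               # Frobenius norm and 2-norm
    print(np.isclose(np.linalg.norm(Q @ A, p), np.linalg.norm(A, p)))  # True, True
```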
@Zoronoa01
@Zoronoa01 2 years ago
Is it my computer, or is the sound level a bit low?
@KapilGuptathelearner
@KapilGuptathelearner 5 years ago
At around 37:15, when the professor is talking about the difference between least squares and PCA: I think the minimization will lead to the same solution, since the perpendicular length is proportional to the vertical one, i.e. hypotenuse * sin(theta), where theta is the angle between the vertical line and the least-squares line, which is fixed for a particular line. I cannot understand where I am going wrong.
@AmanKumar-xl4fd
@AmanKumar-xl4fd 5 years ago
Where r u from
@KapilGuptathelearner
@KapilGuptathelearner 5 years ago
@@AmanKumar-xl4fd ??
@AmanKumar-xl4fd
@AmanKumar-xl4fd 5 years ago
@@KapilGuptathelearner jst asking
@shivammalviya1718
@shivammalviya1718 5 years ago
Very nice doubt, bro. The catch is in the theta. Suppose you first used least squares and found a line such that the error is minimal and equal to E. Then, as you said, the error in the PCA case would be sin(theta) * E. But a change in theta has a direct effect on the PCA error, since it appears in the product. So minimizing just E will not work; you should minimize the whole product, and sin(theta) is there too. I hope you get what I want to say.
@AmanKumar-xl4fd
@AmanKumar-xl4fd 5 years ago
@UCjU5LGbSp1UyWxb8w7wPE6Q u know about coding
@dingleberriesify
@dingleberriesify 4 years ago
I always thought the N-1 was related to the fact that the variance of a single observation is undefined (or at least nonsensical), so the N-1 ensures this is reflected in the maths? As well as something related to the unbiasedness of the estimator, etc.
@obarquero
@obarquero 3 years ago
This is called Bessel's correction. I prefer to think that dividing by N-1 yields an unbiased estimator, so that on average the sample covariance matrix is the same as the covariance matrix of the underlying distribution.
@user-kn4oj4ze7b
@user-kn4oj4ze7b 2 years ago
How can I find the proof of the Eckart-Young theorem mentioned in the video? Where is the link?
@mitocw
@mitocw 2 years ago
The course materials are available at: ocw.mit.edu/18-065S18. Best wishes on your studies!
@pandasstory
@pandasstory 4 years ago
Great lecture! Thank you so much, Prof. Gilbert Strang. But can anyone tell me where to find the follow-up part on PCA?
@forheuristiclifeksh7836
@forheuristiclifeksh7836 21 days ago
0:37 What is PCA?
@Enerdzizer
@Enerdzizer 5 years ago
Where is the continuation? It should have been on the Friday the professor announced, but lecture 8 is not that lecture. Right?
@joaopedrosa2246
@joaopedrosa2246 4 years ago
I'm looking for it too.
@baswanthoruganti7259
@baswanthoruganti7259 4 years ago
Desperately waiting for that Friday for MIT to upload...
@johnnyhackett199
@johnnyhackett199 2 years ago
@2:48 Why did he have the chalk in his pocket?
@forheuristiclifeksh7836
@forheuristiclifeksh7836 21 days ago
5:56 Vector and matrix norms
@vivekrai1974
@vivekrai1974 11 months ago
28:50 Isn't it wrong to say that Square(Qv) = Transpose(Qv) * (Qv)? I think Square(Qv) = (Qv) * (Qv).
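
For reference, the step at 28:50 reads the square as the squared length of the vector Qv, not a matrix multiplied by itself; a short derivation of the identity being used (my own restatement, not a quote from the lecture):

```latex
\|Qv\|^2 = (Qv)^{\mathsf{T}}(Qv) = v^{\mathsf{T}} Q^{\mathsf{T}} Q \, v
         = v^{\mathsf{T}} v = \|v\|^2,
\qquad \text{since } Q^{\mathsf{T}} Q = I \text{ for an orthogonal } Q.
```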
@Andrew6James
@Andrew6James 4 years ago
Does anyone know where the notes are?
@mitocw
@mitocw 4 years ago
Most of the material is in the textbook. There are some sample chapters available from the textbook; see the Syllabus for more information at: ocw.mit.edu/18-065S18.
@zkhandwala
@zkhandwala 5 years ago
Good lecture, but I feel it only just starts getting into the heart of PCA before it ends. I don't see a continuation of the discussion in subsequent lectures, so I'm wondering if I'm missing something.
@rahuldeora5815
@rahuldeora5815 4 years ago
Yes, you are right. Do you know any other good source for learning PCA at this quality? I'm having a hard time finding one.
@ElektrikAkar
@ElektrikAkar 4 years ago
@@rahuldeora5815 This one seems pretty nice for more information on PCA: kzfaq.info/get/bejne/gpOghNd40pm6g2w.html
@joaopedrosa2246
@joaopedrosa2246 4 years ago
@@ElektrikAkar Thanks for that, I've wasted a huge amount of time looking for a good source.
@DataWiseDiscoveries
@DataWiseDiscoveries 3 years ago
Nice lecture, loved it @@ElektrikAkar
@yidingyu2739
@yidingyu2739 4 years ago
It seems that Prof. Gilbert Strang is a fan of Gauss.
@xc2530
@xc2530 1 year ago
44:00 Covariance matrix
@naterojas9272
@naterojas9272 4 years ago
Gauss or Euler?
@sb.sb.sb.
@sb.sb.sb. 3 years ago
Ancient Indian mathematicians knew about the Pythagorean theorem and Euclidean distance.
@xc2530
@xc2530 1 year ago
31:00 PCA
@forheuristiclifeksh7836
@forheuristiclifeksh7836 21 days ago
7:09
@TheRossspija
@TheRossspija 4 years ago
16:55 There was a joke that we didn't get to hear :(
@manoranjansahu7161
@manoranjansahu7161 2 years ago
Good, but I wish a proof had been given.
@xc2530
@xc2530 1 year ago
4:26 Norms
@xc2530
@xc2530 1 year ago
Minimise: use L1
@xc2530
@xc2530 1 year ago
18:00 Nuclear norm: incomplete matrix with missing data
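
Tying these timestamp notes together, a small sketch (mine, not from the course) of the norms discussed around those points: the L1 and L2 vector norms, and the nuclear norm of a matrix, the sum of its singular values, which is the norm used when completing a matrix with missing data.

```python
import numpy as np

v = np.array([3.0, -4.0, 0.0])
print(np.linalg.norm(v, 1))       # L1 norm = 7.0 (minimizing it favors sparse solutions)
print(np.linalg.norm(v, 2))       # L2 norm = 5.0

A = np.array([[3.0, 0.0],
              [4.0, 5.0]])
sigma = np.linalg.svd(A, compute_uv=False)
print(sigma.sum())                # nuclear norm = sum of singular values
print(np.linalg.norm(A, 'nuc'))   # NumPy computes it directly as well
```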
@mr.soloden1981
@mr.soloden1981 5 years ago
Didn't understand a damn thing, but left a like :)
@AndrewLvovsky
@AndrewLvovsky 4 years ago
Haha
@kevinchen1820
@kevinchen1820 2 years ago
2022-05-26, checking in.