6. Singular Value Decomposition (SVD)

  218,478 views

MIT OpenCourseWare

5 years ago

MIT 18.065 Matrix Methods in Data Analysis, Signal Processing, and Machine Learning, Spring 2018
Instructor: Gilbert Strang
View the complete course: ocw.mit.edu/18-065S18
KZfaq Playlist: • MIT 18.065 Matrix Meth...
Singular Value Decomposition (SVD) is the primary topic of this lecture. Professor Strang explains and illustrates how the SVD separates a matrix into rank one pieces, and that those pieces come in order of importance.
License: Creative Commons BY-NC-SA
More information at ocw.mit.edu/terms
More courses at ocw.mit.edu
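
For anyone who wants to check the "rank one pieces" claim in the description numerically, here is a minimal NumPy sketch (my own illustration, not part of the course materials); the matrix A is just an arbitrary example.

```python
import numpy as np

A = np.array([[3.0, 1.0, 2.0],
              [1.0, 4.0, 0.0]])                      # arbitrary 2x3 example

U, s, Vt = np.linalg.svd(A, full_matrices=False)     # s comes sorted, largest first

# The SVD writes A as a sum of rank-one pieces sigma_i * u_i * v_i^T.
pieces = [s[i] * np.outer(U[:, i], Vt[i, :]) for i in range(len(s))]
print(np.allclose(A, sum(pieces)))                   # True: the pieces add back to A

# Keeping only the largest piece gives the best rank-1 approximation,
# and the 2-norm error is exactly the next singular value (Eckart-Young).
print(np.linalg.norm(A - pieces[0], 2), s[1])
```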

Comments: 154
@tetrabromobisphenol
@tetrabromobisphenol 2 жыл бұрын
Gilbert Strang is a treasure to the world. His books are top notch, and the MIT Open Courseware has made it possible for myself and so many others to learn at our own pace. Many thanks Dr. Strang!
@niamcd6604
@niamcd6604 7 ай бұрын
Brown nose. MIT also fails to hire anyone with brown skin and will blow out of proportion any work conducted by white old guys. Unbelievable.
@gzitterspiller
@gzitterspiller 3 жыл бұрын
I love the way he always explains everything from the beginning, he tells you the whole process.
@indrajitjana
@indrajitjana 4 жыл бұрын
Thumbs up to the camera angle management team; great video and audio editing.
@nishantraj95
@nishantraj95 Жыл бұрын
One of the best lectures on KZfaq, thanks Prof. Gilbert for explaining this beautifully.
@mauriceli6949
@mauriceli6949 5 жыл бұрын
Love Prof Strang's lectures; they enlighten me on the depth of linear algebra.
@saitaro
@saitaro 4 жыл бұрын
His explanations have always been impeccable. What a legend.
@dalisabe62
@dalisabe62 3 жыл бұрын
Awesome! I could watch the lectures over and over and each time, I gain a different and extra insight into the results and the process. Linear algebra can turn into magic after you apply its theory to the once dull computations of rows and columns. The most fascinating aspect of linear algebra is when the entire matrix's details unfold into a single effect, one that resembles the effect of simple scalars used in simple math formulas, thanks to the fundamental concepts of eigenvectors and eigenvalues. Understanding the very essence of a matrix and what it does to any vector (change in magnitude and direction) is the key to appreciating the underlying idea of the various matrix decompositions. It all boils down to how many ways we can see the workings of a matrix in terms of its decomposed forms. The SVD is one of those decompositions: it allows a matrix to be seen as a sequential transformation of direction and magnitude in terms of orthonormal bases characteristic of the matrix and its eigenspace. Such a beauty!
@akshayshrivastava97
@akshayshrivastava97 3 жыл бұрын
Thank you very much Dr. Strang for making this intuitive and natural. I had only gone through SVD in bits and pieces and never had what I could consider a good enough understanding until I saw this lecture.
@youngjim7987
@youngjim7987 3 жыл бұрын
He just saved me from the brink of breaking down trying to understand SVD! What he presented is so natural, so beautiful that I instantly caught it. Appreciate MIT OpenCourseWare also!
@FranekMag
@FranekMag 5 жыл бұрын
Fantastic lecture, really appreciate MIT sharing the course lectures for free.
@cpaniaguam
@cpaniaguam 2 жыл бұрын
This SVD business is truly poetry in motion.
@clecarosc
@clecarosc 4 жыл бұрын
This is pure gold... such a fantastic Professor!! Thank you
@gyeongchankim5423
@gyeongchankim5423 4 жыл бұрын
This lecture in specific was truly beautiful
@BigBen866
@BigBen866 Жыл бұрын
I am confident that once I've gone through a refresher up to this point, his explanation will be as clear as water 😀 Because none of my advanced mathematics professors ever explained in such detail!! Awesome professor 👍🏾🙏
@georgesadler7830
@georgesadler7830 2 жыл бұрын
Professor Strang thank you for breaking down and explaining Singular Value Decomposition in linear algebra.
@megh280193
@megh280193 2 жыл бұрын
The genius of this man is the way he explains such a difficult concept so easily..... Hats off. As per Indian ethos, I just want to touch your feet.
@vigneshbhaskaran6436
@vigneshbhaskaran6436 2 жыл бұрын
This lecture singularly helped me through the concepts on use of eigen values for capturing variability
@wxie7689
@wxie7689 2 жыл бұрын
Prof Strang is mathematically elegant in delivering linear algebra in an understandable way.
@lfalfa8460
@lfalfa8460 7 ай бұрын
God bless your existence, Prof. Strang.
@jxchtajxbt53
@jxchtajxbt53 3 жыл бұрын
Truly awesome instructor. Fantastic!
@Random-sm5gi
@Random-sm5gi 2 жыл бұрын
Splendidly explained. Thank you so much
@manoranjansahu7161
@manoranjansahu7161 2 жыл бұрын
Beautiful exposition of SVD
@xuantong1634
@xuantong1634 2 жыл бұрын
Thank you professor Strang, thank you MIT OCW
@richardcasey4439
@richardcasey4439 4 жыл бұрын
Awesome simplicity. Wish I had him in college
@chiahungmou8010
@chiahungmou8010 Жыл бұрын
Love the cool idea to light up the U and V!
@fierydino9402
@fierydino9402 8 ай бұрын
Thank you so much for this beautiful and brilliant lecture.
@Hank-ry9bz
@Hank-ry9bz 2 ай бұрын
16:23 symmetry; 28:50 geometry; 42:24 "econ" SVD
@naterojas9272
@naterojas9272 4 жыл бұрын
How wonderful!
@hanshans387
@hanshans387 2 ай бұрын
really useful lecture, thank you!
@omerblk1773
@omerblk1773 3 жыл бұрын
Amazing lesson ♥♥
@asikakhatun4885
@asikakhatun4885 3 жыл бұрын
Found it really beautiful...
@Enerdzizer
@Enerdzizer 4 жыл бұрын
It would be better not to rotate when drawing the ellipse. The ellipse should appear un-rotated at first, because multiplying by the sigmas only stretches along the axes; the rotation by the matrix U comes afterwards. I.e. when the ellipse first appears on the board it should be axis-aligned, not inclined. It is possible to draw it that way: place vectors 1 and 2 inclined at the first step; after multiplying by Vᵀ they lie along the axes; then comes the stretching, then one more rotation. Still, it is a good lecture! :)
@chriselgoog9744
@chriselgoog9744 4 жыл бұрын
You are right. For a picture see en.wikipedia.org/wiki/Singular_value_decomposition#/media/File:Singular-Value-Decomposition.svg
@coreybean8280
@coreybean8280 4 жыл бұрын
His drawing isn't necessarily wrong, if for instance the vectors in V happen to be the unit vectors
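
To see the rotate-stretch-rotate picture being debated here, a short NumPy sketch (my own, with an arbitrary 2x2 example): reading A = U Sigma V^T from right to left, the unit circle is rotated by V^T, stretched along the axes by Sigma, then rotated by U, and the right-singular vectors land on the ellipse's principal axes sigma_i * u_i.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])                       # arbitrary 2x2 example
U, s, Vt = np.linalg.svd(A)

theta = np.linspace(0.0, 2.0 * np.pi, 400)
circle = np.vstack([np.cos(theta), np.sin(theta)])    # points on the unit circle

step1 = Vt @ circle                # still a circle: pure rotation (or reflection)
step2 = np.diag(s) @ step1         # axis-aligned ellipse: stretch by sigma_1, sigma_2
step3 = U @ step2                  # final rotation: the ellipse that A maps the circle to

print(np.allclose(step3, A @ circle))                 # True: same as applying A directly
print(np.allclose(A @ Vt[0], s[0] * U[:, 0]))         # A v_1 = sigma_1 u_1
```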
@gustavstreicher4867
@gustavstreicher4867 4 жыл бұрын
Great video thanks!
@jhatzi99
@jhatzi99 7 ай бұрын
"Six angles in 4D rotations. We'll leave it there..."
@ParthPaTeL-wm3kt
@ParthPaTeL-wm3kt 6 ай бұрын
Very fruitful content
@vivaviiv
@vivaviiv 4 жыл бұрын
Amazing stuff
@nickfruneaux5232
@nickfruneaux5232 10 ай бұрын
Love this!!
@armandfavrot3210
@armandfavrot3210 3 жыл бұрын
Awesome, thanks!
@causalinference4176
@causalinference4176 4 жыл бұрын
Good lecture, but I think the geometric intepretation of SVD is its motivation. We should use SVD as a tool to understand general linear transformation
@JulieIsMe824
@JulieIsMe824 4 жыл бұрын
Love Prof.Strang so much!!!really excellent lecture!!!!
@stephanienam2218
@stephanienam2218 4 жыл бұрын
I cannot thank you enough for this clear, insightful lecture. It is very inspiring that you are calling the matrices "beautiful", and I can feel your love for them :) For me your lectures are beautiful.
@DrRandyDavila
@DrRandyDavila 5 жыл бұрын
Amazing
@shahlaalikhani6281
@shahlaalikhani6281 3 жыл бұрын
Thanks!
@hyperduality2838
@hyperduality2838 4 ай бұрын
Poles (eigenvalues) are dual to zeros -- optimized control theory. The standard basis is dual to the eigen vector basis -- symmetric matrices. Stretch is dual to squeeze -- eigenvalues, forces are dual, ellipsoids are dual to circles or hyperspheres. "Always two there are" -- Yoda. Syntax is dual to semantics -- languages or communication. If mathematics is a language then it is dual. The teacher (Gilbert Strang, Yoda) is dual to the pupil (You, Luke Skywalker) -- the Hegelian dialectic.
@nicolaaseltrut8040
@nicolaaseltrut8040 2 жыл бұрын
Salute!
@allyourcode
@allyourcode 3 жыл бұрын
@6:49 The fact that A^T * A and A * A^T have the same non-zero eigenvalues is blowing my mind right now. I guess this is intimately related to the fact that det(B) = det(B^T), since det is where we get the characteristic polynomial/eigenvalues...
@allyourcode
@allyourcode 3 жыл бұрын
I think it turns out that this is not actually used, so no need to lose sleep over it, I guess...
@dariuszspiewak5624
@dariuszspiewak5624 Жыл бұрын
@@allyourcode You're mistaken. It is not only used but it's crucial for this to work. The fact that the eigenvalues are the same is what enables to have the same Sigma matrix for both of the matrices, AA' and A'A. If that was not the case, well, the technique would lead us astray.
@freeeagle6074
@freeeagle6074 Жыл бұрын
In a previous lecture, professor mentioned that eigenvalues of AB and BA (similar matrices) are the same. If we consider A' as B and A as A, then A'A and AA' should have the same eigenvalues.
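
A one-line argument for the shared nonzero eigenvalues (a standard fact, stated here in the thread's notation):

$$
A^T A\,v = \lambda v,\ \lambda \neq 0 \;\Longrightarrow\; A A^T (A v) = A\,(A^T A\,v) = \lambda\,(A v),
$$

and Av is not the zero vector (otherwise AᵀAv = λv would be zero), so Av is an eigenvector of AAᵀ with the same eigenvalue λ; swapping the roles of A and Aᵀ gives the other direction. As noted above, this is exactly what lets one Σ serve both AᵀA = VΣᵀΣVᵀ and AAᵀ = UΣΣᵀUᵀ.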
@wasabibleach
@wasabibleach 2 жыл бұрын
Actually better than his textbook :)
@StarryNightSky587
@StarryNightSky587 5 ай бұрын
„6 angles in 4 dimensions, we leave it there“ 😂
@peterpark2981
@peterpark2981 3 ай бұрын
Respect
@xc2530
@xc2530 Жыл бұрын
50:00 the important/biggest/principal part of matrix
@renanwillianprado9187
@renanwillianprado9187 3 жыл бұрын
If A^T A V = V S is the spectral decomposition of A^T A, with S diagonal and V orthogonal, we have (A V)^T (A V) = S. Hence each nonzero column of AV (the ones associated with nonzero eigenvalues of A^T A) is orthogonal to every other nonzero column of AV. So it is possible to find an orthogonal matrix U such that A V = U sqrt(S) -- note that the diagonal of S gives us the squared norm of each column of AV, which is why this works out. This means A = U sqrt(S) V^T. Completing U, if necessary, gives the SVD. In practice, after computing the spectral decomposition of A^T A, we would compute the kernel of (AV)^T -- the orthogonal complement of Im(AV) -- and find an orthonormal basis for it; of course, all the singular values associated with that part are zero.
@renanwillianprado9187
@renanwillianprado9187 3 жыл бұрын
P.S.: In the picture at 32:35, the u_1 and u_2 should be a_1 and a_2, after applying the rotation U.
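
A rough NumPy sketch of the construction described in the comment above (my own code, using a full-column-rank example so that no singular value is zero and U needs no completion step):

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [1.0, 1.0],
              [0.0, 2.0]])                  # 3x2 with rank 2

# Spectral decomposition of A^T A: eigenvalues lam >= 0, orthonormal eigenvectors V.
lam, V = np.linalg.eigh(A.T @ A)
order = np.argsort(lam)[::-1]               # reorder so that sigma_1 >= sigma_2
lam, V = lam[order], V[:, order]

sigma = np.sqrt(lam)
U = (A @ V) / sigma                         # column i is A v_i / sigma_i

print(np.allclose(U.T @ U, np.eye(2)))                 # the u_i are orthonormal
print(np.allclose(A, U @ np.diag(sigma) @ V.T))        # A = U Sigma V^T
```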
@pradeepkumar9586
@pradeepkumar9586 2 жыл бұрын
Love....
@henryzhu7309
@henryzhu7309 4 жыл бұрын
38:35 pilot in class
@yogeshpasari7400
@yogeshpasari7400 3 жыл бұрын
Legend
@xc2530
@xc2530 Жыл бұрын
42:00 use SVD, the smallest SVD and square SVD
@RamakrishnanAngaraiGanesan
@RamakrishnanAngaraiGanesan 2 жыл бұрын
How can any mXn matrix A be factorized into SQ, where both are square matrices (symmetric and orthogonal)? The dimensions are not matching. Or, is this only for a square matrix A?
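
On the dimension question above: the S Q factorization on the board is the polar decomposition, which is usually stated for a square A, where it falls straight out of the SVD. A short sketch of the square case (a standard identity, not a quote from the lecture):

$$
A = U\Sigma V^T = (U\Sigma U^T)(U V^T) = S\,Q,\qquad S = U\Sigma U^T \ \text{symmetric positive semidefinite},\quad Q = U V^T \ \text{orthogonal}.
$$

For a rectangular A the two square factors no longer conform; one then works with a semi-orthogonal Q (orthonormal columns) and a square symmetric factor on one side instead.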
@Srinishplaytime
@Srinishplaytime 2 жыл бұрын
legend
@kingmanzhang
@kingmanzhang 4 жыл бұрын
Magic
@evanparshall1323
@evanparshall1323 2 жыл бұрын
One thing that I find confusing is that because the sigma matrix multiplies V transposed from the left, it is not pure scaling. The sigma values multiply the rows of V transposed and not the columns. This will cause additional rotation and changes in dimensionality.
@tdchayes
@tdchayes 2 жыл бұрын
You need to do the steps in sequence. Yes, if you consider the operations from left to right you get mixed scaling and rotation. But considered from right to left, it's a (pure) rotation, a (pure) scaling) and a final rotation.
@vigneshrb2529
@vigneshrb2529 4 ай бұрын
Amazing explanation by Gilbert sir as always but I have a small doubt. We find the U vectors from the V vectors because of the reason that there might be a collection of Vectors in U to choose from which are orthogonal but what happens when even the V vectors are a collection of vectors from where we have to choose the orthogonal one's?
@karimsaid2248
@karimsaid2248 5 жыл бұрын
@37 Doesn't rotation in 3d space require two angles only, theta(zenith) and phi(azimuth)?
@DeepakSingh-xt5io
@DeepakSingh-xt5io 4 жыл бұрын
for 3d rotation we need roll,pitch, and yaw; you can specify zenith with pitch, azimuth with yaw, so we need roll parameter too.
@gustavstreicher4867
@gustavstreicher4867 4 жыл бұрын
You can think of it as 3 choose 2, which gives 3. Two directions combine to form a single rotation from one vector to the other. So the number of angles is D choose 2, where D is the dimension of your space. Four dimensions will thus require 6 angles, etc.
@codywan3816
@codywan3816 4 жыл бұрын
I think 2 angles are enough to specify the direction. Another parameter to specify the magnitude. I'm not a native speaker of English and didn't understand what they meant by roll, pitch and yaw physics.stackexchange.com/questions/175354/why-are-three-parameters-required-to-express-rotation-in-3-dimension
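
Making the counting in the replies explicit: a rotation in R^D is built from rotations in the coordinate planes, and there are "D choose 2" such planes,

$$
\binom{2}{2}=1,\qquad \binom{3}{2}=3\ \text{(roll, pitch, yaw)},\qquad \binom{4}{2}=6,
$$

which matches the "six angles in 4D" remark in the lecture.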
@mrudhulraj2824
@mrudhulraj2824 3 жыл бұрын
Feels like we can do math on our own when we intuit it better.
@SophySongTan
@SophySongTan 2 жыл бұрын
|det|,absolute value
@MohakKant
@MohakKant 3 жыл бұрын
One question I have here is with the rotation and stretching - if A is a rectangular matrix does it not project x into a lower or higher dimensional space?
@yuwuxiong1165
@yuwuxiong1165 3 жыл бұрын
Projections are square matrices in the first place, I guess.
@k10shetty
@k10shetty 3 жыл бұрын
26:06 What did the professor mean by-> orthogonal v's in the "row space" ?
@thebigmonstaandy6644
@thebigmonstaandy6644 3 жыл бұрын
I think he said it because the v's belong to V^T (V transposed), so the v^T's are the rows.
@Bridgelessalex
@Bridgelessalex 5 жыл бұрын
I thought A^{T} A is positive semidefinite isn't it?
@zkhandwala
@zkhandwala 5 жыл бұрын
I was going to respond the same thing. I believe that's correct. The general argument, however, still holds.
@gzitterspiller
@gzitterspiller 3 жыл бұрын
that is why he writes positive or equal to 0.
@xaviert7224
@xaviert7224 2 жыл бұрын
Yes I think you are right.
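
For completeness, the one-line reason behind the thread above (a standard fact): AᵀA is positive semidefinite, and positive definite only when A has independent columns, because

$$
x^T (A^T A)\, x = (Ax)^T (Ax) = \lVert Ax \rVert^2 \;\ge\; 0,
$$

with equality exactly when x lies in the null space of A; so every eigenvalue σᵢ² is ≥ 0.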
@gemacabero6482
@gemacabero6482 3 жыл бұрын
When U and V transposed are the same, do these matrices represent a change of basis matrix? Because for this to happen, A has to be positive symmetric, so then the singular values of A would equal the eigenvalues of A right? So if A represents a transformation , then the diagonal matrix sigma could be understood as that same transformation matrix but in the basis of eigenvalues, so then U = V transposed, should be change of basis matrices ???? But when are change of basis matrices equal to each other???? Thank you!!
@dariuszspiewak5624
@dariuszspiewak5624 Жыл бұрын
When transposed they are not the same. They are orthonormal which means when multiplication happens - UU' or VV' - the outcome is an identity matrix in the correct space (m-by-m or n-by-n).
@BleachWizz
@BleachWizz 3 жыл бұрын
You should really rerecord the previous 5 classes with functions on the place of numbers and call it tensor calculus - introduction
@NolanZewariligon
@NolanZewariligon 4 жыл бұрын
Where can we find the notes professor Strang mentions in the videos? ( 45:17 )
@mitocw
@mitocw 4 жыл бұрын
The materials for the course are available on MIT OpenCourseWare at: ocw.mit.edu/18-065S18. Best wishes on your studies!
@mitocw
@mitocw 4 жыл бұрын
Prof Strang's notes were turned into his most recent book -- visit the OCW course site for links and the Reading section for guidance on what chapters go with the lectures: ocw.mit.edu/18-065S18.
@jorgeblanco9797
@jorgeblanco9797 5 жыл бұрын
I wish I had gotten into MIT.
@dectorey7233
@dectorey7233 5 жыл бұрын
To be fair the stress MIT students go through their undergrad/grad careers is inconceivable. Though it's incredible how much you learn even in one lecture at this school, let alone an entire 4 years
@jorgeblanco9797
@jorgeblanco9797 5 жыл бұрын
@@dectorey7233 lol I know, I have friends that are going there. The summer programs I did there were tough too.
@mohamedabdallah7354
@mohamedabdallah7354 4 жыл бұрын
How can I insert U^T U after sigma, given that the matrix sizes don't satisfy the conditions for multiplication?
@EternusVia
@EternusVia 3 жыл бұрын
I think he is referring to a square matrix A in that case
@xc2530
@xc2530 Жыл бұрын
45:30 second approach to find SVD
@xc2530
@xc2530 Жыл бұрын
Polar decomposition
@kevinshao9148
@kevinshao9148 2 жыл бұрын
One question, Prof. and guys: how come the sigma in the middle is diagonal (square)? Shouldn't its dimensions be (m, n)? Thanks!
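
A small NumPy check of the shapes in question (my own example, not from the lecture): in the full SVD, Sigma is m x n and "diagonal" only in the rectangular sense, while the reduced ("econ") SVD mentioned at 42:24 trims it to a square.

```python
import numpy as np

A = np.random.default_rng(0).standard_normal((5, 3))   # m = 5, n = 3

# Full SVD: U is 5x5, V^T is 3x3, so Sigma must be padded out to 5x3.
U, s, Vt = np.linalg.svd(A, full_matrices=True)
Sigma = np.zeros((5, 3))
Sigma[:3, :3] = np.diag(s)
print(U.shape, Sigma.shape, Vt.shape)          # (5, 5) (5, 3) (3, 3)
print(np.allclose(A, U @ Sigma @ Vt))          # True

# Reduced ("econ") SVD: drop the last m-n columns of U; Sigma becomes square.
Ur, sr, Vtr = np.linalg.svd(A, full_matrices=False)
print(Ur.shape, np.diag(sr).shape, Vtr.shape)  # (5, 3) (3, 3) (3, 3)
print(np.allclose(A, Ur @ np.diag(sr) @ Vtr))  # True
```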
@kevinshao9148
@kevinshao9148 2 жыл бұрын
@What could go wrong? - Niagara edition Ah, that makes sense. Thanks for your explain!
@diegooliveira7713
@diegooliveira7713 2 жыл бұрын
I love prof. Strang and I'm very grateful for his lectures, but I have to admit I don't find this proof convincing. Correct me if I'm wrong, but the project he starts out with (@3:32) is to prove that every matrix A has this decomposition. But @13:26, he starts out by **assuming** that A has this decomposition, and then he deduces what U, Σ, and Vᵀ must be. But that doesn't show that every matrix A has this decomposition. Instead, it shows that *if A has this decomposition*, then *that's what U, Σ, and Vᵀ must be*. Even if we were to argue that we can obtain such U, Σ, and Vᵀ for any matrix A, prof. Strang doesn't show that by multiplying UΣVᵀ we will get back A for any A. I struggled with this topic for months until I finally could arrange everything into what I believe is a proof for why every matrix A can be decomposed this way.
@dariuszspiewak5624
@dariuszspiewak5624 Жыл бұрын
Can you please share your proof with us? That would be awesome. Thank you!
@diegooliveira7713
@diegooliveira7713 1 year ago
​@@dariuszspiewak5624 Hi. I'm going to paste what I've written in my notes. The non-mathematical notation was written in Portuguese (my native language) and is translated here. In any case, I haven't been peer-reviewed, so I cannot guarantee the accuracy of this proof.

[Theorem] If A is mxn and Rank(A) = r, then A = UΣVᵀ, where:

Vᵀ (nxn) is orthogonal and has in its first r rows the unit eigenvectors of AᵀA whose eigenvalues are greater than zero, and in its remaining n-r rows the unit eigenvectors of AᵀA whose eigenvalues are exactly zero. The first r rows of Vᵀ are a basis for C(Aᵀ), and the remaining n-r rows of Vᵀ are a basis for N(A).

Σ (mxn) is diagonal (rectangular if m≠n) and has in its first r diagonal entries the square roots of the eigenvalues of AᵀA that are greater than zero (which are the same nonzero eigenvalues as those of AAᵀ), ordered to match the rows of Vᵀ; all other entries are 0.

U (mxm) is orthogonal, and the i-th of its first r columns is Avᵢ/σᵢ, where σᵢ = √λᵢ (λᵢ = the eigenvalue of vᵢ). Those r columns form a basis for C(A), and the remaining m-r columns form any orthonormal basis for N(Aᵀ). It is also the case that the first r columns of U are unit eigenvectors of AAᵀ whose eigenvalues are greater than zero, and the remaining m-r columns of U are unit eigenvectors of AAᵀ whose eigenvalues are exactly zero.

[Proof]: Consider A mxn (possibly m≠n), with Rank(A) = r (r can be anything).

Quick recap: the column space C(A) and the left null space N(Aᵀ) live in ℝᵐ, and the row space C(Aᵀ) and the null space N(A) live in ℝⁿ. The dimension of both C(A) and C(Aᵀ) is r.

Call {v₁, v₂, ..., vᵣ} the orthonormal eigenvectors of AᵀA whose eigenvalues are greater than 0, and {vᵣ₊₁, vᵣ₊₂, ..., vₙ} the orthonormal eigenvectors of AᵀA whose eigenvalues are exactly 0. These sets must always exist, because: AᵀA (nxn) is symmetric, and therefore has a spectral decomposition QΛQ⁻¹, which makes its n eigenvectors orthogonal; normalizing them gives an orthonormal set. AᵀA is positive semidefinite, so its eigenvalues are greater than or equal to zero. Since Rank(AᵀA) = Rank(A) = r, by the rank-nullity theorem dim(N(AᵀA)) = n-r, and therefore, by Gram-Schmidt, one can find an orthonormal basis of n-r vectors for N(AᵀA). As we know, every linearly independent vector in the null space of a matrix is an eigenvector of that matrix with eigenvalue zero, so there are at least n-r orthonormal eigenvectors of AᵀA with eigenvalue zero. In fact there are exactly n-r, because one more orthonormal eigenvector with eigenvalue zero would make dim(N(AᵀA)) larger than n-r. So the set {vᵣ₊₁, ..., vₙ} can always be obtained. The set {v₁, ..., vᵣ} is obtained simply by selecting the orthogonal eigenvectors of AᵀA whose eigenvalues are greater than zero; they must exist, because all the eigenvectors with eigenvalue zero are in the other set, and the eigenvalues of AᵀA are either zero or positive. As we have seen, {vᵣ₊₁, ..., vₙ} is a basis for N(AᵀA), and, as already proved, N(AᵀA) = N(A).

As already proved: if V is a subspace of ℝⁿ with dim(V) = r, then its orthogonal complement V⟂ is a subspace orthogonal to V with dim(V⟂) = n-r. The subspace spanned by {v₁, ..., vᵣ} lives in ℝⁿ and has dimension r, and the subspace spanned by {vᵣ₊₁, ..., vₙ}, which is N(A), is orthogonal to it and has dimension n-r. Therefore the subspace spanned by {v₁, ..., vᵣ} is the orthogonal complement of N(A), which as we know is C(Aᵀ). So {v₁, ..., vᵣ} is an orthonormal set that is a basis for C(Aᵀ), while {vᵣ₊₁, ..., vₙ} is an orthonormal set that is a basis for N(A).

Call {u₁, ..., uᵣ} the vectors u₁ = Av₁/σ₁, u₂ = Av₂/σ₂, ..., uᵣ = Avᵣ/σᵣ, where σᵢ = √λᵢ. These vectors always exist because λ₁, ..., λᵣ are greater than zero.

The {u₁, ..., uᵣ} are orthogonal: for i≠j, uᵢᵀuⱼ = (Avᵢ/σᵢ)ᵀ(Avⱼ/σⱼ) = (1/(σᵢσⱼ)) vᵢᵀAᵀAvⱼ. Since vⱼ is an eigenvector of AᵀA, AᵀAvⱼ = σⱼ²vⱼ, so uᵢᵀuⱼ = (σⱼ²/(σᵢσⱼ)) vᵢᵀvⱼ = 0, because the vᵢ's are orthonormal.

The {u₁, ..., uᵣ} are unit vectors: ||uᵢ|| = √(uᵢᵀuᵢ) = √((1/σᵢ²) vᵢᵀAᵀAvᵢ) = √((σᵢ²/σᵢ²) vᵢᵀvᵢ) = √1 = 1.

Therefore {u₁, ..., uᵣ} is orthonormal. Because uᵢ = Avᵢ/σᵢ, each uᵢ belongs to C(A) (it is a linear combination of the columns of A). Since Rank(A) = r, these r linearly independent vectors must be a basis for C(A).

Call {uᵣ₊₁, ..., uₘ} an orthonormal basis for N(Aᵀ); such a basis can always be found by Gram-Schmidt.

Set V = [v₁ v₂ ... vₙ] (orthogonal), U = [u₁ u₂ ... uₘ] (orthogonal), and let Σ be the mxn matrix whose first r diagonal entries are σ₁, ..., σᵣ and whose other entries (diagonal and off-diagonal) are 0; note that Σ is diagonal (rectangular if m≠n).

Why AV = UΣ: the i-th column of AV is simply Avᵢ. For the first r columns this is σᵢuᵢ, since uᵢ = Avᵢ/σᵢ; the remaining n-r columns are 0, since Avᵣ₊₁ = ... = Avₙ = 0 (those vectors form a basis for N(A)). So AV = [σ₁u₁ σ₂u₂ ... σᵣuᵣ 0 ... 0], with n-r zero columns, and AV is mxn (A is mxn, V is nxn). Now look at UΣ, also mxn (U is mxm, Σ is mxn). Because Σ is diagonal with r nonzero diagonal entries and n-r zero ones, its first column is (σ₁, 0, ..., 0)ᵀ, its second is (0, σ₂, 0, ..., 0)ᵀ, ..., its r-th has σᵣ in the r-th position, and its remaining n-r columns are 0. Therefore UΣ = [σ₁u₁ σ₂u₂ ... σᵣuᵣ 0 ... 0]. Hence AV = UΣ. Since V is orthogonal, its inverse is Vᵀ.

Post-multiplying both sides by Vᵀ: A = UΣVᵀ. Hence every matrix has a singular value decomposition.

Now look at AAᵀ: AAᵀ = UΣVᵀVΣᵀUᵀ. Because V is orthogonal, VᵀV = I, so AAᵀ = UΣΣᵀUᵀ. Since Σ is an mxn diagonal matrix, ΣΣᵀ is mxm diagonal, and because the first r diagonal entries of Σ are positive and the rest are zero, the first r diagonal entries of ΣΣᵀ are positive and the rest are zero. Call D := ΣΣᵀ. Then AAᵀ = UDUᵀ, and since U is orthogonal, Uᵀ = U⁻¹, so AAᵀ = UDU⁻¹. The right-hand side is a diagonalization of AAᵀ, so the vectors in U must be eigenvectors of AAᵀ: the first r columns of U are unit eigenvectors of AAᵀ whose eigenvalues are greater than zero, and the remaining m-r columns are unit eigenvectors of AAᵀ whose eigenvalues are exactly zero.
@flyflyflyfly7757
@flyflyflyfly7757 Жыл бұрын
How to prove A times A transpose has the same non-trivial eigenvalue as A transpose times A?
@janferkl7649
@janferkl7649 3 жыл бұрын
32:12 I do not understand why the resulting vectors are called u1 and u2. I expect Ax1 and Ax2 to be there as the result of multiplying by A. If multiplying by U is a rotation, then u1 and u2 are just rotated unit vectors (exactly like the v's, just a different rotation)... Can somebody prove me wrong please?
@HKwak
@HKwak 3 жыл бұрын
I think it is correct to write σ_i*u_i not just u_i. U is just the rotation, so σ_i should remain there.
@k10shetty
@k10shetty 3 жыл бұрын
5:06 did he mean positive semi definite ?
@voppenheimer
@voppenheimer 3 жыл бұрын
Professor Strang mentions that U and V form bases for the four fundamental subspaces of A, but it's not clear to me how C(A) = C(Ur) and C(A') = C (Vr'). I know that U and V were determined by the eigenvectors of AA' and A'A, respectively, but how are these related to the column and row spaces of A?
@yuwuxiong1165
@yuwuxiong1165 3 жыл бұрын
i think the structure of Sigma tells that.
@dariuszspiewak5624
@dariuszspiewak5624 Жыл бұрын
Here's how I understand it. They are connected to the column and row spaces by the fact that they constitute bases for those spaces, which in turn means there are r of them (rank of the matrix) and they are orthogonal (within each basis, that is to say, because the row space and column space generally belong to two different spaces if the matrix is not square). They are not columns or rows from the matrix itself. They can be constructed from the matrix (especially the basis for the row space), though, by the Gram-Schmidt process. I still need to confirm how the ortho-basis for the column space comes into life... By the way, it just so happens (miraculously) that the U and V matrices that store orthonormal vectors contain the eigenvectors of A'A and AA'. It's just a strange and beautiful fact. Additionally, the sigma matrix is built from the square roots of the eigenvalues which all happen to be real non-negative for the symmetric matrices A'A and AA'. Obviously, since the entries in U and V are eigenvectors belonging to the row/column-spaces, they definitely are linear combinations of the rows/columns (and there are r of them). Is that clearer?
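
A compact way to see the subspace statement asked about above (standard argument, in the lecture's notation): every column of A is a combination of u₁, ..., uᵣ, because in

$$
A e_j \;=\; U\,\Sigma\,(V^T e_j)
$$

the vector Σ(Vᵀeⱼ) has nonzero entries only in its first r slots. So C(A) ⊆ span(u₁, ..., uᵣ), and since both subspaces have dimension r they are equal. Applying the same argument to Aᵀ = VΣᵀUᵀ shows that v₁, ..., vᵣ span the row space C(Aᵀ).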
@thomassun3046
@thomassun3046 27 күн бұрын
I have a question, could anyone answer it? Many thanks! How can v1, v2 be in the row space? (That u1, u2 are in the column space is obviously true.)
@rezab314
@rezab314 3 жыл бұрын
26:35 ☺️
@piotrekm7943
@piotrekm7943 8 ай бұрын
OK, but if AA^T = U \Lambda U^T and A^TA = V \Lambda V^T, what is the derivation of the formula AV = U \Sigma?
@mariap.9768
@mariap.9768 Жыл бұрын
4:49 There is an error in the subtitles. Should say 'It's symmetric', not 'It's a metric'
@user-ff9oq6jg9c
@user-ff9oq6jg9c 2 жыл бұрын
Why do U and V mean rotation? Why not reflection?
@neoneo1503
@neoneo1503 3 жыл бұрын
The proof at 23:00 shows why v1, v2, ..., vr being orthogonal means u1, u2, ..., ur are orthogonal, via A*v = sigma*u.
@Unstable_Diffusion89
@Unstable_Diffusion89 3 жыл бұрын
@25:00 Professor Strang manipulates V_1^t * (A^T * A) *V_2 into V_1^t * (∂_2 ^2) *V_2...... How does A^T * A = ∂_2 ^2 exactly?
@user-rc6ik6rq4s
@user-rc6ik6rq4s 3 жыл бұрын
Those v's are just eigenvectors of A^TA. Then we have A^TA v = ∂^2 v.
@johnnash8515
@johnnash8515 3 жыл бұрын
It just comes from Ax = λx.
@intuitivecontrolsystem4436
@intuitivecontrolsystem4436 2 жыл бұрын
1. Start with Ax = ∂ x and keep the transpose x^T A^T= x^T ∂ ready 2. Multiply A^T both sides thus, A^T Ax = A^T ∂ x 3. Multiply x^T both sides thus, x^T A^T Ax = x^T A^T ∂ x 4. Replace x^T A^T with x^T ∂ 5. Thus, x^T(A^T A)x =x^T (∂ ^2 ) x So you believe (∂ ^2 ) are the eigenvalues of A^T A, so are the sigma^2
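
Spelling out the step at 25:00 explicitly (it is just the eigenvector property of v₂, as the first reply says), the full chain used to show u₁ ⟂ u₂ is:

$$
u_1^T u_2 \;=\; \frac{(A v_1)^T (A v_2)}{\sigma_1 \sigma_2}
\;=\; \frac{v_1^T (A^T A\, v_2)}{\sigma_1 \sigma_2}
\;=\; \frac{\sigma_2^2\, v_1^T v_2}{\sigma_1 \sigma_2}
\;=\; 0,
$$

because AᵀA v₂ = σ₂² v₂ and v₁ᵀv₂ = 0.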
@brendanlydon5272
@brendanlydon5272 3 жыл бұрын
I am not entirely sure if this is correct ( plz say if I am not), but I’m also pretty sure U, sigma, and v-transpose can also be called LDL-transpose or also LDU for L is lower D diagonal and U for upper
@matiassandacz9145
@matiassandacz9145 4 жыл бұрын
I think he made an error. He said the determinant of an orthogonal matrix is 1, but actually it is 1 or -1. Then Det(A) = + or - the product of the singular values.
@papaonn
@papaonn 4 жыл бұрын
Actually it refers to the symmetric matrix, so he simply ignores the negative value, because the symmetric matrix here is positive semidefinite (its eigenvalues are nonnegative), and the orthogonal matrix here refers to the eigenvector matrix.
@imrematajz1624
@imrematajz1624 4 жыл бұрын
Matías Sandacz Hi, I think he was only talking about (semi-) definite matrices in which case Prof Strang was right.
@lunablind1456
@lunablind1456 3 жыл бұрын
​@@imrematajz1624 I think @Matías Sandacz refers to the U and V matrices. Those are orthogonal which means the determinant is either 1 or -1. If it is 1 it would be a rotation. If it -1 it would be a reflection. At 29:46 he even says that it could be a reflection. Can someone please confirm this?
@goutamchatterjee1196
@goutamchatterjee1196 3 жыл бұрын
Can someone please explain why A v1 = sigma1 u1?
@jimmygan801
@jimmygan801 3 жыл бұрын
wait... this is the dude that wrote the textbook!!!???
@learningstatistics1290
@learningstatistics1290 3 жыл бұрын
26:09 I doubt the statement that v must be in the row space.
@jayhene830
@jayhene830 3 жыл бұрын
(A^T A)v = cv, i.e. A^T(Av) = cv, so v = A^T(Av)/c is a linear combination of the columns of A^T, which means v is in the column space of A^T. Therefore it is in the row space of A.
@ZhaoXiliang
@ZhaoXiliang 2 жыл бұрын
A = SQ: must A be a square matrix here?
@mosesgu0415
@mosesgu0415 Жыл бұрын
Since we can get free lectures from MIT, is it really necessary to go to college? I bet that, wherever that college is, MIT's lectures are better in quality.
@ajiteshbhan
@ajiteshbhan 4 жыл бұрын
I have a query, guys: at 36:00 the professor speaks about the number of components in U, Sigma, V just by looking at A for 2x2, 3x3 and 4x4. How do we count them for rectangular matrices? Any help would be appreciated.
@abhishekshukla6466
@abhishekshukla6466 2 жыл бұрын
Why are singular values always positive?
@anamericanprofessor
@anamericanprofessor 5 жыл бұрын
sigma 1 >= sigma 2
@elvykamunyokomanunebo1441
@elvykamunyokomanunebo1441 Жыл бұрын
Has anybody else spotted the error @25:21? A^T A = sigma_2^2 / (sigma_1 * sigma_2), is that an error there? sigma_2^2 relates to the vector v_2, while A^T A relates to sigma^2, a diagonal matrix, i.e. A^T A = V sigma^2 V^T. Or am I wrong? If so, I would appreciate an explanation :)
@vivianhuang5647
@vivianhuang5647 Жыл бұрын
We know that A^T*A = V(E^T*E)V^T in which V2 is an eigenvector to A^T*A, and sigma2^2 is its eigenvalue (from E^T*E). Then (A^T*A)*V2= (sigma2^2)*V2.
@elvykamunyokomanunebo1441
@elvykamunyokomanunebo1441 Жыл бұрын
@@vivianhuang5647 Thanks :)
@cl3826
@cl3826 4 жыл бұрын
Why is UtU = I?
@erikdzul
@erikdzul 4 жыл бұрын
U is an orthogonal matrix (its columns are orthonormal), so UᵀU = UUᵀ = I.
@SantosPerezAttorney
@SantosPerezAttorney 2 жыл бұрын
Am getting an SVD tattoo.
@menolmunes5198
@menolmunes5198 Жыл бұрын
26:35😅
@coolarun3150
@coolarun3150 Жыл бұрын
speed 1.25 is perfect