9. Four Ways to Solve Least Squares Problems

117,947 views

MIT OpenCourseWare

5 years ago

MIT 18.065 Matrix Methods in Data Analysis, Signal Processing, and Machine Learning, Spring 2018
Instructor: Gilbert Strang
View the complete course: ocw.mit.edu/18-065S18
YouTube Playlist: • MIT 18.065 Matrix Meth...
In this lecture, Professor Strang details the four ways to solve least-squares problems. Solving least-squares problems comes into play in the many applications that rely on data fitting.
License: Creative Commons BY-NC-SA
More information at ocw.mit.edu/terms
More courses at ocw.mit.edu

Comments: 48
@intuitivej9327
@intuitivej9327 2 years ago
Fantastic lecture... I am so lucky to watch and learn. I am an ordinary mother of two little girls, and I have recently found myself experiencing the full joy of learning linear algebra. It all happened because of this wonderful lecturer and MIT. Thank you for sharing it with us. I am going to continue to learn. Many thanks from Korea.
@patf9770
@patf9770 3 years ago
MIT has still got it. What a time to be alive that we can watch this for free
@brendawilliams8062
@brendawilliams8062 2 years ago
I am sure happy about it.
@SheikhEddy
@SheikhEddy 5 years ago
Thanks for the lecture! I've tried to learn these things before and come away more confused than when I started, but Dr. Strang's approach makes it all seem so simple!
@kristiantorres1080
@kristiantorres1080 3 years ago
Who dares to dislike this masterpiece of a lesson?
@iwonakozlowska6134
@iwonakozlowska6134 5 years ago
The "idea" of orthogonal projection allowed me to understand the Christoffel symbols. I "studied" all the lectures of MIT 18.06, but I am still discovering linear algebra anew. Thanks G.S., thanks MIT.
@georgesadler7830
@georgesadler7830 2 years ago
Professor Strang, thanks for showing different ways to solve least squares problems in linear algebra and statistics. Least squares is used every day to fit data.
@abay669
@abay669 2 years ago
I wish you were my professor, Mr. Strang, but hey, I have you as my professor here online. Thank you very much for your elegant explanation. Wishing you good health and a long life, Mr. Strang.
@chiahungmou7351
@chiahungmou7351 1 year ago
The last two minutes on Gram-Schmidt are really remarkable; two minutes is hardly enough time to see the heart of that mathematical machine.
@omaraymanbakr3664
@omaraymanbakr3664 8 days ago
Ruthless: 25 people have disliked this video. Who dares to dislike a lecture by Prof. Strang!!
@unalcachofa
@unalcachofa 5 years ago
The first question from the problem set asks for the eigenvalues of A+ when A is square. I know that A and A+ have the same number of zero eigenvalues, but I'm stuck searching for a relationship for the nonzero ones. Any hints? I checked numerically and verified that they are not 1/λ_i, as one might have conjectured.
@daweedcito
@daweedcito 2 years ago
Thought I'd be watching for 5 minutes; ended up staying for the whole class...
@mengyuwang5159
@mengyuwang5159 5 years ago
One thing in question in the lecture: u_i, not v_i, is in the column space of A. v_i should be in A's row space.
@user-jt7kw4jf9o
@user-jt7kw4jf9o 1 year ago
Thanks, I agree with you. It gave me trouble when I first saw it too.
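A small NumPy check of this correction (illustrative, not from the lecture): from A = U S V^T we get A v_i = sigma_i u_i, so each u_i with sigma_i > 0 lies in C(A), and A^T u_i = sigma_i v_i puts v_i in the row space.

import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3))                       # hypothetical data
U, S, Vt = np.linalg.svd(A, full_matrices=False)

for i, sigma in enumerate(S):
    assert np.allclose(A @ Vt[i], sigma * U[:, i])    # u_i is in C(A)
    assert np.allclose(A.T @ U[:, i], sigma * Vt[i])  # v_i is in C(A^T)
print("u_i in the column space, v_i in the row space: confirmed")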
@shiv093
@shiv093 5 years ago
lecture starts at 5:18
@oldcowbb
@oldcowbb 4 years ago
Thanks, I have 5 more minutes to study for the final now.
@alecunico
@alecunico 3 years ago
Hero
@ryanjackson0x
@ryanjackson0x 2 years ago
I am not skipping anything from Strang
@paganisttc
@paganisttc 3 years ago
The best of the best.
@hasan0770816268
@hasan0770816268 4 years ago
Least squares problem: solving a system with more equations than unknowns, i.e. a non-square matrix A. Since a non-square matrix has no inverse, we solve the normal equations A^T A x = A^T b instead; when even A^T A is singular, we fall back on the SVD and the pseudoinverse.
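A minimal NumPy sketch of the two routes this comment describes (hypothetical data; with independent columns both give the same x):

import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((10, 3))    # more equations than unknowns
b = rng.standard_normal(10)

x_normal = np.linalg.solve(A.T @ A, A.T @ b)  # normal equations (Gauss)
x_pinv   = np.linalg.pinv(A) @ b              # SVD route: x = A+ b

print(np.allclose(x_normal, x_pinv))          # True for independent columns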
@forheuristiclifeksh7836
@forheuristiclifeksh7836 19 days ago
5:41 least squares
@thatsfantastic313
@thatsfantastic313 7 months ago
Mathematicians teach machine learning way better than machine learning experts do, lol. Hats off to Prof. Strang
@Enerdzizer
@Enerdzizer 4 years ago
The professor claimed at 40:39 that A+ b gives the same result as (A^T A)^-1 A^T b if the matrix A^T A is invertible. But if it is not invertible, what is the geometric meaning of A+ b? Is it still the projection of b onto the column space of A?
@rushikeshshinde2325
@rushikeshshinde2325 4 years ago
If it's not invertible, A has a nontrivial null space, and any component of x in that null space gets mapped to zero. The vector lands in a lower-dimensional space, so that lost component is impossible to recover when mapping back from the column space.
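A numeric sketch bearing on this question (my own check, not from the lecture): even when A^T A is singular, A A+ b is still the projection of b onto the column space, and A+ b is the minimum-norm least-squares solution.

import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((5, 3))
A[:, 2] = A[:, 0]                   # repeated column, so A^T A is singular
b = rng.standard_normal(5)

x_plus = np.linalg.pinv(A) @ b      # minimum-norm least-squares solution
p = A @ x_plus                      # A A+ b

# The residual b - p is orthogonal to every column of A, so p is still
# the projection of b onto C(A):
print(np.allclose(A.T @ (b - p), 0))   # True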
@dariuszspiewak5624
@dariuszspiewak5624 3 years ago
"You can't raise it from the dead"... How true, how true, prof. Strang :))) Even though there are some in this world that think it's actually possible to raise people from the dead, LoL :)))
@ajiteshbhan
@ajiteshbhan 4 years ago
At 46:00 the professor says the SVD pseudoinverse in this case is not an inverse from either side, but it is a one-sided inverse from one side. Then at the end he says that under independent columns the SVD gives the same result as Gauss. But the Sigma matrix in the pseudoinverse should still have missing values, so how can they give the same result?
@yuchenzhao6411
@yuchenzhao6411 4 years ago
Under the independent-columns assumption, A has a left inverse, and its form is exactly the same as Gauss's method: A+ = (A^T A)^-1 A^T.
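A quick NumPy check of this reply (illustrative): with independent columns, the SVD pseudoinverse coincides with the Gauss left inverse, and the "missing" entries of Sigma+ never enter because every sigma_i is positive.

import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((6, 3))     # tall; columns independent almost surely

left_inv = np.linalg.inv(A.T @ A) @ A.T
print(np.allclose(np.linalg.pinv(A), left_inv))  # True: A+ = (A^T A)^-1 A^T
print(np.allclose(left_inv @ A, np.eye(3)))      # True: a genuine left inverse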
@heidioid
@heidioid 2 years ago
Bookmark: Least Squares Problem 23:00
@srinivasg20
@srinivasg20 4 years ago
Sir, you are the father of linear algebra. Nice teaching, sir.
@dohyun0047
@dohyun0047 4 years ago
In the notice box at 43:30, why don't both equations produce the same identity matrix?
@jayantpriyadarshi9266
@jayantpriyadarshi9266 4 years ago
Because you cannot open the bracket in the second expression: the inner matrices are not square, so they don't have inverses of their own. (A numeric check follows this thread.)
@dohyun0047
@dohyun0047 4 years ago
@@jayantpriyadarshi9266 thank youuuu
@jayantpriyadarshi9266
@jayantpriyadarshi9266 4 years ago
@@dohyun0047 no worries bro.
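A numeric illustration of this thread's point (a sketch, not from the lecture): for a tall A with independent columns, A+ A = I_n, but A A+ is only the projection onto C(A), not I_m, which is why the bracket cannot be opened.

import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((5, 2))     # hypothetical tall matrix
A_pinv = np.linalg.pinv(A)

print(np.allclose(A_pinv @ A, np.eye(2)))  # True:  A+ A = I_2
print(np.allclose(A @ A_pinv, np.eye(5)))  # False: A A+ is a projection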
@Irfankhan-jt9ug
@Irfankhan-jt9ug 4 years ago
Cameraman... follow Prof. Strang!!!
@user-fh4xl3xz1f
@user-fh4xl3xz1f 4 years ago
"Well, this matrix here is doing its best to be the inverse. Actually, everybody here is just doing their best to be an inverse." (c) This phrase really describes me fighting my procrastination all day.
@matheusvillaas
@matheusvillaas 1 year ago
29:40 Why do we use the p=2 norm rather than any other p?
@peperomero4603
@peperomero4603 5 days ago
x^T x is the squared 2-norm of x, the one coming from the usual inner product. That choice is what makes the minimization linear: setting the derivative of ||Ax-b||^2 to zero gives the normal equations, and the geometry becomes orthogonal projection.
@alshbsh2
@alshbsh2 4 years ago
How did he get (Ax-b)^T (Ax-b)?
@hiltonmarquessantana8202
@hiltonmarquessantana8202 4 years ago
MA MO: The dot product written in matrix form.
@phsamuelwork
@phsamuelwork 4 years ago
Ax-b is a column vector, so (Ax-b)^T is a row vector. Write Ax-b = w; then w^T w gives sum_i w_i^2, which is exactly the sum of the squares of all the elements of w.
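A tiny NumPy confirmation of this reply (hypothetical data):

import numpy as np

rng = np.random.default_rng(6)
A = rng.standard_normal((4, 2))
x = rng.standard_normal(2)
b = rng.standard_normal(4)

w = A @ x - b
print(np.isclose(w @ w, np.sum(w ** 2)))  # True: w^T w is the sum of squares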
@user-rn6ff8fq5n
@user-rn6ff8fq5n 4 months ago
If b is perpendicular to the column space of A, what is the solution for Ax=b?
@peperomero4603
@peperomero4603 5 days ago
Then b is in the null space of the hat matrix H = A(A^T A)^-1 A^T (b lies in the orthogonal complement of C(A)), so Hb = 0 and b-hat is 0. Hence x-hat (where Ax-hat = Hb) is 0 if the n-by-m matrix A has rank m; if not, x-hat can be any vector in the null space of A. In that case the solutions are generated by the columns of (I - A^+ A), where A^+ is any matrix such that A A^+ A = A.
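A short NumPy sketch of this answer (hypothetical data): with b perpendicular to both columns of A, the projection is zero and the least-squares solution is x-hat = 0.

import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])
b = np.array([0.0, 0.0, 7.0])       # perpendicular to C(A)

x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x_hat)                        # [0. 0.]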
@drscott1
@drscott1 2 years ago
👍🏼
@Fan-vk8tl
@Fan-vk8tl 3 years ago
The pseudoinverse part is unclear; the book gives more details and its relationship to the normal-equations solution.
@meyerkurt5875
@meyerkurt5875 3 years ago
Could you tell me how to find the book, or the name of the book? Thank you!
@Fan-vk8tl
@Fan-vk8tl 3 years ago
@@meyerkurt5875 His own book: Linear Algebra and Learning from Data.
@chrischoir3594
@chrischoir3594 3 years ago
This guy could be the worst professor of all time
@paradoxicallyexcellent5138
@paradoxicallyexcellent5138 2 years ago
Far from the worst but he ain't great, that's for sure.