Machine learning - Introduction to Gaussian processes

294,787 views

Nando de Freitas · 11 years ago

Introduction to Gaussian process regression.
Slides available at: www.cs.ubc.ca/~nando/540-2013/...
Course taught in 2013 at UBC by Nando de Freitas

Comments: 159
@augustasheimbirkeland4496 · 2 years ago
5 minutes in and it's already better than all 3 hours in class earlier today!
@DistortedV12 · 5 years ago
Finally! This is gold for beginners like me! Thank you Nando!! Saw you on the committee at the MIT defense, great questions!
@erlendlangseth4672 · 6 years ago
Thanks, this helped me a lot. By the time you got to the hour mark, you had covered sufficient ground for me to finally understand Gaussian processes!
@sarnathk1946 · 6 years ago
This is indeed an Awesome lecture! I liked the way the complexity is slowly built over the lecture. Thank you very much!
@SijinSheung · 5 years ago
This lecture is so amazing! The hand-drawing part is really helpful for building up intuition regarding GPs. This is a life-saving video for my finals. Many thanks!
@sourabmangrulkar9105 · 4 years ago
The way you started from basics and built up on it to explain the Gaussian Processes is very easy to understand. Thank you :)
@maratkopytjuk3490 · 8 years ago
Thank you, I tried to understand GPs via papers, but only you could help me build up an understanding of the idea. It's great that you took time to explain the Gaussian distribution and the important operations! You're the best!
@MrEdnz · 2 years ago
Learning a new subject via papers isn't very helpful indeed :) They expect you to understand the basic principles of GPs. Lectures like these, or books, start with the basic principles 💪🏻
@fuat7775 · 1 year ago
This is absolutely the best explanation of the Gaussian!
@user-oc5gk7yn6o · 4 years ago
I've found so many lectures for understanding Gaussian processes. You are the only one so far who could make me understand it. Thanks a lot man
@daesoolee1083 · 2 years ago
The best tutorial for GP among all the materials I've checked.
@francescocanonaco5988 · 5 years ago
I tried to understand GPs via blog articles, papers and a lot of videos. Best video ever on GPs! Thank you!
@dennisdoerrich3743 · 6 years ago
Wow, you saved my life with this genius lecture! GPs are a pretty abstract idea, and it's nice that you walk one through them from scratch!
@jingjingjiang6403 · 6 years ago
Thank you for sharing this wonderful lecture! Gaussian process was so confusing when it was taught in my university. Now it is crystal clear!
@life99f · 2 years ago
I feel so fortunate to have found this video. It's like walking in a fog and finally being able to see things clearly.
@LynN-he7he · 4 years ago
Thank you, thank you, thank you!! I was stuck on a homework problem, still figuring out what it means to be a testing vs. training data set and how they play a role in the Gaussian kernel function. I was stuck for the last 3 days, and your video from about the 45 min to 1 hour mark made the lightbulb go off!
@ziangxu7751 · 3 years ago
What an amazing lecture. It is much clearer than lectures taught in my university.
@huitanmao5267 · 7 years ago
Very clear lectures! Thanks for making them publicly available!
@pradeepprabakarravindran615 · 11 years ago
Thank you! Your videos are so much better than any ML lecture series I have seen so far! -- Grad student from CMU
@xingtongliu1636 · 5 years ago
This becomes very easy to understand with your thorough explanation. Thank you very much!
@marcyaudrey6608 · 10 months ago
This lecture is amazing Professor. From the bottom of my heart, I say thank you.
@Ricky-Noll · 3 years ago
One of the best videos of all time on YouTube
@emrecck · 3 years ago
That was a great lecture Mr. de Freitas, thank you very very much! I watched it to study for my Computational Biology course, and it really helped.
@dieg3005 · 8 years ago
Thank you very much Prof. de Freitas, excellent introduction
@Jacob011 · 10 years ago
Absolutely superb lecture! Everything is clearly explained even with source code.
@woo-jinchokim6441 · 7 years ago
by far the best structured lecture on gaussian processes. love it :D
@MattyHild · 5 years ago
FYI, the notation @22:05 is wrong: since he selected an x1 to condition on, he should be computing mu_{2|1}, but he is computing mu_{1|2}.
@DanielRodriguez-or7sk · 4 years ago
Thank you so much Professor De Freitas. What a clear explanation of GP
@jinghuizhong · 9 years ago
The lecture is quite clear and it inspired me about the key ideas of Gaussian processes. Many thanks!
@MB-pt8hi · 6 years ago
Very good lecture, full of intuitive examples which deepen the understanding. Thanks a lot
@bluestar2253 · 3 years ago
One of the best teachers in ML out there!
@dwhdai · 4 years ago
wow, this is probably the best lecture I've ever watched. on any topic.
@austenscruggs8726 · 2 years ago
This is an amazing video! Clear and digestible.
@HarpreetSingh-ke2zk · 2 years ago
I started learning about multivariate Gaussian processes in 2011, and it's a shame that I only got to this video as 2021 is ending. He explained things in a way that even a layperson could grasp: he first explains the meaning of the concepts, followed by an example with data, and last the theoretical representation. Typically, mathematics presenters/writers avoid using data to provide examples. I'm always on the lookout for lectures like these, where the theoretical understanding is demonstrated through examples or data. Too often the concepts themselves are not difficult to grasp, but the presenter/writer makes us dig deep to unpack complex notation without providing any examples.
@KhariSecario · 2 years ago
Here I am in 2021, yet your explanation is the easiest one to understand from all the sources I gathered! Thank you very much 😍
@matej6418 · 1 year ago
me in 2023, still the same
@Gouda_travels · 2 years ago
after one hour of smooth explanation, he says "and this brings us to Gaussian processes" :)
@taygunkekec9616 · 9 years ago
Very clearly explained. The dependencies for learning the framework are concisely and incrementally given, while the details that make the framework harder to understand are carefully kept out of the way (you will understand what I mean if you try to dig through Rasmussen's book on GPs).
@sumantamukherjee1952 · 9 years ago
Lucidly explained. Great video
@sak02010 · 5 years ago
thanks a lot prof. Very clean and easy-to-understand explanation.
@malharjajoo7393 · 4 years ago
Basic summary of the lecture video:
1) Recap of the multivariate Normal/Gaussian distribution (MVN), with some info on conditional probability.
2) How sampling can be done from a univariate/multivariate Gaussian distribution (see the sketch below).
3) 39:00 - Introduction to Gaussian processes (GP). It is important to note that a GP is considered a Bayesian non-parametric approach/model.
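To illustrate point 2, here is a minimal numpy sketch of sampling from a multivariate Gaussian via a Cholesky factor; it is not the lecture's code, and the mean and covariance values are made-up toy numbers:

```python
import numpy as np

# If z ~ N(0, I) and Sigma = L L^T (Cholesky), then mu + L z ~ N(mu, Sigma).
rng = np.random.default_rng(0)

mu = np.array([0.0, 1.0])              # toy mean (assumption)
Sigma = np.array([[1.0, 0.8],
                  [0.8, 1.0]])         # toy covariance (assumption)

L = np.linalg.cholesky(Sigma)
z = rng.standard_normal((2, 100_000))  # i.i.d. standard normal draws
samples = mu[:, None] + L @ z          # draws from N(mu, Sigma)

print(np.cov(samples))                 # should be close to Sigma
```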
@quantum01010101 · 4 years ago
That is clear and flows naturally, Thank you very much.
@bottomupengineering · 5 months ago
Great explanation and pace. Very legit.
@sanjanavijayshankar5508 · 4 years ago
Brilliant lecture. One could not have taught GPs better.
@saminebagheri4175 · 7 years ago
amazing lecture.
@richardbrown2565 · 3 years ago
Great explanation. I wish that the title mentioned that it was part one of two, so that I would have known it was going to take twice as long.
@afish3356 · 4 years ago
An extremely good lecture! Thank you for recording this :) :)
@EbrahimLPatel · 8 years ago
Excellent introduction to the subject! Thank you :)
@darthyzhu5767 · 8 years ago
really clear and comprehensive. thanks so much.
@oliverxie9559 · 3 years ago
Really great video for reading Gaussian Processes for Machine Learning!
@adrianaculebro9176 · 5 years ago
Finally understood how this idea is explained and applied using mathematical language
@jx4864 · 2 years ago
After 30 mins, I am sure that he is a top-10 teacher in my life
@pankayarajpathmanathan7009 · 7 years ago
The best lecture on Gaussian processes
@rsilveira79 · 6 years ago
Awesome lecture, very well explained!
@akshayc113 · 9 years ago
Thanks a lot Prof. Just a minor correction for the people following the lectures: you made a mistake while writing out the formulae at 22:10. You were writing out the mean and variance of P(X1|X2), whereas the diagram was to find P(X2|X1). Since this is symmetric, you can just get them by appropriate replacements, but just letting slightly confused people know.
@charlsmartel · 8 years ago
+akshayc113 I think all that should change is the formula for the given graphs. It should read: mu_{2|1} = mu_2 + Sigma_{21} Sigma_{11}^{-1} (x_1 - mu_1). Everything else can stay the same.
@tobiaspahlberg1506 · 8 years ago
I think he actually meant to draw x_1 where x_2 is in the diagram. This switch would agree with the KPM formulae on the next slide.
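For readers following this thread, a small numpy sketch of the conditioning formulas under discussion (my own illustration with toy numbers, not the lecture's code; the partition is x = (x1, x2)):

```python
import numpy as np

def conditional_gaussian(mu, Sigma, x1):
    """Mean and variance of p(x2 | x1) for a bivariate Gaussian.

    mu_{2|1}    = mu_2 + Sigma_21 Sigma_11^{-1} (x1 - mu_1)
    sigma_{2|1} = Sigma_22 - Sigma_21 Sigma_11^{-1} Sigma_12
    """
    mu1, mu2 = mu
    S11, S12 = Sigma[0]
    S21, S22 = Sigma[1]
    mu_cond = mu2 + S21 / S11 * (x1 - mu1)
    var_cond = S22 - S21 / S11 * S12
    return mu_cond, var_cond

mu = np.array([0.0, 0.0])
Sigma = np.array([[1.0, 0.9],
                  [0.9, 1.0]])
# Observing x1 = 1 pulls the conditional mean of x2 up and shrinks its variance.
print(conditional_gaussian(mu, Sigma, x1=1.0))  # (0.9, 0.19)
```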
@turkey343434 · 4 years ago
Gaussian processes start at 1:01:15
@hohinng8644 · 1 year ago
pin this
@malharjajoo7393 · 4 years ago
1:04:08 - Would be good to emphasize that the test set is actually used for generating the prior ... I had a hard time making sense of it because the test set is usually provided separately (but in this case we are generating it!!)
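To illustrate that point, a sketch of my own (not the lecture's code, and the kernel and grid are assumptions): the prior is the GP evaluated at the test inputs themselves, so each prior sample is a vector indexed by the test set.

```python
import numpy as np

def rbf(a, b, ell=1.0):
    # Squared-exponential kernel k(a, b) = exp(-(a - b)^2 / (2 ell^2))
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

rng = np.random.default_rng(0)

x_test = np.linspace(-5, 5, 50)              # the test set defines the index set
K = rbf(x_test, x_test) + 1e-8 * np.eye(50)  # jitter for numerical stability

# Each column is one sample from the GP prior: a random function at x_test.
f_prior = np.linalg.cholesky(K) @ rng.standard_normal((50, 3))
print(f_prior.shape)                         # (50, 3): three 50-point functions
```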
@homtom2 · 8 years ago
This helped me so much! Thanks!
@pattiknuth4822 · 3 years ago
Extremely good lecture. Well done.
@yousufhussain9530 · 8 years ago
Amazing lecture!
@SimoneIovane · 5 years ago
Great lesson! Thank you!
@haunted2097 · 10 years ago
well done! Very intuitive!
@dhruv385 · 5 years ago
Wow! Great Lecture!
@stefansgrab · 7 years ago
Chapeau, good lecture!
@gourv7ghoshal · 6 years ago
Thank you for sharing this video, it was really helpful
@maudentable · 3 years ago
a master doing his work
@ahaaha8462 · 4 years ago
amazing lecture, thanks a lot
@kiliandervaux6675 · 3 years ago
Thank you so much for this amazing lecture. I wanted to applaude at the end but I realised I was in front of my computer.
@liamdavey8726 · 6 years ago
Great Teacher! Thanks!
@kevinzhang4692 · 2 years ago
Thank you! It is a wonderful lecture
@yunlongsong7618 · 4 years ago
Great lecture. Thanks.
@FariborzGhavamian · 8 years ago
great lecture !
@jhn-nt · 2 years ago
Great lecture!
@brianstampe7056 · 4 years ago
Very helpful. Thanks!
@niqodea · 4 years ago
BEAST MODE teaching
@bingtingwu8620 · 1 year ago
Thanks!!! Easy to understand👍👍👍
@user-ym7rp9pf6y · 3 years ago
Awesome explanation. thanks
@zhijianli8975 · 7 years ago
Great lecture
@dracleirbag5838 · 2 years ago
I like the way you teach
@xesan555 · 8 years ago
Nando you are wonderful...
@pedromaroto4633 · 5 years ago
I do not understand the concepts of the GP prior and GP posterior. Could anyone help me? Thank you in advance!
@itai19 · 3 years ago
Thanks for the lecture. I have a problem with the discussion around 11:00 - from my understanding, the spherical case does represent some dependency between X and Y, as X is a sub-component of the max-radius calculation, meaning a larger x leads to smaller possible values of y (or at least lower probability of higher values). In other words, the covariance could be approximated by something like E[x*sqrt(r^2-x^2)]. Are we saying that ends up being zero, i.e. that correlation is unable to express such a dependency? My intuition currently expects a square to express 0 correlation.
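A quick numerical check of this (my own sketch, not from the lecture): for a spherical Gaussian the coordinates are actually independent, and even under a hard radius constraint (uniform on a disk) the covariance is still zero by symmetry - correlation only captures linear dependence, not the radial kind described above.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Spherical Gaussian: x and y are independent, so the covariance is ~0.
xy = rng.standard_normal((n, 2))
print(np.cov(xy.T)[0, 1])

# Uniform on the unit disk: x and y are dependent (large |x| forces small |y|),
# yet E[x*y] = 0 by symmetry, so the covariance is still ~0.
r = np.sqrt(rng.uniform(0.0, 1.0, n))
theta = rng.uniform(0.0, 2.0 * np.pi, n)
disk = np.column_stack([r * np.cos(theta), r * np.sin(theta)])
print(np.cov(disk.T)[0, 1])
```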
@crestz1 · 4 months ago
Amazing lecturer
@redberries8039 · 3 years ago
This was a good explanation.
@katerinapapadaki4810 · 5 years ago
Thanks for the helpful lecture! The only thing I want to point out is that if you put labels on the axes of your plots, it would be easier for the listener to understand from the beginning what you are describing
@JadtheProdigy · 5 years ago
Can someone explain why f is distributed with mean 0?
@buoyrina9669 · 5 years ago
You are the best
@adamtran5747 · 2 years ago
Love the content.
@kambizrakhshan3248 · 3 years ago
Thank you!
@swarnendusekharghosh9539 · 3 years ago
Thank you sir for a clear explanation
@bertobertoberto3 · 9 years ago
Round of Applause
@heyjianjing · 3 years ago
Around 56:00, I don't think we should omit the conditioning sign on mu*; it is conditioned on f: E(f*|f), not E(f*). Otherwise the expected value of f* alone would just be zero.
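To make the conditioning explicit, a sketch of noiseless GP regression with toy data (my own code, assuming an RBF kernel): the posterior mean below is exactly E[f* | f]; the unconditional E[f*] would indeed be zero under the zero-mean prior.

```python
import numpy as np

def rbf(a, b, ell=1.0):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

x = np.array([-2.0, 0.0, 1.5])           # training inputs (toy assumption)
f = np.sin(x)                            # noiseless observations
x_star = np.linspace(-3, 3, 100)         # test inputs

K = rbf(x, x) + 1e-10 * np.eye(len(x))   # jitter for numerical stability
K_s = rbf(x_star, x)

# E[f* | f] = K(x*, x) K(x, x)^{-1} f    (the GP prior mean is 0)
mean_post = K_s @ np.linalg.solve(K, f)

# cov[f* | f] = K(x*, x*) - K(x*, x) K(x, x)^{-1} K(x, x*)
cov_post = rbf(x_star, x_star) - K_s @ np.linalg.solve(K, K_s.T)
print(mean_post.shape, cov_post.shape)   # (100,) (100, 100)
```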
@philwebb59 · 2 years ago
1:05:58 Analog computers existed way before the first digital circuits. A WWII vintage electrical analog computer, for example, consisted of banks of op amps, configured as integrators and differentiators.
@Romis008 · 6 years ago
Fantastic
@ratfuk9340 · 4 months ago
Thank you for this
@Raven-bi3xn · 3 years ago
Am I correct to think that the "f" notation at 30:30 is not the same "f" as at 1:01:30? In the latter case, each f consists of all 50 of the f distributions exemplified in the former case? If that understanding is correct, then in sampling from the GP, each sample is a 50x1 vector from the 50-D multivariate Gaussian distribution. This 50x1 vector is what Dr. Nando refers to as a sample from the "distribution over functions". In other words, given the definition of a stochastic process as "indexed random variables", each random variable of the GP is drawn from a multivariate Gaussian distribution. In that viewpoint, each "indexed" random variable is a function at 1:01:30. This lecture from 2013 is truly an amazing resource.
@abhishekparida22 · 4 years ago
Thank you for the lecture, and I appreciate the way you presented it, spending a reasonable amount of time explaining the multivariate Gaussian distribution and building up from basics. My question to you is the following: if I happen to anticipate that the underlying distribution is Poisson (say), instead of Gaussian, what will be the appropriate changes? (I have an understanding that it's the likelihood which is modified, but I am not sure!) Will it still be called a Gaussian process (or a Poisson process)?
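For what it's worth, a hedged generative sketch of that modification (my own reading, not something stated in the lecture): the prior over the latent function stays a GP, and the Poisson enters only through the likelihood, so the model is still built on a Gaussian process - but the posterior is no longer Gaussian and needs approximate inference.

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf(a, b, ell=1.0):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

x = np.linspace(0, 5, 60)
K = rbf(x, x) + 1e-8 * np.eye(60)

# Latent function: still a Gaussian process prior.
f = np.linalg.cholesky(K) @ rng.standard_normal(60)

# Non-Gaussian likelihood: counts y ~ Poisson(exp(f)), with exp as the link.
y = rng.poisson(np.exp(f))

# p(f | y) is no longer Gaussian, so the closed-form conditioning from the
# lecture is replaced by approximations (Laplace, EP, MCMC, ...).
print(y[:10])
```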
@idleft · 8 years ago
I have a question about the regression part, which I spent a lot of time thinking about. In the beginning, we assume f_i ~ N(0, K). I think this is for the prior. In noiseless GP regression, we use f as the mu. My understanding is that if we had a measurement, we consider it as the mu for that specific x. Is that correct? What if there are multiple measurements for the same x? Thank you.
@malekebadi9805 · 4 years ago
The sum of two Gaussian r.v.s with means mu1 and mu2 is Gaussian with mean mu1 + mu2, isn't it? Multiple measurements are multiple draws from the Gaussian process, so the means must be added.
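On the repeated-measurements question, a small sketch of how the noisy model handles it (my own toy example, not from the lecture): with y = f(x) + eps, duplicated inputs are fine because K + sigma^2 I stays invertible, and the posterior mean effectively averages the repeated observations, shrunk toward the zero prior.

```python
import numpy as np

def rbf(a, b, ell=1.0):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

# Two noisy measurements at the SAME input x = 1.0 (toy assumption).
x = np.array([1.0, 1.0])
y = np.array([0.9, 1.1])
sigma2 = 0.1

K = rbf(x, x)                    # singular by itself: duplicate rows
Ky = K + sigma2 * np.eye(2)      # the noise term makes it invertible

x_star = np.array([1.0])
k_s = rbf(x_star, x)
mean_post = k_s @ np.linalg.solve(Ky, y)
print(mean_post)                 # ~0.952: near the average 1.0, shrunk toward 0
```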
@deephazarika2259 · 6 years ago
When estimating 'f', why is each point treated as a separate dimension and not as different points in the same dimension?
@malekebadi9805 · 4 years ago
As far as I understood, Gaussian process regression serves two purposes: refining the prior (and posterior), and predicting the response at new points. If you collect new observations for the same points, you are refining the posterior; if you extend to a new point (a new dimension), you are predicting. In the former case, the confidence interval between two points remains relatively fat. Querying points in new dimensions (given that practically you can do that) squeezes the confidence interval. Theoretically it doesn't matter, I guess. Think of an experiment in which you keep x the same in every iteration but read different y's. Think of another experiment in which your x values change from one iteration to the next and you receive y's. From the GP point of view, both are the same.
@terrynichols-noaafederal9537 · 5 months ago
For the noisy GP case, we assume the noise is sigma^2 times the identity matrix, which assumes i.i.d. noise. What if the noise is correlated - can we incorporate the true covariance matrix?
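A hedged sketch of the extension this question suggests (my own assumption about the natural generalization, not something stated in the lecture): replace sigma^2 I with a full noise covariance Sigma_n wherever the Gram matrix appears.

```python
import numpy as np

def rbf(a, b, ell=1.0):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

x = np.array([0.0, 0.5, 1.0])
y = np.array([0.0, 0.4, 0.8])

# Correlated noise: any symmetric positive-definite matrix (toy values here).
Sigma_n = np.array([[0.10, 0.05, 0.00],
                    [0.05, 0.10, 0.05],
                    [0.00, 0.05, 0.10]])

Ky = rbf(x, x) + Sigma_n                 # replaces K + sigma^2 * I
x_star = np.linspace(0.0, 1.0, 5)
k_s = rbf(x_star, x)
mean_post = k_s @ np.linalg.solve(Ky, y)
print(mean_post)
```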
@ho4040 · 2 years ago
Holy shit...what a good lecture
@tospines · 5 years ago
I think I got the essence of GPs, but what I cannot understand is why we take the mean to be 0 when clearly it is not 0. I mean, if we suppose that f* is distributed as a Gaussian with mean 0, the expected value of f* must be 0. Could anyone explain this fact to me?
@oskarkeurulainen6414 · 5 years ago
0 is only the mean of the prior for f*. When we know the values of other variables that are correlated with f*, we actually want the mean of f* conditioned on those observed variables. Compare with the ellipse at the beginning with x1 and x2: both have mean 0, but if we observe one of them to be positive, the other is also likely to be positive and thus has a positive conditional expectation.