Isn't the formula at 22:19 on the right-hand side for x1|x2, not for x2|x1?
@Sheriff_Schlong · 3 months ago
At 1:02:40: I knew this teacher was a legend. 11 years later and I'm still able to gain much valuable knowledge from these lectures!
@crestz1 · 4 months ago
Beautifully linked the idea of maximising likelihood by illustrating the 'green line' at 51:41.
@crestz1 · 4 months ago
Amazing lecturer
@forughghadamyari8281 · 4 months ago
Hi, thanks for the wonderful videos. Could you please recommend a book to study for this course?
@ratfuk9340 · 4 months ago
Thank you for this
@bottomupengineering · 5 months ago
Great explanation and pace. Very legit.
@terrynichols-noaafederal9537 · 5 months ago
For the noisy GP case, we assume the noise covariance is sigma^2 times the identity matrix, which assumes the noise is iid. What if the noise is correlated? Can we incorporate the true covariance matrix?
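In principle, yes: a sketch of my own (the lecture only covers the iid case) showing that the GP posterior algebra is unchanged if you replace sigma^2 * I with a full noise covariance Sigma_n.

```python
import numpy as np

def rbf_kernel(A, B, length=1.0):
    # Squared-exponential kernel matrix between the rows of A and B.
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return np.exp(-0.5 * d2 / length**2)

def gp_posterior(X, y, Xs, Sigma_n, length=1.0):
    # Same formulas as the sigma^2*I case, with a general noise covariance.
    K = rbf_kernel(X, X, length) + Sigma_n       # noisy training covariance
    Ks = rbf_kernel(X, Xs, length)
    Kss = rbf_kernel(Xs, Xs, length)
    mu = Ks.T @ np.linalg.solve(K, y)            # posterior mean at Xs
    cov = Kss - Ks.T @ np.linalg.solve(K, Ks)    # posterior covariance at Xs
    return mu, cov

# Correlated noise: off-diagonal terms in Sigma_n, no iid assumption.
X = np.array([[0.0], [1.0], [2.0]])
y = np.array([0.0, 0.8, 0.1])
Xs = np.array([[0.5], [1.5]])
Sigma_n = 0.1 * np.eye(3) + 0.05 * np.ones((3, 3))
mu, cov = gp_posterior(X, y, Xs, Sigma_n)
print(mu.shape, cov.shape)  # (2,) (2, 2)
```

All names here (`rbf_kernel`, `gp_posterior`, `Sigma_n`) are my own illustration, not notation from the lecture.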
@m0tivati0n71 · 6 months ago
Still great in 2023
@huuducdo143 · 6 months ago
Hello Nando, thank you for your excellent course. Following the bell example, the mu12 and sigma12 you wrote should be for the case where we are given X2 = x2 and want the distribution of X1 given X2 = x2. Am I correct? Other interpretations are welcome. Thanks a lot!
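The standard conditioning result the comment refers to can be checked numerically; a toy sketch with numbers of my own, not from the lecture:

```python
import numpy as np

# Bivariate Gaussian conditioning:
#   X1 | X2 = a  ~  N( mu1 + S12/S22 * (a - mu2),  S11 - S12^2/S22 )
mu = np.array([1.0, 2.0])
S = np.array([[2.0, 0.8],
              [0.8, 1.0]])
a = 2.5  # observed value of X2

cond_mean = mu[0] + S[0, 1] / S[1, 1] * (a - mu[1])   # 1.4
cond_var = S[0, 0] - S[0, 1] ** 2 / S[1, 1]           # 1.36

# Monte Carlo check: sample the joint, keep draws where X2 is near a.
rng = np.random.default_rng(0)
draws = rng.multivariate_normal(mu, S, size=1_000_000)
x1_given = draws[np.abs(draws[:, 1] - a) < 0.02, 0]
print(cond_mean, x1_given.mean())  # the two numbers should be close
```

This confirms the direction of the conditioning: the formula with S12/S22 gives the distribution of X1 given X2.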
@newbie8051 · 7 months ago
It amazes me that people were discussing these topics when I was studying the water cycle, lol.
@ScieLab · 8 months ago
Hi Nando, is it possible to access the code that you mentioned in the lecture?
@S25plus · 8 months ago
Thanks prof. Freitas, this is extremely helpful
@TheDeatheater3 · 9 months ago
Super good.
@marcyaudrey6608 · 10 months ago
This lecture is amazing Professor. From the bottom of my heart, I say thank you.
@concoursmaths8270 · 10 months ago
Professor Nando, thank you a lot!!
@bodwiser100 · 11 months ago
One thing that remained confusing for me for a long time, and which I don't think he clarified in the video, was that N and the summation from i = 1 to i = N do not refer to the number of data points in our dataset but to the number of times we run the Monte Carlo simulation.
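To illustrate the distinction with a toy example of my own (not from the lecture): N below counts Monte Carlo draws, and the estimate tightens as N grows, independently of any dataset size.

```python
import numpy as np

# Monte Carlo estimate of E[x^2] for x ~ N(0, 1); the true value is 1.
# N is the number of simulation draws, not rows in a dataset.
rng = np.random.default_rng(42)
for N in (10, 1_000, 100_000):
    estimate = np.mean(rng.standard_normal(N) ** 2)
    print(N, estimate)  # approaches 1 as N grows
```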
@truongdang8790 · 1 year ago
Amazing example!
@guliyevshahriyar · 1 year ago
Thank you very much.
@bingtingwu8620 · 1 year ago
Thanks!!! Easy to understand👍👍👍
@subtlethingsinlife · 1 year ago
He is a hidden gem. I have gone through a lot of his videos; they are great in terms of removing jargon and bringing clarity.
@fuat7775 · 1 year ago
This is absolutely the best explanation of the Gaussian!
@nikolamarkovic9906 · 1 year ago
49:40, page 46.
@el-ostada5849 · 1 year ago
Thank you for everything you have given to us.
@charlescoult · 1 year ago
This was an excellent lecture. Thank you.
@cryptogoth · 1 year ago
Great lecture, abrupt ending. I believe this is the short (but dense) book mentioned by Criminisi about decision forests: www.microsoft.com/en-us/research/wp-content/uploads/2016/02/CriminisiForests_FoundTrends_2011.pdf
@chenqu773 · 1 year ago
It looks like the axis notation in the graph on the right side of the presentation, at around 20:39, is not correct. It should probably be x1 on the x-axis; i.e., it would make sense if μ12 referred to the mean of variable x1 rather than x2, judging from the equation shown on the next slide.
@kianbehdad · 1 year ago
You can only "die" once. That is how I remember that "die" is singular :D
@hohinng8644 · 1 year ago
The use of notation at 23:00 is confusing to me.
@rikki146 · 2 years ago
Learning advanced ML concepts for free! What a time to be alive. Thanks a lot for the video!
@marouanbelhaj7881 · 2 years ago
To this day, I keep coming back to your videos to refresh ML concepts. Your courses are a Masterpiece!
@emmanuelonyekaezeoba6346 · 2 years ago
Very elaborate and simple presentation. Thank you.
@Gouda_travels · 2 years ago
This is when it got really interesting, at 22:02: "typically, I'm given points and I am trying to learn the mu's and the sigma's."
@MrStudent1978 · 2 years ago
At 1:12:24: what is mu(x)? Is that different from mu?
@ahmed_mohammed_1 · 2 years ago
I wish I had discovered your courses a bit earlier.
@adamtran5747 · 2 years ago
Love the content. <3
@michaelcao9483 · 2 years ago
Thank you! Really great explanation!!!
@augustasheimbirkeland4496 · 2 years ago
Five minutes in and it's already better than all three hours of class earlier today!
@truptimohanty9386 · 2 years ago
This is the best video for understanding Bayesian optimization. It would be a great help if you could post a video on multi-objective Bayesian optimization, specifically on expected hypervolume improvement. Thank you!
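Not the multi-objective expected hypervolume improvement the comment asks about, but for reference, a minimal sketch of my own (standard textbook form) of the single-objective expected-improvement acquisition that it generalizes:

```python
import math

def expected_improvement(mu, sigma, f_best):
    # Closed-form EI for maximization, given the GP posterior mean mu
    # and standard deviation sigma at a candidate point.
    if sigma == 0.0:
        return max(mu - f_best, 0.0)
    z = (mu - f_best) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)   # N(0,1) density
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))          # N(0,1) CDF
    return (mu - f_best) * cdf + sigma * pdf

# More posterior uncertainty -> more expected improvement (exploration).
print(expected_improvement(1.0, 0.1, 1.0))
print(expected_improvement(1.0, 1.0, 1.0))
```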
@htetnaing007 · 2 years ago
Don't stop sharing this knowledge, for it is vital to the progress of humankind!
@jeffreycliff922 · 2 years ago
Access to the source code to do this would be useful.
@gottlobfreige1075 · 2 years ago
So, basically, it's partial derivatives?
@gottlobfreige1075 · 2 years ago
I don't understand; it's basically a lot of derivatives within the layers, correct?
@jx4864 · 2 years ago
After 30 minutes, I am sure that he is a top-10 teacher in my life.
@cicik57 · 2 years ago
The best way to explain the gamma function is that it is a continuous factorial. You should point out that the P(theta) you write is a probability DENSITY function here.
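The "continuous factorial" point can be seen directly; a quick check of my own:

```python
import math

# Gamma(n) = (n - 1)! at positive integers, and Gamma interpolates
# smoothly between them, e.g. Gamma(1/2) = sqrt(pi).
for n in range(1, 7):
    assert math.isclose(math.gamma(n), math.factorial(n - 1))

print(math.gamma(0.5), math.sqrt(math.pi))  # nearly identical values
```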
@jhn-nt · 2 years ago
Great lecture!
@gottlobfreige1075 · 2 years ago
How do you understand the math part in depth? Anyone? Help me!
@xinking2644 · 2 years ago
Is there a mistake at 21:58? Should it be conditioned on x1 instead of x2?
@FabulusIdiomas · 2 years ago
People are scared because your explanation sucks. You should do a better job as a teacher.