Slides available at: www.cs.ox.ac.uk/people/nando.defreitas/machinelearning/ Course taught in 2015 at the University of Oxford by Nando de Freitas with great help from Brendan Shillingford.
Comments: 25
@mpete0273 · 7 years ago
Out of the entire course this is easily the most important lecture. I've watched it several times to really internalize it.
@MadcowDeity · 9 years ago
Fantastic videos, I appreciate how open you're being with the coursework!
@osamamustafa6884 · 2 years ago
Your energy is infectious throughout this whole playlist!
@paulthomann5544 · 9 years ago
Thank you very much for posting these lectures! So far, I find them interesting and understandable.
@CW711 · 9 years ago
Very clear connection between least-squares loss, MLE, and KL divergence. Thanks.
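For readers who want the connection the commenter mentions spelled out, here is the standard derivation (my notation, not necessarily the lecture's): under an i.i.d. Gaussian noise model, maximizing the likelihood is the same as minimizing the sum-of-squares loss.

```latex
% Assume y_i = f_\theta(x_i) + \epsilon_i with \epsilon_i \sim \mathcal{N}(0, \sigma^2).
% The log-likelihood of the data is
\log p(y \mid x, \theta)
  = \sum_{i=1}^{n} \log \mathcal{N}\!\bigl(y_i \mid f_\theta(x_i), \sigma^2\bigr)
  = -\frac{1}{2\sigma^2} \sum_{i=1}^{n} \bigl(y_i - f_\theta(x_i)\bigr)^2
    - \frac{n}{2}\log\!\left(2\pi\sigma^2\right)
% The second term is constant in \theta, so maximizing the likelihood
% is exactly minimizing the least-squares loss \sum_i (y_i - f_\theta(x_i))^2.
```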
@Nestorghh · 6 years ago
Great professor! Thanks, Dr. De Freitas.
@dbskluvu · 7 years ago
This lecturer teaches more in one hour than my lecturer does. So good. Please come teach at my uni!
@kafaayari · 2 years ago
Excellent teaching.
@dragonlorder · 8 years ago
Better explanation than Bishop's chapter 1 : )
@omeryalcn5797 · 6 years ago
Bishop's book is the bible of ML.
@andrewczeizler59 · 8 years ago
This needs to be turned into a MOOC!
@salemameen · 9 years ago
thanks
@treflir · 7 years ago
Good content for beginners! I strongly advise against your method of simulating Gaussian variables, though. The inverse-CDF method is general, but in terms of complexity it is really bad for Gaussian variables. The two best ways to go are the Box-Muller method and Marsaglia's method (and from my own experience their performance is equivalent). That being said, I enjoyed this video; thank you for sharing!
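The Box-Muller transform the commenter recommends can be sketched as follows. This is a minimal illustration, not code from the lecture: it maps two uniform random numbers to two independent standard-normal samples.

```python
import math
import random

def box_muller():
    """Return two independent standard-normal samples from two uniforms.

    Box-Muller transform: if u1, u2 ~ Uniform(0, 1), then
    r = sqrt(-2 ln u1) and theta = 2*pi*u2 give two independent
    N(0, 1) draws r*cos(theta) and r*sin(theta).
    """
    u1 = random.random()
    u2 = random.random()
    # Guard against log(0): random.random() can (rarely) return 0.0.
    while u1 == 0.0:
        u1 = random.random()
    r = math.sqrt(-2.0 * math.log(u1))
    theta = 2.0 * math.pi * u2
    return r * math.cos(theta), r * math.sin(theta)
```

As a sanity check, a large batch of samples should have mean near 0 and variance near 1. (Marsaglia's polar method is a rejection-based variant that avoids the trig calls.)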
@npabbisetty · 6 years ago
At the end of each video, it would be great to include a "Summary for Practitioners" that distills the theory into practice: the why and the what.
@VictorChavesVVBC · 7 years ago
At the very end, how closely related is cross-entropy to that KL/MLE relationship? It seems you have it as the variable term at 1:12:08, but I'm not sure.
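The standard identity behind this question can be written out (my notation, not the slide at that timestamp): the KL divergence splits into the cross-entropy minus a term that does not depend on the model.

```latex
% For data distribution p_{\mathrm{data}} and model p_\theta:
\mathrm{KL}\bigl(p_{\mathrm{data}} \,\|\, p_\theta\bigr)
  = \underbrace{-\sum_x p_{\mathrm{data}}(x)\,\log p_\theta(x)}_{\text{cross-entropy } H(p_{\mathrm{data}},\, p_\theta)}
  \;-\;
  \underbrace{\Bigl(-\sum_x p_{\mathrm{data}}(x)\,\log p_{\mathrm{data}}(x)\Bigr)}_{\text{entropy } H(p_{\mathrm{data}})}
% The entropy term is constant in \theta, so minimizing the KL divergence,
% minimizing the cross-entropy, and maximizing the likelihood all coincide.
```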
@abcborgess · 7 years ago
nice
@robthorn3910 · 7 years ago
Plural of matrix is matrices.
@cognitiveinstinct2929 · 7 years ago
I like the video series, but the audio needs work. The constant background buzz is really distracting.
@maiiabakhova2474 · 9 years ago
The guy is not good with mathematics: he loses logarithms and confuses probabilities with values of density functions. But thanks anyway.
@kikirizki4318 · 7 years ago
Why is the value on the y-axis of the Gaussian distribution equal to p_x_given_theta?
@myabakhova7271 · 7 years ago
For the probability, there must be an integral over a small interval of length Δx, which is then approximated by the value of the function multiplied by Δx. But he ignores it because he considers a ratio in which Δx cancels. Technically it should at least be mentioned. As a mathematician, I find such skipping of steps annoying.
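The step the commenter describes can be made explicit (standard notation, not taken from the video): a density only becomes a probability after integrating over an interval, but the interval width cancels in likelihood ratios.

```latex
% Probability of landing in a small interval around x_0:
P\bigl(X \in [x_0,\, x_0 + \Delta x]\bigr)
  = \int_{x_0}^{x_0 + \Delta x} p(t \mid \theta)\, dt
  \approx p(x_0 \mid \theta)\, \Delta x
% In a likelihood ratio the interval width cancels:
\frac{p(x_1 \mid \theta)\, \Delta x}{p(x_2 \mid \theta)\, \Delta x}
  = \frac{p(x_1 \mid \theta)}{p(x_2 \mid \theta)}
% This is why the density value read off the y-axis can stand in for
% a probability when only comparisons between likelihoods matter.
```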
@Ahmedkedir · 7 years ago
I am wary of a mathematician who generalizes about the "guy" based on one video and argues from authority. Please check the "guy's" work first. I admit there are minor mistakes, but this lecture was meant as revision, not as a main lecture.
@myabakhova7271 · 7 years ago
Since when are mistakes fine in a revision lecture? That's the first I've heard of it.
@seratonewaymar1068 · 7 years ago
1. He's a Computer Science professor, not a mathematics professor. The fact that he's this proficient with maths is an accomplishment in itself.
2. You're getting a course (that Oxford students pay around £3,000-4,000 to attend) for free on KZfaq, along with the slides. Don't complain.
3. If you understand what his mistakes are, they shouldn't affect you in the least. You just need to code all this once and then forget about it.
4. Be positive; spread gratitude instead of complaints.
Peace. Happy learning!