Logistic regression: Optimization and Bayesian inference via Monte Carlo. Slides available at: www.cs.ubc.ca/~nando/540-2013/... Course taught in 2013 at UBC by Nando de Freitas
Comments: 14
@harry1357931gmail 11 years ago
Excellent lecture on logistic regression... you must go through it two or three times to absorb it completely.
@mrf145 9 years ago
Thank you. The lecture is quite easy to understand; you delivered it very well.
@18amarage 8 years ago
I really love your lecture, sir.
@brianclark4796 10 years ago
Thank you for making this lecture available. Very helpful.
@anynamecanbeuse 4 years ago
This is just blowing my mind, I should say.
@vermajiutube 6 years ago
Awesome lecture. How do we estimate the weights w (theta_i) using Monte Carlo in the Bayesian setting?
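To make the question concrete: in Bayesian logistic regression the posterior over the weights has no closed form, so a Monte Carlo method such as random-walk Metropolis can draw samples from it and the posterior mean can serve as the estimate. A minimal sketch, assuming a 1-D toy dataset, a Gaussian prior, and step-size settings that are all illustrative (not from the lecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D toy data: one feature, binary labels.
X = np.array([-2.0, -1.0, -0.5, 0.5, 1.0, 2.0])
y = np.array([0, 0, 0, 1, 1, 1])

def log_posterior(w):
    """Log posterior: Bernoulli likelihood + N(0, 10) Gaussian prior on w."""
    logits = w * X
    # log p(y | w) in a numerically stable form: log(1 + e^z) = logaddexp(0, z)
    log_lik = np.sum(y * logits - np.logaddexp(0.0, logits))
    log_prior = -0.5 * w**2 / 10.0
    return log_lik + log_prior

# Random-walk Metropolis: propose w' ~ N(w, step^2), accept with
# probability min(1, p(w' | data) / p(w | data)).
w, step, samples = 0.0, 0.5, []
for _ in range(5000):
    w_prop = w + step * rng.normal()
    if np.log(rng.uniform()) < log_posterior(w_prop) - log_posterior(w):
        w = w_prop
    samples.append(w)

posterior = np.array(samples[1000:])  # discard burn-in
print("posterior mean of w:", posterior.mean())
```

The retained samples approximate the posterior over w, so any expectation (mean, credible interval) is just an average over them.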
@karthiks3239 10 years ago
This was a nice lecture, thank you. I was also looking for lectures on constrained optimization and support vector machines. Are these available?
@flamingxombie 7 years ago
Great lecture. By 'simulating' thetas, do you mean that we are drawing thetas from regions in proportion to their posterior probability?
@abdulrahmansattar2873 6 years ago
Superlike!
@JackSPk 5 years ago
Maybe I'm misunderstanding the 3D plot, but shouldn't it be Y=0={RED} and Y=1={BLUE} at 12:40?
@Jinex2010 8 years ago
22:00 I think J comes from Jacobian and H from Hessian.
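For context, the gradient ("J", related to the Jacobian) and the Hessian ("H") are what drive the Newton update for logistic regression, w ← w − H⁻¹g. A minimal sketch; the toy data and the small L2 term (added for numerical stability, equivalent to a Gaussian prior) are assumptions, not taken from the lecture:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical toy data: 4 samples, a bias column plus one feature.
X = np.array([[1.0, -2.0],
              [1.0, -1.0],
              [1.0,  0.5],
              [1.0,  1.5]])
y = np.array([0.0, 0.0, 1.0, 1.0])
lam = 1.0  # assumed L2 regularization strength

w = np.zeros(2)
for _ in range(10):
    p = sigmoid(X @ w)                                    # predicted probabilities
    grad = X.T @ (p - y) + lam * w                        # gradient of the regularized loss
    H = X.T @ np.diag(p * (1 - p)) @ X + lam * np.eye(2)  # Hessian of the same loss
    w = w - np.linalg.solve(H, grad)                      # Newton step: w <- w - H^{-1} grad

print("weights:", w)
```

Because the negative log-likelihood of logistic regression is convex, a handful of Newton steps is typically enough for convergence; this is the same iteration often called IRLS (iteratively reweighted least squares).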
@Raven-bi3xn 3 years ago
But the question is about the J of the cost function.
@Romis008 6 years ago
The prof repeatedly mentions that entropy is the opposite of information... I always thought entropy was the expected Shannon information content. Am I missing something? By the way, amazing content!
@Cindy-md1dm 5 years ago
Entropy measures uncertainty; it is the negative of information. As a probability approaches 1, you gain more information from that outcome and less uncertainty remains.
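The two comments are consistent: entropy is the expected Shannon information content (expected surprisal) of an outcome, so a high-entropy distribution means you currently know little and expect to learn a lot upon observing the outcome. A small numerical sketch:

```python
import numpy as np

def entropy(p):
    """Shannon entropy H(p) = -sum_i p_i log2 p_i = E[-log2 p], in bits."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]  # 0 * log 0 = 0 by convention
    return float(-np.sum(p * np.log2(p)))

print(entropy([0.5, 0.5]))    # fair coin: 1 bit, maximum uncertainty
print(entropy([0.99, 0.01]))  # near-certain coin: low uncertainty
print(entropy([1.0]))         # certain outcome: 0 bits, nothing left to learn
```

The fair coin maximizes entropy (you know nothing in advance), while a near-deterministic distribution has entropy close to zero, matching the reply's point that probability near 1 leaves little uncertainty.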