Machine learning - Logistic regression

39,139 views

Nando de Freitas


Logistic regression: Optimization and Bayesian inference via Monte Carlo.
Slides available at: www.cs.ubc.ca/~nando/540-2013/...
Course taught in 2013 at UBC by Nando de Freitas
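The description names the lecture's two themes: maximum-likelihood optimization and Bayesian inference via Monte Carlo. As a minimal sketch of the first theme (not taken from the slides — the toy data and function names here are hypothetical), logistic regression can be fit by gradient ascent on the Bernoulli log-likelihood:

```python
import numpy as np

def sigmoid(z):
    # Logistic function: maps real-valued scores to probabilities in (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, lr=0.1, steps=2000):
    # Maximize the Bernoulli log-likelihood by gradient ascent.
    # The gradient w.r.t. the weights w is X^T (y - sigmoid(X w)).
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        w += lr * X.T @ (y - sigmoid(X @ w))
    return w

# Toy 1-D dataset with a bias column; class 1 for larger x.
X = np.array([[1.0, -2.0], [1.0, -1.0], [1.0, 1.0], [1.0, 2.0]])
y = np.array([0.0, 0.0, 1.0, 1.0])

w = fit_logistic(X, y)
preds = (sigmoid(X @ w) > 0.5).astype(float)
```

The lecture also covers second-order (Newton) updates using the Hessian; the plain-gradient version above is just the simplest instance of the same likelihood-maximization idea.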

Comments: 14
@harry1357931gmail 11 years ago
Excellent lecture on logistic regression... you must go through it two or three times to absorb it completely.
@mrf145 9 years ago
Thank you. The lecture is quite easy to understand. You delivered it very well.
@18amarage 8 years ago
I really love your lecture, sir...
@brianclark4796 10 years ago
Thank you for making this lecture available. Very helpful.
@anynamecanbeuse 4 years ago
This is just blowing my mind, I should say.
@vermajiutube 6 years ago
Awesome lecture. How do we estimate w (theta_i) using MC in the Bayesian setting?
@karthiks3239 10 years ago
This was a nice lecture. Thank you. I was also looking for lectures on constrained optimization and support vector machines. Are these available?
@flamingxombie 7 years ago
Great lecture. By 'simulating' thetas, am I right that we are drawing thetas from regions in proportion to their probabilities?
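The sampling idea raised in this comment — visiting values of theta in proportion to their posterior probability — is exactly what Markov chain Monte Carlo does. A self-contained random-walk Metropolis sketch on a toy coin-flip model (the model, data, and step size here are hypothetical, chosen only to illustrate the mechanism, not the lecture's exact example):

```python
import numpy as np

rng = np.random.default_rng(0)

def log_post(theta, data):
    # Unnormalized log-posterior for a coin bias theta given Bernoulli data,
    # under a flat prior on (0, 1). (Toy model for illustration only.)
    if not (0.0 < theta < 1.0):
        return -np.inf
    heads = data.sum()
    tails = len(data) - heads
    return heads * np.log(theta) + tails * np.log(1.0 - theta)

data = np.array([1, 1, 1, 0, 1, 1, 0, 1, 1, 1])  # 8 heads, 2 tails

# Random-walk Metropolis: propose near the current theta and accept with
# probability min(1, p(proposal) / p(current)). In the long run the chain
# spends time in each region in proportion to its posterior probability.
theta, samples = 0.5, []
for _ in range(20000):
    prop = theta + rng.normal(scale=0.1)
    if np.log(rng.uniform()) < log_post(prop, data) - log_post(theta, data):
        theta = prop
    samples.append(theta)
samples = np.array(samples[5000:])  # discard burn-in
```

With a flat prior the exact posterior here is Beta(9, 3), whose mean is 0.75, so the empirical mean of the retained samples should land close to that value.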
@abdulrahmansattar2873 6 years ago
Superlike!
@JackSPk 5 years ago
Maybe I'm misunderstanding the 3D plot, but shouldn't it be Y=0={RED} and Y=1={BLUE} at 12:40?
@Jinex2010 8 years ago
22:00 I think J comes from Jacobian and H from Hessian.
@Raven-bi3xn 3 years ago
But the question is about the J of the cost function.
@Romis008 6 years ago
The prof repeatedly mentions that entropy is the opposite of information... I always thought entropy was the expected Shannon information content. Am I missing something? By the way, amazing content!
@Cindy-md1dm 5 years ago
Entropy measures uncertainty; it is the negative of information. When the probability gets close to 1, you gain more information and less uncertainty remains.
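To make this exchange concrete (a toy illustration, not from the video): the Shannon entropy of a Bernoulli outcome is largest at p = 0.5 (one full bit of uncertainty) and falls toward 0 as p approaches 1, matching the point that near-certain outcomes leave little uncertainty:

```python
import numpy as np

def bernoulli_entropy(p):
    # Shannon entropy, in bits, of a binary outcome with probability p.
    # Terms with zero probability contribute nothing (0 * log 0 := 0).
    return -sum(x * np.log2(x) for x in (p, 1.0 - p) if x > 0)

fair = bernoulli_entropy(0.5)    # maximal uncertainty: exactly 1 bit
biased = bernoulli_entropy(0.99) # near-certain outcome: close to 0 bits
```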