
Machine learning - Neural networks

31,051 views

Nando de Freitas

11 years ago

Neural Networks
Slides available at: www.cs.ubc.ca/~nando/540-2013/...
Course taught in 2013 at UBC by Nando de Freitas

Comments: 12
@xbuchtak 10 years ago
I must agree, this is an excellent lecture and the easiest-to-understand explanation of backprop I've ever seen.
@DlVirgin 11 years ago
This is the best lecture on neural networks I have ever seen (and I have seen many). You very thoroughly explained every aspect of how ANNs work in a way that was easy to understand.
@JaysonSunshine 6 years ago
At 1:03:45, it is stated that the hyperbolic tangent function represents a solution to the vanishing gradient problem, but this is false according to Wikipedia (and other sources): en.wikipedia.org/wiki/Vanishing_gradient_problem. The ReLU activation function does help resolve this problem, though.
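
To make the comment's point concrete: tanh saturates, so its derivative 1 - tanh(x)^2 decays toward zero for large |x|, while the ReLU derivative stays at 1 for any positive input. A minimal numpy sketch of the two gradients (illustrative values only, not from the lecture):

    import numpy as np

    x = np.array([-10.0, -2.0, 0.5, 2.0, 10.0])

    # Derivative of tanh: 1 - tanh(x)^2 shrinks toward 0 as |x| grows,
    # so gradients vanish when many saturated tanh layers are chained.
    dtanh = 1.0 - np.tanh(x) ** 2

    # Derivative (subgradient) of ReLU: 1 for x > 0, else 0; it does not
    # shrink with |x|, which is why ReLU mitigates vanishing gradients.
    drelu = (x > 0).astype(float)

    print("tanh'(x):", dtanh)   # tanh'(10) is about 8.2e-09
    print("ReLU'(x):", drelu)   # [0. 0. 1. 1. 1.]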
@JaysonSunshine 6 years ago
There are errors at 1:01:58; the learning rate is missing from the batch equation, and in both cases it is more informative to switch the sign so it's clear we're moving opposite the gradient and the step size is positive.
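
For reference, the update the comment is describing is conventionally written w <- w - eta * grad E(w), with a positive learning rate eta and an explicit minus sign showing the move opposite the gradient. A minimal sketch of batch gradient descent in that form (the quadratic objective is a made-up example, not the one from the lecture):

    import numpy as np

    def batch_gradient_descent(grad, w0, eta=0.1, steps=100):
        # w <- w - eta * grad(w): the minus sign makes explicit that we
        # move opposite the gradient with a positive step size eta.
        w = np.asarray(w0, dtype=float)
        for _ in range(steps):
            w = w - eta * grad(w)
        return w

    # Example: minimize E(w) = ||w||^2, whose gradient is 2w.
    w_star = batch_gradient_descent(lambda w: 2.0 * w, w0=[3.0, -2.0])
    print(w_star)  # approaches [0. 0.]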
@lradhakrishnarao902 7 years ago
The videos and lectures are amazing; they have resolved a lot of my issues. However, I want to add something: where are the topics for SVM and HMM? Also, it would be nice if one or two complex equations were worked through to show how to solve them.
@6katei 10 years ago
I also agree.
@qdcs524gmail 9 years ago
Sir, may I know the activation function used in the 4-layer ANN example with the canary, where 4 output neurons (sing, move, etc.) are activated at the same time? Does each layer use the same activation function? Please advise. Thanks.
@tobiaspahlberg1506 8 years ago
Was there a reason why x_i1 and x_i2 were replaced by just x_i in the regression MLP example?
@chandreshmaurya8102 8 years ago
x_i is a vector with components x_i1 and x_i2. Shorthand notation.
@shekarforoush 7 years ago
Nope; if you look at the x_i values in the table, you can see they are scalars, so in this example, instead of having 2 inputs, we have only one input feature x at each time i.
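
Both readings of the notation come up in practice; a tiny numpy illustration of the difference, with made-up values:

    import numpy as np

    # Vector reading: each x_i collects two features (x_i1, x_i2).
    X_vec = np.array([[0.2, 1.5],
                      [0.7, 0.3]])   # row i is the vector x_i

    # Scalar reading: one feature per example, so x_i is just a number.
    x_scalar = np.array([0.2, 0.7])  # entry i is the scalar x_i

    print(X_vec[0])     # x_0 as a vector: [0.2 1.5]
    print(x_scalar[0])  # x_0 as a scalar: 0.2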
@im_sanjay 6 years ago
Can I get the slides?
@sehkmg 6 years ago
Just go to the course website and you'll find the slides: www.cs.ubc.ca/~nando/540-2013/lectures.html