Lecture 3 | Learning, Empirical Risk Minimization, and Optimization

16,413 views

Carnegie Mellon University Deep Learning

1 day ago

Carnegie Mellon University
Course: 11-785, Intro to Deep Learning
Offering: Fall 2019
For more information, please visit: deeplearning.cs.cmu.edu/
Contents:
• Training a neural network
• Perceptron learning rule
• Empirical Risk Minimization
• Optimization by gradient descent
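The last topic, optimization by gradient descent, can be sketched in a few lines. This is an illustrative sketch only (the objective function and learning rate below are arbitrary choices, not taken from the lecture):

```python
# Minimal gradient descent sketch (illustrative; the objective
# f(w) = (w - 3)^2 and the learning rate are arbitrary choices).
def grad_descent(grad, w0, lr=0.1, steps=100):
    w = w0
    for _ in range(steps):
        w = w - lr * grad(w)  # step against the gradient direction
    return w

# Gradient of f(w) = (w - 3)^2 is 2 * (w - 3); the minimum is at w = 3.
w_star = grad_descent(lambda w: 2 * (w - 3), w0=0.0)
print(w_star)  # converges toward 3.0
```

Each update moves the parameter a small step opposite the gradient; for this convex quadratic the error shrinks geometrically, so 100 steps suffice for convergence.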

Comments: 19
@chovaus 3 days ago
Best course on deep learning. It's now 2024 and I'm happy I found it again. Well done!
@ian-haggerty 2 months ago
Thank you again to Carnegie Mellon University & Bhiksha Raj. I find these lectures fascinating.
@ErolAspromatis 3 years ago
The professor with the sword is the Conan of Machine Learning!
@devops8729 4 years ago
Thanks for sharing the knowledge. Amazing content and professor.
@ahnafsamin3777 2 years ago
The teacher seems to be so mean to his students! Quite surprised to see this at CMU!
@joelwillis2043 2 years ago
cry more baby
@adamatkins8496 2 years ago
he doesn't have time for idiots!
@ZapOKill 2 years ago
3 minutes into the lecture, and by now I would have left twice... and watched it on YouTube, where I can use my phone.
@jijie133 4 years ago
Great!
@sansin-dev 3 years ago
What is a good textbook or reference book to follow along with this lecture?
@insoucyant 3 years ago
Thank You.
@anuraglahon8572 4 years ago
I want to attend the class
@mastercraft117 11 months ago
Does someone know where I can get the assignments for this class?
@bhargavram3480 4 years ago
Dear Professor, at around timestamp 57:25 we go from the integral to an average sum. On what basis are we substituting P(X) = 1/N? What justifies the assumption that the PDF of X is uniform?
@paulhowrang 3 years ago
There is no basis for that; the question is whether you want to assume a distribution over the data at all. It is a parsimonious approach: when we do not know the distribution, take the least "informative" one, i.e., the uniform distribution. This way we assume no prior information about the data. But if you have a prior, feel free to use it!
@lusvd 2 years ago
We are not substituting P(X) = 1/N. On slide 109, last equation (in red): the law of large numbers (LLN) states that the RHS will converge to the LHS as N -> infinity. In other words, we do not know P(X), and we don't need to, because we can estimate the expected value using the LLN.
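The point in this reply can be checked numerically: by the law of large numbers, the sample average of a loss converges to its expectation under the true (unknown) distribution, with no uniformity assumption needed. A small illustrative sketch (the Gaussian data and squared loss are arbitrary choices, not from the lecture):

```python
import random

random.seed(0)

# True distribution: X ~ Normal(0, 1); loss(x) = x**2.
# E[loss(X)] = Var(X) = 1, but the estimator never uses that fact:
# it only averages losses over samples, exactly as in empirical risk.
def empirical_risk(loss, samples):
    return sum(loss(x) for x in samples) / len(samples)

N = 200_000
samples = [random.gauss(0.0, 1.0) for _ in range(N)]
est = empirical_risk(lambda x: x * x, samples)
print(est)  # close to the true expectation, 1.0
```

No value of P(X) is ever plugged in; the 1/N factor is just the averaging in the Monte Carlo estimate, not a uniform-density assumption.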
@pratoshraj3679 4 years ago
Wish I was his student
@CharlesVanNoland 1 year ago
240 students didn't even show up? These are the people developing our operating systems, our web-stack platforms, our applications and software. They're lazy bums who aren't even passionate about their field, as was the case 20 years ago. Software used to be written by people who wanted to code whether they were rich or poor. It was in their blood. Now 90% of the industry is flooded with people who want the Sillyclown Valley lifestyle but don't care for the work. The industry only exists because of people who loved the work; the lifestyle was just a bonus.
@pranjalgupta2072 4 years ago
Hey man, satisfying your personal ego with a YouTube dislike is not cool.