What is Back Propagation

  50,094 views

IBM Technology

1 year ago

Learn about watsonx → ibm.biz/BdyEjK
Neural networks are great for predictive modeling - everything from stock trends to language translation. But what if the answer is wrong? How do they “learn” to do better? Martin Keen explains that during a process called backward propagation, the generated output is compared to the expected output, and then the error contributed by each neuron (or “node”) is examined. By adjusting each node's weights and biases, the error is reduced and the overall accuracy is improved.
Get started for free on IBM Cloud → ibm.biz/sign-up-now
Subscribe to see more videos like this in the future → ibm.biz/subscribe-now
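To make the description above concrete, here is a minimal sketch of one training step in Python/NumPy. This is not the video's code: the network shape (2 inputs, 3 hidden nodes, 1 output), the sigmoid activation, the squared-error loss, and the learning rate are all illustrative assumptions.

```python
# A minimal, hypothetical sketch of one training step: forward pass,
# error measurement, backward propagation, and a gradient-descent update.
import numpy as np

rng = np.random.default_rng(0)
x = np.array([0.5, 0.1])        # input features (made up)
y_true = np.array([1.0])        # expected output (the label)

W1, b1 = rng.normal(size=(3, 2)), np.zeros(3)  # hidden layer: 3 nodes
W2, b2 = rng.normal(size=(1, 3)), np.zeros(1)  # output layer: 1 node

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Forward pass: generate an output from the current weights and biases.
h = sigmoid(W1 @ x + b1)
y_hat = sigmoid(W2 @ h + b2)

# Compare the generated output to the expected output (squared error).
loss = 0.5 * np.sum((y_hat - y_true) ** 2)

# Backward pass: the chain rule pushes the error back through each layer,
# giving the error contributed at the output and hidden nodes.
delta2 = (y_hat - y_true) * y_hat * (1 - y_hat)
delta1 = (W2.T @ delta2) * h * (1 - h)

# Update: adjust each node's weights and biases to reduce the error.
lr = 0.1
W2 -= lr * np.outer(delta2, h)
b2 -= lr * delta2
W1 -= lr * np.outer(delta1, x)
b1 -= lr * delta1
```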

Comments: 39
@vencibushy
@vencibushy 5 months ago
Back propagation is to neural networks what negative feedback is to closed-loop systems. The understanding comes pretty much naturally to people who have studied automation and control engineering. However, many articles tend to mix things up - in this case, back propagation and gradient descent. Back propagation is the process of passing the error back through the layers and using it to recalculate the weights. Gradient descent is the algorithm used for the recalculation. There are other algorithms for recalculating the weights.
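A scalar sketch of that separation (all numbers hypothetical): the backpropagation step produces the gradient, and gradient descent is just one interchangeable rule for consuming it.

```python
# Hypothetical one-weight "network" y_hat = w * x with squared-error loss.

def grad_of_loss(w, x, y_true):
    """Back propagation: return dL/dw via the chain rule."""
    y_hat = w * x
    return (y_hat - y_true) * x

def sgd_step(w, g, lr=0.1):
    """Plain gradient descent; momentum, Adam, etc. are drop-in alternatives."""
    return w - lr * g

w = 0.0
for _ in range(50):
    g = grad_of_loss(w, x=1.5, y_true=3.0)  # back propagation
    w = sgd_step(w, g)                      # gradient descent
print(round(w, 3))  # approaches 2.0, since 2.0 * 1.5 = 3.0
```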
@Kiera9000
@Kiera9000 11 months ago
Thanks for getting me through my exams - the script from my professor helps literally nothing with understanding deep learning. Cheers mate
@anant1870
@anant1870 1 year ago
Thanks for this great explanation MARK 😃
@ca1790
@ca1790 15 days ago
The gradient is passed backward using the chain rule from calculus. The gradient is just a multivariable form of the derivative. It is an actual numerical quantity for each "atomic" part of the network; usually a neuron's weights and bias.
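Spelled out for one illustrative case - a single weight w feeding a sigmoid neuron, with z = wx + b, y_hat = σ(z), and squared-error loss L = ½(y_hat − y)² - the chain rule composes three local derivatives:

$$
\frac{\partial L}{\partial w}
= \frac{\partial L}{\partial \hat{y}} \cdot \frac{\partial \hat{y}}{\partial z} \cdot \frac{\partial z}{\partial w}
= (\hat{y} - y)\,\sigma(z)\bigl(1 - \sigma(z)\bigr)\,x
$$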
@Mary-ml5po
@Mary-ml5po 1 year ago
I can't get enough of your brilliant videos. Thank you for making what seemed complicated to me before easy to understand. Could you please post a video about loss functions and gradient descent?
@im-Anarchy
@im-Anarchy 10 months ago
What did he even teach, actually?
@hamidapremani6151
@hamidapremani6151 2 months ago
Brilliantly simplified explanation for a fairly complex topic. Thanks, Martin!
@hashemkadri3009
@hashemkadri3009 2 months ago
marvin u mean, smh
@sakshammishra9232
@sakshammishra9232 9 months ago
Lovely man... excellent videos, all complexities eliminated. Thanks a lot 😊
@KamleshSingh-um9jy
@KamleshSingh-um9jy 10 days ago
Excellent session... thank you!!
@1955subraj
@1955subraj 8 months ago
Very well explained 🎉
@neail5466
@neail5466 1 year ago
Thank you for the information. Could you please tell me if BP is only available and applicable to supervised models, since we have to have a precomputed result to compare against? Certainly, unsupervised models could also use this in theory, but would it have a positive effect? Additionally, how is the comparison actually performed, especially for information that can't be quantised?
@rigbyb
@rigbyb 1 year ago
Great video! 😊
@sweealamak628
@sweealamak628 2 months ago
Thanks Mardnin!
@Zethuzzz
@Zethuzzz 3 months ago
Remember the chain rule you learned in high school? Well, that's what is used in backpropagation.
@guliyevshahriyar
@guliyevshahriyar 11 months ago
Thank you!
@msatyabhaskarasrinivasacha5874
@msatyabhaskarasrinivasacha5874 2 months ago
Awesome... awesome, superb explanation, sir
@idobleicher
@idobleicher 3 months ago
A great video!
@rishidubey8745
@rishidubey8745 20 days ago
thanks marvin
@stefanfueger3487
@stefanfueger3487 1 year ago
Wait... the video has been online for four hours... and still no question about how he manages to write mirrored?
@Aegon1995
@Aegon1995 1 year ago
There’s a separate video for that
@itdataandprocessanalysis3202
@itdataandprocessanalysis3202 1 year ago
🤦‍♂
@IBMTechnology
@IBMTechnology 1 year ago
Ha, that's so true. Here you go: ibm.biz/write-backwards
@tianhanipah9783
@tianhanipah9783 4 months ago
Just flip the video horizontally
@sahanseney134
@sahanseney134 13 days ago
cheers Marvin
@pleasethink4789
@pleasethink4789 10 months ago
Hi Marklin! Thank you for such a great explanation. (btw, I know your name is Martin. 😂 )
@ashodapakian2788
@ashodapakian2788 2 months ago
Off topic: what drawing-board setup do these IBM videos use? It's really great.
@boyyang1290
@boyyang1290 1 month ago
I'd like to know, too.
@boyyang1290
@boyyang1290 1 month ago
I found it - he is drawing on glass.
@Ellikka1
@Ellikka1 2 months ago
When computing the loss function, how is the “correct” output given? Is it training data that is then compared against another data file with the desired outcomes? In the “Martin” example, how does the neural network get to know that your name was not Mark?
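One hedged sketch of an answer: in supervised training, the “correct” output is a label stored alongside each input in the training set itself, and the loss compares the network's prediction to that label. All names and numbers below are made up for illustration.

```python
import numpy as np

# Hypothetical name classifier: each training example pairs input
# features with a label that says what the correct answer is.
classes = ["Mark", "Martin", "Marvin"]
x = np.array([0.2, 0.7, 0.1])       # input features (made up)
y_true = np.array([0.0, 1.0, 0.0])  # one-hot label: the answer is "Martin"

logits = np.array([1.2, 0.8, -0.3])            # network's raw outputs (made up)
probs = np.exp(logits) / np.exp(logits).sum()  # softmax probabilities

# Cross-entropy loss compares the prediction to the given label; here the
# network guesses "Mark" while the label says "Martin", so the loss
# penalizes the mistake.
loss = -np.sum(y_true * np.log(probs))
print(classes[int(probs.argmax())], round(float(loss), 3))
```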
@the1111011
@the1111011 10 months ago
Why didn't you explain how the network updates the weights?
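For what it's worth, the standard rule (not spelled out in the video) moves each weight a small step against its own gradient, where η is the learning rate; optimizers such as momentum or Adam modify this basic update:

$$
w_{ij} \leftarrow w_{ij} - \eta \, \frac{\partial L}{\partial w_{ij}}
$$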
@jaffarbh
@jaffarbh 1 year ago
Isn't back propagation used to lower the computation needed to adjust the weights? I understand that doing so in a “forward” fashion is much more expensive than in a “backward” fashion.
@boeng9371
@boeng9371 4 months ago
In IBM we trust ✊😔
@l_a_h797
@l_a_h797 1 month ago
5:36 Actually, convergence does not necessarily mean the network is able to do its task reliably. It just means that its reliability has reached a plateau. We hope that the plateau is high, i.e. that the network does a good job of predicting the right outputs. For many applications, NNs are currently able to reach a good level of performance. But in general, what is optimal is not always very good. For example, a network with just 1 layer of 2 nodes is not going to be successful at handwriting recognition, even if its model converges.
@mateusz6190
@mateusz6190 1 month ago
Hi, you seem to have good knowledge of this, so can I ask you a question, please? Do you know if neural networks would be good for recognizing handwritten math expressions (digits, operators, variables, with all elements separated to be recognized individually)? I need a program that does that, and I tried a neural network; it is good on images from the dataset but terrible on anything from outside the dataset. Would you have any tips? I would be really grateful.
@mohslimani5716
@mohslimani5716 1 year ago
Thanks, but I still need to understand how it technically happens.
@AnjaliSharma-dv5ke
@AnjaliSharma-dv5ke 1 year ago
It's done by calculating the derivatives of the y-hats with respect to the weights, working backwards through the network and applying the chain rule of calculus.
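One way to see that concretely is with automatic differentiation; the sketch below uses PyTorch (all values made up), where loss.backward() applies the chain rule from the output back to the weights and leaves the derivatives in w.grad.

```python
import torch

# Tiny made-up example: two outputs from a 2x2 weight matrix.
w = torch.tensor([[0.5, -0.2], [0.1, 0.4]], requires_grad=True)
x = torch.tensor([1.0, 2.0])
y_true = torch.tensor([1.0, 0.0])

y_hat = torch.sigmoid(w @ x)                   # forward pass
loss = 0.5 * torch.sum((y_hat - y_true) ** 2)  # squared-error loss

loss.backward()  # backward pass: chain rule, output -> weights
print(w.grad)    # dL/dw for every weight at once
```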
@Justme-dk7vm
@Justme-dk7vm 2 months ago
ANY CHANCE TO GIVE 1000 LIKES ???😩