Tutorial 6 - Chain Rule of Differentiation with BackPropagation

182,469 views

Krish Naik

5 years ago

In this video we discuss the chain rule of differentiation, which is the basic building block of backpropagation.
Below are the various playlists created on ML, Data Science, and Deep Learning. Please subscribe and support the channel. Happy learning!
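For reference, the chain rule in its basic scalar form, which backpropagation applies repeatedly, layer by layer:

```latex
% Chain rule for a composition L(w) = f(g(w))
\frac{dL}{dw} = \frac{df}{dg} \cdot \frac{dg}{dw}
```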
Complete Deep Learning: • Tutorial 1- Introducti...
Data Science Projects playlist: • Generative Adversarial...
NLP playlist: • Natural Language Proce...
Statistics Playlist: • Population vs Sample i...
Feature Engineering playlist: • Feature Engineering in...
Computer Vision playlist: • OpenCV Installation | ...
Data Science Interview Question playlist: • Complete Life Cycle of...
You can buy my book on Finance with Machine Learning and Deep Learning from the URL below.
amazon url: www.amazon.in/Hands-Python-Fi...
🙏🙏🙏🙏🙏🙏🙏🙏
YOU JUST NEED TO DO
3 MAGICAL THINGS
LIKE
SHARE
&
SUBSCRIBE
TO MY YOUTUBE CHANNEL
📚📚📚📚📚📚📚📚

Comments: 240
@debtanudatta6398 · 3 years ago
Hello Sir, I think there is a mistake in this video for backpropagation. To find (del L)/(del w11^2), we don't need the PLUS part, since O22 doesn't depend on w11^2. Please look into that. The PLUS part is needed while calculating (del L)/(del w11^1), where both O21 and O22 depend on O11, and O11 depends on w11^1.
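For readers following the thread, a sketch of the two derivatives this comment distinguishes, in the video's notation (O_ij for neuron outputs, w11^k for a layer-k weight):

```latex
% No sum over paths: w_{11}^{2} reaches the loss only through O_{21}
\frac{\partial L}{\partial w_{11}^{2}}
  = \frac{\partial L}{\partial O_{31}}
    \cdot \frac{\partial O_{31}}{\partial O_{21}}
    \cdot \frac{\partial O_{21}}{\partial w_{11}^{2}}

% Sum over paths: w_{11}^{1} reaches the loss through both O_{21} and O_{22}
\frac{\partial L}{\partial w_{11}^{1}}
  = \frac{\partial L}{\partial O_{31}}
    \left( \frac{\partial O_{31}}{\partial O_{21}} \cdot \frac{\partial O_{21}}{\partial O_{11}}
         + \frac{\partial O_{31}}{\partial O_{22}} \cdot \frac{\partial O_{22}}{\partial O_{11}} \right)
    \frac{\partial O_{11}}{\partial w_{11}^{1}}
```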
@alinawaz8147 · 2 years ago
Yes brother, there is a mistake; what he said is correct.
@prakharagrawal4011 · 2 years ago
Yes, this is correct. Thank you for pointing this out.
@aaryankangte6734 · 2 years ago
true that
@vegeta171 · 1 year ago
You are correct about that, but I think he wanted to take the derivative w.r.t. O11, since it is present in both nodes f21 and f22; if we replace w11^2 in the equation with O11, the equation becomes correct.
@byiringirooscar321 · 1 year ago
It took me time to understand, but now I get the point. Thanks, man. I can assure you that @krish naik is the first professor I have.
@ksoftqatutorials9251 · 5 years ago
I don't need to calculate a loss function for your videos, and there's no need to propagate the video back and forward; i.e., you explained it in the easiest way I have ever seen from anyone. Keep doing more; looking forward to learning more from you. Thanks a ton.
@VVV-wx3ui · 4 years ago
This is simply yet superbly explained. When I learnt this earlier, it stopped at backpropagation. Now I've learnt what it is in backpropagation that updates the weights in an appropriate way, i.e., the chain rule. Thanks much for giving clarity that is easy to understand. Superb.
@OMPRAKASH-uz8jw · 1 year ago
You are none other than the perfect teacher; keep on adding playlists.
@abhishek-shrm · 4 years ago
This video explained everything I needed to know about backpropagation. Great video sir.
@manateluguabbaiinuk-mahanu761 · 2 years ago
The Deep Learning playlist concepts are very clear, and anyone can understand them easily. Really have to appreciate your efforts 👏🙏
@mranaljadhav8259 · 4 years ago
Well explained, sir! Before starting deep learning, I decided to start learning from your videos. You explain in a very simple way... anyone can understand from your videos. Keep it up, sir :)
@someshanand1799 · 3 years ago
Great video, especially how you give the concept behind it. Love it... thank you for sharing it with us.
@akumatyy · 3 years ago
Awesome, sir. I am watching your videos after watching Andrew Ng's deep learning lectures. I'd say you explain even more simply. Superb.
@kshitijzutshi · 2 years ago
Yes man, he's very good.
@aj_actuarial_ca · 1 year ago
Your videos are really helping me learn machine learning as an actuarial student from a pure commerce/finance background.
@AmitYadav-ig8yt · 4 years ago
It has been years since I solved any mathematics question paper or looked at a mathematics book, but the way you explained this was better than Ph.D. professors at the university. I did not feel away from mathematics at all. LoL, I do not understand my professors but understand you perfectly.
@MrityunjayD · 4 years ago
Really appreciate the way you taught the chain rule... awesome.
@ganeshvhatkar9040 · 4 months ago
One of the best videos I have seen in my life!!
@manjunath.c2944 · 4 years ago
Clearly understood; much appreciation for your effort :)
@aditideepak8033 · 3 years ago
You have explained it very well. Thanks a lot!
@shrutiiyer68 · 3 years ago
Thank you so much for all your efforts to give such an easy explanation🙏
@nishitnishikant8548 · 3 years ago
Of the two connections from f11 to the second hidden layer, w11^2 affects only f21 and not f22 (which is affected by w21^2). So dL/dw11^2 will only have one term instead of two. Anyone, please correct me if I am wrong.
@sahilvohra8892 · 2 years ago
I agree. I don't know why others didn't notice this same mistake!!!
@mustaphaelammari1128 · 2 years ago
I agree, I was looking for someone with the same remark :)
@ismailhossain5114 · 2 years ago
That's the point I was actually looking for.
@saqueebabdullah9142 · 2 years ago
Exactly, because if I expand the derivative with the two terms it gives dL/dw11^2 = dL/dw11^2 + dL/dw12^2, which is wrong.
@RUBAYATKHAN89 · 2 years ago
Absolutely.
@rajeeevranjan6991 · 4 years ago
Simply one word: "Great".
@tarun4705 · 1 year ago
This is the clearest mathematical explanation I have seen till now.
@moksh5743 · 8 months ago
kzfaq.info/get/bejne/f96cZtGq0LGraYE.html
@RomeshBorawake · 3 years ago
Thank you for the perfect DL playlist to learn from. Wanted to highlight one change to make it 100% useful (already at 99.99%): at 13:04, for every epoch the loss decreases, adjusting toward the global minimum.
@vishnukce · 9 months ago
But for negative slopes doesn't the loss have to increase to reach the global minimum?
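To see why the loss keeps decreasing whatever the sign of the slope, here is a minimal gradient-descent sketch in Python. The single weight, example data, and learning rate eta are made-up illustrations; the update rule w_new = w_old - eta * dL/dw is the one from the video.

```python
# Minimal gradient-descent sketch: one weight, squared-error loss.
# The "network" is just y_pred = w * x, purely for illustration.
x, y_true = 2.0, 8.0          # one training example (hypothetical numbers)
w, eta = 0.5, 0.05            # initial weight and learning rate

for epoch in range(10):
    y_pred = w * x
    loss = (y_true - y_pred) ** 2
    grad = -2.0 * (y_true - y_pred) * x   # dL/dw via the chain rule
    w -= eta * grad                       # step against the gradient
    print(f"epoch {epoch}: loss={loss:.4f}, w={w:.4f}")
```

When the slope is negative the update increases w, and when it is positive it decreases w; either way the step moves against the gradient, so the loss heads toward the minimum, not away from it.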
@varunsharma1331 · 1 year ago
Great explanation. I had been looking for this clarity for a long time...
@manikosuru5712 · 5 years ago
Amazing videos... only one word to say: "Fan".
@devgak7367 · 4 years ago
Just awesome explanation of gradient descent.
@saritagautam9328 · 3 years ago
This is really cool. Understood it for the first time. Hats off, man.
@chandanbp · 4 years ago
Great stuff for free. Kudos to you and your channel
@vishalshukla2happy · 4 years ago
Great way to explain, man... keep going.
@skviknesh · 3 years ago
Thanks! That was really awesome.
@sandeepganage9717 · 4 years ago
Brilliant explanation!
@channel8048 · 1 year ago
Thank you so much for this! You are a good teacher.
@aminzaiwardak6750 · 4 years ago
Thank you, sir; you explain very well. Keep it up.
@ZIgoTTo10000 · 2 years ago
You have saved my life, I owe you everything.
@uddalakmitra1084 · 2 years ago
Excellent presentation, Krish sir... you are great.
@adityashewale7983 · 1 year ago
Hats off to you, sir. Your explanation is top level. Thank you so much for guiding us...
@tanvirantu6623 · 3 years ago
Love you sir, love your effort. Love from Bangladesh.
@hashimhafeez21 · 3 years ago
For the first time I understood very well, thanks to your explanation.
@chartinger-arman · 4 years ago
OP... nice teaching... why don't we get teachers like you in every institute and college??
@maheshvardhan1851 · 5 years ago
Great effort...
@mohammedsaif3922 · 3 years ago
Krish, you're awesome. I finally understood the chain rule from you. Thanks again, Krish.
@camilogonzalezcabrales2227 · 4 years ago
Excellent video. I'm new to the field; could someone explain to me how the O's are obtained? Are the O's the result of each neuron's computation? Are the O's numbers or equations?
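Each O here is a neuron's output: the activation function applied to the weighted sum of the neuron's inputs plus a bias, so it is a number, not an equation. A minimal sketch, assuming a sigmoid activation and made-up weights:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x  = np.array([1.0, 2.0])    # input features x1, x2
w1 = np.array([0.3, -0.1])   # weights into the first hidden neuron (hypothetical)
b1 = 0.5                     # bias (hypothetical)

O11 = sigmoid(np.dot(w1, x) + b1)  # the neuron's output: a single number
print(O11)
```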
@tintintintin576 · 4 years ago
Such a helpful video :) thanks.
@bibhutiswain175 · 4 years ago
Really helpful for me.
@sekharpink · 5 years ago
Very, very good explanation... very understandable. May I know in how many days you're planning to complete this entire playlist?
@dnakhawa · 4 years ago
You are too good, Krish. Nice data science content.
@deepaktiwari9854 · 3 years ago
Nice informative video; it helped me understand the concept. But I think there is a mistake at the end: you should not add the other path when calculating the derivative for w11^2. The addition is needed if we are calculating the derivative for O11. dL/dw11^2 = (dL/dO31 * dO31/dO21 * dO21/dw11^2)
@grownupgaming · 2 years ago
Yes Deepak, I noticed the same thing. There's a mistake around 12:21; no addition is needed.
@anupampurkait6066 · 2 years ago
Yes Deepak, you are correct. I think the same.
@albertmichaelofficial8144 · 1 year ago
Is that because we are calculating based on O31, and O31 depends on both outputs from the second layer?
@sundara2557 · 4 years ago
I am going through your videos. You are rocking, bro.
@meanuj1 · 5 years ago
Nice; and requesting you to please add some videos on optimizers...
@arpitdas2530 · 4 years ago
Your teaching is great, sir. But can we also get some videos on how to apply this practically in Python?
@punyanaik52 · 4 years ago
Bro, there is a correction needed in this video... watch the last 3 minutes and correct the mistake. Thanks for your efforts.
@aaryamansharma6805 · 3 years ago
You're right.
@saygnileri1571 · 2 years ago
Nice one, thanks a lot!
@cynthiamoricordova5099 · 3 years ago
Thank you so much for all your videos. I have a question with respect to the value assigned to the bias. Is this value random? I will appreciate your answer.
@hokapokas · 5 years ago
Loved it, man... great effort in explaining the maths behind it and the chain rule. Please make a video on its implementation soon. As usual, great work... looking forward to the videos. Cheers.
@shivamjalotra7919 · 4 years ago
Hello Sunny, I myself have stitched an absolutely brilliant repository explaining all the implementation details behind an ANN. See this: github.com/jalotra/Neural_Network_From_Scratch
@kshitijzutshi · 2 years ago
@@shivamjalotra7919 Great effort. Starred it. ⭐👍🏼
@shivamjalotra7919 · 2 years ago
@@kshitijzutshi Try to implement it yourself from scratch. See George Hotz's Twitch streams for this.
@kshitijzutshi · 2 years ago
@@shivamjalotra7919 Any recommendations for understanding the image segmentation problem using CNNs? Resources?
@ZaChaudhry · 1 year ago
❤. God bless you, Sir.
@enquiryadmin8326 · 4 years ago
In the backpropagation calculation of gradients using the chain rule for w11^1, I think we need to consider 6 paths. Please kindly clarify.
@aravindvarma5679 · 4 years ago
Thanks Krish...
@shashireddy7371 · 4 years ago
Well explained video.
@vishaljhaveri6176 · 2 years ago
Thank you sir.
@quranicscience9631 · 4 years ago
Very good content.
@pranjalbahore6983 · 2 years ago
So insightful, @krish.
@kamranshabbir2734 · 5 years ago
Is the last partial derivative of the loss we calculated w.r.t. w11^2 correct? How is it shown to depend on two paths, one through w11^2 and the other through w12^2? Please make it clear, I am confused about it.
@wakeupps · 5 years ago
I think this is wrong! Maybe he wanted to discuss w11^1? However, a fourth term should then be added to the sum. Idk.
@imranuddin5526 · 4 years ago
@@wakeupps Yes, I think he got confused and it was w11^1.
@Ip_man22 · 4 years ago
Assume he is explaining w11^1 and you'll understand everything. From the diagram itself, you can see the connections and can clearly tell which weights depend on each other. Hope this helps.
@akrsrivastava · 4 years ago
Yes, he should not have added the second term in the summation.
@gouravdidwania1070 · 2 years ago
@@akrsrivastava Correct, no second term is needed for w11^2.
@good114 · 2 years ago
Thank you Sir 🙏🙏🙏🙏♥️☺️♥️
@viveksm863 · 3 years ago
I'm able to understand the concepts you're explaining, but I don't know where we get the values for the weights in forward propagation. Could you brief us on that once, if possible?
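On where the weights come from: before training they are typically initialized to small random values, and backpropagation then refines them epoch by epoch. A sketch, assuming a simple normal-distribution initialization:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# A layer with 2 inputs and 3 neurons: start from small random weights
# and zero biases; training then updates both via backpropagation.
W1 = rng.normal(loc=0.0, scale=0.1, size=(2, 3))
b1 = np.zeros(3)
print(W1)
```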
@ga43ga54 · 5 years ago
Can you please do a live Q&A session!? Great video... Thank you.
@krishnaik06 · 5 years ago
Let me upload some more videos, then I will do a Live Q&A session.
@saitejakandra5640 · 5 years ago
Please upload ROC-AUC related concepts.
@tobiasfan5407 · 11 months ago
Thank you, sir.
@utkarshashinde9167 · 3 years ago
Sir, if we give every single neuron in a hidden layer the same weights, features, and bias, then what is the use of multiple neurons in a single layer?
@yuvi12 · 4 years ago
But sir, other sources on the internet show a different loss function. Which one should I believe?
@sivaveeramallu3645 · 4 years ago
Excellent, Krish.
@louerleseigneur4532 · 3 years ago
Thanks, sir.
@pranjalgupta9427 · 2 years ago
Nice 👍👏🥰
@ruchikalalit1304 · 4 years ago
At 10:28 - 11:22, Krish, do we need both paths to be added, since w11^2 is not affected by the lower path, i.e., w12^2? Please tell.
@amit_sinha · 4 years ago
The second part of the summation should not come into the picture, as it would appear only when calculating dL/dw12^2.
@latifbhanger · 4 years ago
@@amit_sinha I think that is correct.
@niteshhebbare3339 · 3 years ago
@@amit_sinha Yes, I have the same doubt!
@vishaldas6346 · 3 years ago
Not required; it's not correct, as w11^2 is not affected by the lower weights. The first part is correct, and the summation is required when we are thinking about w11^1.
@grownupgaming · 2 years ago
@@vishaldas6346 Yes!
@iamneela · 4 years ago
Hey, thanks for everything. I have a question: how do we know that we have reached the global minimum? Does the loss function become 0 or 1 at the global minimum? Thanks.
@shivamjalotra7919 · 4 years ago
The loss function defines how far your predicted output is from the actual output in the output's vector space; by definition it is a real number. Coming to your question: the loss will converge toward 0, since a neural network is a universal function approximator, i.e., it can hypothesise the actual function mapping from input data points to output data points.
@sapito169 · 1 year ago
Finally I understand it.
@sandipansarkar9211 · 4 years ago
Yeah, I did understand the chain rule, but as a fresher, please provide some easy-to-study articles on the chain rule so I can deepen my understanding before proceeding further.
@sekharpink · 5 years ago
Hi Krish, please upload videos on a regular basis. I'm eagerly waiting for your videos. Thanks in advance.
@krishnaik06 · 5 years ago
Uploaded; please check tutorial 7.
@sekharpink · 5 years ago
@@krishnaik06 Thank you... please keep posting more videos... I'm really waiting to watch them... really liked your way of explaining.
@skc1995 · 4 years ago
Sir, what are the Jacobian and Hessian, and how do I define them for my objective function in Python? If you could address that, it would be a huge help.
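This isn't covered in the video, but in short: the Jacobian is the matrix of first partial derivatives of a vector-valued function, and the Hessian is the matrix of second partial derivatives of a scalar function. A finite-difference sketch in plain NumPy (a rough approximation for illustration, not a library API):

```python
import numpy as np

def jacobian(f, x, eps=1e-6):
    """Approximate the Jacobian of f at x by forward differences."""
    x = np.asarray(x, dtype=float)
    f0 = np.atleast_1d(f(x))
    J = np.zeros((f0.size, x.size))
    for i in range(x.size):
        xp = x.copy()
        xp[i] += eps
        J[:, i] = (np.atleast_1d(f(xp)) - f0) / eps
    return J

def hessian(f, x, eps=1e-5):
    """Approximate the Hessian of scalar f at x: the Jacobian of its gradient."""
    grad = lambda v: jacobian(f, v, eps).ravel()
    return jacobian(grad, x, eps)

# Example: f(x) = x0^2 + 3*x0*x1 has Hessian [[2, 3], [3, 0]]
H = hessian(lambda v: v[0] ** 2 + 3 * v[0] * v[1], np.array([1.0, 2.0]))
print(np.round(H, 2))
```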
@latifbhanger · 4 years ago
Awesome, mate. However, I think you got carried away with adding the second part; read the comments below and correct it, please. w12^2 may not need to be added. But it all makes sense. A very good explanation.
@mohamedanasselyamani4323 · 3 years ago
Same remark concerning w12^2. Good job, Krish Naik, and thank you for your efforts.
@ravikumarhaligode2949 · 3 years ago
Hi both, I also have the same query.
@omkarpatil2854 · 4 years ago
Thank you for the great explanation. I have a question: with this formula, what is generated for dL/dw11^2 is exactly the same as for dL/dw12^2, am I right? Do both weights get the same adjustment during backpropagation (though the old W values will be different)?
@SunnyKumar-tj2cy · 4 years ago
Same question. What I think: since we are finding the new weights w11^2 and w12^2 for HL2, the two should be different and should not be added, or I am missing something.
@abhinaspadhi8351 · 4 years ago
@@SunnyKumar-tj2cy Yeah, both should not be added, as they are different...
@spurthygopal1239 · 4 years ago
Yes, I have the same question too!
@varunmanjunath6204 · 3 years ago
@@abhinaspadhi8351 It's wrong.
@pratikchakane5148 · 4 years ago
If we are calculating the updated weight w11^2, then why do we need to add the w12^2 path?
@JaySingh-gv8rm · 4 years ago
How can we compute dL/dO31? What is the formula to find dL/dO31?
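Assuming the squared-error loss used in this series, L = (y - O31)^2 with y the actual value, the first factor of the chain comes directly from differentiating the loss:

```latex
L = \left(y - O_{31}\right)^{2}
\quad\Rightarrow\quad
\frac{\partial L}{\partial O_{31}} = -2\,\left(y - O_{31}\right)
```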
@tabilyst · 3 years ago
Hi Krish, can you please let me know: if we are calculating the derivative for the w11^2 weight, why are we adding the derivative through the w12^2 weight? Please clarify.
@Skandawin78 · 4 years ago
Do you update the bias during backpropagation along with the weights, or does it remain constant after initialization?
@krishnaik06 · 4 years ago
Yes, we have to update the bias too.
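For completeness, the bias update has the same form as the weight update, with eta the learning rate:

```latex
b_{\text{new}} = b_{\text{old}} - \eta \, \frac{\partial L}{\partial b}
```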
@hafi029 · 3 years ago
A doubt about dL/dw11^2: is that correct?? Do we need to add?
@jpovando25 · 4 years ago
Hello. Do you know how to build neural networks using the Statistica software?
@DP-od4yr · 3 years ago
Hi sir... please show the path for w11^1... thanks!!
@pratikgudsurkar8892 · 4 years ago
We are solving a supervised learning problem, which is why the loss is actual minus predicted. What about the unsupervised case, where we don't have the actual y? How is the loss calculated, and how does the update happen?
@benvelloor · 4 years ago
I don't think there will be backpropagation in unsupervised learning!
@dipankarrahuldey6249 · 3 years ago
I think the dL/dw11^2 part should be (dL/dO31 * dO31/dO21 * dO21/dw11^2). If we are taking the derivative of L w.r.t. w11^2, then w12^2 doesn't come into play. In that case, dL/dw12^2 = (dL/dO31 * dO31/dO22 * dO22/dw12^2).
@raj4624 · 2 years ago
Agree... dL/dw11^2 should be (dL/dO31 * dO31/dO21 * dO21/dw11^2), with no extra addition.
@aswinthviswakumar64 · 3 years ago
Great video and a great initiative, sir. From 12:07, if we use the same method to calculate dL/dw12^2, it will be the same as dL/dw11^2. Is this the correct way, or am I getting it wrong? Thank you!
@vishalgupta3175 · 3 years ago
Hi sir, sorry to ask, but which degree have you completed? You are awesome!
@Pink_Bear_ · 1 year ago
Here we use the optimizer to update the weight, and the slope is dL/dw. So is the w here w_old, or something else?
@waynewu7763 · 20 days ago
How do you take the derivative dO31/dO21? What kind of equations are those?
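They are derivatives of one neuron's output with respect to one of its inputs. As a sketch, assuming O31 is a sigmoid neuron fed by O21 and O22 (the second weight name here is illustrative):

```latex
O_{31} = \sigma\left(w_{11}^{3} O_{21} + w_{21}^{3} O_{22} + b_{3}\right)
\quad\Rightarrow\quad
\frac{\partial O_{31}}{\partial O_{21}}
  = \sigma'\left(\cdot\right) w_{11}^{3}
  = O_{31}\left(1 - O_{31}\right) w_{11}^{3}
```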
@gunjanagrawal8626 · 1 year ago
Could you please recheck the video at around 11:00? The w11 weight update should be independent of w12.
@vishalsavade3867 · 4 years ago
Sir, how do we determine the global minimum??
@rajshekharrakshit9058 · 3 years ago
Sir, I think one thing you are doing is wrong. As w11^3 impacts O31, there is an activation part here, so dL/dw11^3 = dL/dO31 * dO31/df1 * df1/dw11^3. I might be wrong; can you please clear up my query?
@chaitanyakumarsomagani592 · 3 years ago
Krish sir, only if w12^2 depended on w11^2 could we do that differentiation. w12^2 goes one way and w11^2 goes another way.
@jontyroy1723 · 1 year ago
In the step where dL/dw[2]11 was shown as the addition of two separate chain-rule outputs, should it not be dL/dw[2]1?
@mikelrecacoechea8730 · 2 years ago
Hey Krish, good explanation. I think there is one correction: at the end you explained it for w11^2, but what I feel is, it is for w11^1.
@tobiasfan5407 · 11 months ago
Subscribed.
@karthikprasad7991 · 4 years ago
Thanks a lot for this video; some people just won't explain it properly.
@jerryys · 3 years ago
Great job! Does the last derivative need the second part? I don't get it.
@kartikesood8242 · 3 years ago
d(O22) will also be differentiated, but with respect to w11^2, and thus it will come out to be zero. Hence, whether you take it or not, the result will be the same.