Thanks for watching! If you think I deserve it, please consider hitting that like button as it will help spread this channel. More breakdowns to come!
@martinleykauf6857 11 days ago
Hi! I'm currently writing my thesis and using PPO in my project. Your video was a great help in getting a more intuitive understanding of the algorithm! Keep it up man, very very helpful.
@user-mx9eu5bb7i 8 months ago
I like the clarity that your video provides. Thanks for this primer. A couple of things, though, were a bit unclear, and perhaps you could elaborate on them here in the comments.
- It wasn't obvious to me how/why you would submit all of the states at once (to either network) and update with an average loss, as opposed to training on each state independently. I get that we have an episode of related/dependent states here -- maybe that's why we use the average instead of the directly associated discounted future reward?
- Secondly, in your initial data-sampling stage you collected outputs from the policy network. During the training phase it looks like you're sampling again, but your values are different. How is this possible unless your network has changed somehow? Maybe you're using drop-out or something like that?
Forgive the questions -- I'm just learning about this methodology for the first time.
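To illustrate the first question, here is roughly what I understand "update with an average loss" to mean, as a PyTorch-style sketch (the network, shapes, and targets are made-up stand-ins, not the video's code):

```python
import torch

# Hypothetical: one episode's states and their discounted-return targets
states = torch.randn(128, 4)   # 128 states, 4 features each
targets = torch.randn(128, 1)  # discounted future reward per state

value_net = torch.nn.Linear(4, 1)  # stand-in for the value network
optimizer = torch.optim.Adam(value_net.parameters(), lr=3e-4)

# All states go through at once; the per-state losses are averaged
# into a single scalar before one parameter update.
predictions = value_net(states)
loss = torch.nn.functional.mse_loss(predictions, targets)  # mean over the batch
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Since the gradient of the mean loss is the mean of the per-state gradients, one batched step behaves like an average of the individual per-state steps rather than a sequence of them.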
@user-zl7km3jx1k a month ago
I'm also interested in the answer to the second question.
@vastabyss6496 8 months ago
What's the purpose of having a separate policy network and value network? Wouldn't the value network already give you the best move in a given state, since we can simply select the action the value network predicts will have the highest future reward?
@yeeehees2973 5 months ago
More to do with balancing exploration/exploitation: simply picking the maximum Q-value from the value network yields suboptimal results due to limited exploration. Alternatively, using only a policy network would yield too-noisy updates, resulting in unstable training.
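Roughly the difference, as a toy sketch (made-up numbers, not the video's code):

```python
import torch

# Value-network style: always exploit the best-looking action
q_values = torch.tensor([1.2, 0.9, 1.1, 0.8])
action_greedy = torch.argmax(q_values)  # deterministic: no exploration

# Policy-network style: sample from a distribution over actions
logits = torch.tensor([1.2, 0.9, 1.1, 0.8])
dist = torch.distributions.Categorical(logits=logits)
action_sampled = dist.sample()  # stochastic: exploration comes built in
```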
@sudiptasarkar4438 4 months ago
@@yeeehees2973 I feel that this video is misleading at 02:06. Previously I thought the value function's objective is to estimate the max reward value of the current state, but this guy is saying otherwise
@yeeehees2973 4 months ago
@@sudiptasarkar4438 the Q-values inherently try to maximize the future rewards, so the Q-value of being in a certain state can be interpreted as the maximum future reward given this state.
@patrickmann4122 3 months ago
It helps with something called “baselining”, which is a variance-reduction technique to improve policy gradients
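A minimal sketch of the idea (my own toy numbers, not the video's code):

```python
import torch

def discounted_returns(rewards, gamma=0.99):
    """Monte Carlo return G_t for each step of one episode."""
    out, g = [], 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        out.append(g)
    return torch.tensor(list(reversed(out)))

rewards = [0.0, 0.0, 1.0]                     # toy 3-step episode
returns = discounted_returns(rewards)         # G_t per step
values = torch.tensor([0.4, 0.5, 0.9])        # pretend value-net outputs V(s_t)
log_probs = torch.tensor([-0.7, -0.3, -0.5])  # pretend log pi(a_t | s_t)

# Baseline: subtract V(s_t) from the return. Same gradient in expectation,
# but much lower variance than weighting by the raw return.
advantages = returns - values
policy_loss = -(log_probs * advantages).mean()
```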
@user-vr3pt7yp9d 2 months ago
That’s because this kind of algorithm can handle continuous actions, unlike DQN. That’s the key point of combining policy gradients with Q-learning, which is where the value network comes in.
@user-ir1pm2pd1k 5 months ago
Hi! Great video! Could you answer my question about training the policy? This happens at 10:00. Why are the obtained action probabilities different from the probs taken while gathering data? I think we haven't changed the policy network before this step. So, if we haven't changed the network yet, at 10:08 we would have gotten ratio == 1 on every step(
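To illustrate what I mean, in a generic PPO loop (my own sketch, not the video's code) the ratio is exactly 1 on the first pass over a fresh batch and only drifts after the first gradient step:

```python
import torch

policy_net = torch.nn.Linear(4, 2)
optimizer = torch.optim.Adam(policy_net.parameters(), lr=1e-3)

# Rollout: act once and store the log-prob under the current policy
state = torch.randn(1, 4)
dist = torch.distributions.Categorical(logits=policy_net(state))
action = dist.sample()
old_log_prob = dist.log_prob(action).detach()

# Training: PPO reuses the same rollout for several epochs
for epoch in range(3):
    new_dist = torch.distributions.Categorical(logits=policy_net(state))
    ratio = torch.exp(new_dist.log_prob(action) - old_log_prob)
    print(epoch, ratio.item())  # epoch 0: exactly 1.0; later epochs drift
    loss = -ratio.mean()        # stand-in for the full clipped PPO loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

So I'd only expect ratios different from 1 once at least one update has already happened on that batch.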
@srivatsa1193 8 months ago
I've really enjoyed this series so far. Great work! The world needs more passionate teachers like yourself. Cheers!
@CodeEmporium 8 months ago
Thanks so much for the kind words. I really appreciate it :)
@ericgonzales5057 6 months ago
WHERE DID YOU LEARN THIS?!??! PLEASE ANSWER
@swagatochakraborty2583 5 months ago
Great presentation. One question: why is the policy network a separate network from the value network? It seems like the probability of the actions should be based on estimating the expected reward values.
In my Coursera course on Reinforcement Learning, I saw they were using the same network and simply copying over the weights from one to the other. So they were essentially time-shifted versions of the same network, trained just once.
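For comparison, a minimal sketch of the shared-trunk idea (my own naming, not the course's or the video's actual code):

```python
import torch
import torch.nn as nn

class ActorCritic(nn.Module):
    """One shared trunk with separate policy and value heads."""
    def __init__(self, obs_dim=4, n_actions=2, hidden=64):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(obs_dim, hidden), nn.Tanh())
        self.policy_head = nn.Linear(hidden, n_actions)  # action logits
        self.value_head = nn.Linear(hidden, 1)           # state value V(s)

    def forward(self, obs):
        h = self.trunk(obs)
        return self.policy_head(h), self.value_head(h)

logits, value = ActorCritic()(torch.randn(1, 4))
```

Both variants appear in practice: separate networks keep the policy and value losses from fighting over shared features, at the cost of more parameters.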
@ashishbhong5901 8 months ago
Good presentation and breakdown of concepts. Liked your video.
@burnytech a month ago
Great stuff mate
@ZhechengLi-wk8gy 8 months ago
Like your channel very much, looking forward to the coding part of RL.😀
@2_Tou 3 months ago
I think the calculation shown at 5:45 is not the advantage. The advantage of an action is calculated by taking the average value of all actions in that state and finding the difference between that average and the value of the action you are interested in. That calculation looks more like an MC target to me. Please point out if I made a mistake, because I always do...
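To spell out the two quantities being compared (standard textbook definitions, not the video's notation):

```latex
A(s,a) = Q(s,a) - V(s), \qquad V(s) = \mathbb{E}_{a \sim \pi}\left[Q(s,a)\right]
\qquad \text{vs.} \qquad
\hat{A}_t = G_t - V(s_t), \qquad G_t = \sum_{k \ge 0} \gamma^k r_{t+k}
```

Since E[G_t | s_t, a_t] = Q(s_t, a_t), subtracting the baseline V(s_t) from the MC return still gives an unbiased, if noisy, estimate of the advantage, so the two views can be reconciled.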
@0xabaki 6 months ago
haha finally no one has done quiz time yet! I propose the following answers:
0) seeing the opportunity cost of an action is low
1) A
2) B
3) D
@OPASNIY_KIRPI4 7 months ago
Please explain how you can apply backpropagation over the network using just a single loss number. As far as I understand, an input vector and a target vector are needed to train a neural network. I would be very grateful for an explanation.
@CodeEmporium 7 months ago
The single loss is “back propagated” through the network to compute the gradient of the loss with respect to each parameter of the network. This gradient is then used by an optimizer algorithm (like gradient descent) to update the neural network parameters, effectively “learning”. I have a video coming out tomorrow explaining back propagation in my new playlist “Deep Learning 101”, so do keep an eye out for it.
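For instance, in PyTorch (a tiny generic example, not the code from the video):

```python
import torch

net = torch.nn.Linear(3, 1)
x = torch.randn(8, 3)       # input vectors
target = torch.randn(8, 1)  # target vector

loss = ((net(x) - target) ** 2).mean()  # a single scalar number
loss.backward()  # computes d(loss)/d(parameter) for every parameter

for p in net.parameters():
    print(p.grad.shape)  # one gradient per parameter, all from one scalar loss
```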
@OPASNIY_KIRPI4 7 months ago
Thanks for the answer! I'm waiting for a video on this topic.
@victoruzondu6625 5 months ago
What are vf updates, and how do we get the value for our clipped ratio? You didn't seem to explain them. I could only tell the last quiz answer is B because the other options concern the policy network, not the value network.
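From other PPO write-ups, I gather "vf" means value function (its loss gets its own update term) and the clip value is a fixed hyperparameter epsilon, often 0.2. My sketch of the standard clipped objective, in case it helps others:

```python
import torch

def ppo_clipped_loss(ratio, advantage, eps=0.2):
    """Standard PPO-clip objective (written as a loss to minimize)."""
    unclipped = ratio * advantage
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps) * advantage
    return -torch.min(unclipped, clipped).mean()

ratio = torch.tensor([0.7, 1.0, 1.4])       # pi_new / pi_old per step
advantage = torch.tensor([1.0, -0.5, 2.0])
print(ppo_clipped_loss(ratio, advantage))
```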
@inderjeetsingh2367 8 months ago
Thanks for sharing 🙏
@CodeEmporium 8 months ago
My pleasure! Thank you for watching
@footube3 7 months ago
Could you please explain what up, down, left, and right signify? In which data structure are we going up, down, left, or right?
@CodeEmporium 7 months ago
Up, down, left, and right are individual actions that an agent can possibly take. You could store these in an “enum” and sample a random action from it.
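For example, a quick sketch:

```python
import random
from enum import Enum

class Action(Enum):
    UP = 0
    DOWN = 1
    LEFT = 2
    RIGHT = 3

action = random.choice(list(Action))  # sample a random action uniformly
print(action)
```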
@obieda_ananbeh 8 months ago
Thank you!
@pushkinarora5800 26 days ago
Q1: B, Q2: B, Q3: B
@zakariaabderrahmanesadelao3048 8 months ago
The answer is B.
@CodeEmporium 8 months ago
Ding ding ding for Quiz 1!
@paull923 8 months ago
Great video! Especially the quizzes are a good idea. B, B, B I'd say
@CodeEmporium 8 months ago
Thanks so much! It’s fun making them too. I thought it would be a good way to engage. And yep the 3 Bs sound right to me too 😊
@BboyDschafar 8 months ago
FEEDBACK. Either from experts/teachers, or from the environment.
@sashayakubov6924 3 months ago
I didn't understand anything... apparently I'll need to ask ChatGPT for clarifications