K-shot learning is a hot topic in research. Let's understand one of the first core algorithms introduced to train meta-models: Model-Agnostic Meta-Learning (MAML).
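To give a feel for the two-level optimisation MAML performs, here is a minimal sketch on a toy scalar regression problem (this is an illustrative example, not the video's code; the task family, step sizes, and names are all assumptions): an inner gradient step adapts the shared initialisation to each task's support set, and the outer update differentiates through that step using the query-set loss.

```python
import random

random.seed(0)

def mse(w, data):
    # mean-squared error of the scalar linear model f(x) = w * x
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def grad(w, data):
    # dL/dw of the MSE above
    return 2.0 * sum(x * (w * x - y) for x, y in data) / len(data)

def maml_step(w, tasks, alpha=0.01, beta=0.01):
    """One MAML outer update over a batch of tasks.

    Each task is (support, query), both lists of (x, y) pairs.
    The outer gradient differentiates THROUGH the inner update; for
    this scalar model the chain rule is available in closed form:
        d/dw L_q(w - alpha * g(w)) = L_q'(w') * (1 - alpha * g'(w)),
    where g is the support-set gradient and g'(w) = 2 * mean(x_s^2).
    """
    outer = 0.0
    for support, query in tasks:
        w_adapted = w - alpha * grad(w, support)                 # inner-loop step
        dg_dw = 2.0 * sum(x * x for x, _ in support) / len(support)
        outer += grad(w_adapted, query) * (1.0 - alpha * dg_dw)  # second-order term
    return w - beta * outer / len(tasks)

def sample_task():
    # hypothetical task family: y = a * x, with slope a drawn per task
    a = random.uniform(-2.0, 2.0)
    make = lambda: [(x, a * x) for x in (random.gauss(0, 1) for _ in range(5))]
    return make(), make()

w = 0.0
for _ in range(200):
    w = maml_step(w, [sample_task() for _ in range(4)])
```

The returned `w` is the meta-learned initialisation: it is not optimal for any single task, but a single inner-loop step from it reduces the support loss of a freshly sampled task.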
Comments: 31
@jayk253 1 year ago
The diagram at the end makes it really easy to understand the mechanism. Thank you so much!
@TwinEdProductions 1 year ago
Glad it was useful!
@arminhejazian5306 2 years ago
The visualization at the end of the video helped me a lot.
@TwinEdProductions 2 years ago
That's great to hear! Thanks
@user-yr9sf2yr3n 4 months ago
Great video. You actually managed to unite the formulaic representation with the logic behind it. Thanks.
@TwinEdProductions 4 months ago
Thanks!
@xingangguo4169 2 years ago
This diagram is amazing! Love it
@TwinEdProductions 2 years ago
Thank you, glad you liked it!
@chantata 10 months ago
Thank you for your explanation! It is very easy to understand.
@TwinEdProductions 9 months ago
Thanks!
@mahadevprasadpanda6493 1 year ago
Thanks for the great explanation!!
@TwinEdProductions 1 year ago
Glad it was useful!
@joon0105 2 years ago
Cheers for the nice diagram at the end!
@TwinEdProductions 2 years ago
Thanks!
@jinking4662 7 months ago
Nice explanation!
@TwinEdProductions 7 months ago
Thanks!
@ulasfiliz759 2 months ago
What I don't understand about MAML: some papers use a parameter phi to represent the inner-loop (task-adapted) weights. Where in your diagram does the update of phi appear? Thank you very much.
@mainaksen1 1 year ago
Can anyone say whether MAML can be applied to binary classification? If we have a dataset that contains only dog vs. cat images, why would we need to apply MAML...
@shakibyazdani9276 2 years ago
Well explained
@TwinEdProductions 2 years ago
Cheers!
@saranpandian6882 1 year ago
What's the difference between transfer learning and meta learning?
@TwinEdProductions 1 year ago
Hi! Transfer learning is where a model trained on one task is applied to another (usually similar) task, while meta learning is generally about learning how to learn: for example, understanding which parts of the data are most valuable, or which approaches generate the best predictions on a given dataset (using machine learning techniques).
@John-sj5sk 1 year ago
6:10, how can I backpropagate? In the backpropagation algorithm, computing the gradient for a layer's weights requires the input and the output loss that passed through that layer, but the query set has never gone through the original model. How can I calculate the gradient? I can't understand...
@vyasraina3930 1 year ago
Hi! Each support set is passed to its respective copy of the model, and then an overall loss is calculated; this is easiest to see as a single function for the overall loss at 4:38. We can calculate the gradient of the overall loss with respect to the original parameters because each model copy is initialised with the original model parameters (which are tied/shared across the copies). In other words, there is a differentiable function from the original model parameters to the output loss, so we can backpropagate to compute this gradient and then update the original parameters. Hope this helps!
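To make that point concrete, here is a small numerical check on a toy scalar model (not the video's network; the model, data, and names are illustrative assumptions): the query loss after one inner-loop step really is a differentiable function of the original weight, and the chain-rule gradient through the inner update matches a finite-difference estimate.

```python
import random

random.seed(1)

# Toy scalar model f(x) = w * x with MSE loss. The query loss after one
# inner-loop step is a differentiable function of the ORIGINAL weight w,
# which is what makes backprop through the inner update well defined.

def mse_grad(w, data):
    # dL/dw of the mean-squared error for f(x) = w * x
    return 2.0 * sum(x * (w * x - y) for x, y in data) / len(data)

def query_loss_after_adaptation(w, support, query, alpha=0.1):
    w_adapted = w - alpha * mse_grad(w, support)       # inner-loop step
    return sum((w_adapted * x - y) ** 2 for x, y in query) / len(query)

a = 1.5                                                # true task slope
support = [(x, a * x) for x in (random.gauss(0, 1) for _ in range(5))]
query = [(x, a * x) for x in (random.gauss(0, 1) for _ in range(5))]
w, alpha, eps = 0.3, 0.1, 1e-6

# Chain rule through the inner step:
#   dL_q/dw = L_q'(w_adapted) * (1 - alpha * dg/dw)
w_adapted = w - alpha * mse_grad(w, support)
dg_dw = 2.0 * sum(x * x for x, _ in support) / len(support)
analytic = mse_grad(w_adapted, query) * (1.0 - alpha * dg_dw)

# Central finite-difference estimate of the same gradient
numeric = (query_loss_after_adaptation(w + eps, support, query)
           - query_loss_after_adaptation(w - eps, support, query)) / (2 * eps)
```

The two values agree, which is exactly the property MAML's outer loop relies on when it backpropagates the query loss to the original parameters.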
@calypsochris989 1 year ago
Nice knowledge share! I wonder what you use for edit formulation.
@TwinEdProductions 1 year ago
Hi, thanks! What exactly do you mean by edit formulation?
@calypsochris989 1 year ago
@@TwinEdProductions Sorry, my English is bad. I just wonder what tool you use to edit the formulas.
@TwinEdProductions 1 year ago
@@calypsochris989 We write all mathematical formulas in LaTeX and convert them to images using online tools like latex2png.
@arjavgarg5801 1 year ago
What is the difference between this and ensemble learning?
@TwinEdProductions 1 year ago
Could you clarify what you mean by ensemble 'learning'? Do you mean training independent models with different seed initialisations and averaging their predictions?
@arjavgarg5801 1 year ago
@@TwinEdProductions Yes. I thought about it more and read more about it, and I realised that MAML creates a generalised model that can later be fine-tuned to learn more specific tasks, whereas in ensemble learning (all kinds) we only train on the specific task.