Thanks for the great explanation. Question: if we are doing episodic training and creating a large number of few-shot tasks to train the prototypical network, doesn't that also require a large amount of labeled data? How can we say that few-shot learning needs less labeled data? Please guide.
@etaifour24 ай бұрын
I watched over 10 videos trying to get through this. I understood MAML the moment you described the two parameter-update equations clearly. Thank you!!
@ivankukanov46996 ай бұрын
Thank you for your great work and high-quality content! Great Job! 👍 👏
@aron29227 ай бұрын
None of them are intuitive results haha
@gadmuhirwa522611 ай бұрын
Thank you so much
@megatronixx11 ай бұрын
Can't get it to work with another image; this one only works with Monroe.
@MaxPatacchiola11 ай бұрын
You need a saturated black and white image
@tejasanvekar7367 Жыл бұрын
Thank you so much for the video and the few-shot series. It helped me publish a paper.
@quentinmunch3700 Жыл бұрын
Do you use the Deep Gaze library for the tracking and the recognition ? Or is it only PyERA ?
@shafinmahmud2925 Жыл бұрын
Hello @Massimiliano, I am working on a project where I have 3 classes with a total of 2000 samples (1000 for training, 400 for validation, 600 for testing). As you mentioned in the MNIST example, can I still use few-shot learning on my data? Can you give me a brief overview of that? Thanks in advance.
@MaxPatacchiola Жыл бұрын
Hi there, I do not think this is a few-shot setting, as you have just a single task. You need a distribution over many tasks. What you can do is use fine-tuning, meaning pretraining your neural network on a larger (possibly related) dataset and then fine-tuning on your target data. See methods like Big Transfer for an idea of how to do it in practice.
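As a toy illustration of that pretrain-then-fine-tune recipe, here is a pure-Python sketch on a 1-D linear model (all data, slopes, and learning rates are made up for illustration; in practice you would do this with a deep network in a framework such as PyTorch):

```python
import random

random.seed(0)

def sgd_fit(w, data, lr, steps):
    """Plain SGD on a 1-D linear model y = w * x with squared error."""
    for _ in range(steps):
        x, y = random.choice(data)
        grad = 2 * (w * x - y) * x   # d/dw of (w*x - y)^2
        w -= lr * grad
    return w

def mse(w, data):
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

# Phase 1: "pretrain" on a large related dataset (true slope 2.0).
large_data = [(i / 100, 2.0 * i / 100) for i in range(1, 101)]
w = sgd_fit(0.0, large_data, lr=0.05, steps=2000)

# Phase 2: fine-tune on a small target dataset (true slope 2.2, close
# to the pretraining distribution), starting from the pretrained w.
small_data = [(1.0, 2.2), (2.0, 4.4), (3.0, 6.6)]
loss_before = mse(w, small_data)
w = sgd_fit(w, small_data, lr=0.01, steps=50)
loss_after = mse(w, small_data)
print(loss_before, loss_after)   # fine-tuning reduces the target loss
```

Because the pretrained starting point is already close to the target, a few gradient steps on the small dataset are enough.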
@manikantabandla3923 Жыл бұрын
Thanks for such a clear explanation. What broad tasks can MAML adapt to after training? 1) Suppose MAML is trained on the Omniglot dataset in a few-shot setting: can it adapt to MNIST digit recognition? 2) Suppose MAML is trained on MiniImageNet in a few-shot setting: can it adapt to ImageNet classification? I can get an idea of few-shot learning by answering the above questions. Thank you in advance.
@MaxPatacchiola Жыл бұрын
Yes, MAML should be able to adapt to these new datasets, as they are not so different from the (meta-)training distribution. However, recent work showed that pretraining a ResNet on ImageNet is a good starting point for adaptation to many different datasets (via fine-tuning). Take a look at the BiT paper for more details ( arxiv.org/abs/1912.11370 ). If adaptation time is a crucial requirement in your work, then BiT may not be a good choice, and a hybrid approach may be better. Check out my recent paper about CaSE layers for additional details ( arxiv.org/abs/2206.09843 ).
@manikantabandla3923 Жыл бұрын
Suppose I have a training dataset of 1000 classes of birds. Can I use a prototypical network trained on the birds dataset to classify a medical image dataset? This natural question came to me after looking at the partitioning of the 6492 classes of the Omniglot dataset as Train: 4112, Val: 688, Test: 1692, i.e. we are testing and validating our trained model on unseen classes.
@MaxPatacchiola Жыл бұрын
It's not going to work very well because the two sets of data are substantially different. You need to train on a larger variety of classes in order to generalize to medical images. Training on something like Meta-Dataset may work better.
@manikantabandla3923 Жыл бұрын
Thanks for the explanation. Say our dataset has 1000 classes, and suppose we are training a prototypical network with 5-way 3-shot episodes. Suppose our test-set episodes also have the same configuration. I don't understand how we can classify a single image as one among the 1000 classes after training.
@ledinhngoc1102 Жыл бұрын
Hello, what if I have just one sample per class, so I can't divide the data into support and query sets?
@MaxPatacchiola Жыл бұрын
There may be different solutions here. 1) You could use image augmentation to produce the query set. 2) Alternatively, you can train a backbone on a large dataset via supervised or self-supervised learning, then directly use that backbone at evaluation time as a prototypical network (remove the last linear layer and use the embeddings in the penultimate layer to build the prototypes).
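Option 2 can be sketched in a few lines of pure Python; the 2-D embeddings below are made-up stand-ins for what a pretrained backbone would produce (in practice you would call the backbone with its final linear layer removed):

```python
def euclidean_sq(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def build_prototypes(support):
    """support: dict mapping class -> list of embedding vectors."""
    protos = {}
    for cls, vecs in support.items():
        dim = len(vecs[0])
        protos[cls] = [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]
    return protos

def classify(query, protos):
    """Assign the query embedding to the nearest prototype."""
    return min(protos, key=lambda c: euclidean_sq(query, protos[c]))

# Toy embeddings standing in for penultimate-layer features of a
# pretrained backbone (in practice: backbone(image) with the head removed).
support = {
    "cat": [[1.0, 0.1], [0.9, 0.0]],
    "dog": [[0.0, 1.0], [0.1, 0.9]],
}
protos = build_prototypes(support)
print(classify([0.8, 0.2], protos))   # -> cat
```

With one sample per class the prototype is simply that sample's embedding, which is why a good pretrained backbone is enough at evaluation time.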
@hakankosebas2085 Жыл бұрын
Need a tutorial for dummies; I need this for additional game control with the head.
@nikahosseini2244 Жыл бұрын
Thank you so much. No one could have explained better than you.
@amlhassan2739 Жыл бұрын
Great video, Thank you very much :)
@alessandroguazzo5692 Жыл бұрын
Excellent explanation! I'll take the opportunity to mention that I'm working on a thesis using exactly a prototypical network, but I still have many doubts. For example, is it possible to use NumPy arrays of shape (N, 78) instead of images? If so, how should I modify the network?
@MaxPatacchiola Жыл бұрын
Thanks Alessandro. If you want to use vectors as input instead of images, then you need a feedforward network (MLP) instead of a CNN. The rest stays the same.
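As a sketch of that swap, here is a minimal pure-Python forward pass of a two-layer MLP embedding a 78-dimensional vector (the hidden and embedding sizes are arbitrary choices; in practice you would build this with a framework like PyTorch):

```python
import random

random.seed(0)

IN_DIM, HIDDEN, EMB = 78, 32, 16   # illustrative sizes

def mlp_embed(x, w1, b1, w2, b2):
    """Two-layer MLP: input vector -> ReLU hidden layer -> embedding."""
    h = [max(0.0, sum(wi * xi for wi, xi in zip(row, x)) + b)
         for row, b in zip(w1, b1)]
    return [sum(wi * hi for wi, hi in zip(row, h)) + b
            for row, b in zip(w2, b2)]

# Randomly initialised weights; the episodic training loop is unchanged
# from the CNN case, only the embedding function differs.
w1 = [[random.gauss(0, 0.1) for _ in range(IN_DIM)] for _ in range(HIDDEN)]
b1 = [0.0] * HIDDEN
w2 = [[random.gauss(0, 0.1) for _ in range(HIDDEN)] for _ in range(EMB)]
b2 = [0.0] * EMB

sample = [random.random() for _ in range(IN_DIM)]   # one row of the (N, 78) array
embedding = mlp_embed(sample, w1, b1, w2, b2)
print(len(embedding))   # -> 16
```

Prototypes and distances are then computed on these embeddings exactly as with image features.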
@alessandroguazzo5692 Жыл бұрын
@@MaxPatacchiola Do you happen to have any references I could use as a starting point?
@linachato5817 Жыл бұрын
This is the best explanation 👏 thank you so much. Like and subscribe!
@hadarhe Жыл бұрын
Your videos helped me so much. Thank you!!
@bowenzhang4471 Жыл бұрын
BRAVO!
@alidinmohammadi2150 Жыл бұрын
Hi. Thank you for your good explanation. About 2 years have passed since this tutorial was published. Is there a complete book or tutorial with subtitles?
@MaxPatacchiola Жыл бұрын
Hi there, thank you for reaching out. There is no book or tutorial with subtitles. I was doing this in my spare time and now I am too busy to publish new material. At the moment I am offering private courses in 1-to-1 online sessions. I found this to be optimal for the student, since the material can be adapted to the student's prior knowledge. If you are interested in something like this, send me a message; you can find my contacts on my personal website: mpatacchiola.github.io
@alidinmohammadi2150 Жыл бұрын
@@MaxPatacchiola I am from Iran, a master's student in artificial intelligence. My English is weak, and I can't pay because my country is embargoed. Please recommend a video with English subtitles or a book like "Hands-On Deep Learning Algorithms with Python" that covers few-shot learning. Thank you for your kindness. Unfortunately, resources are limited.🙏🙏
@shazzadhasan40672 жыл бұрын
Thank you. I am reading an FSCIL paper where they used a relation network at the final stage. I was struggling to understand how the relation net works, and I learned it very easily from this video.
@S0ULTrinker2 жыл бұрын
Great video! I have only one question... You get a loss for Theta2 and backpropagate: how is this loss related to Theta1? I don't see how the loss flows from Theta2 to Theta1 :( Thank you!
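For what it's worth, the loss does flow from Theta2 back to Theta1 because Theta2 is itself a differentiable function of Theta1, so the chain rule applies (this is the second-order term that autodiff frameworks compute for you). A made-up one-parameter example with quadratic losses makes the chain rule explicit:

```python
alpha = 0.1                      # inner-loop learning rate
t_support, t_query = 1.0, 1.5    # made-up task targets

def inner_loss_grad(theta):
    # Inner loss on the support set: (theta - t_support)^2
    return 2 * (theta - t_support)

theta1 = 0.0
theta2 = theta1 - alpha * inner_loss_grad(theta1)   # inner update -> 0.2

# Outer (meta) loss on the query set, evaluated at theta2: (theta2 - t_query)^2.
# Since theta2 = theta1 - alpha * 2 * (theta1 - t_support), we have
# dtheta2/dtheta1 = 1 - 2 * alpha, and the chain rule gives:
grad_wrt_theta2 = 2 * (theta2 - t_query)
grad_wrt_theta1 = grad_wrt_theta2 * (1 - 2 * alpha)   # nonzero: the loss reaches theta1
print(theta2, grad_wrt_theta1)
```

The factor (1 - 2*alpha) is exactly the derivative of the inner update; with a neural network it becomes a Jacobian, but the idea is the same.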
@danningzhao73612 жыл бұрын
Thank you!
@MrSupermonkeyman342 жыл бұрын
I have a question about the loss. When taking the negative log, I don't understand the purpose of the second term. The first term clearly represents the distance to the correct prototype, which we obviously want to minimize, but I don't really understand what the second term represents, or why it wouldn't be better to simply have the first term alone.
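For reference, the loss in the prototypical networks paper is the negative log-softmax over negative distances, so the second term is the log-normaliser over all prototypes: it pushes the query away from the wrong prototypes. With the first term alone there is a trivial solution where every embedding collapses to a single point. A small numeric sketch (distances made up):

```python
import math

def proto_loss(dists, correct):
    """Negative log-softmax over negative distances.
    dists: squared distances from the query to each prototype."""
    logits = [-d for d in dists]
    log_norm = math.log(sum(math.exp(z) for z in logits))   # the second term
    return -logits[correct] + log_norm                      # first + second term

# Query near the correct prototype and far from the others: low loss.
low = proto_loss([0.1, 5.0, 5.0], correct=0)
# Query equally close to ALL prototypes: the second term keeps the loss
# high, even though the first term (distance to the correct one) is identical.
high = proto_loss([0.1, 0.1, 0.1], correct=0)
print(low, high)
```

In the second case the loss equals log(3) regardless of the shared distance, which is exactly the discriminative pressure the first term alone cannot provide.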
@hiuwang15652 жыл бұрын
I guess it doesn't matter if I don't add a negative sign to the distance formula
@MaxPatacchiola2 жыл бұрын
It depends on the implementation. Typically there is a distance metric defined as a function used as part of a loss. The sign inside the distance function should agree with the minimisation objective in the loss. In other words, when the loss is minimised, the distance between vectors of the same class should diminish. As a sanity check, you can define two vectors and see if the distance diminishes as they get closer to each other.
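That sanity check takes only a few lines:

```python
def sq_euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

far = sq_euclidean([0.0, 0.0], [3.0, 4.0])    # -> 25.0
near = sq_euclidean([0.0, 0.0], [0.3, 0.4])   # ~0.25: smaller as vectors get closer
print(far, near)
```

If your implementation negates the distance, run the same check and make sure the loss still decreases as the vectors approach each other.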
@hiuwang15652 жыл бұрын
@@MaxPatacchiola Got it!!! Thanks a lot!
@hadjdaoudmomo95342 жыл бұрын
Thank you for the wonderful explanation. A question, please: what are the suitable application domains for RNs? Can they be applied to spectrogram images for speech emotion recognition?
@MaxPatacchiola2 жыл бұрын
Glad you liked the video. I am not familiar with the particular application you are mentioning; I have worked mainly on image classification with RNs. I suggest searching Google Scholar for papers like that.
@hadjdaoudmomo95342 жыл бұрын
@@MaxPatacchiola Thank you🙏
@hiteshbalapanuru73482 жыл бұрын
How is the weight update rule different from mini-batch gradient descent?
@JONK46352 жыл бұрын
Hi Massimiliano, heartfelt thanks for the video, really well explained. I finally understood MAML (my background is in hardware).
@MaxPatacchiola2 жыл бұрын
Thanks Nazareno, I'm glad the videos were helpful to you!
@No_noifuw2 жыл бұрын
Thank you for this amazing video. I have a question: which metric-learning approach is better for medical images, and why? Thanks again.
@tanmaypawar15762 жыл бұрын
How can I do it for multiple people?
@deoabhijit59352 жыл бұрын
wow thanks
@mominabbas1252 жыл бұрын
Explained very well! (Y)
@mohammedy.salemalihorbi12102 жыл бұрын
Very nice explanation! Thanks a lot for these videos.
@maxpietracupa76712 жыл бұрын
Ciao Massimiliano, just wanted to say these videos are fantastic, keep up the great work
@mohammedy.salemalihorbi12102 жыл бұрын
Great! lesson. Thanks a lot
@No_noifuw2 жыл бұрын
Can we use few-shot learning with a dataset that includes only one class?
@MaxPatacchiola2 жыл бұрын
What do you mean by "one class"? A binary classification problem or a regression problem?
@No_noifuw2 жыл бұрын
@@MaxPatacchiola A binary classification problem. I want to use few-shot learning with a brain stroke dataset (ISLES 2017). It includes only two classes (ischemic or not) with few samples. In few-shot learning we train the model to recognize novel classes, but in my case I have only two classes in both the training and test datasets. How can I use few-shot learning in this case? Thank you.
@No_noifuw2 жыл бұрын
Two-way, many-shot.
@MaxPatacchiola2 жыл бұрын
@@No_noifuw In this case there is no problem; you can apply few-shot learning methods. However, if you want to perform some sort of meta-learning, you need many small datasets from different distributions: the number of data points in a single dataset is small, but you should have many such datasets in order to learn something meaningful. Another solution in your case could be to train with standard supervised learning on a very large dataset that is close to your target data, and then fine-tune your model on the small dataset in a second phase.
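The "many small datasets" setup can be sketched as an episode sampler that draws one small task at a time (the class names and sizes below are illustrative):

```python
import random

random.seed(0)

def sample_episode(dataset, n_way, k_shot, q_query):
    """dataset: dict mapping class -> list of samples.
    Returns (support, query) for one randomly drawn task."""
    classes = random.sample(sorted(dataset), n_way)
    support, query = {}, {}
    for cls in classes:
        items = random.sample(dataset[cls], k_shot + q_query)
        support[cls] = items[:k_shot]    # disjoint from the query examples
        query[cls] = items[k_shot:]
    return support, query

# Toy pool: 10 classes with 20 labelled samples each.
dataset = {f"class_{i}": list(range(i * 100, i * 100 + 20)) for i in range(10)}
support, query = sample_episode(dataset, n_way=5, k_shot=1, q_query=3)
print(len(support), len(next(iter(support.values()))), len(next(iter(query.values()))))
# -> 5 1 3
```

Meta-training loops over many such episodes; each one is small, but the variety across episodes is what makes learning possible.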
@No_noifuw2 жыл бұрын
@@MaxPatacchiola Thank you so much I really appreciate your help
@sandhyaranidash24292 жыл бұрын
Thank you so much for this video. I am new to RNs and this is the first paper I was reading. Your video filled all gaps in my understanding. Thank you again.
@trolleyproblem43182 жыл бұрын
Can you compare with k-means?
@psychicmario3 жыл бұрын
Thanks for the explanation and code provided. Excellent job Sir
@bibiworm3 жыл бұрын
So in transfer learning, we don’t purposely differentiate support set and query set?
@MaxPatacchiola3 жыл бұрын
Exactly
@bibiworm3 жыл бұрын
@@MaxPatacchiola thank you so much! Your series of lecture on few shot learning is awesome!
@byrdofafeather31843 жыл бұрын
Excellent video
@munirs903 жыл бұрын
Great explanation, I appreciate the effort. One confusion: is the base model also trained in the prototypical fashion, or on the merged support and query sets? I'm thinking of the analogy with the transfer-learning mechanism.
@mikecooper81423 жыл бұрын
Hello, it's a great tutorial. Would you mind sharing the slides?
@siddharthshrivastava58233 жыл бұрын
Awesome!
@krzysztofmajchrzak18813 жыл бұрын
Is it possible to use a pretrained MAML model to classify one image at a time? I mean, I want to put one image through a pretrained MAML model and get the answer as to which class it belongs to. I am sorry for so many questions, but I am working on a uni project about few-shot learning and I can't find any valuable information about this way of using the MAML algorithm. Anyway, thank you in advance for your reply.
@MaxPatacchiola3 жыл бұрын
Hi there! At test time, yes, you can do the forward pass on a single image. You only need a mini-batch when you are training the network, or part of it.
@jayanayana3 жыл бұрын
Very cool presentation. Could we get a video similar to the relation networks one, in which you explain your loss function?
@pingyu5883 жыл бұрын
Thanks. Learned a lot from your video
@krzysztofmajchrzak18813 жыл бұрын
Hi! Great work! I have a question: I implemented the YOLO algorithm and already have a pretrained model. How can I connect YOLO with MAML? In your pseudocode, is the model a pretrained set of weights or a neural network? Thank you in advance for your reply!
@krzysztofmajchrzak18813 жыл бұрын
I mean, when I have a pretrained model, how can I adjust it to detect certain objects using MAML and only a limited number of examples?
@MaxPatacchiola3 жыл бұрын
Hi there, object detection is not the usual setting where MAML has been applied. You should check the literature for any adaptation in this direction. One solution could be to use the pretrained net as a backbone and then use MAML to adapt the last layer. Take this with a pinch of salt; I'm not familiar enough with object detection to give more specific advice.
@krzysztofmajchrzak18813 жыл бұрын
@@MaxPatacchiola Do you mean the fully connected one?
@krzysztofmajchrzak18813 жыл бұрын
@@MaxPatacchiola And what is the difference? In the video you say that we use a specific model, which in my case is my pretrained YOLO model, and I cannot see why I can't use it the way you explain. There is very little about it in the literature, so I can't find any specific working implementations 😅
@MaxPatacchiola3 жыл бұрын
@@krzysztofmajchrzak1881 Yes, you can apply MAML to the fully connected layer. It will be less flexible than applying it to the whole model, but it should still work. The problem you may have in using it with YOLO is that you still need a way to define tasks (support + query) to train the last layer; you will need to adapt an object detection dataset to do that. As I said, I am not too familiar with the object detection literature and I do not know if there is anything you can use directly.
@donglinwang58743 жыл бұрын
Didn't even finish the first minute before I liked. Keep up the good work. I would love to see more series from you on other topics such as Explainable AI, Federated Learning, etc.