How to train your MAML

Antreas Antoniou, Harri Edwards, Amos Storkey

Research output: Contribution to conference › Paper › peer-review

Abstract / Description of output

The field of few-shot learning has recently seen substantial advancements. Most of these advancements came from casting few-shot learning as a meta-learning problem. Model-Agnostic Meta-Learning (MAML) is currently one of the best approaches for few-shot learning via meta-learning. MAML is simple, elegant, and very powerful; however, it has a variety of issues: it is very sensitive to neural network architectures, often unstable during training, requires arduous hyperparameter searches to stabilize training and achieve high generalization, and is very computationally expensive at both training and inference time. In this paper, we propose various modifications to MAML that not only stabilize the system but also substantially improve its generalization performance and convergence speed while reducing computational overhead; we call the resulting approach MAML++.
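As a rough illustration of the bi-level optimization the abstract refers to, here is a minimal second-order MAML sketch on toy sine-wave regression in PyTorch. This is not the authors' released code: the two-layer MLP, the task distribution, and hyperparameters such as `inner_lr`, `inner_steps`, and the meta-batch size are assumptions chosen for illustration only.

```python
# Minimal, illustrative MAML sketch (second-order) on toy sine regression.
# Not the paper's implementation; network and hyperparameters are assumed.
import math
import torch
import torch.nn.functional as F

def sample_task(k=10):
    """One few-shot regression task: y = A * sin(x + phase)."""
    A, phase = torch.rand(1) * 4 + 1, torch.rand(1) * math.pi
    def draw(n):
        x = torch.rand(n, 1) * 10 - 5
        return x, A * torch.sin(x + phase)
    return draw(k), draw(k)  # (support set, query set)

def forward(x, w):
    # Tiny MLP applied with explicit weights so gradients can flow
    # back through the inner-loop update (second-order MAML).
    h = F.relu(F.linear(x, w["w1"], w["b1"]))
    return F.linear(h, w["w2"], w["b2"])

def maml_loss(w, tasks, inner_lr=0.01, inner_steps=1):
    """Inner loop: adapt on support data; outer loss: query data."""
    meta_loss = 0.0
    for (x_s, y_s), (x_q, y_q) in tasks:
        fast = dict(w)  # task-specific ("fast") weights
        for _ in range(inner_steps):
            loss = F.mse_loss(forward(x_s, fast), y_s)
            grads = torch.autograd.grad(loss, list(fast.values()),
                                        create_graph=True)
            fast = {k: v - inner_lr * g
                    for (k, v), g in zip(fast.items(), grads)}
        # Query loss is measured with the adapted weights.
        meta_loss = meta_loss + F.mse_loss(forward(x_q, fast), y_q)
    return meta_loss / len(tasks)

w = {"w1": (0.1 * torch.randn(40, 1)).requires_grad_(),
     "b1": torch.zeros(40, requires_grad=True),
     "w2": (0.1 * torch.randn(1, 40)).requires_grad_(),
     "b2": torch.zeros(1, requires_grad=True)}
opt = torch.optim.Adam(w.values(), lr=1e-3)
for step in range(1000):  # outer (meta) loop
    opt.zero_grad()
    loss = maml_loss(w, [sample_task() for _ in range(4)])
    loss.backward()  # backprops through the inner-loop updates
    opt.step()
```

Note that `create_graph=True` is what makes this the full second-order method the paper analyzes: the outer gradient differentiates through the inner update itself, which is a major source of the computational cost the abstract mentions.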
Original language: English
Number of pages: 11
Publication status: Published - 2019
Event: Seventh International Conference on Learning Representations - New Orleans, United States
Duration: 6 May 2019 - 9 May 2019
https://iclr.cc/

Conference

Conference: Seventh International Conference on Learning Representations
Abbreviated title: ICLR 2019
Country/Territory: United States
City: New Orleans
Period: 6/05/19 - 9/05/19
Internet address: https://iclr.cc/
