Abstract

The field of meta-learning, or learning-to-learn, has seen a dramatic rise in interest in recent years. Contrary to conventional approaches to AI where a given task is solved from scratch using a fixed learning algorithm, meta-learning aims to improve the learning algorithm itself, given the experience of multiple learning episodes. This paradigm provides an opportunity to tackle many of the conventional challenges of deep learning, including data and computation bottlenecks, as well as the fundamental issue of generalization. In this survey we describe the contemporary meta-learning landscape. We first discuss definitions of meta-learning and position it with respect to related fields, such as transfer learning, multi-task learning, and hyperparameter optimization. We then propose a new taxonomy that provides a more comprehensive breakdown of the space of meta-learning methods today. We survey promising applications and successes of meta-learning including few-shot learning, reinforcement learning, and architecture search. Finally, we discuss outstanding challenges and promising areas for future research.
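The abstract describes the learning-to-learn paradigm only in words. As one concrete, hedged illustration, the following is a minimal sketch of a MAML-style episodic meta-learning loop written in JAX on toy sine-regression tasks; the task distribution, model size, step sizes, and the single inner gradient step are illustrative assumptions and are not taken from the survey itself.

# Minimal sketch of episodic meta-learning (MAML-style bilevel optimisation) in JAX.
# Assumptions: toy sine-wave regression tasks, a tiny MLP, and plain SGD in both
# loops; these choices are illustrative, not prescribed by the survey.
import jax
import jax.numpy as jnp

def predict(params, x):
    # Tiny one-hidden-layer MLP: x -> tanh -> linear
    w1, b1, w2, b2 = params
    h = jnp.tanh(x @ w1 + b1)
    return h @ w2 + b2

def loss(params, x, y):
    return jnp.mean((predict(params, x) - y) ** 2)

def inner_update(params, x, y, lr=0.01):
    # One gradient step of task-specific adaptation (the "inner loop").
    grads = jax.grad(loss)(params, x, y)
    return [p - lr * g for p, g in zip(params, grads)]

def meta_loss(params, x_tr, y_tr, x_te, y_te):
    # Evaluate the adapted parameters on held-out task data (the "outer" objective).
    adapted = inner_update(params, x_tr, y_tr)
    return loss(adapted, x_te, y_te)

key = jax.random.PRNGKey(0)
k1, k2 = jax.random.split(key)
params = [jax.random.normal(k1, (1, 40)) * 0.1, jnp.zeros(40),
          jax.random.normal(k2, (40, 1)) * 0.1, jnp.zeros(1)]

meta_lr = 0.001
for step in range(200):
    # Sample a random sine-wave task (amplitude and phase vary across tasks).
    key, ka, kp, kx = jax.random.split(key, 4)
    amp = jax.random.uniform(ka, (), minval=0.1, maxval=5.0)
    phase = jax.random.uniform(kp, (), minval=0.0, maxval=jnp.pi)
    x = jax.random.uniform(kx, (20, 1), minval=-5.0, maxval=5.0)
    y = amp * jnp.sin(x + phase)
    x_tr, y_tr, x_te, y_te = x[:10], y[:10], x[10:], y[10:]
    # The meta-gradient is taken through the inner adaptation step.
    grads = jax.grad(meta_loss)(params, x_tr, y_tr, x_te, y_te)
    params = [p - meta_lr * g for p, g in zip(params, grads)]

The point the sketch mirrors from the abstract is that the outer (meta) gradient is taken through the inner adaptation step, so what is optimised is the initialisation's ability to learn new tasks from a few examples, rather than its performance on any single task.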
Original language: English
Number of pages: 20
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Early online date: 11 May 2021
Publication status: E-pub ahead of print, 11 May 2021

Keywords

  • Meta-Learning
  • Learning-to-Learn
  • Few-Shot Learning
  • Transfer Learning
  • Neural Architecture Search
