Abstract
Many gradient-based meta-learning methods assume a set of parameters that do not participate in the inner optimization, which can be considered hyperparameters. Although such hyperparameters can be optimized with existing gradient-based hyperparameter optimization (HO) methods, those methods suffer from the following issues: unrolled differentiation does not scale well to high-dimensional hyperparameters or long horizons, Implicit Function Theorem (IFT) based methods are restrictive for online optimization, and short-horizon approximations suffer from short-horizon bias. In this work, we propose a novel HO method that overcomes these limitations by approximating the second-order term with knowledge distillation. Specifically, we parameterize a single Jacobian-vector product (JVP) for each HO step and minimize its distance from the true second-order term. Our method allows online optimization and scales well with both the hyperparameter dimension and the horizon length. We demonstrate the effectiveness of our method on three different meta-learning methods and two benchmark datasets.
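The abstract only sketches the idea at a high level. Below is a minimal, hypothetical illustration (not the authors' implementation) of the general pattern of distilling a second-order term into a learned surrogate: a true JVP-based second-order quantity is computed once, and a parameterized stand-in is trained to minimize its distance from that target. The toy ridge-regression inner objective, the bilinear surrogate, and the single SGD distillation step are all illustrative assumptions.

```python
import jax
import jax.numpy as jnp

def inner_loss(w, lam, x, y):
    # Toy inner objective: ridge regression whose regularization strength
    # lam plays the role of the hyperparameter.
    pred = x @ w
    return jnp.mean((pred - y) ** 2) + lam * jnp.sum(w ** 2)

def true_second_order_term(w, lam, x, y, v):
    # Exact mixed second-order term: the JVP of the hyper-gradient
    # dL/d(lam) with respect to the weights, applied to direction v.
    grad_wrt_lam = lambda w_: jax.grad(inner_loss, argnums=1)(w_, lam, x, y)
    _, jvp_out = jax.jvp(grad_wrt_lam, (w,), (v,))
    return jvp_out

def surrogate(params, w, v):
    # Parameterized stand-in for the second-order term (a simple bilinear form).
    return jnp.dot(w @ params["A"], v)

def distill_loss(params, w, lam, x, y, v):
    # Knowledge-distillation objective: squared distance between the surrogate
    # output and the true second-order term.
    target = true_second_order_term(w, lam, x, y, v)
    return (surrogate(params, w, v) - target) ** 2

# Toy data and initial states (all shapes are arbitrary illustrative choices).
x = jax.random.normal(jax.random.PRNGKey(0), (32, 8))
y = jax.random.normal(jax.random.PRNGKey(1), (32,))
w = jax.random.normal(jax.random.PRNGKey(2), (8,))
v = jax.random.normal(jax.random.PRNGKey(3), (8,))
params = {"A": jnp.zeros((8, 8))}

# One distillation step: fit the surrogate to the current true term with SGD.
grads = jax.grad(distill_loss)(params, w, 0.1, x, y, v)
params = jax.tree_util.tree_map(lambda p, g: p - 1e-2 * g, params, grads)
```

In this sketch the expensive second-order computation is evaluated only to provide a distillation target; once the surrogate is fitted, it can replace that computation in subsequent HO steps, which is the property the abstract attributes to the proposed method.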
| Original language | English |
| --- | --- |
| Title of host publication | International Conference on Learning Representations (ICLR 2022) |
| Number of pages | 16 |
| Publication status | Published - 25 Apr 2022 |
| Event | Tenth International Conference on Learning Representations 2022, Virtual Conference, 25 Apr 2022 → 29 Apr 2022 (Conference number: 10), https://iclr.cc/ |
Conference
| Conference | Tenth International Conference on Learning Representations 2022 |
| --- | --- |
| Abbreviated title | ICLR 2022 |
| Period | 25/04/22 → 29/04/22 |
| Internet address | https://iclr.cc/ |
Keywords
- Hyperparameter Optimization
- Meta-learning