Abstract
A fundamental shortcoming of deep neural networks is their specialization to a single task and domain. While multi-domain learning enables the learning of compact models that span multiple visual domains, such approaches rely on the presence of domain labels, which in turn requires laborious dataset curation. This paper proposes a less explored but highly realistic new setting called latent domain learning: learning over data from different domains without access to domain annotations. Experiments show that this setting is particularly challenging for standard models and existing multi-domain approaches, calling for customized solutions: we formulate a sparse adaptation strategy that adaptively accounts for latent domains in the data and significantly enhances learning in this setting. Our method pairs seamlessly with existing models and boosts performance on conceptually related tasks, e.g. empirical fairness problems and long-tailed recognition.
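To make the idea of sparse adaptation concrete, below is a minimal sketch of one plausible instantiation, not the authors' reference implementation: a small bank of residual 1×1 convolutional adapters whose per-example gates are made near-sparse via a low-temperature softmax, so inputs from different latent domains can route through different corrections without any domain label. All names (`SparseLatentAdapter`, `num_adapters`, `temperature`) are hypothetical and not taken from the paper.

```python
# Hedged sketch (PyTorch); assumes "sparse adaptation" = a gated bank of
# lightweight residual adapters on top of a (possibly frozen) backbone layer.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseLatentAdapter(nn.Module):
    """Residual adapter bank with data-dependent, near-sparse gating."""

    def __init__(self, channels: int, num_adapters: int = 4, temperature: float = 0.1):
        super().__init__()
        # One lightweight 1x1 conv adapter per hypothesised latent domain.
        self.adapters = nn.ModuleList(
            nn.Conv2d(channels, channels, kernel_size=1, bias=False)
            for _ in range(num_adapters)
        )
        # Gating network: pooled features -> one logit per adapter.
        self.gate = nn.Linear(channels, num_adapters)
        self.temperature = temperature  # low temperature pushes gates towards sparsity

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) feature map from a backbone layer.
        pooled = x.mean(dim=(2, 3))                                       # (B, C)
        gates = F.softmax(self.gate(pooled) / self.temperature, dim=-1)   # (B, K)
        # Gate-weighted sum of adapter outputs, applied as a residual correction.
        correction = torch.stack([a(x) for a in self.adapters], dim=1)    # (B, K, C, H, W)
        correction = (gates[:, :, None, None, None] * correction).sum(dim=1)
        return x + correction
```

In this sketch the gates are inferred per input, which is what lets the module account for latent domains without annotations; the low softmax temperature is just one simple sparsity mechanism, and the paper's actual gating may differ.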
Original language | English |
---|---|
Title of host publication | International Conference on Learning Representations (ICLR 2022) |
Number of pages | 18 |
Publication status | Published - 25 Apr 2022 |
Event | Tenth International Conference on Learning Representations 2022, Virtual Conference, 25 Apr 2022 → 29 Apr 2022 (conference number: 10), https://iclr.cc/ |
Conference

Conference | Tenth International Conference on Learning Representations 2022 |
---|---|
Abbreviated title | ICLR 2022 |
Period | 25/04/22 → 29/04/22 |
Internet address | https://iclr.cc/ |
Keywords
- transfer learning
- latent domains
- computer vision