Visual Representation Learning over Latent Domains

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


A fundamental shortcoming of deep neural networks is their specialization to a single task and domain. While multi-domain learning enables compact models that span multiple visual domains, existing approaches rely on the presence of domain labels, in turn requiring laborious curation of datasets. This paper proposes a less explored but highly realistic new setting called latent domain learning: learning over data from different domains, without access to domain annotations. Experiments show that this setting is particularly challenging for standard models and existing multi-domain approaches, calling for new customized solutions: a sparse adaptation strategy is formulated which adaptively accounts for latent domains in the data and significantly enhances learning in such settings. Our method can be paired seamlessly with existing models, and boosts performance on conceptually related tasks, e.g. empirical fairness problems and long-tailed recognition.
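The abstract's sparse adaptation idea can be illustrated with a minimal sketch: a per-example gate softly assigns each input to a small number of residual adapter modules, so different latent domains can be routed through different adapters without any domain labels. The function name `sparse_latent_adaptation`, the linear adapters, and the top-k gating rule below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def sparse_latent_adaptation(x, gate_w, adapter_ws, top_k=1):
    """Mix K residual adapters with a sparse, per-example gate (illustrative).

    x:          (batch, d) feature matrix
    gate_w:     (d, K) gating weights producing a soft domain assignment
    adapter_ws: list of K (d, d) adapter weight matrices
    top_k:      number of adapters kept active per example
    """
    scores = softmax(x @ gate_w)                  # (batch, K) soft assignment
    # Sparsify: zero all but the top_k gates per example, then renormalise.
    drop = np.argsort(scores, axis=1)[:, :-top_k] # indices of the dropped gates
    sparse = scores.copy()
    np.put_along_axis(sparse, drop, 0.0, axis=1)
    sparse = sparse / sparse.sum(axis=1, keepdims=True)
    # Residual mixture: base features plus gated adapter outputs.
    out = x.copy()
    for k, w in enumerate(adapter_ws):
        out = out + sparse[:, k:k + 1] * (x @ w)
    return out, sparse

# Usage sketch: 4 examples, 8-dim features, 3 candidate adapters.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
gate_w = rng.normal(size=(8, 3))
adapters = [rng.normal(size=(8, 8)) * 0.01 for _ in range(3)]
out, gates = sparse_latent_adaptation(x, gate_w, adapters, top_k=1)
```

Because the gate is data-dependent, examples from different latent domains can activate different adapters, while the residual form keeps the backbone's features intact when adapters contribute little.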
Original language: English
Title of host publication: International Conference on Learning Representations (ICLR 2022)
Number of pages: 18
Publication status: Published - 25 Apr 2022
Event: Tenth International Conference on Learning Representations 2022 - Virtual Conference
Duration: 25 Apr 2022 - 29 Apr 2022
Conference number: 10


Conference: Tenth International Conference on Learning Representations 2022
Abbreviated title: ICLR 2022


  • transfer learning
  • latent domains
  • computer vision


