Learning multiple visual domains with residual adapters

Sylvestre-Alvise Rebuffi, Hakan Bilen, Andrea Vedaldi

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract / Description of output

There is a growing interest in learning data representations that work well for many different types of problems and data. In this paper, we look in particular at the task of learning a single visual representation that can be successfully used in the analysis of very different types of images, from dog breeds to stop signs and digits. Inspired by recent work on networks that learn to predict the parameters of another network, we develop a tunable deep network architecture that, by means of adapter residual modules, can be steered on the fly to diverse visual domains. Our method achieves a high degree of parameter sharing while maintaining or even improving the accuracy of domain-specific representations. We also introduce the Visual Decathlon Challenge, a benchmark that evaluates the ability of representations to capture simultaneously ten very different visual domains and measures their ability to perform well uniformly.
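The core idea of an adapter residual module can be illustrated with a minimal numpy sketch. This is an assumption-laden simplification, not the paper's implementation: the function names (`conv1x1`, `residual_adapter`) are hypothetical, and the sketch shows only the basic pattern of a small domain-specific 1×1 convolution wrapped in an identity skip connection, added inside a shared network's residual blocks.

```python
import numpy as np

def conv1x1(x, w):
    """A 1x1 convolution is a per-pixel linear map over channels.

    x: feature map of shape (C_in, H, W)
    w: adapter weights of shape (C_out, C_in)
    """
    c, h, width = x.shape
    return (w @ x.reshape(c, h * width)).reshape(w.shape[0], h, width)

def residual_adapter(x, w_adapter):
    """Domain-specific 1x1 conv with an identity skip connection.

    A zero-initialised adapter leaves the shared feature map unchanged,
    so each new domain starts from the shared representation.
    """
    return x + conv1x1(x, w_adapter)

# Shared feature map with 8 channels on a 4x4 spatial grid.
x = np.random.randn(8, 4, 4)

# Zero-initialised adapter: the module reduces to the identity.
w = np.zeros((8, 8))
y = residual_adapter(x, w)
assert np.allclose(y, x)
```

The appeal of this design is parameter efficiency: a 1×1 adapter on C channels costs C² weights per block, versus 9C² for a full 3×3 convolution, which is how a high degree of parameter sharing across domains is achieved.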
Original language: English
Title of host publication: Advances in Neural Information Processing Systems 30 (NIPS 2017)
Place of publication: California, United States
Publisher: Neural Information Processing Systems Foundation, Inc.
Number of pages: 11
Publication status: Published - 9 Dec 2017
Event: NIPS 2017: 31st Conference on Neural Information Processing Systems - Long Beach, California, United States
Duration: 4 Dec 2017 - 9 Dec 2017

Publication series

Name: Advances in Neural Information Processing Systems
ISSN (Electronic): 1049-5258


Conference: NIPS 2017
Abbreviated title: NIPS 2017
Country/Territory: United States

