Abstract
There is a growing interest in learning data representations that work well for many different types of problems and data. In this paper, we look in particular at the task of learning a single visual representation that can be successfully utilized in the analysis of very different types of images, from dog breeds to stop signs and digits. Inspired by recent work on learning networks that predict the parameters of another network, we develop a tunable deep network architecture that, by means of adapter residual modules, can be steered on the fly to diverse visual domains. Our method achieves a high degree of parameter sharing while maintaining or even improving the accuracy of domain-specific representations. We also introduce the Visual Decathlon Challenge, a benchmark that evaluates the ability of representations to capture ten very different visual domains simultaneously and measures how uniformly well they perform across them.
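As a rough illustration of the adapter residual modules mentioned above, the PyTorch sketch below shows one plausible form: a small, domain-specific 1x1 convolution with batch normalization added through a skip connection after a shared convolution, so that only the adapter parameters are learned per domain. Class names, placement, and the surrounding block structure are assumptions for illustration, not the authors' released implementation.

```python
import torch
import torch.nn as nn


class ResidualAdapter(nn.Module):
    """Domain-specific residual adapter: a 1x1 convolution with batch
    normalization wrapped in a skip connection. Only these parameters
    would be trained per domain; the shared filters stay fixed."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv1x1 = nn.Conv2d(channels, channels, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual form: the adapter starts near the identity, so the
        # shared representation is only perturbed where a domain needs it.
        return x + self.bn(self.conv1x1(x))


class AdaptedConvBlock(nn.Module):
    """A shared 3x3 convolution followed by one adapter per domain
    (illustrative placement, not the exact architecture of the paper)."""

    def __init__(self, channels: int, num_domains: int):
        super().__init__()
        self.shared_conv = nn.Conv2d(channels, channels, kernel_size=3,
                                     padding=1, bias=False)
        self.adapters = nn.ModuleList(
            [ResidualAdapter(channels) for _ in range(num_domains)])

    def forward(self, x: torch.Tensor, domain: int) -> torch.Tensor:
        # The same shared filters serve every domain; the per-domain
        # 1x1 adapter steers the features on the fly.
        return self.adapters[domain](self.shared_conv(x))


if __name__ == "__main__":
    block = AdaptedConvBlock(channels=64, num_domains=10)
    x = torch.randn(1, 64, 32, 32)
    y = block(x, domain=3)   # select the adapter for one of the ten domains
    print(y.shape)           # torch.Size([1, 64, 32, 32])
```

In a sketch like this, adding a new domain adds only the 1x1 adapters and their batch-norm statistics, which is what allows a single set of shared filters to be reused across all ten domains with a high degree of parameter sharing.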
| Original language | English |
| --- | --- |
| Title of host publication | Advances in Neural Information Processing Systems 30 (NIPS 2017) |
| Place of Publication | California, United States |
| Publisher | Neural Information Processing Systems Foundation, Inc. |
| Pages | 506-516 |
| Number of pages | 11 |
| Publication status | Published - 9 Dec 2017 |
| Event | NIPS 2017: 31st Conference on Neural Information Processing Systems, Long Beach, California, United States. Duration: 4 Dec 2017 → 9 Dec 2017. https://nips.cc/, https://nips.cc/Conferences/2017 |
Publication series
| Name | Advances in Neural Information Processing Systems |
| --- | --- |
| Volume | 30 |
| ISSN (Electronic) | 1049-5258 |
Conference
| Conference | NIPS 2017 |
| --- | --- |
| Abbreviated title | NIPS 2017 |
| Country/Territory | United States |
| City | Long Beach, California |
| Period | 4 Dec 2017 → 9 Dec 2017 |
| Internet address | https://nips.cc/Conferences/2017 |
Profiles
- Hakan Bilen, School of Informatics (Reader); Institute of Perception, Action and Behaviour; Language, Interaction and Robotics