A Unified Perspective on Multi-Domain and Multi-Task Learning

Yongxin Yang, Timothy Hospedales

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

In this paper, we provide a new neural-network-based perspective on multi-task learning (MTL) and multi-domain learning (MDL). By introducing the concept of a semantic descriptor, this framework unifies MDL and MTL and encompasses various classic and recent MTL/MDL algorithms, interpreting them as different ways of constructing semantic descriptors. Our interpretation provides an alternative pipeline for zero-shot learning (ZSL), where a model for a novel class can be constructed without training data. Moreover, it leads to a new and practically relevant problem setting of zero-shot domain adaptation (ZSDA), which is analogous to ZSL but for novel domains: a model for an unseen domain can be generated from its semantic descriptor. Experiments across this range of problems demonstrate that our framework outperforms a variety of alternatives.
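The abstract does not spell out the parameterisation, but the core idea it describes, generating a model for a task or domain from its semantic descriptor, can be illustrated with a minimal NumPy sketch. This assumes a linear model whose weight vector is a linear function of the descriptor; the dimensions, the shared factor `W`, and the descriptor `z_unseen` below are illustrative placeholders, not taken from the paper.

```python
import numpy as np

# Sketch only (not the authors' code): a shared factor W, learned across the
# seen tasks/domains, maps a semantic descriptor z to model parameters, so a
# model for an unseen task/domain can be synthesised from its descriptor alone.

rng = np.random.default_rng(0)

d_feat, d_desc = 5, 3                    # input feature dim, descriptor dim (illustrative)
W = rng.normal(size=(d_feat, d_desc))    # shared factor; in practice learned, random here

def model_for(z):
    """Generate a linear model's weight vector from semantic descriptor z."""
    return W @ z                         # theta(z) = W z

def predict(x, z):
    """Score input x with the model generated for descriptor z."""
    return x @ model_for(z)

# "Zero-shot" use: a descriptor for an unseen domain yields a model
# without any training data from that domain.
z_unseen = np.array([1.0, 0.0, 1.0])     # hypothetical descriptor encoding domain metadata
x = rng.normal(size=d_feat)
print(predict(x, z_unseen))
```

Under this reading, ZSL and ZSDA differ only in what the descriptor indexes (a novel class versus a novel domain), while the shared factor carries what was learned from the seen tasks/domains.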
Original language: English
Title of host publication: 3rd International Conference on Learning Representations (ICLR)
Number of pages: 9
Publication status: Published - 2015
Event: 3rd International Conference on Learning Representations - The Hilton San Diego Resort & Spa, San Diego, United States
Duration: 7 May 2015 - 9 May 2015
Internet address: https://iclr.cc/archive/www/doku.php%3Fid=iclr2015:main.html

Conference

Conference: 3rd International Conference on Learning Representations
Abbreviated title: ICLR 2015
Country/Territory: United States
City: San Diego
Period: 7/05/15 - 9/05/15
Internet address: https://iclr.cc/archive/www/doku.php%3Fid=iclr2015:main.html
