Deep Multi-task Representation Learning: A Tensor Factorisation Approach

Yongxin Yang, Timothy Hospedales

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Most contemporary multi-task learning methods assume linear models. This setting is considered shallow in the era of deep learning. In this paper, we present a new deep multi-task representation learning framework that learns cross-task sharing structure at every layer in a deep network. Our approach is based on generalising the matrix factorisation techniques explicitly or implicitly used by many conventional MTL algorithms to tensor factorisation, to realise automatic learning of end-to-end knowledge sharing in deep networks. This is in contrast to existing deep learning approaches that need a user-defined multi-task sharing strategy. Our approach applies to both homogeneous and heterogeneous MTL. Experiments demonstrate the efficacy of our deep multi-task representation learning in terms of both higher accuracy and fewer design choices.
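To make the core idea concrete, below is a minimal sketch (not the authors' released code) of one factorised layer: the per-task weight matrices of a single linear layer are stacked, conceptually, into a 3-way tensor and generated from a shared core plus shared factor matrices in the style of a Tucker decomposition, so the degree of cross-task sharing is learned end-to-end rather than specified by hand. All class names, rank values, and dimensions here are illustrative assumptions, written in PyTorch.

import torch
import torch.nn as nn

class TuckerMTLLinear(nn.Module):
    """One linear layer whose per-task weights come from a shared
    Tucker-style factorisation (hypothetical sketch, not the paper's code).

    The ranks (r_in, r_out, r_task) control how much structure is
    shared across tasks; they are illustrative choices, not values
    taken from the paper.
    """

    def __init__(self, in_dim, out_dim, n_tasks, r_in=16, r_out=16, r_task=2):
        super().__init__()
        # Shared core tensor and factor matrices.
        self.core = nn.Parameter(torch.randn(r_in, r_out, r_task) * 0.1)
        self.U_in = nn.Parameter(torch.randn(in_dim, r_in) * 0.1)
        self.U_out = nn.Parameter(torch.randn(out_dim, r_out) * 0.1)
        # One row of task factors per task; nearby rows mean heavy sharing.
        self.U_task = nn.Parameter(torch.randn(n_tasks, r_task) * 0.1)
        self.bias = nn.Parameter(torch.zeros(n_tasks, out_dim))

    def forward(self, x, task):
        # Contract the core with this task's factor to get a task-specific
        # (r_in, r_out) slice, then expand it to the full weight matrix.
        core_t = torch.einsum("ijk,k->ij", self.core, self.U_task[task])
        W_t = self.U_in @ core_t @ self.U_out.T   # (in_dim, out_dim)
        return x @ W_t + self.bias[task]

# Usage: two tasks viewing the same factorised hidden layer.
layer = TuckerMTLLinear(in_dim=64, out_dim=32, n_tasks=2)
x = torch.randn(8, 64)
y0 = layer(x, task=0)   # task 0's weights, reconstructed from shared factors
y1 = layer(x, task=1)   # task 1 shares the core and U_in/U_out factors

Because every parameter above is shared except the per-task factor rows and biases, gradient descent on all tasks jointly decides how much each layer is shared, which is the automatic end-to-end knowledge sharing the abstract describes.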
Original language: English
Title of host publication: International Conference on Learning Representations (ICLR 2017)
Number of pages: 12
Publication status: E-pub ahead of print, 26 Apr 2017
Event: 5th International Conference on Learning Representations, Palais des Congrès Neptune, Toulon, France
Duration: 24 Apr 2017 – 26 Apr 2017
https://iclr.cc/archive/www/2017.html

Conference

Conference: 5th International Conference on Learning Representations
Abbreviated title: ICLR 2017
Country/Territory: France
City: Toulon
Period: 24/04/17 – 26/04/17
