We propose a framework for training multiple neural networks simultaneously. The parameters from all models are regularised by the tensor trace norm, so that each neural network is encouraged to reuse the others' parameters where possible -- this is the main motivation behind multi-task learning. In contrast to many deep multi-task learning models, we do not predefine a parameter-sharing strategy by specifying which layers have tied parameters. Instead, our framework considers sharing for all shareable layers, and the sharing strategy is learned in a data-driven way.
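To make the regulariser concrete, here is a minimal NumPy sketch of one common convex surrogate for the tensor trace norm: stack the per-task weight matrices of a shareable layer into a third-order tensor and sum the nuclear norms of its mode-wise unfoldings. The function name `tensor_trace_norm` and the choice of surrogate are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def tensor_trace_norm(weights):
    """Sum of nuclear norms of the mode-wise unfoldings of the stacked
    parameter tensor -- one common surrogate for the tensor trace norm.
    `weights` is a list of K same-shaped weight matrices, one per task.
    (Illustrative sketch; not the paper's exact regulariser.)"""
    W = np.stack(weights, axis=-1)  # shape (d_in, d_out, K)
    penalty = 0.0
    for mode in range(W.ndim):
        # Unfold along `mode`: rows index that mode, columns the rest.
        unfolded = np.moveaxis(W, mode, 0).reshape(W.shape[mode], -1)
        penalty += np.linalg.norm(unfolded, ord='nuc')  # nuclear norm
    return penalty

# When all tasks share the same weights, the stacked tensor is low-rank,
# so the penalty is smaller than for unrelated weights of the same scale.
rng = np.random.default_rng(0)
shared = rng.standard_normal((8, 4))
tied = tensor_trace_norm([shared, shared, shared])
free = tensor_trace_norm([rng.standard_normal((8, 4)) for _ in range(3)])
```

Minimising this penalty alongside the per-task losses pushes the stacked tensor toward low rank, which is what lets the degree of sharing at each layer emerge from the data rather than being fixed in advance.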
Number of pages: 4
Publication status: E-pub ahead of print - 16 Apr 2017
Event: 5th International Conference on Learning Representations (ICLR 2017), Palais des Congrès Neptune, Toulon, France, 24-26 Apr 2017