Multitask Soft Option Learning

Maximilian Igl, Andrew Gambardella, Nantas Nardelli, Jinke He, N. Siddharth, Wendelin Böhmer, Shimon Whiteson

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract / Description of output

We present Multitask Soft Option Learning (MSOL), a hierarchical multitask framework based on Planning as Inference. MSOL extends the concept of options, using separate variational posteriors for each task, regularized by a shared prior. This “soft” version of options avoids several instabilities during training in a multitask setting, and provides a natural way to learn both intra-option policies and their terminations. Furthermore, it allows fine-tuning of options for new tasks without forgetting their learned policies, leading to faster training without reducing the expressiveness of the hierarchical policy. We demonstrate empirically that MSOL significantly outperforms both hierarchical and flat transfer-learning baselines.
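The abstract describes regularizing each task-specific variational posterior toward a shared option prior rather than forcing all tasks to use identical option policies. As a rough illustration of that idea (not the paper's implementation; all names and numbers below are hypothetical), the "soft" coupling can be expressed as a KL penalty between a per-task posterior and the shared prior over an option's actions:

```python
import math

def softmax(logits):
    """Convert logits to a categorical probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

def kl_categorical(p, q):
    """KL(p || q) for two categorical distributions given as probability lists."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical logits over 3 actions for one option:
# a task-specific posterior and the shared prior it is regularized toward.
task_posterior_logits = [2.0, 0.5, -1.0]
shared_prior_logits = [1.0, 1.0, 0.0]

posterior = softmax(task_posterior_logits)
prior = softmax(shared_prior_logits)

# The "soft" option regularizer: penalize divergence of the task-specific
# posterior from the shared prior instead of tying them to be identical,
# so each task can fine-tune the option without overwriting the prior.
regularizer = kl_categorical(posterior, prior)
```

Here the regularizer would be added to each task's objective; driving its weight to infinity would recover hard, shared options, while a finite weight permits per-task deviation.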
Original language: English
Title of host publication: Proceedings of the 36th Conference on Uncertainty in Artificial Intelligence (UAI)
Number of pages: 10
Publication status: Published - 6 Aug 2020
Event: 36th Conference on Uncertainty in Artificial Intelligence 2020 - Virtual conference, Canada
Duration: 3 Aug 2020 - 6 Aug 2020

Publication series

Name: Proceedings of Machine Learning Research
ISSN (Electronic): 2640-3498


Conference: 36th Conference on Uncertainty in Artificial Intelligence 2020
Abbreviated title: UAI 2020
City: Virtual conference


