Multitask Soft Option Learning

Maximilian Igl, Andrew Gambardella, Nantas Nardelli, Jinke He, N. Siddharth, Wendelin Böhmer, Shimon Whiteson

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

We present Multitask Soft Option Learning (MSOL), a hierarchical multitask framework based on Planning as Inference. MSOL extends the concept of options, using separate variational posteriors for each task, regularized by a shared prior. This “soft” version of options avoids several instabilities during training in a multitask setting, and provides a natural way to learn both intra-option policies and their terminations. Furthermore, it allows fine-tuning of options for new tasks without forgetting their learned policies, leading to faster training without reducing the expressiveness of the hierarchical policy. We demonstrate empirically that MSOL significantly outperforms both hierarchical and flat transfer-learning baselines.
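The abstract's central idea, regularizing each task's variational posterior toward a shared prior, can be illustrated with a minimal sketch. This is a simplified illustration, not the paper's implementation: it uses static categorical distributions over options, whereas MSOL regularizes full intra-option policies and terminations over trajectories; all names (`kl_categorical`, `shared_prior`, `task_posteriors`) are hypothetical.

```python
import numpy as np

def kl_categorical(posterior, prior):
    """KL(posterior || prior) between two categorical distributions over options."""
    posterior = np.asarray(posterior, dtype=float)
    prior = np.asarray(prior, dtype=float)
    return float(np.sum(posterior * np.log(posterior / prior)))

# A shared prior over 3 options, and two task-specific "soft" posteriors.
shared_prior = np.array([1/3, 1/3, 1/3])
task_posteriors = [np.array([0.7, 0.2, 0.1]),
                   np.array([0.1, 0.6, 0.3])]

# Multitask regularizer: each task's posterior is pulled toward the shared
# prior, while remaining free to specialize (the "soft" option behaviour).
regularizer = sum(kl_categorical(q, shared_prior) for q in task_posteriors)
```

Because the penalty is a KL term rather than a hard weight-sharing constraint, a posterior can deviate from the prior when the task demands it, which is what allows fine-tuning on new tasks without overwriting the shared option policies.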
Original language: English
Title of host publication: Proceedings of the 36th Conference on Uncertainty in Artificial Intelligence (UAI)
Publisher: PMLR
Pages: 969-978
Number of pages: 10
Publication status: Published - 6 Aug 2020
Event: 36th Conference on Uncertainty in Artificial Intelligence 2020 - Virtual conference, Canada
Duration: 3 Aug 2020 - 6 Aug 2020
http://www.auai.org/~w-auai/uai2020/index.php

Publication series

Name: Proceedings of Machine Learning Research
Volume: 124
ISSN (Electronic): 2640-3498

Conference

Conference: 36th Conference on Uncertainty in Artificial Intelligence 2020
Abbreviated title: UAI 2020
Country: Canada
City: Virtual conference
Period: 3/08/20 - 6/08/20
Internet address: http://www.auai.org/~w-auai/uai2020/index.php

