Robust Domain Randomised Reinforcement Learning through Peer-to-Peer Distillation

Chenyang Zhao, Timothy M Hospedales

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

In reinforcement learning, domain randomisation is an increasingly popular technique for learning more general policies that are robust to domain shifts at deployment. However, naively aggregating information from randomised domains may lead to high variance in gradient estimates and an unstable learning process. To address this issue, we present a peer-to-peer online distillation strategy for RL, termed P2PDRL, where multiple workers are each assigned to a different environment and exchange knowledge through mutual regularisation based on Kullback–Leibler divergence. Our experiments on continuous control tasks show that P2PDRL enables robust learning across a wider randomisation distribution than baselines, and more robust generalisation to new environments at test time.
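
To illustrate the mutual-regularisation idea described in the abstract, here is a minimal sketch in PyTorch. It uses categorical policies for brevity (the paper evaluates on continuous control), and the worker count, KL weight, and the placeholder `rl_loss` are illustrative assumptions, not the authors' implementation.

```python
# Sketch of peer-to-peer online distillation: each worker trains in its
# own randomised environment and is regularised towards its peers via a
# KL-divergence penalty. All hyperparameters here are assumptions.
import torch
import torch.nn as nn
from torch.distributions import Categorical, kl_divergence

NUM_WORKERS = 4      # one worker per randomised environment (assumption)
KL_WEIGHT = 0.01     # strength of the mutual-distillation penalty (assumption)
STATE_DIM, NUM_ACTIONS = 8, 4

# One small policy network per worker.
policies = [nn.Sequential(nn.Linear(STATE_DIM, 64), nn.Tanh(),
                          nn.Linear(64, NUM_ACTIONS)) for _ in range(NUM_WORKERS)]
optimisers = [torch.optim.Adam(p.parameters(), lr=3e-4) for p in policies]

def rl_loss(policy, states):
    """Placeholder for each worker's own RL objective (e.g. an actor
    loss); here a dummy negative log-probability term on sampled actions."""
    dist = Categorical(logits=policy(states))
    actions = dist.sample()
    return -dist.log_prob(actions).mean()

def distillation_loss(i, states):
    """Mean KL from worker i's policy to each peer, evaluated on worker
    i's own states; peers are treated as fixed targets."""
    dist_i = Categorical(logits=policies[i](states))
    kls = []
    for j in range(NUM_WORKERS):
        if j == i:
            continue
        with torch.no_grad():  # do not backpropagate into the peer
            dist_j = Categorical(logits=policies[j](states))
        kls.append(kl_divergence(dist_i, dist_j).mean())
    return torch.stack(kls).mean()

# One training step: each worker optimises its own RL loss plus the
# penalty that keeps its policy close to those of its peers.
for i in range(NUM_WORKERS):
    states = torch.randn(32, STATE_DIM)  # stand-in for a batch from worker i's env
    loss = rl_loss(policies[i], states) + KL_WEIGHT * distillation_loss(i, states)
    optimisers[i].zero_grad()
    loss.backward()
    optimisers[i].step()
```

The KL penalty is computed on each worker's own state batch, so workers share behavioural knowledge without ever exchanging environment data, only policy outputs.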
Original language: English
Title of host publication: Proceedings of The 13th Asian Conference on Machine Learning
Editors: Vineeth N. Balasubramanian, Ivor Tsang
Publisher: PMLR
Pages: 1237-1252
Number of pages: 16
Publication status: Published - 17 Nov 2021
Event: 13th Asian Conference on Machine Learning - Virtual
Duration: 17 Nov 2021 - 19 Nov 2021
http://www.acml-conf.org/2021/

Publication series

Name: Proceedings of Machine Learning Research
Publisher: PMLR
Volume: 157
ISSN (Electronic): 2640-3498

Conference

Conference: 13th Asian Conference on Machine Learning
Abbreviated title: ACML 2021
Period: 17/11/21 - 19/11/21
Internet address: http://www.acml-conf.org/2021/

Keywords

  • domain randomisation
  • deep reinforcement learning
  • mutual learning
