Shared Experience Actor-Critic for Multi-Agent Reinforcement Learning

Filippos Christianos, Lukas Schäfer, Stefano V Albrecht

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Exploration in multi-agent reinforcement learning is a challenging problem, especially in environments with sparse rewards. We propose a general method for efficient exploration by sharing experience amongst agents. Our proposed algorithm, called Shared Experience Actor-Critic (SEAC), applies experience sharing in an actor-critic framework. We evaluate SEAC in a collection of sparse-reward multi-agent environments and find that it consistently outperforms two baselines and two state-of-the-art algorithms by learning in fewer steps and converging to higher returns. In some harder environments, experience sharing makes the difference between learning to solve the task and not learning at all.
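The abstract describes SEAC only at a high level. Below is a minimal, hypothetical sketch of how experience sharing with importance weighting could look inside an actor-critic policy update for discrete actions. The function name, batch layout, and the weighting coefficient lam are illustrative assumptions and not the authors' released implementation; the critic (value) loss is omitted for brevity.

```python
# Hypothetical sketch of an experience-sharing actor-critic policy loss.
# Each agent trains on its own on-policy transitions plus the other agents'
# transitions, re-weighted by an importance-sampling ratio between policies.
import torch

def shared_experience_policy_loss(agent_id, policies, values, batches,
                                  lam=1.0, gamma=0.99):
    """Policy loss for one agent, combining its own and other agents' data.

    policies: list of torch.nn.Module, policies[k](obs) -> action logits
    values:   list of torch.nn.Module, values[k](obs) -> state value
    batches:  list of dicts with tensors 'obs', 'act', 'rew', 'next_obs', 'done'
              (batches[k] holds the transitions collected by agent k)
    """
    pol_i, val_i = policies[agent_id], values[agent_id]
    loss = torch.zeros(())
    for k, batch in enumerate(batches):
        obs, act = batch["obs"], batch["act"]
        rew, next_obs, done = batch["rew"], batch["next_obs"], batch["done"]

        # 1-step TD advantage estimated with agent i's own critic.
        with torch.no_grad():
            target = rew + gamma * (1 - done) * val_i(next_obs).squeeze(-1)
        adv = target - val_i(obs).squeeze(-1).detach()

        logp_i = torch.distributions.Categorical(logits=pol_i(obs)).log_prob(act)

        if k == agent_id:
            # Standard on-policy actor-critic term on the agent's own data.
            loss = loss - (logp_i * adv).mean()
        else:
            # Shared-experience term: correct for the fact that agent k's
            # behaviour policy, not agent i's, collected these transitions.
            with torch.no_grad():
                logp_k = torch.distributions.Categorical(
                    logits=policies[k](obs)).log_prob(act)
            ratio = torch.exp(logp_i.detach() - logp_k)
            loss = loss - lam * (ratio * logp_i * adv).mean()
    return loss
```

In this sketch every agent keeps its own policy and critic parameters; only the experience is shared, with the detached ratio acting as a fixed per-sample weight on the off-policy terms.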
Original language: English
Title of host publication: Advances in Neural Information Processing Systems 33 (NeurIPS 2020)
Publisher: Curran Associates Inc.
Pages: 10707-10717
Number of pages: 16
Publication status: Published - 6 Dec 2020
Event: Thirty-Fourth Conference on Neural Information Processing Systems - Virtual Conference
Duration: 6 Dec 2020 - 12 Dec 2020
https://nips.cc/Conferences/2020

Publication series

Name: Advances in Neural Information Processing Systems
ISSN (Print): 1049-5258

Conference

Conference: Thirty-Fourth Conference on Neural Information Processing Systems
Abbreviated title: NeurIPS 2020
City: Virtual Conference
Period: 6/12/20 - 12/12/20
Internet address: https://nips.cc/Conferences/2020
