Hierarchical reinforcement learning for communicating agents

Michael Rovatsos, Felix Fischer, Gerhard Weiss

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

This paper proposes hierarchical reinforcement learning (RL) methods for communication in multiagent coordination problems modelled as Markov Decision Processes (MDPs). To bridge the gap between the MDP view and the methods used to specify communication protocols in multiagent systems (using logical conditions and propositional message structure), we utilise interaction frames as powerful policy abstractions that can be combined with case-based reasoning techniques. We also exploit the fact that breaking communication processes down into manageable “chunks” of interaction sequences (as suggested by the interaction frames approach) corresponds naturally to methods proposed in the area of hierarchical RL. The approach is illustrated and validated through experiments in a complex application domain, which demonstrate that it is capable of handling large state and action spaces.
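The correspondence the abstract draws on is that an interaction frame, as a reusable "chunk" of an interaction sequence, behaves like a temporally extended action (an option) in hierarchical RL, so frame selection can be learned with semi-MDP updates. The following minimal Python sketch is not the authors' implementation: the class names, the toy dialogue dynamics, and the hand-coded frame definitions are all hypothetical illustrations of how fixed message sequences could be treated as options and chosen by SMDP-style Q-learning.

```python
import random
from collections import defaultdict

class Frame:
    """A hypothetical interaction frame: a fixed 'chunk' of message actions."""
    def __init__(self, name, messages):
        self.name = name
        self.messages = messages

class ToyCoordinationEnv:
    """Tiny stand-in MDP: two agents must reach agreement on a proposal."""
    def reset(self):
        self.state = "start"
        return self.state

    def step(self, message):
        # Hypothetical dialogue dynamics: 'propose' followed by 'confirm'
        # reaches agreement; any other message resets the conversation.
        if self.state == "start" and message == "propose":
            self.state = "proposed"
            return self.state, 0.0, False
        if self.state == "proposed" and message == "confirm":
            self.state = "agreed"
            return self.state, 1.0, True
        self.state = "start"
        return self.state, -0.1, False

# Frames play the role of options: temporally extended message sequences.
FRAMES = [
    Frame("request-confirm", ["propose", "confirm"]),
    Frame("reject", ["reject"]),
]

def run_frame(env, frame, gamma):
    """Execute a frame to completion, returning the reward discounted inside
    the frame and the discount to apply to the successor state's value."""
    total, discount, done, state = 0.0, 1.0, False, env.state
    for msg in frame.messages:
        state, r, done = env.step(msg)
        total += discount * r
        discount *= gamma
        if done:
            break
    return state, total, discount, done

def smdp_q_learning(env, episodes=500, alpha=0.1, gamma=0.9, eps=0.2):
    """Q-learning over whole frames (SMDP updates) instead of single messages."""
    Q = defaultdict(float)
    for _ in range(episodes):
        state, done = env.reset(), False
        for _ in range(50):  # cap episode length
            if random.random() < eps:
                i = random.randrange(len(FRAMES))
            else:
                i = max(range(len(FRAMES)), key=lambda j: Q[(state, j)])
            nxt, reward, discount, done = run_frame(env, FRAMES[i], gamma)
            best_next = max(Q[(nxt, j)] for j in range(len(FRAMES)))
            Q[(state, i)] += alpha * (reward + discount * best_next - Q[(state, i)])
            state = nxt
            if done:
                break
    return Q

if __name__ == "__main__":
    Q = smdp_q_learning(ToyCoordinationEnv())
    best = max(range(len(FRAMES)), key=lambda j: Q[("start", j)])
    print("Preferred frame in 'start':", FRAMES[best].name)
```

An implementation along the paper's lines would additionally retrieve and adapt frames via case-based reasoning rather than fixing them by hand; the sketch only illustrates the stated correspondence between interaction frames and temporally extended actions in hierarchical RL.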
Original language: English
Title of host publication: Proceedings of the 2nd European Workshop on Multiagent Systems (EUMAS)
Pages: 593-604
Number of pages: 12
Publication status: Published - 2004
