This paper proposes hierarchical reinforcement learning (RL) methods for communication in multiagent coordination problems modelled as Markov Decision Processes (MDPs). To bridge the gap between the MDP view and the methods used to specify communication protocols in multiagent systems (using logical conditions and propositional message structure), we utilise interaction frames as powerful policy abstractions that can be combined with case-based reasoning techniques. We also exploit the fact that breaking communication processes down into manageable “chunks” of interaction sequences (as suggested by the interaction frames approach) corresponds naturally to methods proposed in the area of hierarchical RL. The approach is illustrated and validated through experiments in a complex application domain, which demonstrate that it is capable of handling large state and action spaces.
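The correspondence between interaction-frame “chunks” and hierarchical RL can be illustrated with the options framework, in which a temporally extended action runs its own sub-policy until a termination condition fires. The following is a minimal, hypothetical sketch (the `Option` class, the toy “greet” frame, and the counter-based state are illustrative assumptions, not the paper's actual formalism):

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

# Hypothetical sketch: an option (a temporally extended action) that could
# model one interaction-frame "chunk" of a communication process.
@dataclass
class Option:
    name: str
    # policy: maps the current state to the primitive message/action to emit
    policy: Callable[[int], str]
    # termination: returns True once this chunk of interaction is complete
    termination: Callable[[int], bool]

def run_option(option: Option, state: int,
               step: Callable[[int, str], int]) -> Tuple[int, List[str]]:
    """Execute the option's sub-policy until its termination condition fires.
    `step` is the (toy) environment transition: (state, action) -> next state."""
    trajectory: List[str] = []
    while not option.termination(state):
        action = option.policy(state)
        trajectory.append(action)
        state = step(state, action)
    return state, trajectory

# Toy "greeting" frame: send `hello` until the state counter reaches 3.
greet = Option(
    name="greet",
    policy=lambda s: "hello",
    termination=lambda s: s >= 3,
)
final_state, msgs = run_option(greet, 0, lambda s, a: s + 1)
# → final_state == 3, msgs == ["hello", "hello", "hello"]
```

A higher-level policy would then choose *which* option (frame) to invoke in each state, rather than choosing primitive messages directly, which is what shrinks the effective state and action spaces.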
Title of host publication: Proceedings of the 2nd European Workshop on Multiagent Systems (EUMAS)
Number of pages: 12
Publication status: Published - 2004