Multi-agent systems exhibit complex behaviors that emerge from the interactions of multiple agents in a shared environment. In this work, we control a single agent in a multi-agent system and aim to learn to interact successfully with the other agents, whose policies are fixed. Modeling the behavior of the other agents (opponents) is essential for understanding the interactions in the system. Taking advantage of recent advances in unsupervised learning, we propose modeling opponents using variational autoencoders. Additionally, many existing methods in the literature assume that the opponent model has access to the opponents' observations and actions during both training and execution. To eliminate this assumption, we propose a modification that attempts to identify the underlying opponent model using only our agent's local information, such as its observations, actions, and rewards. The experiments indicate that our opponent modeling methods achieve episodic returns equal to or greater than those of an existing modeling method in reinforcement learning tasks.
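The core idea above can be illustrated with a minimal sketch: a variational autoencoder whose encoder maps our agent's local trajectory features (observations, actions, rewards) to a latent opponent embedding, and whose decoder predicts a distribution over the opponent's next action. This is an illustrative toy in numpy, not the paper's architecture; all dimensions, layer sizes, and names (`OpponentVAE`, `elbo`, etc.) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)


def linear(x, W, b):
    return x @ W + b


def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)


class OpponentVAE:
    """Toy VAE-style opponent model (illustrative, not the paper's model).

    encode : local features -> Gaussian posterior over latent z
    decode : latent z -> distribution over the opponent's next action
    elbo   : reconstruction log-likelihood minus KL to a unit Gaussian
    """

    def __init__(self, in_dim, latent_dim, n_actions, hidden=32):
        init = lambda *shape: rng.normal(0.0, 0.1, shape)
        self.We, self.be = init(in_dim, hidden), np.zeros(hidden)
        self.Wmu, self.bmu = init(hidden, latent_dim), np.zeros(latent_dim)
        self.Wlv, self.blv = init(hidden, latent_dim), np.zeros(latent_dim)
        self.Wd, self.bd = init(latent_dim, n_actions), np.zeros(n_actions)

    def encode(self, x):
        h = np.tanh(linear(x, self.We, self.be))
        return linear(h, self.Wmu, self.bmu), linear(h, self.Wlv, self.blv)

    def reparameterize(self, mu, logvar):
        # z = mu + sigma * eps, with eps ~ N(0, I)
        eps = rng.standard_normal(mu.shape)
        return mu + np.exp(0.5 * logvar) * eps

    def decode(self, z):
        return softmax(linear(z, self.Wd, self.bd))

    def elbo(self, x, opp_action):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        probs = self.decode(z)
        # log-likelihood of the observed opponent actions
        recon = np.log(probs[np.arange(len(x)), opp_action] + 1e-8).mean()
        # KL( N(mu, sigma^2) || N(0, I) ), always non-negative
        kl = 0.5 * (np.exp(logvar) + mu**2 - 1.0 - logvar).sum(axis=1).mean()
        return recon - kl, kl


# Toy batch: 5 steps of 6 local features, opponent has 4 actions.
model = OpponentVAE(in_dim=6, latent_dim=2, n_actions=4)
x = rng.normal(size=(5, 6))
opp_actions = rng.integers(0, 4, size=5)
elbo, kl = model.elbo(x, opp_actions)
```

Training would maximize the ELBO by gradient ascent; the key point the abstract makes is that `x` here contains only the controlled agent's local information, not the opponents' observations or actions at execution time.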
|Number of pages||8|
|Publication status||Published - 8 Feb 2020|
|Event||AAAI 2020 Workshop on Reinforcement Learning in Games - New York, United States|
|Duration||8 Feb 2020 → 8 Feb 2020|