Conditional Mutual Information for Disentangled Representations in Reinforcement Learning

  • Trevor McInroe (Creator)
  • Mhairi Dunion (Creator)
  • Kevin Sebastian Luck (Creator)
  • Josiah Hanna (Creator)
  • Stefano Albrecht (Creator)

Dataset

Abstract

Reinforcement Learning (RL) environments can produce training data with spurious correlations between features due to the amount of training data or its limited feature coverage. This can lead to RL agents encoding these misleading correlations in their latent representation, preventing the agent from generalising if the correlation changes within the environment or when deployed in the real world. Disentangled representations can improve robustness, but existing disentanglement techniques that minimise mutual information between features require independent features, so they cannot disentangle correlated features. We propose an auxiliary task for RL algorithms that learns a disentangled representation of high-dimensional observations with correlated features by minimising the conditional mutual information between features in the representation. We demonstrate experimentally, using continuous control tasks, that our approach improves generalisation under correlation shifts and improves the training performance of RL algorithms in the presence of correlated features.

This is the data for the experimental results in the paper 'Conditional Mutual Information for Disentangled Representations in Reinforcement Learning' (https://arxiv.org/abs/2305.14133). These files contain the evaluation returns for all algorithms and seeds used to create Figures 4 and 5 in the paper. Further details are provided in the README file.
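The quantity the abstract refers to, conditional mutual information I(X; Y | Z), is zero when two features are independent once a third variable is accounted for, even if they are unconditionally correlated. As a rough illustration only (not the paper's estimator, which operates on continuous latent features), here is a sketch of the discrete definition; the function name and example distributions are hypothetical:

```python
import numpy as np

def conditional_mutual_information(p_xyz):
    """Compute I(X; Y | Z) in nats from a joint probability table.

    p_xyz[x, y, z] holds the joint probability P(X=x, Y=y, Z=z).
    """
    p_z = p_xyz.sum(axis=(0, 1))   # marginal P(z)
    p_xz = p_xyz.sum(axis=1)       # marginal P(x, z)
    p_yz = p_xyz.sum(axis=0)       # marginal P(y, z)
    cmi = 0.0
    for x in range(p_xyz.shape[0]):
        for y in range(p_xyz.shape[1]):
            for z in range(p_xyz.shape[2]):
                p = p_xyz[x, y, z]
                if p > 0:
                    cmi += p * np.log(p * p_z[z] / (p_xz[x, z] * p_yz[y, z]))
    return cmi

# X and Y are perfectly correlated copies of Z: unconditionally dependent,
# but conditionally independent given Z, so I(X; Y | Z) = 0.
p_corr = np.zeros((2, 2, 2))
p_corr[0, 0, 0] = 0.5
p_corr[1, 1, 1] = 0.5
print(conditional_mutual_information(p_corr))  # ~0.0

# Y = X XOR Z with X, Z uniform and independent: given Z, knowing X
# fully determines Y, so I(X; Y | Z) = log 2.
p_xor = np.zeros((2, 2, 2))
for x in range(2):
    for z in range(2):
        p_xor[x, x ^ z, z] = 0.25
print(conditional_mutual_information(p_xor))  # ~0.693
```

Minimising this conditional quantity, rather than the unconditional mutual information, is what lets the approach described above handle features that are correlated in the training data.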
Date made available: 24 Oct 2023
Publisher: Edinburgh DataShare