Action Priors for Learning Domain Invariances

Benjamin Rosman*, Subramanian Ramamoorthy

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

An agent tasked with solving a number of different decision-making problems in similar environments has an opportunity to learn over a longer timescale than each individual task. By examining solutions to different tasks, it can uncover behavioural invariances in the domain, identifying actions to be prioritised in local contexts, invariant to task details. This information can greatly increase the speed of solving new problems. We formalise this notion as action priors, defined as distributions over the action space conditioned on environment state, and show how these can be learnt from a set of value functions. We apply action priors in the setting of reinforcement learning, to bias action selection during exploration. Aggressive use of action priors performs context-based pruning of the available actions, thus reducing the complexity of lookahead during search. We additionally define action priors over observation features, rather than states, which provides further flexibility and generalisability, with the additional benefit of enabling feature selection. Action priors are demonstrated in experiments in a simulated factory environment and a large random graph domain, showing significant speed-ups in learning new tasks. Furthermore, we argue that this mechanism is cognitively plausible and compatible with findings from cognitive psychology.
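As an illustration of the idea described in the abstract, the following is a minimal tabular sketch of learning action priors from the Q-tables of a set of solved tasks and using them to bias exploration in a new task. The function names (learn_action_priors, prior_biased_action), the Dirichlet-style pseudo-count alpha, and the epsilon-greedy sampling variant are illustrative assumptions, not the paper's exact formulation.

    import numpy as np

    def learn_action_priors(q_tables, n_states, n_actions, alpha=1.0):
        # Count, per state, how often each action is optimal across tasks;
        # alpha acts as a Dirichlet pseudo-count for unseen actions.
        counts = np.full((n_states, n_actions), alpha)
        for q in q_tables:
            for s in range(n_states):
                best = np.flatnonzero(q[s] == q[s].max())
                counts[s, best] += 1.0 / len(best)  # split credit over ties
        # Normalise rows to obtain a distribution over actions per state.
        return counts / counts.sum(axis=1, keepdims=True)

    def prior_biased_action(q, prior, state, epsilon=0.1, rng=None):
        # Epsilon-greedy, except exploratory actions are drawn from the
        # learnt action prior rather than uniformly at random.
        rng = rng or np.random.default_rng()
        if rng.random() < epsilon:
            return int(rng.choice(len(prior[state]), p=prior[state]))
        return int(np.argmax(q[state]))

    # Hypothetical usage: priors from three solved tasks, applied to a new one.
    solved = [np.random.default_rng(i).random((20, 4)) for i in range(3)]
    prior = learn_action_priors(solved, n_states=20, n_actions=4)
    a = prior_biased_action(np.zeros((20, 4)), prior, state=5)

With alpha > 0 every action keeps nonzero probability, so the prior biases exploration without fully pruning any action; pushing alpha towards 0 approaches the aggressive, context-based pruning the abstract mentions.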

Original language: English
Pages (from-to): 107-118
Number of pages: 12
Journal: IEEE Transactions on Autonomous Mental Development
Volume: 7
Issue number: 2
DOIs
Publication status: Published - Jun 2015

Keywords

  • Action ordering
  • action selection
  • reinforcement learning
  • search pruning
  • transfer learning
  • STOCHASTIC DOMAINS
  • SKILL
  • CHESS
