Clustering Markov decision processes for continual transfer

M. M. Mahmud, Majd Hawasly, Benjamin Rosman, Subramanian Ramamoorthy

Research output: Working paper

Abstract

We present algorithms to effectively represent a set of Markov decision processes (MDPs), whose optimal policies have already been learned, by a smaller source subset for lifelong, policy-reuse-based transfer learning in reinforcement learning. This is necessary when the number of previous tasks is large and the cost of measuring similarity counteracts the benefit of transfer. The source subset forms an 'ϵ-net' over the original set of MDPs, in the sense that for each previous MDP M_p, there is a source M_s whose optimal policy has regret less than ϵ in M_p. Our contributions are as follows. We present EXP-3-Transfer, a principled policy-reuse algorithm that optimally reuses a given source policy set when learning for a new MDP. We present a framework to cluster the previous MDPs to extract a source subset. The framework consists of (i) a distance d_V over MDPs to measure policy-based similarity between MDPs; (ii) a cost function g(⋅) that uses d_V to measure how good a particular clustering is for generating useful source tasks for EXP-3-Transfer; and (iii) a provably convergent algorithm, MHAV, for finding the optimal clustering. We validate our algorithms through experiments in a surveillance domain.
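To illustrate the policy-reuse idea behind EXP-3-Transfer, the following is a minimal sketch (not the authors' code) of EXP3-style adversarial-bandit selection among candidate source policies plus one "learn from scratch" arm. The class name, the `run_episode` helper, and the assumption that episode rewards are rescaled to [0, 1] are illustrative choices, not taken from the paper.

```python
import numpy as np

class Exp3PolicySelector:
    """EXP3-style selector over source policies plus a learning arm (illustrative sketch)."""

    def __init__(self, num_arms, gamma=0.1, rng=None):
        self.num_arms = num_arms          # number of source policies + 1 learning arm
        self.gamma = gamma                # exploration rate
        self.weights = np.ones(num_arms)  # exponential weights, one per arm
        self.rng = rng or np.random.default_rng()

    def probabilities(self):
        # Mix the normalized weights with uniform exploration.
        w = self.weights / self.weights.sum()
        return (1.0 - self.gamma) * w + self.gamma / self.num_arms

    def select(self):
        # Sample an arm (a policy to run for the next episode).
        p = self.probabilities()
        arm = self.rng.choice(self.num_arms, p=p)
        return arm, p[arm]

    def update(self, arm, reward, prob):
        # Importance-weighted reward estimate keeps the update unbiased
        # for the arms that were not played this episode.
        estimate = reward / prob
        self.weights[arm] *= np.exp(self.gamma * estimate / self.num_arms)


# Hypothetical usage: run_episode(arm) would execute the chosen source policy
# (or the ongoing learning algorithm) for one episode in the new MDP and
# return a reward rescaled to [0, 1].
#
# selector = Exp3PolicySelector(num_arms=len(source_policies) + 1)
# arm, prob = selector.select()
# reward = run_episode(arm)
# selector.update(arm, reward, prob)
```

With a smaller source subset (the ϵ-net produced by the clustering framework), the bandit has fewer arms to evaluate, which is what makes the similarity-measurement cost worthwhile.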
Original language: English
Number of pages: 56
Publication status: Published - 2013
