TY - UNPB
T1 - Clustering Markov decision processes for continual transfer
AU - Mahmud, M. M.
AU - Hawasly, Majd
AU - Rosman, Benjamin
AU - Ramamoorthy, Subramanian
PY - 2013
Y1 - 2013
N2 - We present algorithms to effectively represent a set of Markov decision processes (MDPs), whose optimal policies have already been learned, by a smaller source subset for lifelong, policy-reuse-based transfer learning in reinforcement learning. This is necessary when the number of previous tasks is large and the cost of measuring similarity counteracts the benefit of transfer. The source subset forms an 'ϵ-net' over the original set of MDPs, in the sense that for each previous MDP M_p there is a source M_s whose optimal policy has regret less than ϵ in M_p. Our contributions are as follows. We present EXP-3-Transfer, a principled policy-reuse algorithm that optimally reuses a given source policy set when learning for a new MDP. We present a framework to cluster the previous MDPs to extract a source subset. The framework consists of (i) a distance d_V over MDPs to measure policy-based similarity between MDPs; (ii) a cost function g(⋅) that uses d_V to measure how good a particular clustering is for generating useful source tasks for EXP-3-Transfer; and (iii) a provably convergent algorithm, MHAV, for finding the optimal clustering. We validate our algorithms through experiments in a surveillance domain.
M3 - Working paper
BT - Clustering Markov decision processes for continual transfer
ER -