Using dimensionality reduction to exploit constraints in reinforcement learning

S. Bitzer, M. Howard, S. Vijayakumar

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract / Description of output

Reinforcement learning in the high-dimensional, continuous spaces typical in robotics remains a challenging problem. To overcome this challenge, a popular approach has been to use demonstrations to find an appropriate initialisation of the policy, in an attempt to reduce the number of iterations needed to find a solution. Here, we present an alternative way to incorporate prior knowledge from demonstrations of individual postures into learning: we extract the inherent structure of the problem to find an efficient state representation. In particular, we use probabilistic, nonlinear dimensionality reduction to capture latent constraints present in the data. By learning policies in the learnt latent space, we are able to solve the planning problem in a reduced space that automatically satisfies the task constraints. As our experiments show, this reduces the exploration needed and greatly accelerates learning. We demonstrate our approach by learning a bimanual reaching task on the 19-DOF KHR-1HV humanoid.
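
The pipeline the abstract describes can be illustrated with a minimal sketch: fit a nonlinear dimensionality reduction to demonstrated postures, then run a policy search over the low-dimensional latent space and decode each latent point back to full joint angles. This is not the authors' implementation: KernelPCA stands in for the probabilistic method used in the paper, the 19-DOF demonstration data and reward are synthetic placeholders, and names such as rollout_reward are hypothetical.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)

# Synthetic "demonstrations": 200 postures of a 19-DOF robot that actually
# lie on a 2-D manifold, i.e. a task constraint couples the joints.
z_true = rng.uniform(-1.0, 1.0, size=(200, 2))
coupling = rng.normal(size=(2, 19))
demos = np.tanh(z_true @ coupling)                  # (200, 19) joint angles

# Nonlinear dimensionality reduction with an inverse map back to joint space
# (a stand-in for the probabilistic method referenced in the abstract).
dr = KernelPCA(n_components=2, kernel="rbf", fit_inverse_transform=True)
latents = dr.fit_transform(demos)

def rollout_reward(z, goal):
    """Score a latent point by how close its decoded posture is to a goal
    posture. A real task would execute the robot and score the trajectory."""
    q = dr.inverse_transform(z.reshape(1, -1))[0]   # latent -> 19-DOF posture
    return -np.linalg.norm(q - goal)

# A reachable goal: the decoded posture of one demonstrated latent point.
goal = dr.inverse_transform(latents[:1])[0]

# Cross-entropy-style search in the 2-D latent space instead of the 19-D
# joint space; decoded postures stay near the demonstrated constraint
# manifold, so far less exploration is needed.
mu, sigma = latents.mean(axis=0), latents.std(axis=0)
for _ in range(30):
    cand = rng.normal(mu, sigma, size=(64, 2))
    scores = np.array([rollout_reward(z, goal) for z in cand])
    elite = cand[np.argsort(scores)[-8:]]           # keep the best 8 samples
    mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-3

print("final distance to goal posture:", -rollout_reward(mu, goal))
```

Under these assumptions, the search distribution only ever has to cover a 2-D space, which is the source of the speed-up the abstract claims: exploration happens in the reduced space, while the learnt inverse map keeps every candidate posture consistent with the constraints present in the demonstrations.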
Original language: English
Title of host publication: Intelligent Robots and Systems (IROS), 2010 IEEE/RSJ International Conference on
Place of publication: New York
Publisher: Institute of Electrical and Electronics Engineers
Pages: 3219-3225
Number of pages: 7
ISBN (Print): 978-1-4244-6675-7
DOIs
Publication status: Published - 2010
