Learning from Demonstration with Weakly Supervised Disentanglement

Yordan Hristov, Subramanian Ramamoorthy

Research output: Contribution to conference › Poster › peer-review


Robotic manipulation tasks, such as wiping with a soft sponge, require control from multiple rich sensory modalities. Human-robot interaction, aimed at teaching robots, is difficult in this setting, as there is potential for mismatch between human and machine comprehension of the rich data streams. We treat the task of interpretable learning from demonstration as an optimisation problem over a probabilistic generative model. To account for the high dimensionality of the data, a high-capacity neural network is chosen to represent the model. The latent variables in this model are explicitly aligned with high-level notions and concepts that are manifested in a set of demonstrations. We show that such alignment is best achieved through the use of labels from the end user, in an appropriately restricted vocabulary, in contrast to the conventional approach of the designer picking a prior over the latent variables. Our approach is evaluated in the context of a table-top robot manipulation task performed by a PR2 robot: dabbing liquids with a sponge (forcefully pressing a sponge and moving it along a surface). The robot provides visual information, arm joint positions and arm joint efforts. Videos of the task and data are available in the supplementary materials at: https://sites.google.com/view/weak-label-lfd.
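To make the idea of label-driven latent alignment concrete, the following is a minimal sketch (not the authors' implementation) of one common way to realise it: a variational autoencoder in which a few latent dimensions are additionally trained, via small per-concept classifiers, to predict user-provided labels from a restricted vocabulary. All class names, architecture sizes, and loss weights below are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeaklySupervisedVAE(nn.Module):
    """VAE whose first `num_labelled` latent dims are aligned with labelled concepts."""

    def __init__(self, input_dim=64, latent_dim=8, num_labelled=2, num_classes=3):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, 128), nn.ReLU())
        self.fc_mu = nn.Linear(128, latent_dim)
        self.fc_logvar = nn.Linear(128, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, input_dim))
        # One small classifier per supervised latent dimension: each maps a
        # single latent coordinate onto that concept's label vocabulary.
        self.classifiers = nn.ModuleList(
            [nn.Linear(1, num_classes) for _ in range(num_labelled)])

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        # Reparameterisation trick: sample z while keeping gradients.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.decoder(z), mu, logvar, z

def loss(model, x, labels, beta=1.0, gamma=10.0):
    """ELBO plus a label-alignment term on the supervised latent dims.

    labels: LongTensor of shape (batch, num_labelled); entries of -1 mean
    "no label for this concept" and are skipped (hence weak supervision).
    """
    recon, mu, logvar, z = model(x)
    recon_loss = F.mse_loss(recon, x, reduction="mean")
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    align = x.new_zeros(())
    for i, clf in enumerate(model.classifiers):
        mask = labels[:, i] >= 0                  # labelled examples only
        if mask.any():
            logits = clf(z[mask, i : i + 1])      # classify from one latent dim
            align = align + F.cross_entropy(logits, labels[mask, i])
    return recon_loss + beta * kl + gamma * align
```

Under this setup, traversing one of the supervised latent dimensions at test time should vary only the corresponding labelled concept in the decoded output, which is what makes the learned representation interpretable to the end user.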
Original language: English
Number of pages: 16
Publication status: Accepted/In press - 12 Jan 2021
Event: Ninth International Conference on Learning Representations 2021 - Virtual Conference
Duration: 4 May 2021 - 7 May 2021


Conference: Ninth International Conference on Learning Representations 2021
Abbreviated title: ICLR 2021
City: Virtual Conference
