Abstract
Robotic manipulation tasks, such as wiping with a soft sponge, require control from multiple rich sensory modalities. Human-robot interaction, aimed at teaching robots, is difficult in this setting because of the potential for mismatch between human and machine comprehension of the rich data streams. We treat the task of interpretable learning from demonstration as an optimisation problem over a probabilistic generative model. To account for the high dimensionality of the data, a high-capacity neural network is chosen to represent the model. The latent variables in this model are explicitly aligned with high-level notions and concepts that are manifested in a set of demonstrations. We show that such alignment is best achieved through the use of labels from the end user, drawn from an appropriately restricted vocabulary, in contrast to the conventional approach of the designer picking a prior over the latent variables. Our approach is evaluated in the context of two table-top manipulation tasks performed by a PR2 robot: dabbing liquids with a sponge (forcefully pressing a sponge and moving it along a surface) and pouring between different containers. The robot provides visual information, arm joint positions, and arm joint efforts. Videos of the tasks and data are available in the supplementary materials at https://sites.google.com/view/weak-label-lfd.
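As a rough illustration of the core idea, aligning a subset of latent variables with end-user labels rather than a hand-picked prior, below is a minimal sketch of a VAE-style generative model whose first few latent dimensions are regressed onto weak concept labels via an auxiliary loss. This is an assumption-laden sketch, not the paper's actual model: the class and parameter names (`WeakLabelVAE`, `n_labelled`) and the specific losses are hypothetical, and the real architecture and objective are described in the full publication.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class WeakLabelVAE(nn.Module):
    """Minimal VAE whose first `n_labelled` latent dimensions are
    encouraged to match user-provided concept labels.
    Hypothetical sketch; names and losses are illustrative only."""

    def __init__(self, x_dim=128, z_dim=16, n_labelled=4):
        super().__init__()
        self.n_labelled = n_labelled
        # Encoder outputs mean and log-variance of q(z | x).
        self.enc = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU(),
                                 nn.Linear(256, 2 * z_dim))
        # Decoder reconstructs the sensory input from z.
        self.dec = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(),
                                 nn.Linear(256, x_dim))

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        # Reparameterisation trick: z = mu + sigma * eps.
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        return self.dec(z), mu, logvar


def loss(model, x, y, align_weight=1.0):
    """ELBO plus an alignment term tying the labelled latent
    dimensions to the weak labels y (shape: batch x n_labelled)."""
    x_hat, mu, logvar = model(x)
    recon = F.mse_loss(x_hat, x, reduction='sum')
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    align = F.mse_loss(mu[:, :model.n_labelled], y, reduction='sum')
    return recon + kl + align_weight * align
```

Supervising only a labelled subset of the latent space leaves the remaining dimensions free to capture variation the user did not label, which is one plausible way to realise the abstract's contrast between user-supplied labels and a designer-chosen prior.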
| Original language | English |
| --- | --- |
| Title of host publication | International Conference on Learning Representations (ICLR 2021) |
| Number of pages | 18 |
| Publication status | Published - 4 May 2021 |
| Event | Ninth International Conference on Learning Representations 2021 - Virtual Conference, 4 May 2021 → 7 May 2021, https://iclr.cc/Conferences/2021/Dates |
Conference
| Conference | Ninth International Conference on Learning Representations 2021 |
| --- | --- |
| Abbreviated title | ICLR 2021 |
| City | Virtual Conference |
| Period | 4/05/21 → 7/05/21 |
| Internet address | https://iclr.cc/Conferences/2021/Dates |