Interpretable Latent Spaces for Learning from Demonstration

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Effective human-robot interaction, such as robot learning from human demonstration, requires the learning agent to ground abstract concepts (such as those contained within instructions) in the corresponding high-dimensional sensory input stream from the world. Models such as deep neural networks, with high capacity through their large parameter spaces, can be used to compress the high-dimensional sensory data to lower-dimensional representations. These low-dimensional representations facilitate symbol grounding, but may not guarantee that the representation is human-interpretable. We propose a method which utilises the grouping of user-defined symbols and their corresponding sensory observations in order to align the learnt compressed latent representation with the semantic notions contained in the abstract labels. We demonstrate this through experiments with both simulated and real-world object data, showing that such alignment can be achieved in a process of physical symbol grounding.
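To illustrate the idea of aligning a compressed latent space with user-defined symbols, the minimal sketch below pairs a standard autoencoder reconstruction loss with a grouping loss that pulls together latent codes of observations sharing the same label and pushes apart those with different labels. This is not the authors' implementation; the network sizes, margin, and loss weighting are illustrative assumptions.

```python
# Minimal sketch (assumed, not the paper's code): autoencoder whose latent space
# is nudged so that observations sharing a user-defined symbol cluster together.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GroupedAutoencoder(nn.Module):
    def __init__(self, input_dim=784, latent_dim=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, 128), nn.ReLU(),
                                     nn.Linear(128, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                     nn.Linear(128, input_dim))

    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z)

def grouping_loss(z, labels, margin=1.0):
    """Pull latent codes with the same symbol together, push different ones apart."""
    dist = torch.cdist(z, z)                                   # pairwise latent distances
    same = (labels[:, None] == labels[None, :]).float()        # 1 where symbols match
    pull = (same * dist.pow(2)).sum() / same.sum().clamp(min=1)
    push = ((1 - same) * F.relu(margin - dist).pow(2)).sum() / (1 - same).sum().clamp(min=1)
    return pull + push

model = GroupedAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(32, 784)                  # stand-in sensory observations
y = torch.randint(0, 4, (32,))           # stand-in user-defined symbol labels
z, x_hat = model(x)
loss = F.mse_loss(x_hat, x) + 0.1 * grouping_loss(z, y)  # reconstruction + alignment
opt.zero_grad(); loss.backward(); opt.step()
```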
Original language: English
Title of host publication: Proc. Conference on Robot Learning (CoRL), 2018
Place of Publication: Zürich, Switzerland
Publisher: PMLR
Pages: 957-968
Number of pages: 12
Volume: 87
Publication status: Published - 2018
Event: Conference on Robot Learning (CoRL) - 2018 Edition - Zürich, Switzerland
Duration: 29 Oct 2018 - 31 Oct 2018
http://www.robot-learning.org/

Publication series

Name: Proceedings of Machine Learning Research
Publisher: PMLR
Volume: 87
ISSN (Electronic): 2640-3498

Conference

Conference: Conference on Robot Learning (CoRL) - 2018 Edition
Abbreviated title: CoRL 2018
Country/Territory: Switzerland
City: Zürich
Period: 29/10/18 - 31/10/18
Internet address: http://www.robot-learning.org/

