Abstract
Robots interacting with humans often have to recognize, reason about, and describe the spatial relations between objects. Prepositions are often used to describe such spatial relations, but it is difficult to equip a robot with comprehensive knowledge of these prepositions. This paper describes an architecture for incrementally learning and revising the grounding of spatial relations between objects. Answer Set Prolog, a declarative language, is used to represent and reason with incomplete knowledge that includes prepositional relations between objects in a scene. A generic grounding of prepositions for spatial relations, human input (when available), and nonmonotonic logical inference are used to infer spatial relations in 3D point clouds of given scenes, incrementally acquiring and revising a specialized metric grounding of the prepositions, and learning the relative confidence associated with each grounding. The architecture is evaluated on a benchmark dataset of tabletop images and on complex, simulated scenes of furniture.
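The metric grounding of prepositions mentioned in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the function name, the axis conventions, and the dominant-axis rule are all assumptions made purely for illustration of what a simple geometric grounding of spatial prepositions might look like.

```python
# Illustrative sketch (NOT the paper's method): a toy metric grounding of
# spatial prepositions from 3D object centroids. Axis convention assumed:
# +x = right, +y = in front, +z = up.

def ground_preposition(ref, target):
    """Classify target's relation to ref from (x, y, z) centroids.

    Returns one of 'left_of', 'right_of', 'in_front_of', 'behind',
    'above', 'below', chosen by the dominant axis of displacement.
    """
    dx = target[0] - ref[0]
    dy = target[1] - ref[1]
    dz = target[2] - ref[2]
    # Pick the axis along which the displacement is largest in magnitude.
    axis, value = max((("x", dx), ("y", dy), ("z", dz)),
                      key=lambda pair: abs(pair[1]))
    if axis == "x":
        return "right_of" if value > 0 else "left_of"
    if axis == "y":
        return "in_front_of" if value > 0 else "behind"
    return "above" if value > 0 else "below"

print(ground_preposition((0.0, 0.0, 0.0), (0.0, 0.0, 0.3)))   # above
print(ground_preposition((0.0, 0.0, 0.0), (-0.5, 0.0, 0.1)))  # left_of
```

In the architecture described above, such a metric grounding would not be hand-coded but learned and revised incrementally from scenes and human input, with a confidence value maintained for each grounding.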
Original language | English
---|---
Title of host publication | Workshop on Perception, Inference and Learning for Joint Semantic, Geometric and Physical Understanding at ICRA 2018, Brisbane, Australia
Number of pages | 6
Publication status | Published - 21 May 2018
Event | Workshop on Perception, Inference and Learning for Joint Semantic, Geometric and Physical Understanding at ICRA 2018, Brisbane, Australia. Duration: 21 May 2018 → …
Workshop

Workshop | Workshop on Perception, Inference and Learning for Joint Semantic, Geometric and Physical Understanding at ICRA 2018
---|---
Country/Territory | Australia
City | Brisbane
Period | 21/05/18 → …