Towards a Theory of Explanations for Human–Robot Collaboration

Mohan Sridharan*, Ben Meadows

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

This paper makes two contributions towards enabling a robot to provide explanatory descriptions of its decisions, the underlying knowledge and beliefs, and the experiences that informed these beliefs. First, we present a theory of explanations comprising (i) claims about representing, reasoning with, and learning domain knowledge to support the construction of explanations; (ii) three fundamental axes to characterize explanations; and (iii) a methodology for constructing these explanations. Second, we describe an architecture for robots that implements this theory and supports scalability to complex domains and explanations. We demonstrate the architecture’s capabilities in the context of a simulated robot (a) moving target objects to desired locations or to people, or (b) following recipes to bake biscuits.
Original language: English
Pages (from-to): 331–342
Journal: KI - Künstliche Intelligenz
Volume: 33
Publication status: Published - 23 Sept 2019

Keywords

  • Human–robot collaboration
  • Explanations
  • Non-monotonic logical reasoning
  • Probabilistic planning
