Explanation Styles for Trustworthy Autonomous Systems

David Robb, Xingkun Liu, Helen Hastie

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

We present a study that explores how natural language explanations can be formulated to manage the appropriate amount of trust in a remote autonomous system that fails to complete its mission. Online crowd-sourced participants were shown video vignettes of robots performing an inspection task. We measured participants' mental models, their confidence in their understanding of the robot behaviour, and their trust in the robot. We found that including history in the explanation increases trust and confidence and helps maintain an accurate mental model, but only if context is also included. In addition, our study reveals that explanation formulations lacking context can lead to misplaced participant confidence.
Original language: English
Title of host publication: Proceedings of the 22nd International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2023)
Publisher: International Foundation for Autonomous Agents and Multiagent Systems
Pages: 2298-2300
ISBN (Electronic): 9781450394321
DOIs
Publication status: Published - 30 May 2023
Event: The 22nd International Conference on Autonomous Agents and Multiagent Systems - ExCeL London, London, United Kingdom
Duration: 29 May 2023 - 2 Jun 2023
Conference number: 22
https://aamas2023.soton.ac.uk/

Conference

Conference: The 22nd International Conference on Autonomous Agents and Multiagent Systems
Abbreviated title: AAMAS 2023
Country/Territory: United Kingdom
City: London
Period: 29/05/23 - 02/06/23
Internet address: https://aamas2023.soton.ac.uk/

Keywords

  • explanations
  • transparency
  • trust
  • robot faults
  • mental models
