Seeing eye to eye: trustworthy embodiment for task-based conversational agents

David A. Robb*, José Lopes, Muneeb I. Ahmad, Peter E. McKenna, Xingkun Liu, Katrin Lohan, Helen Hastie

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Smart speakers and conversational agents have been accepted into our homes for a number of tasks such as playing music, interfacing with the internet of things, and, more recently, general chit-chat. However, they have been less readily accepted in our workplaces. This may be due to the data privacy and security concerns that exist with commercially available smart speakers. Another reason may be that a smart speaker is simply too abstract and does not portray the social cues associated with a trustworthy work colleague. Here, we present an in-depth mixed-method study in which we investigate this question of embodiment in a serious, task-based work scenario of a first responder team. We explore the concepts of trust, engagement, cognitive load, and human performance using a humanoid head-style robot, a commercially available smart speaker, and a specially developed dialogue manager. Studying the effect of embodiment on trust, a highly subjective and multi-faceted phenomenon, is clearly challenging, and our results indicate that, potentially, the robot, with its anthropomorphic facial features, expressions, and eye gaze, was trusted more than the smart speaker. In addition, we found that embodying a conversational agent helped increase task engagement and performance compared to the smart speaker. This study indicates that embodiment could potentially be useful for transitioning conversational agents into the workplace, and further in situ, "in the wild" experiments with domain workers could be conducted to confirm this.
Original language: English
Article number: 1234767
Pages (from-to): 1-18
Number of pages: 18
Journal: Frontiers in Robotics and AI
Volume: 10
DOIs
Publication status: Published - 30 Aug 2023

Keywords

  • autonomous systems
  • cognitive load
  • conversational agent
  • human–robot teaming
  • remote robots
  • social robotics
  • user engagement
