White lies on silver tongues: Why robots need to deceive (and how)

Alistair Isaac, Will Bridewell

Research output: Chapter in Book/Report/Conference proceeding › Chapter

Abstract

It is easy to see that social robots will need the ability to detect and evaluate deceptive speech; otherwise, they will be vulnerable to manipulation by malevolent humans. More surprisingly, we argue that effective social robots must also be able to produce deceptive speech. Many forms of technically deceptive speech perform a positive pro-social function, and the social integration of artificial agents will be possible only if they participate in this market of constructive deceit. We demonstrate that a crucial condition for detecting and producing deceptive speech is possession of a theory of mind. Furthermore, strategic reasoning about deception requires identifying a distinguished type of goal, which we call an ulterior motive. We argue that these goals, not the veridicality of speech per se, are the appropriate target for ethical evaluation. Consequently, deception-capable robots are compatible with the most prominent programs for ensuring that robots behave ethically.
Original language: English
Title of host publication: Robot Ethics 2.0
Subtitle of host publication: From autonomous cars to artificial intelligence
Editors: Patrick Lin, Keith Abney, Ryan Robert Jenkins
Publisher: Oxford University Press
ISBN (Print): 9780190652951
DOIs
Publication status: Published - Nov 2017
