It is easy to see that social robots will need the ability to detect and evaluate deceptive speech, otherwise they will be vulnerable to manipulation by malevolent humans. More surprisingly, we argue that effective social robots must also be able to produce deceptive speech. Many forms of technically deceptive speech perform a positive pro-social function, and the social integration of artificial agents will only be possible if they participate in this market of constructive deceit. We demonstrate that a crucial condition for detecting and producing deceptive speech is possession of a theory of mind. Furthermore, strategic reasoning about deception requires identifying a distinguished type of goal, which we call an ulterior motive. We argue that these goals are the appropriate target for ethical evaluation, not the veridicality of speech per se. Consequently, deception-capable robots are compatible with the most prominent programs to ensure robots behave ethically.
Title of host publication: Robot Ethics 2.0
Subtitle of host publication: From autonomous cars to artificial intelligence
Editors: Patrick Lin, Keith Abney, Ryan Robert Jenkins
Publisher: Oxford University Press
Publication status: Published - Nov 2017
Title: White lies on silver tongues: Why robots need to deceive (and how)
Media coverage
Why We'll Eventually Want Our Robots to Deceive Us (1 item of media coverage)

Activities
Alistair Isaac on EURA - Jean Monnet Centre of Excellence - Deception: actions or goals?
Alistair Isaac (Advisor), 13 Jan 2020
Activity: Participating in or organising an event › Public engagement - media article or participation

When to Trust a Liar
Alistair Isaac (Invited speaker), 31 Oct 2019
Activity: Academic talk or presentation › Invited talk
School of Philosophy, Psychology and Language Sciences - Senior Lecturer
Person: Academic, research active