Description
Our folk understanding of “deception” combines two features: (i) the utterance of falsehoods aimed at inducing false belief; and (ii) malicious intent. Yet these come apart in practice. (ii) is present without (i) in the case of “paltering”, the utterance of truths with the intent to mislead; (i) is present without (ii) in the case of “white lies” that serve a pro-social function. I argue that, in order to engage socially with humans, robots must be capable of regularly producing (i). Indeed, there are positive ethical reasons for wanting robots to have the capacity to lie in complex social interactions, as they may need to do so to satisfy Asimov’s Laws. The real requirement for ensuring trustworthy robots is therefore to prevent malicious intent, a problem that is much more pressing because it already confronts us today, for instance in military drones.

Period | 31 Oct 2019 |
---|---|
Event title | Human Robot Interaction between Trust and Deception |
Event type | Conference |
Location | Pisa, Italy |
Degree of Recognition | International |
Related content

Research output
- White lies on silver tongues: Why robots need to deceive (and how)
  (Research output: Chapter in Book/Report/Conference proceeding › Chapter)

Activities
- Alistair Isaac on EURA – Jean Monnet Centre of Excellence – Deception: actions or goals?
  (Activity: Participating in or organising an event › Public Engagement – Media article or participation)