When to Trust a Liar

Activity: Invited talk (academic talk or presentation)

Our folk understanding of “deception” combines two features: (i) actions aimed at inducing false belief; and (ii) malicious intent. Yet these come apart in practice. (ii) is present without (i) in the case of “paltering,” the utterance of truths with the intent to mislead; (i) is present without (ii) in the case of “white lies,” which serve a pro-social function. I argue that, in order to engage socially with humans, robots must be capable of regularly producing (i). Indeed, there are positive ethical reasons for wanting robots to have the capacity to lie in complex social interactions, as they may need to do so to satisfy Asimov’s Laws. The real requirement for ensuring trustworthy robots is thus to solve the problem of preventing malicious intent. This problem is far more pressing, as it already confronts us today, for instance in military drones.
Period: 31 Oct 2019
Event title: Human Robot Interaction between Trust and Deception
Event type: Conference
Location: Pisa, Italy
Degree of recognition: International