Responsibility gaps and the reactive attitudes

Research output: Contribution to journal › Article › peer-review

Abstract

Artificial Intelligence (AI) systems are ubiquitous. From social media timelines and video recommendations on YouTube to the adverts we see online, AI, in a very real sense, filters the world we see. More than that, AI is being embedded in agent-like systems, which might prompt certain reactions from users. Specifically, we might find ourselves feeling frustrated when these systems do not meet our expectations. In ordinary situations this may be harmless, but with the ever-increasing sophistication of AI systems it may become a problem. While it seems unproblematic to recognize that being angry at your car for breaking down is unfitting, can the same be said of AI systems? In this paper, therefore, I investigate the so-called “reactive attitudes” and their important link to our responsibility practices. I then show how within this framework there exist exemption and excuse conditions, and test whether our adopting the “objective attitude” toward agential AI is justified. I argue that such an attitude is appropriate in the context of three distinct senses of responsibility (answerability, attributability, and accountability), and that, therefore, AI systems do not undermine our responsibility ascriptions.
Original language: English
Pages (from-to): 295-302
Number of pages: 8
Journal: AI and Ethics
Volume: 3
Issue number: 1
Early online date: 30 May 2022
DOIs
Publication status: Published - Feb 2023

Keywords

  • reactive attitudes
  • responsibility gaps
  • artificial intelligence
