Is AI a problem for forward looking moral responsibility? The problem followed by a solution

Fabio Tollon

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Recent work in AI ethics has come to bear on questions of responsibility. Specifically, it asks whether the nature of AI-based systems renders various notions of responsibility inappropriate. While substantial attention has been given to backward-looking senses of responsibility, there has been little consideration of forward-looking senses of responsibility. This paper aims to fill this gap, and concerns itself with responsibility as moral obligation, a particular kind of forward-looking responsibility. Responsibility as moral obligation is predicated on the idea that agents have at least some degree of control over the kinds of systems they create and deploy. AI systems, by virtue of their ability to learn from experience once deployed, and their often experimental nature, may therefore pose a significant challenge to forward-looking responsibility. Such systems might not be able to have their course altered, and so even if their initial programming determines their goals, the means by which they achieve these goals may be outside the control of human operators. In cases such as this, we might say that there is a gap in moral obligation. However, in this paper, I argue that there are no “gaps” in responsibility as moral obligation as this question comes to bear on AI systems. I support this conclusion by focusing on the nature of risks when developing technology, and by showing that technological assessment is not only about the consequences that a specific technology might have. Technological assessment is more than merely consequentialist, and should also include a hermeneutic component, which looks at the societal meaning of the system. Therefore, while it may be true that the creators of AI systems cannot fully appreciate what the consequences of their systems might be, this does not undermine or render improper their responsibility as moral obligation.
Original language: English
Title of host publication: Artificial Intelligence Research
Subtitle of host publication: Second Southern African Conference, SACAIR 2021, Durban, South Africa, December 6–10, 2021, Proceedings
Editors: Edgar Jembere, Aurona Gerber, Serestina Viriri, Anban Pillay
Publisher: Springer, Cham
Pages: 307–318
Number of pages: 12
Volume: 1551
Edition: 1
ISBN (Electronic): 9783030950705
ISBN (Print): 9783030950699
DOIs
Publication status: Published - Jan 2022
Event: Southern African Artificial Intelligence Conference: Artificial Intelligence for Science, Technology and Society - Online
Duration: 6 Dec 2021 – 10 Dec 2021
https://2021.sacair.org.za/

Publication series

Name: Communications in Computer and Information Science
Publisher: Springer Cham
ISSN (Print): 1865-0929
ISSN (Electronic): 1865-0937

Conference

Conference: Southern African Artificial Intelligence Conference
Abbreviated title: SACAIR2021
City: Online
Period: 6/12/21 – 10/12/21

Keywords

  • responsibility gaps
  • forward-looking responsibility
  • technological assessment
  • moral obligation
