Do others mind? Moral agents without mental states

Research output: Contribution to journal › Article › peer-review

Abstract

As technology advances and artificial agents (AAs) become increasingly autonomous, begin to embody morally relevant values, and act on those values, the question arises of whether these entities should be considered artificial moral agents (AMAs). There are two main ways in which one could argue for AMAs: using intentional criteria or using functional criteria. In this article, I provide an exposition and critique of "intentional" accounts of AMA. These accounts claim that moral agency should only be accorded to entities that have internal mental states. Against this thesis I argue that the requirement of internal states is philosophically unsound, as it runs up against the problem of other minds. In place of intentional accounts, I provide a functionalist alternative, which makes conceptual room for the existence of AMAs. The implication of this thesis is that at some point in the future we may be faced with moral situations in which no human being is responsible, but a machine may be. Moreover, this responsibility holds, I claim, independently of whether the agent in question is "punishable" or not.
Original language: English
Pages (from-to): 182-194
Number of pages: 13
Journal: South African Journal of Philosophy
Volume: 40
Issue number: 2
Early online date: 29 Jun 2021
Publication status: Published - Jun 2021
