The moral psychology of artificial intelligence

Ali Ladak*, Steve Loughnan, Matti Wilks

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Artificial intelligences (AIs), although often perceived as mere tools, have increasingly advanced cognitive and social capacities. In response, psychologists are studying people’s perceptions of AIs as moral agents (entities that can do right and wrong) and moral patients (entities that can be targets of right and wrong actions). This article reviews the extent to which people see AIs as moral agents and patients and how they feel about such AIs. We also examine how characteristics of ourselves and of the AIs affect attributions of moral agency and patiency. We find multiple factors that contribute to attributions of moral agency and patiency in AIs, some of which overlap with attributions of morality to humans (e.g., mind perception) and some that are unique (e.g., sci-fi fan identity). We identify several future directions, including studying agency and patiency attributions to the latest generation of chatbots and to the more advanced future AIs now being rapidly developed.

Original language: English
Pages (from-to): 1-8
Number of pages: 8
Journal: Current Directions in Psychological Science
Early online date: 30 Nov 2023
Publication status: E-pub ahead of print - 30 Nov 2023

Keywords

  • moral agency
  • moral patiency
  • morality
  • artificial intelligence
  • robots

