Artificial moral advisors: A new perspective from moral psychology

Yuxin Liu*, Adam Moore, Jamie Webb, Shannon Vallor

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Philosophers have recently put forward the possibility of achieving moral enhancement through artificial intelligence (e.g., Giubilini and Savulescu's proposal [32]), suggesting various forms of "artificial moral advisor" (AMA) to help people make moral decisions without the drawbacks of human cognitive limitations. In this paper, we provide a new perspective on the AMA, drawing on empirical evidence from moral psychology to point out several challenges to these proposals that have been largely neglected by AI ethicists. In particular, we suggest that the AMA as currently conceived is fundamentally misaligned with human moral psychology: it incorrectly assumes a static framework of moral values underpinning the AMA's attunement to individual users, and people's reactions and subsequent (in)actions in response to an AMA's suggestions will likely diverge substantially from expectations. We therefore argue that a coherent understanding of human moral psychology is necessary for the future development of AMAs.
Original language: English
Title of host publication: AIES '22
Subtitle of host publication: Artificial Moral Advisors: A New Perspective from Moral Psychology
Place of Publication: Oxford, UK
Publisher: Association for Computing Machinery, Inc
Pages: 436–445
Number of pages: 10
ISBN (Print): 9781450392471
DOIs
Publication status: Published - Jul 2022

Keywords

  • moral psychology
  • artificial moral advisor
  • AI moral enhancement
  • AI ethics
  • normative ethics
