Abstract
Philosophers have recently put forward the possibility of achieving moral enhancement through artificial intelligence, proposing various forms of “artificial moral advisor” (AMA) to help people make moral decisions without the drawbacks of human cognitive limitations (e.g., Giubilini and Savulescu’s version [32]). In this paper, we provide a new perspective on the AMA, drawing on empirical evidence from moral psychology to point out several challenges to these proposals that AI ethicists have largely neglected. In particular, we suggest that the AMA in its current conception is fundamentally misaligned with human moral psychology: it incorrectly assumes a static framework of moral values underpinning the AMA’s attunement to individual users, and people’s reactions and subsequent (in)actions in response to the AMA’s suggestions will likely diverge substantially from expectations. We therefore note the necessity of a coherent understanding of human moral psychology for the future development of AMAs.
| Original language | English |
| --- | --- |
| Title of host publication | AIES '22 |
| Subtitle of host publication | Artificial Moral Advisors: A New Perspective from Moral Psychology |
| Place of Publication | Oxford, UK |
| Publisher | Association for Computing Machinery, Inc |
| Pages | 436–445 |
| Number of pages | 10 |
| ISBN (Print) | 9781450392471 |
| DOIs | |
| Publication status | Published - Jul 2022 |
Keywords
- moral psychology
- artificial moral advisor
- AI moral enhancement
- AI ethics
- normative ethics