Abstract
Implementing sensitivity to norms, laws, and human values in computational systems has transitioned from philosophical reflection to an actual engineering challenge. The “value alignment” approach to dealing with superintelligent AIs tends to employ computationally friendly concepts such as utility functions, system goals, agent preferences, and value optimizers, which, this chapter argues, do not have intrinsic ethical significance. The chapter considers what may be lost in the excision of intrinsically ethical concepts from the project of engineering moral machines. It argues that human-level AI and superintelligent systems can be assured to be safe and beneficial only if they embody something like virtue or moral character, and that virtue embodiment is a more appropriate long-term goal for AI safety research than value alignment.
Original language | English
---|---
Title of host publication | Ethics of Artificial Intelligence
Editors | S. Matthew Liao
Place of publication | New York
Publisher | Oxford University Press
Chapter | 13
Pages | 383–412
Number of pages | 30
ISBN (Print) | 9780190905033, 9780190905040
Publication status | Published - 1 Sept 2020
Keywords
- ethics of AI
- virtue ethics
- machine intelligence
- value alignment
- virtue embodiment
- superintelligence
- intrinsic ethical significance
- utility function
- human-level AI
- beneficial AI
- moral character