Moral machines: From value alignment to embodied virtue

Wendell Wallach, Shannon Vallor

Research output: Chapter in Book/Report/Conference proceeding › Chapter (peer-reviewed)

Abstract / Description of output

Implementing sensitivity to norms, laws, and human values in computational systems has transitioned from philosophical reflection to a concrete engineering challenge. The “value alignment” approach to dealing with superintelligent AIs tends to employ computationally friendly concepts such as utility functions, system goals, agent preferences, and value optimizers, which, this chapter argues, do not have intrinsic ethical significance. This chapter considers what may be lost in the excision of intrinsically ethical concepts from the project of engineering moral machines. It argues that human-level AI and superintelligent systems can be assured to be safe and beneficial only if they embody something like virtue or moral character, and that virtue embodiment is a more appropriate long-term goal for AI safety research than value alignment.
Original language: English
Title of host publication: Ethics of Artificial Intelligence
Editors: S. Matthew Liao
Place of publication: New York
Publisher: Oxford University Press
Chapter: 13
Pages: 383-412
Number of pages: 29
ISBN (Print): 9780190905033, 9780190905040
DOIs
Publication status: Published - 1 Sept 2020

Keywords

  • ethics of AI
  • virtue ethics
  • machine intelligence
  • value alignment
  • virtue embodiment
  • superintelligence
  • intrinsic ethical significance
  • utility function
  • human-level AI
  • beneficial AI
  • moral character
