Algorithmic sentencing: Drawing lessons from human factors research

John Zerilli

Research output: Chapter in Book/Report/Conference proceeding › Chapter (peer-reviewed)

Abstract

Researchers in the field of “human factors” have long been aware that when humans devolve certain of their functions to technology, the transfer from human to machine can restructure more than the division of labor between them: humans’ perceptions of themselves and their abilities may also change. Such findings are relevant to the use of algorithmic and data-driven technologies, but whether they hold up in the specific context of recidivism risk assessment is only beginning to be considered. This chapter describes and analyzes some pertinent human factors results, and assesses the extent to which they pose a problem for the use of algorithms in the sentencing of offenders. While the findings from human factors research are themselves robust, they do not seem to translate neatly to the judicial sphere. The incentives, objectives, and ideologies of sentencing judges appear to upset the usual pattern of results seen in many other domains of human factors research.
Original language: English
Title of host publication: Sentencing and Artificial Intelligence
Editors: Jesper Ryberg, Julian V. Roberts
Publisher: Oxford University Press
Chapter: 9
Pages: 165-183
Number of pages: 19
ISBN (Electronic): 9780197539569
ISBN (Print): 9780197539538
Publication status: Published - 2021

Publication series

Name: Studies in Penal Theory and Philosophy
Publisher: Oxford University Press

Keywords

  • automation bias
  • automation complacency
  • human factors
  • human-computer interaction
  • sentencing
