Algorithmic and human decision making: for a double standard of transparency

Mario Günther, Atoosa Kasirzadeh

Research output: Contribution to journal › Article › peer-review

Abstract

Should decision-making algorithms be held to higher standards of transparency than human beings? The way we answer this question directly impacts what we demand from explainable algorithms, how we govern them via regulatory proposals, and how explainable algorithms may help resolve the social problems associated with decision making supported by artificial intelligence. Some argue that algorithms and humans should be held to the same standards of transparency, and that a double standard of transparency is hardly justified. We give two arguments to the contrary and specify two kinds of situations in which higher standards of transparency are required of algorithmic decisions than of human ones. Our arguments have direct implications for the demands on explainable algorithms in decision-making contexts such as automated transportation.
Original language: English
Pages (from-to): 375-381
Journal: AI and Society
Volume: 37
Issue number: 1
Early online date: 6 Apr 2021
Publication status: Published - 2022

Keywords

  • algorithmic decision making
  • transparency
  • explainable AI
