Abstract
Should decision-making algorithms be held to higher standards of transparency than human beings? How we answer this question directly affects what we demand from explainable algorithms, how we govern them via regulatory proposals, and how explainable algorithms may help resolve the social problems associated with decision making supported by artificial intelligence. Some argue that algorithms and humans should be held to the same standards of transparency and that a double standard of transparency is hardly justified. We give two arguments to the contrary and specify two kinds of situations in which higher standards of transparency are required from algorithmic decisions than from human ones. Our arguments have direct implications for the demands placed on explainable algorithms in decision-making contexts such as automated transportation.
| Original language | English |
|---|---|
| Pages (from-to) | 375-381 |
| Journal | AI and Society |
| Volume | 37 |
| Issue number | 1 |
| Early online date | 6 Apr 2021 |
| DOIs | |
| Publication status | Published - 2022 |
Keywords
- algorithmic decision making
- transparency
- explainable AI