TY - GEN
T1 - Why reliabilism is not enough: Epistemic and moral justification in machine learning
T2 - 3rd AAAI/ACM Conference on AI, Ethics, and Society, AIES 2020, co-located with AAAI 2020
AU - Smart, Andrew
AU - James, Larry
AU - Hutchinson, Ben
AU - Wu, Simone
AU - Vallor, Shannon
PY - 2020/2/7
Y1 - 2020/2/7
AB - In this paper we argue that standard calls for explainability that focus on the epistemic inscrutability of black-box machine learning models may be misplaced. If we presume, for the sake of this paper, that machine learning can be a source of knowledge, then it makes sense to ask what kind of justification it involves. How do we reconcile the seeming justificatory black box, on the one hand, with the observed widespread adoption of machine learning on the other? We argue that, in general, people implicitly adopt reliabilism regarding machine learning. Reliabilism is an epistemological theory of epistemic justification according to which a belief is warranted if it has been produced by a reliable process or method [18]. We then suggest that, in certain high-stakes domains with moral consequences, reliabilism does not provide a further kind of necessary justification: moral justification. In such cases, where model deployments require moral justification, reliabilism is not sufficient; justifying deployment instead requires establishing robust human processes as a moral "wrapper" around machine outputs. Finally, we offer cautions relevant to the (implicit or explicit) adoption of the reliabilist interpretation of machine learning.
U2 - 10.1145/3375627.3375866
DO - 10.1145/3375627.3375866
M3 - Conference contribution
AN - SCOPUS:85082172130
T3 - AIES 2020 - Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society
SP - 372
EP - 377
BT - AIES 2020 - Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society
PB - Association for Computing Machinery, Inc
Y2 - 7 February 2020 through 8 February 2020
ER -