Right to an Explanation Considered Harmful

Andy Crabtree, Lachlan Urquhart, Jiahong Chen

Research output: Working paper

Abstract

Lay and professional reasoning has it that newly introduced data protection regulation in Europe – GDPR – mandates a ‘right to an explanation’. This has been read as requiring that the machine learning (ML) community build ‘explainable machines’ to enable legal compliance. In reviewing relevant accountability requirements of GDPR and measures developed within the ML community to enable human interpretation of ML models, we argue that this reading should be considered harmful, as it creates unrealistic expectations for the ML community and society at large. GDPR does not require that machines provide explanations, but that data controllers – i.e., human beings – do. We consider the implications of this requirement for the ‘explainable machines’ agenda.
Original language: English
Publisher: Social Science Research Network (SSRN)
Publication status: Published - 2019
