TY - JOUR
T1 - How transparency modulates trust in artificial intelligence
AU - Zerilli, John
AU - Bhatt, Umang
AU - Weller, Adrian
N1 - Funding Information:
J.Z. is part-funded by the Leverhulme Trust (ECF-2020-428). U.B. acknowledges support from DeepMind and the Leverhulme Trust via the Leverhulme Centre for the Future of Intelligence (CFI) and from the Mozilla Foundation. A.W. acknowledges support from a Turing AI Fellowship under grant EP/V025379/1, The Alan Turing Institute, and the Leverhulme Trust via CFI. We are grateful to Simone Schnall and Reuben Binns for very helpful discussion and comments.
PY - 2022/4/8
Y1 - 2022/4/8
N2 - The study of human-machine systems is central to a variety of behavioral and engineering disciplines, including management science, human factors, robotics, and human-computer interaction. Recent advances in artificial intelligence (AI) and machine learning have brought the study of human-AI teams into sharper focus. An important set of questions for those designing human-AI interfaces concerns trust, transparency, and error tolerance. Here, we review the emerging literature on this important topic, identify open questions, and discuss some of the pitfalls of human-AI team research. We present opposition (extreme algorithm aversion or distrust) and loafing (extreme automation complacency or bias) as lying at opposite ends of a spectrum, with algorithmic vigilance representing an ideal mid-point. We suggest that, while transparency may be crucial for facilitating appropriate levels of trust in AI and thus for counteracting aversive behaviors and promoting vigilance, transparency should not be conceived solely in terms of the explainability of an algorithm. Dynamic task allocation, as well as the communication of confidence and performance metrics—among other strategies—may ultimately prove more useful to users than explanations from algorithms and significantly more effective in promoting vigilance. We further suggest that, while both aversive and appreciative attitudes are detrimental to optimal human-AI team performance, strategies to curb aversion are likely to be more important in the longer term than those attempting to mitigate appreciation. Our wider aim is to channel disparate efforts in human-AI team research into a common framework and to draw attention to the ecological validity of results in this field.
KW - artificial intelligence
KW - explainable AI
KW - human factors
KW - human-AI teams
KW - human-computer interaction
KW - machine learning
KW - transparency
KW - trust
UR - http://www.scopus.com/inward/record.url?scp=85127735847&partnerID=8YFLogxK
U2 - 10.1016/j.patter.2022.100455
DO - 10.1016/j.patter.2022.100455
M3 - Review article
AN - SCOPUS:85127735847
SN - 2666-3899
VL - 3
SP - 1
EP - 10
JO - Patterns
JF - Patterns
IS - 4
M1 - 100455
ER -