Abstract
Collaborative human–AI problem solving and decision making rely on effective communication between the two agents. Such communication processes comprise explanations and interactions between a sender and a receiver. Investigating these dynamics is crucial for avoiding miscommunication. Hence, in this article, we propose a communication dynamics model, examining the impact of the sender’s explanation intention and strategy on the receiver’s perception of explanation effects. We further present potential biases and reasoning pitfalls with the aim of contributing to the design of hybrid intelligence systems. Finally, we propose six desiderata for human-centered explainable AI and discuss future research opportunities.
| Original language | English |
| --- | --- |
| Pages (from-to) | 11-23 |
| Number of pages | 13 |
| Journal | IEEE Computer Graphics and Applications |
| Volume | 42 |
| Issue number | 6 |
| Early online date | 12 Sept 2022 |
| DOIs | |
| Publication status | Published - 13 Dec 2022 |
Keywords
- adaptation models
- artificial intelligence
- receivers
- knowledge representation
- cognition
- decision-making
- problem solving