Abstract
In this paper, we present a methodology that brings experiential methods to bear on the challenge of developing an understanding of AI systems – their operations, limitations, peculiarities and implications. We describe an approach that uses art and tangible experiences to communicate black-boxed decisions and nuanced social implications in engaging, experiential ways, with high fidelity to the underlying concepts. In this approach, which we call Experiential AI, scientists, artists and other interdisciplinary actors come together to understand and communicate the functionality of AI and intelligent robots, their limitations, and their consequences, through informative and compelling experiences.
We propose that experiential methods offer significant contributions to both intelligence and interaction in the design of interactive intelligent systems for explainable AI. We look specifically at strategies and methods in the AI arts that offer new modalities of explanation for human-centred explainable AI, and reframe explanation as a more holistic form of understanding. This leads us to the hypothesis that art and tangible experiences can mediate between impenetrable computer code and human understanding, making not just AI systems but also their values and implications more transparent and legible. Through three case studies, we develop insights into inclusivity, empowerment and responsibility in machine intelligence and user interaction. We go on to present a new methodology for the design, development, and evaluation of human-centred explainable AI, and argue that legible intelligent systems need to be open to understanding and intervention at four levels: Aspect, Algorithm, Affect and Apprehension.
| Original language | English |
|---|---|
| Number of pages | 33 |
| Publication status | Published - 18 Mar 2022 |