Edinburgh Research Explorer

Explainable Argumentation for Wellness Consultation

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Original language: English
Title of host publication: Explainable, Transparent Autonomous Agents and Multi-Agent Systems
Editors: Davide Calvaresi, Amro Najjar, Michael Schumacher, Kary Främling
Place of Publication: Cham
Publisher: Springer International Publishing AG
Pages: 186-202
Number of pages: 17
ISBN (Electronic): 978-3-030-30391-4
ISBN (Print): 978-3-030-30390-7
Publication status: E-pub ahead of print - 11 Sep 2019
Event: 1st International Workshop on Explainable Transparent Autonomous Agents and Multi-Agent Systems - Montreal, Canada
Duration: 13 May 2019 – 14 May 2019
https://extraamas.ehealth.hevs.ch/index.html

Publication series

Name: Lecture Notes in Computer Science (LNCS)
Publisher: Springer, Cham
Volume: 11763
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Workshop

Workshop: 1st International Workshop on Explainable Transparent Autonomous Agents and Multi-Agent Systems
Abbreviated title: EXTRAAMAS 2019
Country: Canada
City: Montreal
Period: 13/05/19 – 14/05/19

Abstract

There has been a recent resurgence in the area of explainable artificial intelligence as researchers and practitioners seek to provide more transparency to their algorithms. Much of this research is focused on explicitly explaining decisions or actions to a human observer, and it should not be controversial to say that looking at how humans explain to each other can serve as a useful starting point for explanation in artificial intelligence. However, it is fair to say that most work in explainable artificial intelligence uses only the researchers' intuition of what constitutes a 'good' explanation. There exist vast and valuable bodies of research in philosophy, psychology, and cognitive science on how people define, generate, select, evaluate, and present explanations, which suggest that people apply certain cognitive biases and social expectations to the explanation process. This paper argues that the field of explainable artificial intelligence can build on this existing research, and it reviews relevant papers from philosophy, cognitive psychology/science, and social psychology that study these topics. It draws out some important findings and discusses ways in which these can be infused into work on explainable artificial intelligence.

Research areas

  • Explanation, Explainability, Interpretability, Explainable AI, Transparency
