Abstract
Explainability of intelligent systems is key for future adoption. While much work is ongoing on developing methods for explaining complex opaque systems, there is little current work on evaluating how effective these explanations are, in particular with respect to the user’s understanding. Natural language (NL) explanations can be seen as an intuitive channel between humans and artificial intelligence systems, in particular for enhancing transparency. This paper presents existing work on how evaluation methods from the field of Natural Language Generation (NLG) can be mapped onto NL explanations. We also present a preliminary investigation into the relationship between linguistic features and human evaluation, using a dataset of NL explanations derived from Bayesian Networks.
Original language | English |
---|---|
Pages (from-to) | 17-24 |
Number of pages | 8 |
Journal | CEUR Workshop Proceedings |
Volume | 2894 |
Publication status | Published - 2 Jul 2021 |
Event | SICSA Workshop on eXplainable Artificial Intelligence 2021, Aberdeen, United Kingdom, 1 Jun 2021 → 1 Jun 2021. https://sites.google.com/view/sicsa-xai-workshop/ |
Keywords
- evaluation
- explanations
- natural language
Projects
- UK Robotics and Artificial Intelligence Hub for Offshore Energy Asset Integrity Management (ORCA)
  Vijayakumar, S. (Principal Investigator), Mistry, M. (Co-investigator), Ramamoorthy, R. (Co-investigator) & Williams, C. (Co-investigator)
  1/10/17 → 31/03/22
  Project: Research (Finished)