Explainability in machine learning: A pedagogical perspective

Andreas Bueff, Ioannis Papantonis*, Auste Simkute, Vaishak Belle

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Introduction: Machine learning courses usually focus on preparing students to apply various models in real-world settings, but far less attention is given to teaching the techniques used to explain a model's decision-making process. This gap is particularly concerning given the increasing deployment of AI systems in high-stakes domains, where interpretability is crucial for trust, regulatory compliance, and ethical decision-making. Despite the growing importance of explainable AI (XAI) in professional practice, systematic pedagogical approaches for teaching these techniques remain underdeveloped.

Method: To fill this gap, we offer a pedagogical perspective on how to structure a course that imparts to students and researchers in machine learning when and how to implement various explainability techniques. We developed a comprehensive XAI course focused on the conceptual characteristics of the different explanation types. The course also featured four structured workbooks focused on implementation, culminating in a final project in which students applied multiple XAI techniques to convince stakeholders about model decisions.

Results: Course evaluation using a modified Course Experience Questionnaire (CEQ) from 16 MSc students revealed high perceived quality (CEQ score of 12,050) and strong subjective ratings regarding students’ ability to analyze, design, apply, and evaluate XAI outcomes. All students successfully completed the course, with 89% of them demonstrating confidence in multi-perspective model analysis.

Discussion: The survey results showed that interactive tutorials and practical workbooks were crucial for translating XAI theory into practical skills. Students particularly valued the balance between theoretical concepts and hands-on implementation, though evaluating XAI outputs remained the most challenging aspect, suggesting that future courses should include more structured interpretation exercises and analysis templates.
Original language: English
Pages (from-to): 1-14
Number of pages: 14
Journal: Frontiers in Education
Volume: 10
DOIs
Publication status: Published - 21 Jul 2025

Keywords

  • XAI
  • ML
  • AI
  • pedagogy
  • education

