Interpretable machine learning in damage detection using Shapley Additive Explanations

Artur Movsessian, David Garcia Cava, Dmitri Tcherniak

Research output: Contribution to journal › Article › peer-review

Abstract

In recent years, Machine Learning (ML) techniques have gained popularity in Structural Health Monitoring (SHM). They have been used in particular for damage detection in a wide range of engineering applications, such as wind turbine blades. Previous research in this area has demonstrated the capabilities of ML for robust damage detection. However, the primary challenge facing ML in SHM is the lack of interpretability of the prediction models, which hinders the broader implementation of these techniques. To address this, this study integrates the novel Shapley Additive exPlanations (SHAP) method into an ML-based damage detection process as a tool for introducing interpretability and, thus, building evidence for reliable decision-making in SHM applications. The SHAP method is based on coalitional game theory and adds global and local interpretability to ML-based models by computing the marginal contribution of each feature. These contributions are used to understand the nature of damage indices (DIs). The applicability of the SHAP method is first demonstrated on a simple lumped mass-spring-damper system with simulated temperature variabilities. The SHAP method is then evaluated on data from an in-operation V27 wind turbine with damage artificially introduced in one of its blades. The results show the relationship between the environmental and operational variabilities (EOVs) and their direct influence on the damage indices. This ultimately helps distinguish false positives caused by EOVs from true positives resulting from damage in the structure.
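The coalitional-game idea behind SHAP, averaging each feature's marginal contribution over all possible feature coalitions, can be computed exactly when the number of features is small. The sketch below is illustrative only: the linear "damage-index" model, its coefficients, and the baseline vector are assumptions for demonstration, not the authors' damage-detection model, and "absent" features are replaced by baseline values as in standard SHAP practice.

```python
import itertools
import math

import numpy as np


def shapley_values(f, x, baseline):
    """Exact Shapley values of f at x, relative to a baseline vector.

    Each feature's value is its marginal contribution f(S U {i}) - f(S),
    averaged over all coalitions S of the remaining features; features
    outside the coalition take their baseline (e.g. training-mean) values.
    """
    n = len(x)
    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in itertools.combinations(others, size):
                # Shapley weight: |S|! (n - |S| - 1)! / n!
                w = (math.factorial(size) * math.factorial(n - size - 1)
                     / math.factorial(n))
                with_i, without_i = baseline.copy(), baseline.copy()
                idx = list(S)
                without_i[idx] = x[idx]
                with_i[idx + [i]] = x[idx + [i]]
                phi[i] += w * (f(with_i) - f(without_i))
    return phi


# Hypothetical linear model standing in for a damage-index predictor.
# For a linear model, the Shapley value of feature i reduces to
# w_i * (x_i - baseline_i), which makes the result easy to verify.
weights = np.array([2.0, -1.0, 0.5])   # assumed coefficients
model = lambda z: float(weights @ z)
baseline = np.array([1.0, 1.0, 1.0])   # stand-in for training-set means
x = np.array([3.0, 0.0, 2.0])          # instance to explain

phi = shapley_values(model, x, baseline)
# Efficiency property: the contributions sum to f(x) - f(baseline).
```

Practical SHAP implementations approximate this exponential enumeration (e.g. by sampling coalitions or exploiting model structure), but the exact computation above is the definition they approximate.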
Original language: English
Article number: RISK-21-1036
Journal: ASCE-ASME Journal of Risk and Uncertainty in Engineering Systems, Part B: Mechanical Engineering
Early online date: 20 Dec 2021
Publication status: E-pub ahead of print - 20 Dec 2021

Keywords

  • Damage
  • Machine learning
  • RISK
  • Structural Health Monitoring
  • Blades
  • Wind Turbines
  • Decision Making
  • Engineering systems and industry applications
  • Performance
  • Springs
  • Temperature


