Interpretable machine learning for imbalanced credit scoring datasets

Yujia Chen*, Raffaella Calabrese, Belen Martin-Barragan

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract / Description of output

The class imbalance problem is common in the credit scoring domain, as the number of defaulters is usually much smaller than the number of non-defaulters. To date, research on the class imbalance problem has mainly focused on identifying and mitigating its adverse effect on the predictive accuracy of machine learning techniques, while its impact on machine learning interpretability has never been studied in the literature. This paper fills this gap by analysing how the stability of Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP), two popular interpretation methods, is affected by class imbalance. Our experiments use 2016-2020 UK residential mortgage data collected from European Datawarehouse. We evaluate the stability of LIME and SHAP on datasets of progressively increasing class imbalance. The results show that the interpretations generated by LIME and SHAP become less stable as the class imbalance increases, which indicates that class imbalance does have an adverse effect on machine learning interpretability. To check the robustness of our findings, we also analyse two open-source credit scoring datasets and obtain similar results.
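As a rough illustration of the experimental design described above (not the authors' code), the sketch below trains a classifier on synthetic datasets of increasing class imbalance, explains the same instance repeatedly with LIME, and scores stability as the mean Jaccard overlap of the top-feature sets across repetitions. The synthetic data, the random-forest model, and the Jaccard-based stability metric are all illustrative assumptions.

    # Minimal sketch: LIME explanation stability under growing class imbalance.
    # Dataset, model and stability metric are assumptions for illustration only.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from lime.lime_tabular import LimeTabularExplainer

    def top_features(exp, k=5):
        # Names of the k features with the largest absolute LIME weights.
        ranked = sorted(exp.as_list(), key=lambda t: abs(t[1]), reverse=True)
        return {name for name, _ in ranked[:k]}

    for minority_share in (0.5, 0.2, 0.05, 0.01):  # progressively more imbalanced
        X, y = make_classification(n_samples=5000, n_features=10,
                                   weights=[1 - minority_share], random_state=0)
        model = RandomForestClassifier(random_state=0).fit(X, y)
        explainer = LimeTabularExplainer(X, mode="classification")
        # LIME's perturbation sampling is stochastic, so explaining the same
        # instance repeatedly and comparing the top-feature sets gives a
        # simple stability measure.
        sets = [top_features(explainer.explain_instance(X[0], model.predict_proba,
                                                        num_features=10))
                for _ in range(10)]
        jaccard = np.mean([len(a & b) / len(a | b)
                           for i, a in enumerate(sets) for b in sets[i + 1:]])
        print(f"minority share {minority_share:.2f}: mean Jaccard = {jaccard:.2f}")

Under the paper's finding, the mean Jaccard overlap would be expected to drop as the minority share shrinks; an analogous check for SHAP would compare repeated explanations from a sampling-based explainer such as KernelExplainer.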
Original language: English
Pages (from-to): 357-372
Number of pages: 16
Journal: European Journal of Operational Research
Volume: 312
Issue number: 1
Early online date: 23 Jun 2023
Publication status: Published - 1 Jan 2024

Keywords

  • OR in banking
  • interpretability
  • stability
  • credit scoring
  • machine learning
