Abstract
The class imbalance problem is common in the credit scoring domain, as the number of defaulters is usually much smaller than the number of non-defaulters. To date, research on the class imbalance problem has mainly focused on quantifying and reducing its adverse effect on the predictive accuracy of machine learning techniques, while its impact on machine learning interpretability has not been studied in the literature. This paper fills this gap by analysing how class imbalance affects the stability of Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP), two popular interpretation methods. Our experiments use 2016-2020 UK residential mortgage data collected from European Datawarehouse. We evaluate the stability of LIME and SHAP on datasets with progressively increasing class imbalance. The results show that interpretations generated by LIME and SHAP become less stable as the class imbalance increases, indicating that class imbalance does have an adverse effect on machine learning interpretability. To check the robustness of our findings, we also analyse two open-source credit scoring datasets and obtain similar results.
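The experimental idea described above can be illustrated with a minimal sketch: generate datasets of increasing class imbalance, explain the same instance repeatedly with LIME (whose perturbation sampling is stochastic), and score stability. The stability measure here (mean pairwise Jaccard similarity of top-k feature sets across repeats), the synthetic data, and all parameter values are illustrative assumptions, not the paper's actual metric, models, or mortgage data.

```python
# Hypothetical sketch: LIME explanation stability under growing class imbalance.
# Assumptions (not from the paper): synthetic data via make_classification,
# a random forest classifier, and a Jaccard-based top-k stability measure.
from itertools import combinations

import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier


def topk_jaccard(feature_sets):
    """Mean pairwise Jaccard similarity between top-k feature sets."""
    pairs = list(combinations(feature_sets, 2))
    return float(np.mean([len(a & b) / len(a | b) for a, b in pairs]))


def lime_stability(minority_share, n_repeats=10, k=5, seed=0):
    # Controlled "default rate": smaller minority_share = stronger imbalance.
    X, y = make_classification(n_samples=5000, n_features=10,
                               weights=[1 - minority_share, minority_share],
                               random_state=seed)
    model = RandomForestClassifier(n_estimators=200, random_state=seed).fit(X, y)
    explainer = LimeTabularExplainer(X, mode="classification",
                                     discretize_continuous=True)
    # Explain the same instance repeatedly; LIME's random sampling makes
    # each explanation differ, so instability shows up directly.
    sets = []
    for _ in range(n_repeats):
        exp = explainer.explain_instance(X[0], model.predict_proba,
                                         num_features=k)
        sets.append({idx for idx, _ in exp.as_map()[1]})
    return topk_jaccard(sets)


# Progressively increase the imbalance and watch stability degrade (or not).
for share in [0.5, 0.2, 0.1, 0.05]:
    print(f"minority share {share:.2f}: stability {lime_stability(share):.3f}")
```

An analogous loop for SHAP could repeat `shap.KernelExplainer` (which also relies on sampling) over a background subsample and compare the resulting top-k attribution sets the same way.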
| Original language | English |
|---|---|
| Pages (from-to) | 357-372 |
| Number of pages | 16 |
| Journal | European Journal of Operational Research |
| Volume | 312 |
| Issue number | 1 |
| Early online date | 23 Jun 2023 |
| DOIs | |
| Publication status | Published - 1 Jan 2024 |
Keywords
- OR in banking
- interpretability
- stability
- credit scoring
- machine learning