Model-agnostic Counterfactual Explanations in Credit Scoring

Xolani Dastile, Turgay Celik, Hans Vandierendonck

Research output: Contribution to journal › Article › peer-review

14 Citations (Scopus)
54 Downloads (Pure)

Abstract

The past decade has seen a surge in the use of machine learning and deep learning models across different domains. One such domain is credit scoring, where applicants are scored to assess their creditworthiness for loan applications. During the scoring process, it is essential to ensure that no bias or discrimination is introduced. Despite the proliferation of machine learning and deep learning models (referred to as black-box models in the literature) in credit scoring, there is still a need to explain how each prediction is made by these models. Machine learning and deep learning models are prone to unintended bias and discrimination present in the datasets. To guard against model bias and discrimination, it is imperative to explain each prediction made during the scoring process. Our study proposes a novel optimisation formulation that generates a sparse counterfactual via a custom genetic algorithm to explain a black-box model’s prediction. This study uses publicly available credit scoring datasets. Furthermore, we validated the generated counterfactual explanations by comparing them to counterfactual explanations from credit scoring experts. The proposed explanation technique not only explains rejected applications; it can also be used to explain approved loans.
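The abstract does not reproduce the optimisation formulation itself; as a rough, hypothetical illustration of the general idea (not the authors' method), the sketch below evolves a sparse perturbation of an applicant's feature vector with a simple genetic algorithm until a toy black-box model's decision flips. The names black_box, fitness, and genetic_counterfactual, the linear weights, and the sparsity and flip penalties are all assumptions made for illustration only.

```python
# Hypothetical sketch: sparse counterfactual search with a simple genetic algorithm.
# This is NOT the paper's formulation; it only illustrates the general idea of
# evolving a minimal feature change that flips a black-box model's prediction.
import numpy as np

rng = np.random.default_rng(0)

def black_box(x):
    """Stand-in scoring model: approve (1) if a linear score exceeds a threshold."""
    w = np.array([0.8, -0.5, 0.3, 0.6])  # hypothetical weights
    return int(w @ x - 0.2 > 0)

def fitness(x, x0, target):
    """Lower is better: proximity + sparsity penalty + penalty if class not flipped."""
    dist = np.linalg.norm(x - x0, ord=1)                  # stay close to the applicant
    sparsity = np.count_nonzero(np.abs(x - x0) > 1e-6)    # number of changed features
    flip_penalty = 0.0 if black_box(x) == target else 10.0
    return dist + 0.5 * sparsity + flip_penalty

def genetic_counterfactual(x0, target, pop_size=50, gens=200, mut_scale=0.3):
    pop = x0 + rng.normal(0.0, mut_scale, size=(pop_size, x0.size))
    best, best_score = x0.copy(), np.inf
    for _ in range(gens):
        scores = np.array([fitness(ind, x0, target) for ind in pop])
        order = np.argsort(scores)
        if scores[order[0]] < best_score:                 # elitism: remember best so far
            best, best_score = pop[order[0]].copy(), scores[order[0]]
        parents = pop[order[: pop_size // 2]]             # selection: keep the fitter half
        # crossover: mix random pairs of parents feature-wise
        idx = rng.integers(0, len(parents), size=(pop_size, 2))
        mask = rng.random((pop_size, x0.size)) < 0.5
        children = np.where(mask, parents[idx[:, 0]], parents[idx[:, 1]])
        # mutation: perturb a few features; zero out tiny changes to encourage sparsity
        children += rng.normal(0.0, mut_scale, children.shape) * (rng.random(children.shape) < 0.2)
        delta = children - x0
        delta[np.abs(delta) < 0.05] = 0.0
        pop = x0 + delta
    return best

x0 = np.array([0.1, 0.9, 0.2, 0.1])  # hypothetical rejected applicant
print("original prediction:", black_box(x0))
cf = genetic_counterfactual(x0, target=1)
print("counterfactual prediction:", black_box(cf))
print("feature changes:", np.round(cf - x0, 3))
```

Printing the feature changes mirrors how such an explanation would be read in practice: the non-zero entries tell the applicant which few attributes would need to move, and by how much, for the decision to change.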
Original language: English
Number of pages: 12
Journal: IEEE Access
Early online date: 25 May 2022
DOIs
Publication status: Early online date - 25 May 2022

Keywords

  • General Engineering
  • General Materials Science
  • General Computer Science
