Abstract
The past decade has seen a surge in the use of machine learning and deep learning models across different domains. One such domain is credit scoring, where applicants are scored to assess their creditworthiness for loan applications. During the scoring process, it is essential to ensure that no bias or discrimination is introduced. Despite the proliferation of machine learning and deep learning models (referred to as black-box models in the literature) in credit scoring, there is still a need to explain how each prediction is made by these models. Most machine learning and deep learning models are prone to unintended bias and discrimination that may be present in the datasets. To mitigate model bias and discrimination, it is imperative to explain each prediction made during the scoring process. Our study proposes a novel optimisation formulation that generates a sparse counterfactual via a custom genetic algorithm to explain a black-box model’s prediction. This study uses publicly available credit scoring datasets. Furthermore, we validated the generated counterfactual explanations by comparing them to counterfactual explanations from credit scoring experts. The proposed explanation technique not only explains rejected applications, but can also be used to explain approved loans.
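The abstract describes generating a sparse counterfactual with a custom genetic algorithm. The sketch below only illustrates that general idea and is not the paper's actual formulation: the fitness function, mutation and crossover operators, and the synthetic stand-in model (a `LogisticRegression` trained on random data) are all assumptions introduced here for demonstration.

```python
# Illustrative sketch only: a simple genetic algorithm searching for a sparse
# counterfactual, i.e. a small set of feature changes that flips a black-box
# credit-scoring model's decision from "reject" to "approve".
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in "black-box" model trained on synthetic credit-like data (assumption).
X = rng.normal(size=(500, 5))
y = (X[:, 0] + 0.5 * X[:, 1] - X[:, 2] > 0).astype(int)
model = LogisticRegression().fit(X, y)

x0 = X[y == 0][0]        # a rejected applicant whose decision we want to explain
target_class = 1         # desired outcome: approval

def fitness(candidate):
    # Reward flipping the prediction; penalise distance to the original applicant
    # and the number of changed features, so the counterfactual stays sparse.
    prob = model.predict_proba(candidate.reshape(1, -1))[0, target_class]
    distance = np.linalg.norm(candidate - x0)
    sparsity = np.count_nonzero(np.abs(candidate - x0) > 1e-6)
    return prob - 0.1 * distance - 0.05 * sparsity

def mutate(candidate, rate=0.3, scale=0.5):
    # Perturb a random subset of features.
    mask = rng.random(candidate.shape) < rate
    return candidate + mask * rng.normal(scale=scale, size=candidate.shape)

def crossover(a, b):
    # Uniform crossover: each feature is taken from one of the two parents.
    mask = rng.random(a.shape) < 0.5
    return np.where(mask, a, b)

# Simple generational loop: keep the fittest individuals, breed the rest.
pop = np.array([mutate(x0) for _ in range(50)])
for _ in range(100):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-20:]]
    children = [mutate(crossover(*parents[rng.choice(20, 2, replace=False)]))
                for _ in range(len(pop) - len(parents))]
    pop = np.vstack([parents, children])

best = max(pop, key=fitness)
print("original prediction:      ", model.predict(x0.reshape(1, -1))[0])
print("counterfactual prediction:", model.predict(best.reshape(1, -1))[0])
print("changed features:         ", np.flatnonzero(np.abs(best - x0) > 1e-6))
```

The fitness trades off flipping the model's decision against distance to the original applicant and the number of altered features, which is the usual way sparsity is encouraged in counterfactual search; the paper's own objective and genetic operators may differ.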
Original language | English |
---|---|
Number of pages | 12 |
Journal | IEEE Access |
Early online date | 25 May 2022 |
DOIs | |
Publication status | Early online date - 25 May 2022 |
Keywords
- General Engineering
- General Materials Science
- General Computer Science