Abbas, Sayyed Khawar (2025) Lending by Algorithm: Fair or Flawed? An Information-Theoretic View of Credit Decision Pipelines. SN Computer Science, 6. DOI: 10.1007/s42979-025-04222-8
Official URL: https://doi.org/10.1007/s42979-025-04222-8
Abstract
As artificial intelligence (AI) becomes increasingly embedded in financial decision-making, questions of fairness, transparency, and trust have taken center stage, particularly in high-stakes domains like credit allocation. This study investigates the use of AI-based credit scoring for small and medium-sized enterprises (SMEs), combining algorithmic performance analysis with behavioral insights from affected users. Using a convergent mixed-methods design, we train and evaluate a random forest classifier on a US dataset of 5000 anonymized loan applications, achieving an AUC of 0.998 and simulating real-world lending conditions. Fairness diagnostics reveal that approval rates differ significantly by education level, with low-education applicants approved at a rate three times lower than those with advanced degrees, despite similar false positive rates across groups. We frame the decision pipeline as an information-processing system and consider how algorithmic scoring may distort or reduce informational signals relevant to perceived fairness. To contextualize these findings, we conduct twelve semi-structured interviews with SME owners and financial managers in the United States and United Kingdom, coded using the Capability, Opportunity, Motivation, Behavior (COM-B) framework. While the model privileges structural features such as income and digital banking activity (indicators of "capability"), participants place greater emphasis on behavioral cues such as payment reliability and business resilience. A triangulated analysis reveals a stark misalignment between what the algorithm recognizes and what users perceive as fair or valid. Our findings advance the discourse on AI ethics by demonstrating that statistical fairness does not guarantee experiential fairness. We advocate for the integration of behavioral indicators into credit models and call for policy reforms that address the socio-technical gaps in automated finance.
While the model demonstrates high predictive performance (AUC ~ 0.998), this result should be interpreted cautiously given the constrained, anonymized dataset lacking demographic attributes such as gender and race. Consequently, intersectional fairness could not be evaluated, and findings primarily reflect lending contexts within advanced economies.
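The fairness diagnostics described above (group-wise approval rates and false positive rates for a random forest classifier) can be illustrated with a minimal sketch. This is not the authors' code: the features, the education-group encoding, and the synthetic outcome below are all hypothetical stand-ins used only to show how such per-group metrics are computed.

```python
# Illustrative sketch (not the paper's pipeline): compute group-wise
# approval rates and false positive rates for a random forest classifier
# on synthetic data, mimicking the diagnostics described in the abstract.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 5000  # matches the abstract's sample size, but data here is synthetic
# Hypothetical group label: 0 = low education, 1 = advanced degree.
education = rng.integers(0, 2, n)
# Hypothetical structural features correlated with the group label.
income = rng.normal(50 + 20 * education, 10, n)
digital = rng.normal(0.5 + 0.2 * education, 0.2, n)
X = np.column_stack([income, digital])
# Synthetic repayment outcome driven by the same features.
y = (income + 30 * digital + rng.normal(0, 10, n) > 65).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
approved = clf.predict(X)

for g in (0, 1):
    mask = education == g
    approval_rate = approved[mask].mean()
    # False positive rate: share of true non-repayers who were approved.
    neg = mask & (y == 0)
    fpr = approved[neg].mean() if neg.any() else float("nan")
    print(f"group {g}: approval rate {approval_rate:.2f}, FPR {fpr:.2f}")
```

Because the synthetic features are correlated with the group label, the sketch reproduces the qualitative pattern the abstract reports: approval rates diverge by education group even when the classifier is accurate overall.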
| Item Type: | Article |
|---|---|
| Uncontrolled Keywords: | Artificial intelligence in finance ; Algorithmic fairness ; SME lending ; Credit scoring models ; FinTech ; Mixed-methods research ; COM-B framework ; Explainable AI ; Behavioral trust ; Financial inclusion |
| Divisions: | Institute of Data Analytics and Information Systems |
| Subjects: | Automation, mechanization ; Computer science |
| Funders: | Corvinus University of Budapest |
| Projects: | Open Access funding |
| DOI: | 10.1007/s42979-025-04222-8 |
| ID Code: | 11608 |
| Deposited By: | MTMT SWORD |
| Deposited On: | 25 Jul 2025 10:13 |
| Last Modified: | 25 Jul 2025 10:13 |


Download Statistics