Position: Current XAI Methods Cannot Satisfy Financial AI Explainability Requirements
Dongxin Guo ⋅ Jikun Wu ⋅ Siu Ming Yiu
Abstract
**This position paper argues that current explainable AI (XAI) methods cannot satisfy regulatory explainability requirements for LLM-based financial systems**, creating a fundamental incompatibility between technological capability and legal mandate that threatens both consumer protection and financial stability. We demonstrate through systematic analysis across six regulatory frameworks (EU AI Act, US FSOC/CFPB, UK FCA, BIS, MAS, HKMA) that post-hoc explanation techniques fail systematically when applied to large language models. Exact SHAP computation exhibits $O(2^F)$ complexity in the number of input features $F$; at token-level granularity, where $F$ is the prompt length in tokens, this renders it infeasible for transformer architectures. LIME demonstrates substantial instability, with explanation rankings varying significantly across repeated evaluations of identical inputs. Chain-of-thought prompting generates unfaithful rationalizations: in controlled experiments, only 1 of 426 biased model outputs explicitly acknowledged the biasing feature in its explanation. When models learned to exploit reward hacks, they verbalized this exploitation less than 2% of the time. With 72% of UK financial firms now using AI and over \$5 trillion in US consumer credit outstanding requiring adverse action explanations, this gap creates systemic risk affecting millions of consumers who may receive inadequate explanations for consequential financial decisions. We analyze three high-stakes domains (credit, trading, advisory) with documented regulatory enforcement cases, examine six counterarguments including hybrid architectures and outcome-based regulation, and propose prioritized recommendations with quarterly timelines. The status quo constitutes regulatory compliance theater; we call for either fundamental advances in LLM interpretability or deployment constraints matching current capabilities.
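To make the complexity claim concrete, recall the standard Shapley value formula (stated here generically, not as any particular SHAP implementation): the attribution for feature $i$ averages its marginal contribution over every coalition $S$ of the remaining features,

$$
\phi_i(v) \;=\; \sum_{S \subseteq \mathcal{F} \setminus \{i\}} \frac{|S|!\,\bigl(F - |S| - 1\bigr)!}{F!}\,\Bigl[v\bigl(S \cup \{i\}\bigr) - v(S)\Bigr],
$$

where $\mathcal{F}$ is the set of $F$ input features and $v(S)$ is the model output with only the features in $S$ present. The sum ranges over all $2^{F-1}$ subsets of $\mathcal{F} \setminus \{i\}$, so computing attributions exactly for all $F$ features requires $O(2^F)$ evaluations of $v$; with $F$ equal to an LLM prompt length of even a few hundred tokens, exact computation is far beyond feasibility.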
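The LIME instability claim can likewise be checked empirically. The sketch below is an illustration only, using the open-source `lime` and `scikit-learn` packages on a synthetic dataset with a toy classifier (not the paper's experimental setup): it explains the identical input ten times and counts the distinct top-5 feature rankings. Because LIME fits each local surrogate on freshly drawn random perturbations, the rankings typically differ across runs.

```python
# Illustrative sketch: run-to-run instability of LIME explanations.
# Assumes the open-source `lime` and `scikit-learn` packages; the
# dataset and model are synthetic stand-ins, not the paper's setup.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Toy tabular task standing in for a credit-scoring model.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)
feature_names = [f"f{i}" for i in range(8)]

explainer = LimeTabularExplainer(X, feature_names=feature_names,
                                 mode="classification")

# Explain the identical input ten times; LIME resamples perturbations
# on each call, so the fitted surrogate (and its ranking) can change.
rankings = set()
for _ in range(10):
    exp = explainer.explain_instance(X[0], model.predict_proba,
                                     num_features=5)
    rankings.add(tuple(name for name, _weight in exp.as_list()))

print(f"{len(rankings)} distinct top-5 rankings across 10 runs")
```

Passing a fixed `random_state` to the explainer makes the runs reproducible, but that only pins one sample of the perturbation distribution; it does not remove the underlying sensitivity that the abstract describes.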