Abstract
Model-agnostic explanation methods are essential for interpreting machine learning models, but their computational cost scales prohibitively with the number of baselines. Existing acceleration approaches either lack a theoretical foundation or provide no principled guidance for baseline selection. To address this gap, we present ABSQR (Amortized Baseline Selection via Rank-Revealing QR), a framework that exploits the low-rank structure of value matrices to accelerate multi-baseline attribution methods. Our approach combines deterministic baseline selection via SVD-guided QR decomposition with an amortized inference mechanism that uses cluster-based retrieval, reducing computational complexity from O(m · 2^d) to O(k · 2^d), where k ≪ m. Experiments on diverse datasets show that ABSQR achieves a 91.2% agreement rate with full-baseline attribution while providing an 8.5× speedup. As the first acceleration approach to preserve explanation error guarantees under computational speedup, ABSQR makes the practical deployment of interpretable AI systems feasible at scale.
KakaoBank Financial Tech Lab
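To make the SVD-guided, QR-based selection step concrete, below is a minimal sketch assuming a value matrix whose rows are the value vectors of candidate baselines; the function and parameter names (select_baselines, energy) are illustrative and not part of ABSQR's published interface. It uses an SVD energy threshold to pick the rank k and SciPy's column-pivoted (rank-revealing) QR to select k representative baselines deterministically.

```python
# Hypothetical sketch of SVD-guided, QR-pivoted baseline selection.
# Assumption: value_matrix has shape (m, n), one value vector per candidate baseline.
import numpy as np
from scipy.linalg import qr


def select_baselines(value_matrix: np.ndarray, energy: float = 0.95) -> np.ndarray:
    """Return indices of k representative baselines (rows of value_matrix)."""
    # 1) SVD guides the choice of k: keep enough singular values to
    #    capture the requested fraction of spectral energy.
    s = np.linalg.svd(value_matrix, compute_uv=False)
    cumulative = np.cumsum(s**2) / np.sum(s**2)
    k = int(np.searchsorted(cumulative, energy) + 1)

    # 2) Column-pivoted QR on the transpose selects the k rows (baselines)
    #    that best span the row space, in a deterministic order.
    _, _, piv = qr(value_matrix.T, pivoting=True, mode="economic")
    return piv[:k]


# Usage: 200 candidate baselines with 64-dimensional value vectors,
# made low-rank by construction to mimic the structure ABSQR exploits.
rng = np.random.default_rng(0)
V = rng.standard_normal((200, 8)) @ rng.standard_normal((8, 64))
selected = select_baselines(V)
print(f"kept {len(selected)} of {V.shape[0]} baselines")
```

Column-pivoted QR is the simplest rank-revealing QR; a stronger rank-revealing variant could be substituted without changing the interface of this sketch.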

