Unified Time Series Explanations via Semi-Amortized Optimization and Instance-level Multi-Expert Knowledge Distillation
Abstract
Deep Neural Networks (DNNs) achieve outstanding performance in Time Series Classification (TSC) yet remain opaque "black boxes", hindering their adoption in sensitive domains. While Explainable AI (XAI) aims to bridge this gap, existing TSC XAI methods rely on a single perspective and incur significant computational costs, making them difficult to integrate into real-time applications. To overcome these challenges, we propose a framework comprising two key techniques: Instance-level Multi-Expert Knowledge Distillation (IMEKD) and Semi-Amortized Optimization Explanation (SAOE). Unlike static methods, our IMEKD approach reconciles competing explanation methods by dynamically selecting the best attribution map from a pool of "XAI experts" for each instance. We then distill this instance-optimal knowledge into a DNN using our SAOE framework, a dual-stage process that learns a global attribution via distillation and refines it using faithfulness and robustness losses, aligning the optimization with human evaluation objectives. To the best of our knowledge, this is the first work to unify multi-expert selection with semi-amortized optimization for TSC XAI. Additionally, we introduce a Faithfulness-Preserving Segmentation (FPS) mechanism that converts point-wise attribution maps into interpretable segments without sacrificing fidelity, aligning explanations with human intuition. Comprehensive experiments on four synthetic datasets, an ECG dataset with human-verified ground truth, and 11 multivariate UEA benchmarks across three DNN architectures show that our framework significantly outperforms the current state-of-the-art (SOTA) in terms of faithfulness, robustness, and computational efficiency.
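As a rough illustration of the instance-level expert selection underlying IMEKD, the sketch below picks, per instance, the attribution map whose top-salient timesteps most strongly affect the model's output when masked. The masking-based faithfulness score, function names, and toy model are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def faithfulness_score(model, x, attribution, k=5):
    """Confidence drop when the k most salient timesteps are replaced
    by the series mean; higher drop means a more faithful map.
    (Illustrative proxy metric, not the paper's exact loss.)"""
    top_idx = np.argsort(attribution)[-k:]      # k most salient positions
    x_masked = x.copy()
    x_masked[top_idx] = x.mean()                # simple mean-imputation mask
    return model(x) - model(x_masked)

def select_best_expert(model, x, expert_maps, k=5):
    """Return the index and map of the expert with the highest
    faithfulness score for this single instance."""
    scores = [faithfulness_score(model, x, m, k) for m in expert_maps]
    best = int(np.argmax(scores))
    return best, expert_maps[best]

# Toy setup: a "model" whose score depends only on the first 5 timesteps.
model = lambda x: x[:5].sum()
x = np.zeros(20)
x[:5] = 5.0

expert_a = np.zeros(20); expert_a[:5] = 1.0    # attributes the true region
expert_b = np.zeros(20); expert_b[15:] = 1.0   # attributes an irrelevant region

best_idx, best_map = select_best_expert(model, x, [expert_a, expert_b])
```

Here `expert_a` is selected because masking its salient region actually degrades the model output, whereas masking `expert_b`'s region changes nothing.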