Multi-scale Explainer for Graph Neural Networks
Abstract
Explainability for graph neural networks (GNNs) aims to reveal the decision logic of a learned model by identifying the most influential structures in the input graph, thereby improving transparency and trustworthiness. Existing post-hoc explainers typically extract a sparse key subgraph at a single scale as the explanation. However, a single-scale view often fails to capture multi-level semantics, and the optimization can degenerate into a local search that is sensitive to initialization and noise, yielding unstable explanations with limited robustness. To address these issues, we propose MSExplainer, a multi-scale explainer for GNNs. MSExplainer couples multi-scale subgraph-consistency guidance with single-scale adaptive subgraph learning under a parameter-sharing design. It simultaneously extracts key and complementary subgraphs at multiple scales, producing a hierarchical decomposition of the input graph that covers semantics at different granularities and makes subgraph extraction more robust. Experiments on six benchmark datasets show that MSExplainer consistently outperforms prior methods in explanation accuracy and fidelity. We further prove an upper bound demonstrating the advantage of the multi-scale strategy in representation consistency, and show that, under the parameter-sharing mechanism, MSExplainer attains the same-order computational complexity as single-scale methods, preserving the fidelity of the extracted key subgraphs without sacrificing efficiency.