Poster Teaser in Workshop: Graph Representation Learning and Beyond (GRL+)
(#28 / Sess. 1) Contrastive Graph Neural Network Explanation
Lukas Faber
Abstract:
Graph Neural Networks achieve remarkable results on problems with structured data but are black-box predictors. Transferring existing interpretation techniques such as occlusion fails, because even removing a single node or edge can change the graph drastically. The resulting graphs can differ from all training examples, causing model confusion and wrong explanations. Thus, we argue that explanations must use graphs consistent with the distribution underlying the training data. We coin this property Distribution Compliant Explanation (DCE) and present a novel Contrastive GNN Explanation (CoGE) technique following this paradigm. An experimental study supports the efficacy of CoGE.
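To make the occlusion failure mode concrete, the sketch below illustrates what occlusion-based node importance means for a GNN graph classifier: remove each node in turn and measure how much the prediction changes. This is a minimal illustration, not the paper's CoGE method; the toy mean-aggregation model and the helper names `gnn_predict` and `occlusion_importance` are hypothetical stand-ins, and the weights are random for demonstration only.

```python
import numpy as np

# Toy stand-in for a trained GNN graph classifier: one round of mean
# neighbour aggregation followed by a linear readout. Weights are random
# here purely for illustration.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 2))  # feature dim 4 -> 2 classes


def gnn_predict(adj, feats):
    """Return class scores for a graph given adjacency and node features."""
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)
    hidden = (adj @ feats) / deg      # mean aggregation over neighbours
    graph_repr = hidden.mean(axis=0)  # mean pooling over nodes
    return graph_repr @ W


def occlusion_importance(adj, feats, target_class):
    """Score each node by how much removing it changes the target score."""
    base = gnn_predict(adj, feats)[target_class]
    scores = []
    for v in range(adj.shape[0]):
        keep = np.ones(adj.shape[0], dtype=bool)
        keep[v] = False
        # Removing node v can yield a graph far outside the training
        # distribution -- the failure mode the abstract points out.
        occluded = gnn_predict(adj[np.ix_(keep, keep)], feats[keep])
        scores.append(base - occluded[target_class])
    return np.array(scores)


# Small example graph: a 4-node path with random node features.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
feats = rng.normal(size=(4, 4))
print(occlusion_importance(adj, feats, target_class=0))
```

The larger the score for a node, the more its removal lowers the target class score, which occlusion interprets as importance; the abstract's argument is that such perturbed graphs may lie off the training distribution, so the resulting scores can reflect model confusion rather than genuine evidence.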