Given the increasing promise of Graph Neural Networks (GNNs) in real-world applications, several methods have been developed for explaining their predictions. So far, these methods have primarily focused on generating subgraphs that are especially relevant for a particular prediction. However, such methods do not provide a clear opportunity for recourse: given a prediction, we want to understand how the prediction can be changed in order to achieve a more desirable outcome. In this work, we propose a method for generating counterfactual (CF) explanations for GNNs: the minimal perturbation to the input (graph) data such that the prediction changes. Using only edge deletions, we find that our method can generate CF explanations for the majority of instances across three widely used datasets for GNN explanations, while removing fewer than three edges on average, with at least 94% accuracy. This indicates that our method primarily removes edges that are crucial for the original predictions, resulting in minimal CF explanations.
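The objective in the abstract, finding the smallest set of edge deletions that flips a model's prediction, can be sketched as follows. This is a hypothetical brute-force illustration only, not the paper's actual method (which learns a perturbation of the adjacency matrix); `minimal_cf_by_edge_deletion` and `toy_predict` are names introduced here for illustration, and the toy classifier stands in for a trained GNN.

```python
from itertools import combinations

def minimal_cf_by_edge_deletion(edges, predict, max_removals=3):
    """Return the smallest set of edges whose removal changes predict(),
    searching exhaustively over deletion sets of size 1..max_removals."""
    edge_set = frozenset(edges)
    original = predict(edge_set)
    for k in range(1, max_removals + 1):
        for removed in combinations(sorted(edge_set), k):
            if predict(edge_set - set(removed)) != original:
                return list(removed)  # a minimal CF explanation
    return None  # no counterfactual found within the deletion budget

# Toy stand-in for a GNN node classifier: predicts class 1 for node 0
# when its degree is at least 2, otherwise class 0.
def toy_predict(edges):
    return int(sum(1 for u, v in edges if 0 in (u, v)) >= 2)

cf = minimal_cf_by_edge_deletion([(0, 1), (0, 2), (1, 2)], toy_predict)
print(cf)  # deleting a single edge incident to node 0 flips the toy label
```

Because the search enumerates smaller deletion sets first, the returned set is minimal by construction; the paper's contribution is making this search tractable for real GNNs rather than enumerating subsets.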
Author Information
Ana Lucic (Partnership on AI, University of Amsterdam)
Research fellow at the Partnership on AI and PhD student at the University of Amsterdam, working primarily on explainable ML.
Maartje ter Hoeve (University of Amsterdam)
Gabriele Tolomei (University of Rome)
Maarten de Rijke (University of Amsterdam)
Fabrizio Silvestri (Facebook, London, UK)
More from the Same Authors
- 2021: How Not to Measure Disentanglement
  · Julia Kiseleva · Maarten de Rijke
- 2021: Order in the Court: Explainable AI Methods Prone to Disagreement
  · Michael Neely · Stefan F. Schouten · Ana Lucic
- 2021: Counterfactual Explanations for Graph Neural Networks
  Ana Lucic · Maartje ter Hoeve · Gabriele Tolomei · Maarten de Rijke · Fabrizio Silvestri
- 2021: Flexible Interpretability through Optimizable Counterfactual Explanations for Tree Ensembles
  Ana Lucic · Harrie Oosterhuis · Hinda Haned · Maarten de Rijke
- 2021: To Trust or Not to Trust a Regressor: Estimating and Explaining Trustworthiness of Regression Predictions
  Kim de Bie · Ana Lucic · Hinda Haned
- 2021: Poster
  Shiji Zhou · Nastaran Okati · Wichinpong Sinchaisri · Kim de Bie · Ana Lucic · Mina Khan · Ishaan Shah · JINGHUI LU · Andreas Kirsch · Julius Frost · Ze Gong · Gokul Swamy · Ah Young Kim · Ahmed Baruwa · Ranganath Krishnan
- 2021 Workshop: ICML Workshop on Algorithmic Recourse
  Stratis Tsirtsis · Amir-Hossein Karimi · Ana Lucic · Manuel Gomez-Rodriguez · Isabel Valera · Hima Lakkaraju
- 2018 Poster: Finding Influential Training Samples for Gradient Boosted Decision Trees
  Boris Sharchilev · Yury Ustinovskiy · Pavel Serdyukov · Maarten de Rijke
- 2018 Oral: Finding Influential Training Samples for Gradient Boosted Decision Trees
  Boris Sharchilev · Yury Ustinovskiy · Pavel Serdyukov · Maarten de Rijke