Model interpretability has become an important problem in machine learning (ML) due to the increased effect algorithmic decisions have on humans. Providing users with counterfactual (CF) explanations can help them understand not only why ML models make certain decisions, but also how these decisions can be changed. We extend previous work that could only be applied to differentiable models by introducing probabilistic model approximations into the optimization framework. We find that our CF examples are significantly closer to the original instances than those produced by other methods designed specifically for tree ensembles.
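The abstract does not give implementation details, but the core idea of replacing a non-differentiable tree model with a probabilistic approximation so that counterfactuals can be found by gradient-based optimization can be sketched roughly as follows. The single-split stump, the sigmoid temperature, and all names below are illustrative assumptions, not the paper's actual method:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def smooth_stump(x, threshold, left_val, right_val, sigma=2.0):
    """Differentiable approximation of a one-split decision stump.

    The hard indicator 1[x > threshold] is replaced by a sigmoid with
    temperature sigma, so the prediction varies smoothly with x.
    """
    t = sigmoid(sigma * (x - threshold))
    return (1.0 - t) * left_val + t * right_val

# Gradient-based counterfactual search on the smoothed stump:
# starting from an instance predicted as class 0 (x = 0), move x
# uphill until the approximate prediction crosses 0.5.
x, threshold = 0.0, 1.0
lr, eps = 1.0, 1e-5
for _ in range(100):
    if smooth_stump(x, threshold, 0.0, 1.0) > 0.5:
        break
    # finite-difference gradient of the smoothed prediction w.r.t. x
    g = (smooth_stump(x + eps, threshold, 0.0, 1.0)
         - smooth_stump(x - eps, threshold, 0.0, 1.0)) / (2 * eps)
    x += lr * g
```

On the hard stump the gradient is zero almost everywhere, so this search could not move at all; the sigmoid surrogate is what makes optimization possible. A real tree ensemble would smooth every split and average over trees, typically with an added distance penalty to keep the counterfactual close to the original instance.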
Author Information
Ana Lucic (Partnership on AI, University of Amsterdam)
Research fellow at the Partnership on AI and PhD student at the University of Amsterdam, working primarily on explainable ML.
Harrie Oosterhuis (Radboud University)
Hinda Haned (University of Amsterdam)
Maarten de Rijke (University of Amsterdam)
More from the Same Authors
- 2021: How Not to Measure Disentanglement
  · Julia Kiseleva · Maarten de Rijke
- 2021: Order in the Court: Explainable AI Methods Prone to Disagreement
  · Michael Neely · Stefan F. Schouten · Ana Lucic
- 2021: Counterfactual Explanations for Graph Neural Networks
  Ana Lucic · Maartje ter Hoeve · Gabriele Tolomei · Maarten de Rijke · Fabrizio Silvestri
- 2021: To Trust or Not to Trust a Regressor: Estimating and Explaining Trustworthiness of Regression Predictions
  Kim de Bie · Ana Lucic · Hinda Haned
- 2021: CF-GNNExplainer: Counterfactual Explanations for Graph Neural Networks
  Ana Lucic · Maartje ter Hoeve · Gabriele Tolomei · Maarten de Rijke · Fabrizio Silvestri
- 2021: Poster
  Shiji Zhou · Nastaran Okati · Wichinpong Sinchaisri · Kim de Bie · Ana Lucic · Mina Khan · Ishaan Shah · JINGHUI LU · Andreas Kirsch · Julius Frost · Ze Gong · Gokul Swamy · Ah Young Kim · Ahmed Baruwa · Ranganath Krishnan
- 2021 Workshop: ICML Workshop on Algorithmic Recourse
  Stratis Tsirtsis · Amir-Hossein Karimi · Ana Lucic · Manuel Gomez-Rodriguez · Isabel Valera · Hima Lakkaraju
- 2018 Poster: Finding Influential Training Samples for Gradient Boosted Decision Trees
  Boris Sharchilev · Yury Ustinovskiy · Pavel Serdyukov · Maarten de Rijke
- 2018 Oral: Finding Influential Training Samples for Gradient Boosted Decision Trees
  Boris Sharchilev · Yury Ustinovskiy · Pavel Serdyukov · Maarten de Rijke