Abstract
In hybrid human-AI systems, users must decide whether to trust an algorithmic prediction even though the prediction's true error is unknown. To accommodate such settings, we introduce RETRO-VIZ, a method for (i) estimating and (ii) explaining the trustworthiness of regression predictions. It consists of RETRO, a quantitative estimate of the trustworthiness of a prediction, and VIZ, a visual explanation that helps users identify the reasons for the (lack of) trustworthiness of a prediction. We find that RETRO scores correlate negatively with prediction error. In a user study with 41 participants, we confirm that RETRO-VIZ helps users identify whether and why a prediction is trustworthy.
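As an illustration of the reported relationship between trustworthiness scores and prediction error, the sketch below checks for a negative rank correlation between a per-prediction trust score and absolute error on synthetic data. The `trust` heuristic and all data here are placeholder assumptions for illustration, not the actual RETRO estimator, which the abstract does not specify.

```python
# Minimal sketch of the evaluation claim: trustworthiness scores
# should correlate negatively with prediction error. The `trust`
# heuristic below is a synthetic placeholder, NOT the RETRO method.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Synthetic regression targets and noisy model predictions.
y_true = rng.normal(size=200)
y_pred = y_true + rng.normal(scale=0.5, size=200)

# Placeholder per-prediction trust score in (0, 1], built from a
# noisy proxy of the error: higher trust when the proxy is small.
err_proxy = np.abs(y_true - y_pred) + rng.exponential(scale=0.3, size=200)
trust = 1.0 / (1.0 + err_proxy)

errors = np.abs(y_true - y_pred)
rho, p_value = spearmanr(trust, errors)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3g})")
# A negative rho means higher trust scores coincide with lower error,
# which is the pattern the paper reports for RETRO scores.
```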
Author Information
Kim de Bie (University of Amsterdam)
Ana Lucic (Partnership on AI, University of Amsterdam)
Research fellow at the Partnership on AI and PhD student at the University of Amsterdam, working primarily on explainable ML.
Hinda Haned (University of Amsterdam)
More from the Same Authors
- 2021 : Order in the Court: Explainable AI Methods Prone to Disagreement
  Michael Neely · Stefan F. Schouten · Ana Lucic
- 2021 : Counterfactual Explanations for Graph Neural Networks
  Ana Lucic · Maartje ter Hoeve · Gabriele Tolomei · Maarten de Rijke · Fabrizio Silvestri
- 2021 : Flexible Interpretability through Optimizable Counterfactual Explanations for Tree Ensembles
  Ana Lucic · Harrie Oosterhuis · Hinda Haned · Maarten de Rijke
- 2021 : CF-GNNExplainer: Counterfactual Explanations for Graph Neural Networks
  Ana Lucic · Maartje ter Hoeve · Gabriele Tolomei · Maarten de Rijke · Fabrizio Silvestri
- 2021 : Poster
  Shiji Zhou · Nastaran Okati · Wichinpong Sinchaisri · Kim de Bie · Ana Lucic · Mina Khan · Ishaan Shah · Jinghui Lu · Andreas Kirsch · Julius Frost · Ze Gong · Gokul Swamy · Ah Young Kim · Ahmed Baruwa · Ranganath Krishnan
- 2021 Workshop: ICML Workshop on Algorithmic Recourse
  Stratis Tsirtsis · Amir-Hossein Karimi · Ana Lucic · Manuel Gomez-Rodriguez · Isabel Valera · Hima Lakkaraju