On the Relationship Between Explanation and Prediction: A Causal View
Amir-Hossein Karimi · Krikamol Muandet · Simon Kornblith · Bernhard Schölkopf · Been Kim

Thu Jul 27 01:30 PM -- 03:00 PM (PDT) @ Exhibit Hall 1 #434

Being able to provide explanations for a model's decisions has become a central requirement for the development, deployment, and adoption of machine learning models. However, we are yet to understand what explanation methods can and cannot do. How do upstream factors such as data, model predictions, hyperparameters, and random initialization influence downstream explanations? While previous work raised concerns that explanations (E) may have little relationship with the prediction (Y), there has been no conclusive study quantifying this relationship. Our work borrows tools from causal inference to systematically assay it. More specifically, we study the relationship between E and Y by measuring the treatment effect when intervening on their causal ancestors, i.e., on the hyperparameters and inputs used to generate saliency-based Es or Ys. Our results suggest that the relationship between E and Y is far from ideal. In fact, the gap from the 'ideal' case only increases in higher-performing models --- the models most likely to be deployed. Our work is a promising first step towards providing a quantitative measure of the relationship between E and Y, which could also inform the future development of explanation methods guided by a quantitative metric.
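The kind of intervention the abstract describes can be illustrated in a toy setting (this is a minimal sketch, not the paper's actual experimental setup): we intervene on the random initialization --- a causal ancestor of both E and Y --- by training two otherwise identical models with different seeds, then check whether the prediction and a gradient-times-input saliency explanation change together. The linear model, the seed values, and the rank-agreement measure below are all illustrative assumptions.

```python
import numpy as np

def train_logreg(X, y, seed, lr=0.1, steps=200):
    # Train logistic regression by gradient descent; the seed only
    # controls the random initialization of the weights.
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.1, size=X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def saliency(w, x):
    # Gradient-times-input attribution; for a linear logit the
    # gradient w.r.t. the input is just w, so attribution_i = w_i * x_i.
    return w * x

# Synthetic data in which only the first two features matter.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Intervention: change the random initialization, keep everything else fixed.
w_a = train_logreg(X, y, seed=1)
w_b = train_logreg(X, y, seed=2)

x = X[0]
# Did the prediction (Y) change under the intervention?
pred_agree = (w_a @ x > 0) == (w_b @ x > 0)
# Did the explanation (E) change? Compare the rank ordering of attributions.
rank_a = np.argsort(np.abs(saliency(w_a, x)))
rank_b = np.argsort(np.abs(saliency(w_b, x)))
expl_agree = float(np.mean(rank_a == rank_b))
print(pred_agree, expl_agree)
```

Averaging such agreement scores over many inputs and interventions gives a treatment-effect-style estimate of how tightly E tracks Y; a large gap between prediction agreement and explanation agreement is the kind of discrepancy the paper quantifies.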

Author Information

Amir-Hossein Karimi (University of Waterloo)

Amir-Hossein Karimi is a final-year PhD candidate at ETH Zurich and the Max Planck Institute for Intelligent Systems, working under the guidance of Prof. Dr. Bernhard Schölkopf and Prof. Dr. Isabel Valera. His research interests lie at the intersection of causal inference, explainable AI, and program synthesis. Amir's contributions to the problem of algorithmic recourse have been recognized through spotlight and oral presentations at top venues such as NeurIPS, ICML, AAAI, AISTATS, ACM-FAccT, and ACM-AIES. He has also authored a book chapter and a highly regarded survey paper in ACM Computing Surveys. Supported by NSERC, CLS, and Google PhD fellowships, Amir's research agenda aims to build trustworthy systems for human-machine collaboration that draw on the best of both human and machine capabilities. Prior to his PhD, Amir earned several awards, including the Spirit of Engineering Science Award (UofToronto, 2015) and the Alumni Gold Medal Award (UWaterloo, 2018), for notable community and academic performance. Alongside his education, Amir gained valuable industry experience at Facebook, Google Brain, and DeepMind, and has provided >$250,000 in AI-consulting services to various startups and incubators. Finally, Amir teaches introductory and advanced topics in AI to an online community @PrinceOfAI.

Krikamol Muandet (CISPA--Helmholtz Center for Information Security)
Simon Kornblith (Google Brain)
Bernhard Schölkopf (MPI for Intelligent Systems Tübingen, Germany)

Bernhard Schölkopf received degrees in mathematics (London) and physics (Tübingen), and a doctorate in computer science from the Technical University Berlin. He has researched at AT&T Bell Labs, at GMD FIRST, Berlin, at the Australian National University, Canberra, and at Microsoft Research Cambridge (UK). In 2001, he was appointed scientific member of the Max Planck Society and director at the MPI for Biological Cybernetics; in 2010 he founded the Max Planck Institute for Intelligent Systems. For further information, see www.kyb.tuebingen.mpg.de/~bs.

Been Kim (Google Brain)