Being able to provide explanations for a model's decisions has become a central requirement for the development, deployment, and adoption of machine learning models. However, we have yet to understand what explanation methods can and cannot do. How do upstream factors such as data, model prediction, hyperparameters, and random initialization influence downstream explanations? While previous work has raised concerns that explanations (E) may have little relationship with the prediction (Y), there has been no conclusive study quantifying this relationship. Our work borrows tools from causal inference to systematically assay it. More specifically, we study the relationship between E and Y by measuring the treatment effect of intervening on their causal ancestors, i.e., on the hyperparameters and inputs used to generate saliency-based Es or Ys. Our results suggest that the relationship between E and Y is far from ideal. In fact, the gap from the 'ideal' case only increases in higher-performing models --- the models most likely to be deployed. Our work is a promising first step towards a quantitative measure of the relationship between E and Y, which could also inform the future development of methods for E with a quantitative metric.
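To make the intervention-based measurement concrete, here is a minimal toy sketch (not the paper's actual experimental setup; the model, saliency definition, and all function names are illustrative assumptions). It intervenes on one causal ancestor, the random initialization seed, and records how much the prediction Y and a gradient-based saliency explanation E each move relative to a baseline run.

```python
# Toy sketch: intervene on a causal ancestor (the init seed) and measure
# the effect on the prediction Y and on a saliency-style explanation E.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                      # fixed inputs
true_w = np.array([2.0, -1.0, 0.5, 0.0, 0.0])
y = X @ true_w + 0.1 * rng.normal(size=200)

def train_linear(X, y, seed, steps=50, lr=0.01):
    """Gradient-descent linear regression from a seed-dependent init.
    Deliberately under-trained so the init seed still matters."""
    w = np.random.default_rng(seed).normal(size=X.shape[1])
    for _ in range(steps):
        w -= lr * (2.0 / len(y)) * X.T @ (X @ w - y)
    return w

def saliency(w, x):
    """Input-gradient saliency; for a linear model d(w.x)/dx = w."""
    return w

x_test = X[0]
w_base = train_linear(X, y, seed=0)                # baseline run
effects_Y, effects_E = [], []
for seed in range(1, 20):                          # intervention: new seed
    w_int = train_linear(X, y, seed=seed)
    effects_Y.append(abs(x_test @ w_int - x_test @ w_base))
    effects_E.append(np.linalg.norm(saliency(w_int, x_test)
                                    - saliency(w_base, x_test)))

# If E faithfully tracks Y, interventions that barely move Y should also
# barely move E; a large mismatch signals a weak E-Y relationship.
print(np.corrcoef(effects_Y, effects_E)[0, 1])
```

In this linear toy, both effect sizes are functions of the same weight perturbation, so they are tightly coupled; the paper's point is that for deep models and real saliency methods this coupling can be surprisingly loose.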
Author Information
Amir-Hossein Karimi (University of Waterloo)
Amir-Hossein Karimi is a final-year PhD candidate at ETH Zurich and the Max Planck Institute for Intelligent Systems, working under the guidance of Prof. Dr. Bernhard Schölkopf and Prof. Dr. Isabel Valera. His research interests lie at the intersection of causal inference, explainable AI, and program synthesis. Amir's contributions to the problem of algorithmic recourse have been recognized through spotlight and oral presentations at top venues such as NeurIPS, ICML, AAAI, AISTATS, ACM-FAccT, and ACM-AIES. He has also authored a book chapter and a highly-regarded survey paper in the ACM Computing Surveys. Supported by the NSERC, CLS, and Google PhD fellowships, Amir's research agenda aims to address the need for systems that make use of the best of both human and machine capabilities towards building trustworthy systems for human-machine collaboration. Prior to his PhD, Amir earned several awards including the Spirit of Engineering Science Award (UofToronto, 2015) and the Alumni Gold Medal Award (UWaterloo, 2018) for notable community and academic performance. Alongside his education, Amir gained valuable industry experience at Facebook, Google Brain, and DeepMind, and has provided >$250,000 in AI-consulting services to various startups and incubators. Finally, Amir teaches introductory and advanced topics in AI to an online community @PrinceOfAI.
Krikamol Muandet (CISPA--Helmholtz Center for Information Security)
Simon Kornblith (Google Brain)
Bernhard Schölkopf (MPI for Intelligent Systems Tübingen, Germany)
Bernhard Schölkopf received degrees in mathematics (London) and physics (Tübingen), and a doctorate in computer science from the Technical University Berlin. He has conducted research at AT&T Bell Labs, at GMD FIRST, Berlin, at the Australian National University, Canberra, and at Microsoft Research Cambridge (UK). In 2001, he was appointed scientific member of the Max Planck Society and director at the MPI for Biological Cybernetics; in 2010 he founded the Max Planck Institute for Intelligent Systems. For further information, see www.kyb.tuebingen.mpg.de/~bs.
Been Kim (Google Brain)
More from the Same Authors
-
2021 : On the Fairness of Causal Algorithmic Recourse »
Julius von Kügelgen · Amir-Hossein Karimi · Umang Bhatt · Isabel Valera · Adrian Weller · Bernhard Schölkopf -
2021 : Algorithmic Recourse in Partially and Fully Confounded Settings Through Bounding Counterfactual Effects »
Julius von Kügelgen · Nikita Agarwal · Jakob Zeitler · Afsaneh Mastouri · Bernhard Schölkopf -
2021 : On the Origins of the Block Structure Phenomenon in Neural Network Representations »
Thao Nguyen · Maithra Raghu · Simon Kornblith -
2021 : Representation Learning for Out-of-distribution Generalization in Downstream Tasks »
Frederik Träuble · Andrea Dittadi · Manuel Wüthrich · Felix Widmaier · Peter Gehler · Ole Winther · Francesco Locatello · Olivier Bachem · Bernhard Schölkopf · Stefan Bauer -
2021 : Lie interventions in complex systems with cycles »
Michel Besserve · Bernhard Schölkopf -
2022 : Maximum Mean Discrepancy Distributionally Robust Nonlinear Chance-Constrained Optimization with Finite-Sample Guarantee »
Yassine Nemmour · Heiner Kremer · Bernhard Schölkopf · Jia-Jie Zhu -
2023 : Don't trust your eyes: on the (un)reliability of feature visualizations »
Robert Geirhos · Roland S. Zimmermann · Blair Bilodeau · Wieland Brendel · Been Kim -
2023 : Spuriosity Didn’t Kill the Classifier: Using Invariant Predictions to Harness Spurious Features »
Cian Eastwood · Shashank Singh · Andrei Nicolicioiu · Marin Vlastelica · Julius von Kügelgen · Bernhard Schölkopf -
2023 : Leveraging sparse and shared feature activations for disentangled representation learning »
Marco Fumero · Florian Wenzel · Luca Zancato · Alessandro Achille · Emanuele Rodola · Stefano Soatto · Bernhard Schölkopf · Francesco Locatello -
2023 : Delphic Offline Reinforcement Learning under Nonidentifiable Hidden Confounding »
Alizée Pace · Hugo Yèche · Bernhard Schölkopf · Gunnar Rätsch · Guy Tennenholtz -
2023 : Learning Linear Causal Representations from Interventions under General Nonlinear Mixing »
Simon Buchholz · Goutham Rajendran · Elan Rosenfeld · Bryon Aragam · Bernhard Schölkopf · Pradeep Ravikumar -
2023 : Learning Counterfactually Invariant Predictors »
Francesco Quinzan · Cecilia Casolo · Krikamol Muandet · Yucen Luo · Niki Kilbertus -
2023 : TabCBM: Concept-based Interpretable Neural Networks for Tabular Data »
Mateo Espinosa Zarlenga · Zohreh Shams · Michael Nelson · Been Kim · Mateja Jamnik -
2023 : Flow Matching for Scalable Simulation-Based Inference »
Jonas Wildberger · Maximilian Dax · Simon Buchholz · Stephen R. Green · Jakob Macke · Bernhard Schölkopf -
2023 Workshop: “Could it have been different?” Counterfactuals in Minds and Machines »
Nina Corvelo Benz · Ricardo Dominguez-Olmedo · Manuel Gomez-Rodriguez · Thorsten Joachims · Amir-Hossein Karimi · Stratis Tsirtsis · Isabel Valera · Sarah A Wu -
2023 : Desiderata for Representation Learning from Identifiability, Disentanglement, and Group-Structuredness »
Hamza Keurti · Patrik Reizinger · Bernhard Schölkopf · Wieland Brendel -
2023 Poster: Provably Learning Object-Centric Representations »
Jack Brady · Roland S. Zimmermann · Yash Sharma · Bernhard Schölkopf · Julius von Kügelgen · Wieland Brendel -
2023 Poster: On the Identifiability and Estimation of Causal Location-Scale Noise Models »
Alexander Immer · Christoph Schultheiss · Julia Vogt · Bernhard Schölkopf · Peter Bühlmann · Alexander Marx -
2023 Poster: On Data Manifolds Entailed by Structural Causal Models »
Ricardo Dominguez-Olmedo · Amir-Hossein Karimi · Georgios Arvanitidis · Bernhard Schölkopf -
2023 Poster: The Hessian perspective into the Nature of Convolutional Neural Networks »
Sidak Pal Singh · Thomas Hofmann · Bernhard Schölkopf -
2023 Poster: Stochastic Marginal Likelihood Gradients using Neural Tangent Kernels »
Alexander Immer · Tycho van der Ouderaa · Mark van der Wilk · Gunnar Rätsch · Bernhard Schölkopf -
2023 Poster: Diffusion Based Representation Learning »
Sarthak Mittal · Korbinian Abstreiter · Stefan Bauer · Bernhard Schölkopf · Arash Mehrjou -
2023 Poster: Discrete Key-Value Bottleneck »
Frederik Träuble · Anirudh Goyal · Nasim Rahaman · Michael Mozer · Kenji Kawaguchi · Yoshua Bengio · Bernhard Schölkopf -
2023 Oral: Provably Learning Object-Centric Representations »
Jack Brady · Roland S. Zimmermann · Yash Sharma · Bernhard Schölkopf · Julius von Kügelgen · Wieland Brendel -
2023 Poster: Estimation Beyond Data Reweighting: Kernel Method of Moments »
Heiner Kremer · Yassine Nemmour · Bernhard Schölkopf · Jia-Jie Zhu -
2023 Poster: Homomorphism AutoEncoder --- Learning Group Structured Representations from Observed Transitions »
Hamza Keurti · Hsiao-Ru Pan · Michel Besserve · Benjamin F. Grewe · Bernhard Schölkopf -
2022 : Invited talks I, Q/A »
Bernhard Schölkopf · David Lopez-Paz -
2022 : Invited Talks 1, Bernhard Schölkopf and David Lopez-Paz »
Bernhard Schölkopf · David Lopez-Paz -
2022 Poster: Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time »
Mitchell Wortsman · Gabriel Ilharco · Samir Gadre · Becca Roelofs · Raphael Gontijo Lopes · Ari Morcos · Hongseok Namkoong · Ali Farhadi · Yair Carmon · Simon Kornblith · Ludwig Schmidt -
2022 Spotlight: Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time »
Mitchell Wortsman · Gabriel Ilharco · Samir Gadre · Becca Roelofs · Raphael Gontijo Lopes · Ari Morcos · Hongseok Namkoong · Ali Farhadi · Yair Carmon · Simon Kornblith · Ludwig Schmidt -
2022 Poster: Action-Sufficient State Representation Learning for Control with Structural Constraints »
Biwei Huang · Chaochao Lu · Liu Leqi · Jose Miguel Hernandez-Lobato · Clark Glymour · Bernhard Schölkopf · Kun Zhang -
2022 Poster: Generalization and Robustness Implications in Object-Centric Learning »
Andrea Dittadi · Samuele Papa · Michele De Vita · Bernhard Schölkopf · Ole Winther · Francesco Locatello -
2022 Spotlight: Action-Sufficient State Representation Learning for Control with Structural Constraints »
Biwei Huang · Chaochao Lu · Liu Leqi · Jose Miguel Hernandez-Lobato · Clark Glymour · Bernhard Schölkopf · Kun Zhang -
2022 Spotlight: Generalization and Robustness Implications in Object-Centric Learning »
Andrea Dittadi · Samuele Papa · Michele De Vita · Bernhard Schölkopf · Ole Winther · Francesco Locatello -
2022 Poster: Causal Inference Through the Structural Causal Marginal Problem »
Luigi Gresele · Julius von Kügelgen · Jonas Kübler · Elke Kirschbaum · Bernhard Schölkopf · Dominik Janzing -
2022 Poster: Functional Generalized Empirical Likelihood Estimation for Conditional Moment Restrictions »
Heiner Kremer · Jia-Jie Zhu · Krikamol Muandet · Bernhard Schölkopf -
2022 Poster: On the Adversarial Robustness of Causal Algorithmic Recourse »
Ricardo Dominguez-Olmedo · Amir-Hossein Karimi · Bernhard Schölkopf -
2022 Spotlight: Functional Generalized Empirical Likelihood Estimation for Conditional Moment Restrictions »
Heiner Kremer · Jia-Jie Zhu · Krikamol Muandet · Bernhard Schölkopf -
2022 Spotlight: Causal Inference Through the Structural Causal Marginal Problem »
Luigi Gresele · Julius von Kügelgen · Jonas Kübler · Elke Kirschbaum · Bernhard Schölkopf · Dominik Janzing -
2022 Spotlight: On the Adversarial Robustness of Causal Algorithmic Recourse »
Ricardo Dominguez-Olmedo · Amir-Hossein Karimi · Bernhard Schölkopf -
2021 Workshop: ICML Workshop on Algorithmic Recourse »
Stratis Tsirtsis · Amir-Hossein Karimi · Ana Lucic · Manuel Gomez-Rodriguez · Isabel Valera · Hima Lakkaraju -
2021 Poster: Function Contrastive Learning of Transferable Meta-Representations »
Muhammad Waleed Gondal · Shruti Joshi · Nasim Rahaman · Stefan Bauer · Manuel Wüthrich · Bernhard Schölkopf -
2021 Spotlight: Function Contrastive Learning of Transferable Meta-Representations »
Muhammad Waleed Gondal · Shruti Joshi · Nasim Rahaman · Stefan Bauer · Manuel Wüthrich · Bernhard Schölkopf -
2021 Poster: On Disentangled Representations Learned from Correlated Data »
Frederik Träuble · Elliot Creager · Niki Kilbertus · Francesco Locatello · Andrea Dittadi · Anirudh Goyal · Bernhard Schölkopf · Stefan Bauer -
2021 Poster: Bayesian Quadrature on Riemannian Data Manifolds »
Christian Fröhlich · Alexandra Gessner · Philipp Hennig · Bernhard Schölkopf · Georgios Arvanitidis -
2021 Spotlight: Bayesian Quadrature on Riemannian Data Manifolds »
Christian Fröhlich · Alexandra Gessner · Philipp Hennig · Bernhard Schölkopf · Georgios Arvanitidis -
2021 Oral: On Disentangled Representations Learned from Correlated Data »
Frederik Träuble · Elliot Creager · Niki Kilbertus · Francesco Locatello · Andrea Dittadi · Anirudh Goyal · Bernhard Schölkopf · Stefan Bauer -
2021 Poster: Necessary and sufficient conditions for causal feature selection in time series with latent common causes »
Atalanti Mastakouri · Bernhard Schölkopf · Dominik Janzing -
2021 Poster: Conditional Distributional Treatment Effect with Kernel Conditional Mean Embeddings and U-Statistic Regression »
Junhyung Park · Uri Shalit · Bernhard Schölkopf · Krikamol Muandet -
2021 Poster: Proximal Causal Learning with Kernels: Two-Stage Estimation and Moment Restriction »
Afsaneh Mastouri · Yuchen Zhu · Limor Gultchin · Anna Korba · Ricardo Silva · Matt J. Kusner · Arthur Gretton · Krikamol Muandet -
2021 Poster: Generalised Lipschitz Regularisation Equals Distributional Robustness »
Zac Cranko · Zhan Shi · Xinhua Zhang · Richard Nock · Simon Kornblith -
2021 Spotlight: Proximal Causal Learning with Kernels: Two-Stage Estimation and Moment Restriction »
Afsaneh Mastouri · Yuchen Zhu · Limor Gultchin · Anna Korba · Ricardo Silva · Matt J. Kusner · Arthur Gretton · Krikamol Muandet -
2021 Spotlight: Necessary and sufficient conditions for causal feature selection in time series with latent common causes »
Atalanti Mastakouri · Bernhard Schölkopf · Dominik Janzing -
2021 Spotlight: Generalised Lipschitz Regularisation Equals Distributional Robustness »
Zac Cranko · Zhan Shi · Xinhua Zhang · Richard Nock · Simon Kornblith -
2021 Spotlight: Conditional Distributional Treatment Effect with Kernel Conditional Mean Embeddings and U-Statistic Regression »
Junhyung Park · Uri Shalit · Bernhard Schölkopf · Krikamol Muandet -
2021 Poster: Causal Curiosity: RL Agents Discovering Self-supervised Experiments for Causal Representation Learning »
Sumedh Sontakke · Arash Mehrjou · Laurent Itti · Bernhard Schölkopf -
2021 Spotlight: Causal Curiosity: RL Agents Discovering Self-supervised Experiments for Causal Representation Learning »
Sumedh Sontakke · Arash Mehrjou · Laurent Itti · Bernhard Schölkopf -
2020 Workshop: Inductive Biases, Invariances and Generalization in Reinforcement Learning »
Anirudh Goyal · Rosemary Nan Ke · Jane Wang · Stefan Bauer · Theophane Weber · Fabio Viola · Bernhard Schölkopf · Stefan Bauer -
2020 Poster: Weakly-Supervised Disentanglement Without Compromises »
Francesco Locatello · Ben Poole · Gunnar Rätsch · Bernhard Schölkopf · Olivier Bachem · Michael Tschannen -
2020 Poster: Revisiting Spatial Invariance with Low-Rank Local Connectivity »
Gamaleldin Elsayed · Prajit Ramachandran · Jon Shlens · Simon Kornblith -
2020 Poster: A Simple Framework for Contrastive Learning of Visual Representations »
Ting Chen · Simon Kornblith · Mohammad Norouzi · Geoffrey Hinton -
2019 Poster: Robustly Disentangled Causal Mechanisms: Validating Deep Representations for Interventional Robustness »
Raphael Suter · Djordje Miladinovic · Bernhard Schölkopf · Stefan Bauer -
2019 Poster: Similarity of Neural Network Representations Revisited »
Simon Kornblith · Mohammad Norouzi · Honglak Lee · Geoffrey Hinton -
2019 Oral: Robustly Disentangled Causal Mechanisms: Validating Deep Representations for Interventional Robustness »
Raphael Suter · Djordje Miladinovic · Bernhard Schölkopf · Stefan Bauer -
2019 Oral: Similarity of Neural Network Representations Revisited »
Simon Kornblith · Mohammad Norouzi · Honglak Lee · Geoffrey Hinton -
2019 Poster: Kernel Mean Matching for Content Addressability of GANs »
Wittawat Jitkrittum · Patsorn Sangkloy · Muhammad Waleed Gondal · Amit Raj · James Hays · Bernhard Schölkopf -
2019 Oral: Kernel Mean Matching for Content Addressability of GANs »
Wittawat Jitkrittum · Patsorn Sangkloy · Muhammad Waleed Gondal · Amit Raj · James Hays · Bernhard Schölkopf -
2019 Poster: First-Order Adversarial Vulnerability of Neural Networks and Input Dimension »
Carl-Johann Simon-Gabriel · Yann Ollivier · Leon Bottou · Bernhard Schölkopf · David Lopez-Paz -
2019 Poster: Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations »
Francesco Locatello · Stefan Bauer · Mario Lucic · Gunnar Rätsch · Sylvain Gelly · Bernhard Schölkopf · Olivier Bachem -
2019 Oral: First-Order Adversarial Vulnerability of Neural Networks and Input Dimension »
Carl-Johann Simon-Gabriel · Yann Ollivier · Leon Bottou · Bernhard Schölkopf · David Lopez-Paz -
2019 Oral: Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations »
Francesco Locatello · Stefan Bauer · Mario Lucic · Gunnar Rätsch · Sylvain Gelly · Bernhard Schölkopf · Olivier Bachem -
2018 Poster: Detecting non-causal artifacts in multivariate linear regression models »
Dominik Janzing · Bernhard Schölkopf -
2018 Poster: On Matching Pursuit and Coordinate Descent »
Francesco Locatello · Anant Raj · Sai Praneeth Reddy Karimireddy · Gunnar Rätsch · Bernhard Schölkopf · Sebastian Stich · Martin Jaggi -
2018 Oral: Detecting non-causal artifacts in multivariate linear regression models »
Dominik Janzing · Bernhard Schölkopf -
2018 Oral: On Matching Pursuit and Coordinate Descent »
Francesco Locatello · Anant Raj · Sai Praneeth Reddy Karimireddy · Gunnar Rätsch · Bernhard Schölkopf · Sebastian Stich · Martin Jaggi -
2018 Poster: Tempered Adversarial Networks »
Mehdi S. M. Sajjadi · Giambattista Parascandolo · Arash Mehrjou · Bernhard Schölkopf -
2018 Poster: Differentially Private Database Release via Kernel Mean Embeddings »
Matej Balog · Ilya Tolstikhin · Bernhard Schölkopf -
2018 Oral: Differentially Private Database Release via Kernel Mean Embeddings »
Matej Balog · Ilya Tolstikhin · Bernhard Schölkopf -
2018 Oral: Tempered Adversarial Networks »
Mehdi S. M. Sajjadi · Giambattista Parascandolo · Arash Mehrjou · Bernhard Schölkopf -
2018 Poster: Learning Independent Causal Mechanisms »
Giambattista Parascandolo · Niki Kilbertus · Mateo Rojas-Carulla · Bernhard Schölkopf -
2018 Oral: Learning Independent Causal Mechanisms »
Giambattista Parascandolo · Niki Kilbertus · Mateo Rojas-Carulla · Bernhard Schölkopf -
2017 Workshop: Workshop on Human Interpretability in Machine Learning (WHI) »
Kush Varshney · Adrian Weller · Been Kim · Dmitry Malioutov -
2017 Invited Talk: Causal Learning »
Bernhard Schölkopf -
2017 Tutorial: Interpretable Machine Learning »
Been Kim · Finale Doshi-Velez