Understanding and explaining the mistakes made by trained models is critical to many machine learning objectives, such as improving robustness, addressing concept drift, and mitigating biases. However, this is often an ad hoc process that involves manually looking at the model's mistakes on many test samples and guessing at the underlying reasons for those incorrect predictions. In this paper, we propose a systematic approach, conceptual counterfactual explanations (CCE), that explains why a classifier makes a mistake on a particular test sample in terms of human-understandable concepts (e.g., this zebra is misclassified as a dog because of faint stripes). CCE builds on two prior ideas, counterfactual explanations and concept activation vectors, and we validate our approach on well-known pretrained models, showing that it meaningfully explains the models' mistakes. In addition, for new models trained on data with spurious correlations, CCE accurately identifies the spurious correlation as the cause of model mistakes from a single misclassified test sample. On two challenging medical applications, CCE generates useful insights, confirmed by clinicians, into biases and mistakes the model makes in real-world settings. The code for CCE is publicly available and can easily be applied to explain mistakes in new models.
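For a concrete picture of the idea, the sketch below illustrates the core CCE optimization in PyTorch: given a misclassified sample, learn a sparse set of weights over a bank of concept activation vectors (CAVs) such that adding the weighted concept directions to the sample's embedding pushes the classifier toward the correct label; the concepts with the largest weights then explain the mistake. This is a minimal sketch of the technique described in the abstract, not the authors' released implementation; `embed`, `classifier_head`, and `concept_bank` are assumed placeholders, and details such as weight constraints and concept-bank construction are omitted.

```python
# Minimal sketch of the CCE idea (not the authors' released code).
# Assumptions: `embed` maps an input to a (1, d) embedding, `classifier_head`
# maps a (1, d) embedding to class logits, and `concept_bank` is a
# (num_concepts, d) tensor of concept activation vectors (CAVs).
import torch
import torch.nn.functional as F

def conceptual_counterfactual(x, true_label, embed, classifier_head,
                              concept_bank, steps=100, lr=0.1, l1=0.01):
    z = embed(x).detach()                                        # frozen embedding of the misclassified sample
    w = torch.zeros(concept_bank.shape[0], requires_grad=True)   # one weight per concept
    opt = torch.optim.SGD([w], lr=lr)
    target = torch.tensor([true_label])
    for _ in range(steps):
        # Perturb the embedding along the concept directions and score it.
        logits = classifier_head(z + w @ concept_bank)
        # Sparsity (L1) keeps the explanation to a few concepts.
        loss = F.cross_entropy(logits, target) + l1 * w.abs().sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    # Large positive weights: concepts whose addition would fix the prediction;
    # large negative weights: concepts whose removal would fix it.
    return w.detach()
```

In this hypothetical usage, a zebra misclassified because of faint stripes would receive a large positive weight on a "stripes" CAV, matching the example in the abstract.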
Author Information
Abubakar Abid (Stanford)
Mert Yuksekgonul (Stanford University)
James Zou (Stanford)
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Spotlight: Meaningfully debugging model mistakes using conceptual counterfactual explanations
  Tue. Jul 19th, 05:30 -- 05:35 PM, Ballroom 3 & 4
More from the Same Authors
- 2021 : Meaningfully Explaining a Model's Mistakes
  Abubakar Abid · James Zou
- 2021 : Stateful Performative Gradient Descent
  Zachary Izzo · James Zou · Lexing Ying
- 2022 : On the nonlinear correlation of ML performance across data subpopulations
  Weixin Liang · Yining Mao · Yongchan Kwon · Xinyu Yang · James Zou
- 2022 : MetaShift: A Dataset of Datasets for Evaluating Contextual Distribution Shifts
  Weixin Liang · Xinyu Yang · James Zou
- 2022 : Mind the Gap: Understanding the Modality Gap in Multi-modal Contrastive Representation Learning
  Weixin Liang · Yuhui Zhang · Yongchan Kwon · Serena Yeung · James Zou
- 2023 : Last-Layer Fairness Fine-tuning is Simple and Effective for Neural Networks
  Yuzhen Mao · Zhun Deng · Huaxiu Yao · Ting Ye · Kenji Kawaguchi · James Zou
- 2023 : Prospectors: Leveraging Short Contexts to Mine Salient Objects in High-dimensional Imagery
  Gautam Machiraju · Arjun Desai · James Zou · Christopher Re · Parag Mallick
- 2023 : Beyond Confidence: Reliable Models Should Also Consider Atypicality
  Mert Yuksekgonul · Linjun Zhang · James Zou · Carlos Guestrin
- 2023 : Less is More: Using Multiple LLMs for Applications with Lower Costs
  Lingjiao Chen · Matei Zaharia · James Zou
- 2023 Poster: Data-Driven Subgroup Identification for Linear Regression
  Zachary Izzo · Ruishan Liu · James Zou
- 2023 Poster: Data-OOB: Out-of-bag Estimate as a Simple and Efficient Data Value
  Yongchan Kwon · James Zou
- 2023 Poster: Accuracy on the Curve: On the Nonlinear Correlation of ML Performance Between Data Subpopulations
  Weixin Liang · Yining Mao · Yongchan Kwon · Xinyu Yang · James Zou
- 2023 Poster: Discover and Cure: Concept-aware Mitigation of Spurious Correlation
  Shirley Wu · Mert Yuksekgonul · Linjun Zhang · James Zou
- 2022 : Invited talk #2 James Zou (Title: Machine learning to make clinical trials more efficient and diverse)
  James Zou
- 2022 : 7-UP: generating in silico CODEX from a small set of immunofluorescence markers
  James Zou
- 2022 : Contributed Talk 2: MetaShift: A Dataset of Datasets for Evaluating Contextual Distribution Shifts
  Weixin Liang · Xinyu Yang · James Zou
- 2019 Poster: Adaptive Monte Carlo Multiple Testing via Multi-Armed Bandits
  Martin Zhang · James Zou · David Tse
- 2019 Poster: Concrete Autoencoders: Differentiable Feature Selection and Reconstruction
  Muhammed Fatih Balın · Abubakar Abid · James Zou
- 2019 Oral: Concrete Autoencoders: Differentiable Feature Selection and Reconstruction
  Muhammed Fatih Balın · Abubakar Abid · James Zou
- 2019 Oral: Adaptive Monte Carlo Multiple Testing via Multi-Armed Bandits
  Martin Zhang · James Zou · David Tse
- 2017 Poster: Estimating the unseen from multiple populations
  Aditi Raghunathan · Greg Valiant · James Zou
- 2017 Poster: Learning Latent Space Models with Angular Constraints
  Pengtao Xie · Yuntian Deng · Yi Zhou · Abhimanu Kumar · Yaoliang Yu · James Zou · Eric Xing
- 2017 Talk: Learning Latent Space Models with Angular Constraints
  Pengtao Xie · Yuntian Deng · Yi Zhou · Abhimanu Kumar · Yaoliang Yu · James Zou · Eric Xing
- 2017 Talk: Estimating the unseen from multiple populations
  Aditi Raghunathan · Greg Valiant · James Zou