In this work, we propose counterfactual data augmentation methods, guided by knowledge of the causal structure of the data, to simulate interventions on spurious features. Our main motivation is classifying medical notes, and we use these methods to learn more robust text classifiers. We show that, in prediction problems where the label is spuriously correlated with an attribute and under certain assumptions, this strategy is appropriate and can enjoy improved sample complexity compared to importance re-weighting. Pragmatically, we match examples using auxiliary data, following difference-in-differences (diff-in-diff) methodology, and use a large language model (LLM) to represent a conditional probability of text. Experiments on learning caregiver-invariant predictors of clinical diagnoses from medical narratives and on semi-synthetic data demonstrate that our method improves out-of-distribution (OOD) accuracy.
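As a rough illustration of the augmentation idea described above, the sketch below (not the authors' implementation) builds counterfactual training examples by rewriting each note under a different value of an observed spurious attribute while keeping the label fixed. The `Example` class, `augment_with_counterfactuals`, and the `rewrite_note` stand-in for an LLM rewriter are hypothetical names introduced here for illustration.

```python
# Minimal sketch of counterfactual data augmentation against a spurious
# attribute. Assumptions not stated in the abstract: the spurious attribute
# (here a hypothetical "caregiver" field) is observed for every note, and
# `rewrite_note` is a stand-in for an LLM that rewrites a note as if it were
# produced under a different value of that attribute.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Example:
    text: str        # clinical note
    label: int       # diagnosis label
    attribute: str   # spurious attribute, e.g. which caregiver wrote the note


def augment_with_counterfactuals(
    data: List[Example],
    attribute_values: List[str],
    rewrite_note: Callable[[str, str], str],
) -> List[Example]:
    """Simulate an intervention on the spurious attribute: for each example,
    add counterfactual copies whose text is rewritten under the other
    attribute values while the label is kept fixed."""
    augmented = list(data)
    for ex in data:
        for value in attribute_values:
            if value == ex.attribute:
                continue
            counterfactual_text = rewrite_note(ex.text, value)
            augmented.append(Example(counterfactual_text, ex.label, value))
    return augmented


if __name__ == "__main__":
    # Toy stand-in for the LLM rewriter: it only tags the note with the new
    # attribute value so the sketch runs end to end without any model.
    def toy_rewriter(text: str, new_attribute: str) -> str:
        return f"[{new_attribute}] {text}"

    notes = [
        Example("patient reports chest pain", 1, "nurse"),
        Example("routine follow-up, no complaints", 0, "physician"),
    ]
    for ex in augment_with_counterfactuals(notes, ["nurse", "physician"], toy_rewriter):
        print(ex.attribute, "|", ex.label, "|", ex.text)
```

A robust classifier would then be trained on the augmented set, so that predictions cannot rely on the spurious attribute that was intervened on.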
Author Information
Amir Feder (Columbia University, Google)
Yoav Wald (Johns Hopkins University)
Claudia Shi (Columbia University)
Suchi Saria (Johns Hopkins University)
David Blei (Columbia University)
David Blei is a Professor of Statistics and Computer Science at Columbia University, and a member of the Columbia Data Science Institute. His research is in statistical machine learning, involving probabilistic topic models, Bayesian nonparametric methods, and approximate posterior inference algorithms for massive data. He works on a variety of applications, including text, images, music, social networks, user behavior, and scientific data. David has received several awards for his research, including a Sloan Fellowship (2010), Office of Naval Research Young Investigator Award (2011), Presidential Early Career Award for Scientists and Engineers (2011), Blavatnik Faculty Award (2013), and ACM-Infosys Foundation Award (2013). He is a fellow of the ACM.
More from the Same Authors
- 2022 : In the Eye of the Beholder: Robust Prediction with Causal User Modeling »
  Amir Feder · Guy Horowitz · Yoav Wald · Roi Reichart · Nir Rosenfeld
- 2022 : Optimization-based Causal Estimation from Heterogenous Environments »
  Mingzhang Yin · Yixin Wang · David Blei
- 2023 : Weighted Risk Invariance for Density-Aware Domain Generalization »
  Gina Wong · Joshua Gleason · Rama Chellappa · Yoav Wald · Anqi Liu
- 2023 : In the Eye of the Beholder: Robust Prediction with Causal User Modeling »
  Amir Feder · Nir Rosenfeld
- 2023 : Birds of an Odd Feather: Guaranteed Out-of-Distribution (OOD) Novel Category Detection »
  Yoav Wald · Suchi Saria
- 2023 : Practical and Asymptotically Exact Conditional Sampling in Diffusion Models »
  Brian Trippe · Luhuan Wu · Christian Naesseth · David Blei · John Cunningham
- 2023 : Using Causality to Improve Safety Throughout the AI Lifecycle »
  Suchi Saria · Adarsh Subbaswamy
- 2023 Workshop: The Second Workshop on Spurious Correlations, Invariance and Stability »
  Yoav Wald · Claudia Shi · Aahlad Puli · Amir Feder · Limor Gultchin · Mark Goldstein · Maggie Makar · Victor Veitch · Uri Shalit
- 2023 Oral: JAWS-X: Addressing Efficiency Bottlenecks of Conformal Prediction Under Standard and Feedback Covariate Shift »
  Drew Prinster · Suchi Saria · Anqi Liu
- 2023 Poster: JAWS-X: Addressing Efficiency Bottlenecks of Conformal Prediction Under Standard and Feedback Covariate Shift »
  Drew Prinster · Suchi Saria · Anqi Liu
- 2022 : Reconstructing the Universe with Variational self-Boosted Sampling »
  Chirag Modi · Yin Li · David Blei
- 2022 Workshop: Spurious correlations, Invariance, and Stability (SCIS) »
  Aahlad Puli · Maggie Makar · Victor Veitch · Yoav Wald · Mark Goldstein · Limor Gultchin · Angela Zhou · Uri Shalit · Suchi Saria
- 2022 Poster: Variational Inference for Infinitely Deep Neural Networks »
  Achille Nazaret · David Blei
- 2022 Spotlight: Variational Inference for Infinitely Deep Neural Networks »
  Achille Nazaret · David Blei
- 2021 Poster: Unsupervised Representation Learning via Neural Activation Coding »
  Yookoon Park · Sangho Lee · Gunhee Kim · David Blei
- 2021 Poster: A Proxy Variable View of Shared Confounding »
  Yixin Wang · David Blei
- 2021 Spotlight: A Proxy Variable View of Shared Confounding »
  Yixin Wang · David Blei
- 2021 Oral: Unsupervised Representation Learning via Neural Activation Coding »
  Yookoon Park · Sangho Lee · Gunhee Kim · David Blei
- 2018 Poster: Noisin: Unbiased Regularization for Recurrent Neural Networks »
  Adji Bousso Dieng · Rajesh Ranganath · Jaan Altosaar · David Blei
- 2018 Oral: Noisin: Unbiased Regularization for Recurrent Neural Networks »
  Adji Bousso Dieng · Rajesh Ranganath · Jaan Altosaar · David Blei
- 2018 Poster: Augment and Reduce: Stochastic Inference for Large Categorical Distributions »
  Francisco Ruiz · Michalis Titsias · Adji Bousso Dieng · David Blei
- 2018 Poster: Black Box FDR »
  Wesley Tansey · Yixin Wang · David Blei · Raul Rabadan
- 2018 Oral: Augment and Reduce: Stochastic Inference for Large Categorical Distributions »
  Francisco Ruiz · Michalis Titsias · Adji Bousso Dieng · David Blei
- 2018 Oral: Black Box FDR »
  Wesley Tansey · Yixin Wang · David Blei · Raul Rabadan
- 2017 Workshop: Implicit Generative Models »
  Rajesh Ranganath · Ian Goodfellow · Dustin Tran · David Blei · Balaji Lakshminarayanan · Shakir Mohamed
- 2017 Poster: Robust Probabilistic Modeling with Bayesian Data Reweighting »
  Yixin Wang · Alp Kucukelbir · David Blei
- 2017 Poster: Evaluating Bayesian Models with Posterior Dispersion Indices »
  Alp Kucukelbir · Yixin Wang · David Blei
- 2017 Poster: Zero-Inflated Exponential Family Embeddings »
  Liping Liu · David Blei
- 2017 Talk: Zero-Inflated Exponential Family Embeddings »
  Liping Liu · David Blei
- 2017 Talk: Evaluating Bayesian Models with Posterior Dispersion Indices »
  Alp Kucukelbir · Yixin Wang · David Blei
- 2017 Talk: Robust Probabilistic Modeling with Bayesian Data Reweighting »
  Yixin Wang · Alp Kucukelbir · David Blei