
Causal-structure Driven Augmentations for Text OOD Generalization
Amir Feder · Yoav Wald · Claudia Shi · Suchi Saria · David Blei
Event URL: https://openreview.net/forum?id=XmTRYK1uN2

In this work, we propose counterfactual data augmentation methods, guided by knowledge of the causal structure of the data, to simulate interventions on spurious features. Our main motivation is classifying medical notes, and we use these methods to learn more robust text classifiers. In prediction problems where the label is spuriously correlated with an attribute, we show that, under certain assumptions, this strategy is appropriate and can enjoy improved sample complexity compared to importance re-weighting. Pragmatically, we match examples using auxiliary data, following a difference-in-differences (diff-in-diff) methodology, and use a large language model (LLM) to represent a conditional probability of text. Experiments on learning caregiver-invariant predictors of clinical diagnoses from medical narratives, and on semi-synthetic data, demonstrate that our method improves out-of-distribution (OOD) accuracy.
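The core idea of the abstract can be illustrated with a toy sketch. This is not the paper's implementation: the attribute values, markers, and the `render` stand-in (which replaces the paper's LLM-based text model) are all hypothetical. It shows how augmenting each example with a counterfactual copy under the other attribute value breaks the spurious label-attribute correlation.

```python
# Toy sketch (hypothetical names, not the authors' code): each note carries a
# "caregiver style" attribute that is spuriously correlated with the diagnosis
# label. Counterfactual augmentation simulates an intervention on the attribute
# by adding, for every note, a copy under the opposite style with the label
# kept fixed, so a classifier can no longer exploit the correlation.

STYLE_MARKERS = {"A": "[style-A]", "B": "[style-B]"}

def render(core_text, style):
    """Stand-in for style-conditional text generation (the paper uses an LLM)."""
    return f"{STYLE_MARKERS[style]} {core_text}"

def counterfactual_augment(dataset):
    """For each (core_text, label, style) example, append a copy under the other style."""
    augmented = list(dataset)
    for core_text, label, style in dataset:
        other = "B" if style == "A" else "A"
        augmented.append((core_text, label, other))
    return augmented

# Spuriously correlated training data: label 1 only ever appears with style A.
train = [
    ("fever and productive cough", 1, "A"),
    ("routine checkup, no complaints", 0, "B"),
]
balanced = counterfactual_augment(train)
texts = [(render(t, s), y) for t, y, s in balanced]
```

After augmentation each label value appears under both attribute values, which is the sense in which the intervention decorrelates the label from the spurious feature.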

Author Information

Amir Feder (Columbia University, Google)
Yoav Wald (Johns Hopkins University)
Claudia Shi (Columbia University)
Suchi Saria (Johns Hopkins University)
David Blei (Columbia University)

David Blei is a Professor of Statistics and Computer Science at Columbia University, and a member of the Columbia Data Science Institute. His research is in statistical machine learning, involving probabilistic topic models, Bayesian nonparametric methods, and approximate posterior inference algorithms for massive data. He works on a variety of applications, including text, images, music, social networks, user behavior, and scientific data. David has received several awards for his research, including a Sloan Fellowship (2010), Office of Naval Research Young Investigator Award (2011), Presidential Early Career Award for Scientists and Engineers (2011), Blavatnik Faculty Award (2013), and ACM-Infosys Foundation Award (2013). He is a fellow of the ACM.