

Poster
in
Workshop: The Second Workshop on Spurious Correlations, Invariance and Stability

Adversarial Data Augmentations for Out-of-Distribution Generalization

Simon Zhang · Ryan DeMilt · Kun Jin · Cathy Honghui Xia


Abstract:

Out-of-distribution (OoD) generalization is required when representation learning encounters a distribution shift, which frequently happens in practice when training and testing data come from different environments. Covariate shift is a type of distribution shift that affects only the input distribution while keeping the concept distribution (the label given the input) invariant. We propose RIA (Regularization for Invariance with Adversarial training), a new method for OoD generalization that performs an adversarial search for training-data environments. The resulting adversarial data augmentations prevent the learner from collapsing to an in-distribution solution. RIA is compatible with many existing OoD generalization methods for covariate shift that can be formulated as constrained optimization problems. We perform extensive experiments on OoD graph classification under various synthetic and natural distribution shifts and demonstrate that our method achieves high accuracy compared with OoD baselines.
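The abstract does not specify RIA's exact procedure, but the core idea of adversarially searching for training environments can be sketched in a toy setting. The example below is a minimal, hypothetical illustration, not the paper's method: it uses an FGSM-style input perturbation on a logistic-regression model to synthesize a covariate-shifted "hard" environment each round, then minimizes the average loss over the original and adversarial environments. All names (`adversarial_env`, `grad_x`, the step size `eps`) are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data, standing in for one training environment.
X = rng.normal(size=(200, 2))
w_true = np.array([2.0, -1.0])
y = (X @ w_true + 0.1 * rng.normal(size=200) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_x(w, X, y):
    # Gradient of the logistic loss w.r.t. the inputs: (p - y) * w per sample.
    p = sigmoid(X @ w)
    return np.outer(p - y, w)

def adversarial_env(w, X, y, eps=0.3):
    # FGSM-style step: shift inputs in the loss-increasing direction,
    # yielding a synthetic covariate-shifted environment (labels unchanged).
    return X + eps * np.sign(grad_x(w, X, y))

# Alternate between generating an adversarial environment and taking a
# gradient step on the average loss over original + adversarial data.
w = np.zeros(2)
lr = 0.1
for _ in range(200):
    X_adv = adversarial_env(w, X, y)
    g = np.zeros(2)
    for Xe in (X, X_adv):
        p = sigmoid(Xe @ w)
        g += Xe.T @ (p - y) / len(y)
    w -= lr * g / 2

acc = ((sigmoid(X @ w) > 0.5) == y).mean()
```

Because the augmented environment shares labels with the original data and differs only in the inputs, the perturbation models covariate shift; training against it discourages the learner from fitting the single in-distribution environment.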
