

Poster in Workshop: Spurious correlations, Invariance, and Stability (SCIS)

Conditional Distributional Invariance through Implicit Regularization

Tanmay Gupta

Keywords: [ Text Classification ] [ Causality ] [ Invariant Learning ] [ Machine Learning ] [ Deep Learning ] [ Image Classification ] [ Spurious Correlations ]


Abstract:

A significant challenge for models trained via standard Empirical Risk Minimization (ERM) is that they may learn features of the input X that help predict the label Y on the training set but should not matter, i.e., associations that may not hold in test data. Causality lends itself well to separating such spurious correlations from genuine, causal ones. In this paper, we present a simple causal model for the data and a method for training a classifier to predict a category Y from an input X while remaining invariant to a variable Z that is spuriously associated with Y. Notably, the method is just a slightly modified ERM problem, with no explicit regularization. We empirically demonstrate that our method outperforms standard ERM on standard metrics on benchmark datasets.
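The abstract does not spell out the modified ERM objective, so the following is only a minimal, hypothetical sketch of one way an ERM problem can be altered to discourage reliance on Z without an explicit regularizer: resample the training set so that, within each class Y, the spurious attribute Z is uniformly represented, then train with a plain cross-entropy loss. The helper `balanced_weights` and the use of PyTorch's `WeightedRandomSampler` are illustrative assumptions, not the paper's method.

```python
# Hypothetical illustration (not the paper's method): plain ERM on a
# training set resampled so that Z is balanced within each class Y.
import torch
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler

def balanced_weights(y, z):
    """Weight each example by 1 / |{i : Y_i = y, Z_i = z}| so that, within
    each class Y, every value of the spurious attribute Z is sampled
    equally often in expectation."""
    y, z = torch.as_tensor(y), torch.as_tensor(z)
    weights = torch.zeros(len(y), dtype=torch.float)
    for yv in torch.unique(y):
        for zv in torch.unique(z):
            mask = (y == yv) & (z == zv)
            n = int(mask.sum())
            if n > 0:
                weights[mask] = 1.0 / n
    return weights

# Usage: an ordinary ERM training loop, with only the sampler changed.
# X, y, z are assumed tensors of inputs, labels, and spurious attributes.
# sampler = WeightedRandomSampler(balanced_weights(y, z),
#                                 num_samples=len(y), replacement=True)
# loader = DataLoader(TensorDataset(X, y), batch_size=64, sampler=sampler)
```

The training loop itself stays unmodified ERM; the only change is the sampling distribution, which is one way invariance to Z can arise without adding a penalty term to the loss.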
