

Poster in Workshop: The Second Workshop on Spurious Correlations, Invariance and Stability

Weighted Risk Invariance for Density-Aware Domain Generalization

Gina Wong · Joshua Gleason · Rama Chellappa · Yoav Wald · Anqi Liu


Abstract:

Learning how to generalize training performance to unseen test distributions is essential to building robust, practically useful models. To this end, many recent studies focus on learning invariant (causal) features from multiple domains. However, the problem of distribution shift in the invariant features themselves is not well studied, and existing invariant learning methods that ignore this possibility can struggle to generalize. In this work, we focus on finding invariant predictors from multiple, potentially shifted invariant feature distributions. We propose a novel optimization problem, Weighted Risk Invariance (WRI), and we show that its solution provably achieves out-of-distribution generalization. We also introduce an algorithm that practically solves the WRI problem by learning the density of the invariant features and the model parameters simultaneously, and we demonstrate that our approach outperforms previous invariant learning methods under covariate shift in the invariant features. Finally, we show that the learned density over invariant features effectively detects when the features are out-of-distribution. To the best of our knowledge, ours is the first invariant learning method to provide informative density estimates on invariant features for the domain generalization problem.
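The abstract describes the WRI objective only at a high level. As an illustration, the sketch below implements one plausible reading of a density-weighted invariance penalty: per-environment risks are reweighted by another environment's estimated invariant-feature density, and the objective penalizes mismatch between the cross-weighted risks. The toy data, the Gaussian density estimates, and the exact form of the penalty are all assumptions for illustration, not the paper's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two training environments with shifted invariant-feature
# distributions (hypothetical setup; the true label depends on x invariantly).
envs = []
for mean in (0.0, 1.5):
    x = rng.normal(mean, 1.0, size=(200, 1))        # invariant feature
    y = 2.0 * x[:, 0] + rng.normal(0, 0.1, 200)     # invariant mechanism
    envs.append((x, y))

def gaussian_pdf(v, mu, sigma):
    return np.exp(-0.5 * ((v - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Estimated invariant-feature densities per environment (here, fitted
# Gaussians; the paper learns these jointly with the model).
densities = [(x.mean(), x.std()) for x, _ in envs]

def weighted_risk(w, data_env, density_env):
    """Squared-error risk on one environment, reweighted by another
    environment's estimated invariant-feature density."""
    x, y = data_env
    mu, sigma = density_env
    weights = gaussian_pdf(x[:, 0], mu, sigma)
    residual = x[:, 0] * w - y
    return np.mean(weights * residual ** 2)

def wri_objective(w, lam=1.0):
    """Average empirical risk plus a penalty on the mismatch between
    the two cross-weighted risks (one plausible WRI-style penalty)."""
    risks = [np.mean((x[:, 0] * w - y) ** 2) for x, y in envs]
    r01 = weighted_risk(w, envs[0], densities[1])
    r10 = weighted_risk(w, envs[1], densities[0])
    return np.mean(risks) + lam * (r01 - r10) ** 2
```

Under this sketch, the invariant predictor (slope near 2) attains a much lower objective value than a degenerate one, and the learned densities can double as out-of-distribution scores for test inputs, echoing the abstract's detection claim.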
