

Poster
in
Affinity Workshop: LatinX in AI (LXAI) Research Workshop

Towards Understanding Why Group Robustness Methods Work

Alain Raymond · Nico Alvarado · Julio Hurtado · Alvaro Soto · Vincenzo Lomonaco

Keywords: [ Fairness ] [ Robustness ] [ Deep Learning ]


Abstract:

Deep Learning has made remarkable strides, yet models trained under conventional Empirical Risk Minimization (ERM) still face challenges in generalization, in particular a lack of robustness to spurious correlations. Group Robustness Methods (GRMs) were developed to combat this failure: they partition the training dataset into distinct groups based on spurious features and task labels. While GRMs demonstrate remarkable performance, the precise mechanisms underpinning their success remain elusive. In this study, we investigate both the features learned by GRMs and the classifiers trained on top of them. Surprisingly, both GRM and ERM models retain spurious information in their representations, even when it is irrelevant to the task at hand. Our findings suggest that the key to GRMs' success is two-fold: disentanglement of spurious features from invariant ones in representation space, and incentives for the classifier to become orthogonal to the spurious features.
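The abstract does not specify which GRM is studied, so the sketch below illustrates one representative instance of the group partitioning it describes: a Group DRO-style worst-group objective over (spurious attribute, label) groups. The function names, the group construction, and the Waterbirds-style comment are illustrative assumptions, not the exact method evaluated in the poster.

```python
import torch
import torch.nn as nn


def group_ids(spurious, labels, n_labels):
    # Each (spurious attribute, label) pair defines one group,
    # e.g. (background=water, class=landbird) in a Waterbirds-style setup.
    return spurious * n_labels + labels


def worst_group_loss(logits, labels, groups, n_groups):
    """Group DRO-style objective: minimize the loss of the worst-off group
    instead of the ERM average over all samples."""
    per_sample = nn.functional.cross_entropy(logits, labels, reduction="none")
    group_losses = []
    for g in range(n_groups):
        mask = groups == g
        if mask.any():
            group_losses.append(per_sample[mask].mean())
    # Up-weighting the worst group discourages relying on spurious shortcuts
    # that only help the majority groups.
    return torch.stack(group_losses).max()
```

Under this reading, ERM would minimize `per_sample.mean()` directly, while the GRM objective above pressures the classifier to perform well even on groups where the spurious feature and the label disagree.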
