

Poster in Workshop: Next Generation of AI Safety

Accuracy on the wrong line: On the pitfalls of noisy data for OOD generalisation

Amartya Sanyal · Yaxi Hu · Yaodong Yu · Yian Ma · Yixin Wang · Bernhard Schölkopf

Keywords: [ Accuracy-on-the-line ] [ Label Noise ] [ OOD Generalization ]


Abstract:

Accuracy-on-the-line is a widely observed phenomenon in machine learning, where a model's accuracy on in-distribution (ID) and out-of-distribution (OOD) data is positively correlated across different hyperparameters and data configurations. But when does this useful relationship break down? In this work, we explore its robustness. The key observation is that noisy data and the presence of nuisance features can be sufficient to shatter the Accuracy-on-the-line phenomenon. In these cases, ID and OOD accuracy can become negatively correlated, leading to "Accuracy-on-the-wrong-line". This phenomenon can also occur in the presence of spurious (shortcut) features, which tend to overshadow the more complex signal (core, non-spurious) features, resulting in a large nuisance feature space. Moreover, scaling to larger datasets does not mitigate this undesirable behaviour and may even exacerbate it. We formally prove a lower bound on OOD error in a linear classification model, characterising the conditions on the noise and nuisance features for a large OOD error. We finally demonstrate this phenomenon across both synthetic and real datasets with noisy data and nuisance features.
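The core mechanism can be illustrated with a minimal synthetic sketch (our own toy construction, not the paper's formal setup): a weakly predictive core feature plus a spurious feature that is strongly predictive in-distribution but anti-correlated out-of-distribution. Linear classifiers that lean more on the spurious feature gain ID accuracy while losing OOD accuracy, producing the negative correlation of "Accuracy-on-the-wrong-line". All feature scales and noise levels below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

def make_data(flip_spurious, rng):
    # y in {-1, +1}; the core feature is weakly predictive, while the
    # spurious feature is strongly predictive in-distribution (ID) but
    # has its correlation with y reversed out-of-distribution (OOD).
    y = rng.choice([-1.0, 1.0], size=n)
    core = 0.5 * y + rng.normal(0.0, 1.0, size=n)
    sign = -1.0 if flip_spurious else 1.0
    spur = sign * y + rng.normal(0.0, 0.5, size=n)
    return np.stack([core, spur], axis=1), y

X_id, y_id = make_data(flip_spurious=False, rng=rng)   # ID test set
X_ood, y_ood = make_data(flip_spurious=True, rng=rng)  # OOD: spurious link reversed

def accuracy(w, X, y):
    return float(np.mean(np.sign(X @ w) == y))

# Sweep a family of linear classifiers placing increasing weight t
# on the spurious feature (t = 0: core only; t = 1: spurious only).
ts = np.linspace(0.0, 1.0, 11)
id_acc = [accuracy(np.array([1.0 - t, t]), X_id, y_id) for t in ts]
ood_acc = [accuracy(np.array([1.0 - t, t]), X_ood, y_ood) for t in ts]

corr = float(np.corrcoef(id_acc, ood_acc)[0, 1])
print(f"ID acc:  {id_acc[0]:.3f} -> {id_acc[-1]:.3f}")   # rises with t
print(f"OOD acc: {ood_acc[0]:.3f} -> {ood_acc[-1]:.3f}")  # falls with t
print(f"ID/OOD accuracy correlation: {corr:.3f}")         # strongly negative
```

In this toy setting, relying more on the spurious feature moves ID accuracy up and OOD accuracy down across the sweep, so the two are negatively correlated rather than lying on the usual positive "accuracy-on-the-line" trend.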
