

Poster in Workshop: The Second Workshop on Spurious Correlations, Invariance and Stability

Shortcut Learning Through the Lens of Training Dynamics

Nihal Murali · Aahlad Puli · Ke Yu · Rajesh Ranganath · Kayhan Batmanghelich


Abstract: Deep Neural Networks (DNNs) are prone to learning *shortcut* patterns that degrade generalization at deployment. This paper aims to better understand shortcut learning through the lens of the learning dynamics of internal neurons during training. We make the following observations: (1) While previous works treat shortcuts as synonymous with spurious correlations, we emphasize that not all spurious correlations are shortcuts. We show that shortcuts are only those spurious features that are "easier" than the core features. (2) We build on this premise and use *instance difficulty* methods (like Prediction Depth) to quantify what "easier" means and to detect shortcut learning during training. (3) We empirically show that shortcut learning can be detected by observing the learning dynamics of the DNN's *early layers*. In other words, easy features learned by a DNN's initial layers early in training are potential shortcuts. We verify our claims on medical and vision datasets, both simulated and real, and justify the empirical success of our hypothesis by establishing theoretical connections between Prediction Depth and information-theoretic concepts like $\mathcal{V}$-usable information. Lastly, our experiments show that monitoring only accuracy plots during training (as is common in machine learning pipelines) is insufficient, and we highlight the need to monitor early training dynamics using example difficulty metrics.
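For reference, $\mathcal{V}$-usable information (Xu et al., 2020) measures how much information a constrained model family $\mathcal{V}$ can extract about a label: $I_{\mathcal{V}}(X \to Y) = H_{\mathcal{V}}(Y) - H_{\mathcal{V}}(Y \mid X)$, where $H_{\mathcal{V}}(Y \mid X) = \inf_{f \in \mathcal{V}} \mathbb{E}\left[-\log f[X](Y)\right]$; features a shallow family can already exploit are "easy" in this sense.

Below is a minimal sketch, not the authors' released code, of the Prediction Depth metric (Baldock et al., 2021) that the abstract relies on: the depth of an example is the earliest probed layer at which a k-NN probe, and every deeper probe, already agrees with the network's final prediction. The function names and the `layers` list of `(name, module)` pairs are illustrative assumptions.

```python
import numpy as np
import torch
from sklearn.neighbors import KNeighborsClassifier


@torch.no_grad()
def collect_features(model, layers, loader, device="cpu"):
    """Cache flattened activations at each probed layer via forward hooks,
    along with the labels and the network's final predictions."""
    model.eval()
    feats = {name: [] for name, _ in layers}
    labels, preds = [], []
    hooks = [
        module.register_forward_hook(
            lambda m, inp, out, name=name: feats[name].append(out.flatten(1).cpu())
        )
        for name, module in layers
    ]
    for x, y in loader:
        logits = model(x.to(device))
        labels.append(y)
        preds.append(logits.argmax(1).cpu())
    for h in hooks:
        h.remove()
    feats = {k: torch.cat(v).numpy() for k, v in feats.items()}
    return feats, torch.cat(labels).numpy(), torch.cat(preds).numpy()


def prediction_depth(model, layers, support_loader, query_loader, k=30):
    """Return one depth per query example: the index of the first probed
    layer from which all k-NN probes match the final prediction."""
    s_feats, s_labels, _ = collect_features(model, layers, support_loader)
    q_feats, _, q_preds = collect_features(model, layers, query_loader)
    agree = np.stack([
        KNeighborsClassifier(n_neighbors=k)
        .fit(s_feats[name], s_labels)
        .predict(q_feats[name]) == q_preds
        for name, _ in layers
    ])  # shape: (n_layers, n_queries)
    # suffix[i] is True where probes at layers i, i+1, ... all agree.
    suffix = np.flip(np.cumprod(np.flip(agree, 0), 0), 0).astype(bool)
    # If even the deepest probe disagrees, assign the maximum depth.
    return np.where(suffix.any(0), suffix.argmax(0), len(layers))
```

Under the paper's hypothesis, examples whose depth is small early in training are the ones learned from easy (potentially shortcut) features; tracking this signal per group would complement the accuracy curves the abstract argues are insufficient on their own.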
