Catastrophic overfitting is a bug but also a feature
Guillermo Ortiz Jimenez · Pau de Jorge Aranda · Amartya Sanyal · Adel Bibi · Puneet Dokania · Pascal Frossard · Gregory Rogez · Phil Torr

Despite clear computational advantages in building robust neural networks, adversarial training (AT) using single-step methods is unstable, as it suffers from catastrophic overfitting (CO): networks gain non-trivial robustness during the first stages of adversarial training, but suddenly reach a breaking point where they lose all robustness in just a few iterations. Although some works have succeeded in preventing CO, the different mechanisms that lead to this remarkable failure mode are still poorly understood. In this work, we find that the interplay between the structure of the data and the dynamics of AT plays a fundamental role in CO. Specifically, through active interventions on typical datasets of natural images, we establish a causal link between the structure of the data and the onset of CO in single-step AT methods. This new perspective provides important insights into the mechanisms that lead to CO and paves the way towards a better understanding of the general dynamics of robust model construction.
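For context, the sketch below illustrates the kind of single-step adversarial training the abstract refers to: each batch is perturbed with one FGSM step before the weight update. This is a generic, minimal illustration rather than the authors' experimental code; the function names (fgsm_attack, train_epoch), the model, the data loader, and the hyperparameters (epsilon, alpha) are assumptions chosen for the example.

```python
# Generic single-step (FGSM) adversarial training sketch in PyTorch.
# Not the paper's implementation; placeholders: model, loader, optimizer, epsilon.
import torch
import torch.nn.functional as F


def fgsm_attack(model, x, y, epsilon):
    """Craft a single-step FGSM perturbation of size epsilon."""
    delta = torch.zeros_like(x, requires_grad=True)
    loss = F.cross_entropy(model(x + delta), y)
    loss.backward()
    # One gradient-sign step of size epsilon (L-infinity attack).
    x_adv = x + epsilon * delta.grad.sign()
    # Keep the adversarial example in the valid image range.
    return x_adv.clamp(0.0, 1.0).detach()


def train_epoch(model, loader, optimizer, epsilon=8 / 255):
    """One epoch of single-step adversarial training."""
    model.train()
    for x, y in loader:
        x_adv = fgsm_attack(model, x, y, epsilon)
        optimizer.zero_grad()  # clear gradients accumulated by the attack
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()
```

In this setting, catastrophic overfitting is typically diagnosed by tracking robust accuracy against a multi-step attack (e.g., PGD) during training: after the breaking point, PGD accuracy collapses to near zero even though accuracy against the single-step FGSM attack used for training remains high.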

Author Information

Guillermo Ortiz Jimenez (EPFL)
Pau de Jorge Aranda (University of Oxford & Naver Labs Europe)

I'm a PhD student at the University of Oxford and Naver Labs Europe. My research interests include, but are not limited to, deep learning, computer vision, and machine learning.

Amartya Sanyal (University of Oxford)
Adel Bibi (University of Oxford)
Puneet Dokania (University of Oxford)
Pascal Frossard (EPFL)
Gregory Rogez (NAVER LABS Europe)
Phil Torr (University of Oxford)
