Make Some Noise: Reliable and Efficient Single-Step Adversarial Training
Pau de Jorge Aranda · Adel Bibi · Riccardo Volpi · Amartya Sanyal · Phil Torr · Gregory Rogez · Puneet Dokania

Recently, Wong et al. (2020) showed that adversarial training with single-step FGSM leads to a characteristic failure mode named catastrophic overfitting (CO), in which a model becomes suddenly vulnerable to multi-step attacks. Experimentally they showed that simply adding a random perturbation prior to FGSM (RS-FGSM) could prevent CO. However, Andriushchenko & Flammarion (2020) observed that RS-FGSM still leads to CO for larger perturbations, and proposed a computationally expensive regularizer (GradAlign) to avoid it. In this work, we methodically revisit the role of noise and clipping in single-step adversarial training. Contrary to previous intuitions, we find that using a stronger noise around the clean sample combined with not clipping is highly effective in avoiding CO for large perturbation radii. We then propose Noise-FGSM (N-FGSM) that, while providing the benefits of single-step adversarial training, does not suffer from CO. Empirical analyses on a large suite of experiments show that N-FGSM is able to match or surpass the performance of the previous state-of-the-art GradAlign while achieving a 3x speed-up.
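The abstract contrasts two single-step perturbation recipes: RS-FGSM (random noise within the eps-ball, then an FGSM step, then clipping back to the eps-ball) and N-FGSM (stronger noise around the clean sample, an FGSM step, and no clipping). A minimal NumPy sketch of the two perturbation constructions follows; the gradient is passed in as a closed-form stand-in rather than computed by backpropagation, and the noise multiplier `k` and step size `alpha` are illustrative assumptions, not the paper's tuned values.

```python
import numpy as np

rng = np.random.default_rng(0)

def n_fgsm_perturb(x, grad_at_noisy, eps, alpha, k=2.0):
    """Sketch of N-FGSM: sample noise from a LARGER box [-k*eps, k*eps],
    take one FGSM step from the noisy point, and do NOT clip the final
    perturbation back to the eps-ball. `grad_at_noisy` stands in for the
    loss gradient w.r.t. the noisy input."""
    eta = rng.uniform(-k * eps, k * eps, size=x.shape)   # stronger noise
    delta = eta + alpha * np.sign(grad_at_noisy)         # single FGSM step
    return x + delta                                     # unclipped

def rs_fgsm_perturb(x, grad_at_noisy, eps, alpha):
    """Sketch of the RS-FGSM baseline: weaker noise within the eps-ball,
    one FGSM step, then clip the total perturbation back to the eps-ball."""
    eta = rng.uniform(-eps, eps, size=x.shape)
    delta = np.clip(eta + alpha * np.sign(grad_at_noisy), -eps, eps)
    return x + delta

# Toy usage with a stand-in gradient (values are illustrative).
x = np.zeros(4)
grad = np.array([1.0, -1.0, 1.0, -1.0])
x_nfgsm = n_fgsm_perturb(x, grad, eps=8 / 255, alpha=10 / 255)
x_rsfgsm = rs_fgsm_perturb(x, grad, eps=8 / 255, alpha=10 / 255)
```

The key contrast the paper's finding rests on is visible here: the RS-FGSM perturbation is always confined to the eps-ball by the final clip, whereas the N-FGSM perturbation may land outside it, since both the noise magnitude and the unclipped step can exceed eps.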

Author Information

Pau de Jorge Aranda (University of Oxford & Naver Labs Europe)

I'm a PhD student at the University of Oxford and Naver Labs Europe. My research interests include but are not limited to deep learning, computer vision, and machine learning.

Adel Bibi (University of Oxford)
Riccardo Volpi (Naver Labs)
Amartya Sanyal (University of Oxford)
Phil Torr (Oxford)
Gregory Rogez (NAVER LABS Europe)
Puneet Dokania (University of Oxford)
