

Poster

Towards Stable and Efficient Adversarial Training against l1 Bounded Adversarial Attacks

Yulun Jiang · Chen Liu · Zhichao Huang · Mathieu Salzmann · Sabine Süsstrunk

Exhibit Hall 1 #816

Abstract: We address the problem of stably and efficiently training a deep neural network to be robust to adversarial perturbations bounded by an l1 norm. We demonstrate that achieving robustness against l1-bounded perturbations is more challenging than in the l2 or l∞ cases, because adversarial training against l1-bounded perturbations is more likely to suffer from catastrophic overfitting and training instabilities. Our analysis links these issues to the coordinate descent strategy used in existing methods. We address this by introducing Fast-EG-l1, an efficient adversarial training algorithm based on Euclidean geometry and free of coordinate descent. Fast-EG-l1 incurs no additional memory cost and introduces no extra hyper-parameters to tune. Our experimental results on various datasets demonstrate that Fast-EG-l1 yields the best and most stable robustness against l1-bounded adversarial attacks among methods of comparable computational complexity. Code and checkpoints are available at https://github.com/IVRL/FastAdvL.
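The abstract contrasts coordinate-descent-based l1 attacks with a step taken in Euclidean geometry followed by projection onto the l1 ball. The sketch below illustrates that general idea only: a hypothetical single l2-normalized ascent step plus the standard Euclidean projection onto the l1 ball (the sorting-based projection of Duchi et al.). It is not the authors' Fast-EG-l1 algorithm; all function names and step sizes here are illustrative assumptions.

```python
import numpy as np

def project_l1_ball(v, eps):
    """Euclidean projection of a flat vector v onto the l1 ball of radius eps
    (sorting-based method; illustrative, not the paper's implementation)."""
    if np.abs(v).sum() <= eps:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]                      # sorted magnitudes, descending
    css = np.cumsum(u)
    idx = np.arange(1, len(u) + 1)
    rho = np.nonzero(u * idx > (css - eps))[0][-1]    # largest index satisfying the KKT condition
    theta = (css[rho] - eps) / (rho + 1.0)            # soft-threshold level
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def euclidean_l1_step(delta, grad, eps, alpha):
    """Hypothetical Euclidean-geometry attack step: move along the l2-normalized
    gradient (no coordinate selection), then project back onto the l1 ball."""
    step = alpha * grad / (np.linalg.norm(grad) + 1e-12)
    return project_l1_ball(delta + step, eps)
```

Example use: starting from `delta = 0` and a loss gradient `grad`, repeated calls to `euclidean_l1_step` keep the perturbation inside the l1 ball of radius `eps` while avoiding the per-coordinate update pattern the paper associates with instability.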
