

Poster

Towards Efficient Training and Evaluation of Robust Models against $l_0$ Bounded Adversarial Perturbations

Xuyang Zhong · Yixiao HUANG · Chen Liu

Hall C 4-9 #906
Wed 24 Jul 2:30 a.m. PDT — 4 a.m. PDT

Abstract: This work studies sparse adversarial perturbations bounded by the $l_0$ norm. We propose a white-box, PGD-like attack named sparse-PGD that generates such perturbations effectively and efficiently. We further combine sparse-PGD with a black-box attack to evaluate model robustness against $l_0$ bounded adversarial perturbations more comprehensively and reliably. Moreover, the efficiency of sparse-PGD enables us to conduct adversarial training and build models robust to sparse perturbations. Extensive experiments demonstrate that our attack algorithm performs strongly across different scenarios and, more importantly, that our adversarially trained model achieves state-of-the-art robustness against various sparse attacks compared with other robust models. Code is available at https://github.com/CityU-MLO/sPGD.
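The abstract does not spell out the attack's details; as a rough illustration of what an $l_0$ bounded, PGD-style attack involves, the PyTorch sketch below takes gradient ascent steps on a dense perturbation and then projects onto the $l_0$ ball by keeping only the $k$ largest-magnitude entries per sample. The budget `k`, step size, and this simple magnitude-based projection are illustrative assumptions, not the paper's sparse-PGD algorithm (which, per the paper, uses a more refined scheme and is paired with a black-box attack for evaluation).

```python
import torch
import torch.nn.functional as F

def l0_pgd_sketch(model, x, y, k=20, steps=50, step_size=0.25):
    """Generic l0-bounded PGD sketch (NOT the authors' sparse-PGD):
    gradient ascent on a dense perturbation, followed by projection
    onto the l0 ball of radius k via per-sample top-k thresholding."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += step_size * grad.sign()  # ascent step on the loss
            # l0 projection: zero all but the k largest |entries| per sample
            flat = delta.view(delta.size(0), -1)
            thresh = flat.abs().topk(k, dim=1).values[:, -1:]
            flat[flat.abs() < thresh] = 0.0
            # keep the perturbed image inside the valid pixel range [0, 1]
            delta.copy_((x + delta).clamp(0, 1) - x)
    return (x + delta).detach()
```

In this simple scheme the sparsity pattern can oscillate between iterations, since the top-k support is recomputed after every step; addressing such instabilities is one motivation for the more careful handling of sparsity in attacks like sparse-PGD.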
