Poster
Attacks Which Do Not Kill Training Make Adversarial Learning Stronger
Jingfeng Zhang · Xilie Xu · Bo Han · Gang Niu · Lizhen Cui · Masashi Sugiyama · Mohan Kankanhalli

Tue Jul 14 06:00 PM – 06:45 PM & Wed Jul 15 04:00 AM – 04:45 AM (PDT) @ Virtual

Adversarial training based on the minimax formulation is necessary for obtaining the adversarial robustness of trained models. However, it is conservative, or even pessimistic, and it sometimes hurts natural generalization. In this paper, we raise a fundamental question: do we have to trade off natural generalization for adversarial robustness? We argue that adversarial training needs only employ adversarial data that are confidently misclassified to update the current model. We propose a novel formulation of friendly adversarial training (FAT): rather than employing the most adversarial data that maximize the loss, we search for the least adversarial data (i.e., friendly adversarial data) that minimize the loss, among the adversarial data that are confidently misclassified. Our formulation is easy to implement: we simply stop search algorithms for the most adversarial data, such as PGD (projected gradient descent), early; we call this early-stopped PGD. Theoretically, FAT is justified by an upper bound of the adversarial risk. Empirically, early-stopped PGD allows us to answer the earlier question negatively: adversarial robustness can indeed be achieved without compromising natural generalization.
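The early-stopped PGD search described in the abstract lends itself to a compact sketch. The PyTorch code below is a hedged illustration, not the authors' released implementation: the function name early_stopped_pgd and the hyperparameters (eps, step_size, max_steps, and a per-example step budget tau spent after the first misclassification) are assumed placeholders chosen for the example.

    import torch
    import torch.nn.functional as F

    def early_stopped_pgd(model, x, y, eps=8/255, step_size=2/255,
                          max_steps=10, tau=0):
        """Sketch of early-stopped PGD (friendly adversarial data search).

        Instead of running all max_steps PGD iterations, each example stops
        tau steps after it first becomes misclassified, yielding "friendly"
        adversarial data that just crosses the decision boundary.
        Illustrative only; names and defaults are assumptions.
        """
        model.eval()
        x_adv = x + eps * torch.empty_like(x).uniform_(-1, 1)  # random start
        x_adv = torch.clamp(x_adv, 0.0, 1.0).detach()
        # remaining PGD steps each example may take after misclassification
        budget = torch.full((x.size(0),), tau, device=x.device)
        active = torch.ones(x.size(0), dtype=torch.bool, device=x.device)

        for _ in range(max_steps):
            if not active.any():
                break
            x_adv.requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            grad = torch.autograd.grad(loss, x_adv)[0]
            with torch.no_grad():
                # one PGD ascent step, applied only to still-active examples
                mask = active.view(-1, *([1] * (x.dim() - 1))).float()
                x_adv = x_adv + mask * step_size * grad.sign()
                # project back onto the L_inf ball and the valid pixel range
                x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)
                x_adv = torch.clamp(x_adv, 0.0, 1.0)
                # early stop: once misclassified, spend tau more steps, then freeze
                wrong = model(x_adv).argmax(1) != y
                budget = torch.where(wrong & active, budget - 1, budget)
                active = active & ~(wrong & (budget < 0))
        return x_adv.detach()

With tau = 0 the search halts at the first step that misclassifies an example; larger values of tau allow a few extra ascent steps, trading friendliness for a stronger attack.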

Author Information

Jingfeng Zhang (National University of Singapore)

I am interested in robustness in machine learning.

Xilie Xu (Shandong University)
Bo Han (HKBU / RIKEN)
Gang Niu (RIKEN)

Gang Niu is currently a research scientist (indefinite-term) at RIKEN Center for Advanced Intelligence Project. He received the PhD degree in computer science from Tokyo Institute of Technology in 2013. Before joining RIKEN as a research scientist, he was a senior software engineer at Baidu and then an assistant professor at the University of Tokyo. He has published more than 70 journal articles and conference papers, including 14 NeurIPS (1 oral and 3 spotlights), 28 ICML, and 2 ICLR (1 oral) papers. He has served as an area chair 14 times, including ICML 2019–2021, NeurIPS 2019–2021, and ICLR 2021–2022.

Lizhen Cui (Shandong University)
Masashi Sugiyama (RIKEN / The University of Tokyo)
Mohan Kankanhalli (National University of Singapore)
