Poster
Better Diffusion Models Further Improve Adversarial Training
Zekai Wang · Tianyu Pang · Chao Du · Min Lin · Weiwei Liu · Shuicheng Yan
Event URL: https://github.com/wzekai99/DM-Improves-AT
It has been recognized that the data generated by the denoising diffusion probabilistic model (DDPM) improves adversarial training. After two years of rapid development in diffusion models, a question naturally arises: can better diffusion models further improve adversarial training? This paper gives an affirmative answer by employing the most recent diffusion model, which has higher efficiency ($\sim 20$ sampling steps) and better image quality (lower FID score) compared with DDPM. Our adversarially trained models achieve state-of-the-art performance on RobustBench using only generated data (no external datasets). Under the $\ell_\infty$-norm threat model with $\epsilon=8/255$, our models achieve $70.69\%$ and $42.67\%$ robust accuracy on CIFAR-10 and CIFAR-100, respectively, i.e., improving upon the previous state-of-the-art models by $+4.58\%$ and $+8.03\%$. Under the $\ell_2$-norm threat model with $\epsilon=128/255$, our models achieve $84.86\%$ on CIFAR-10 ($+4.44\%$). These results also beat previous works that use external data. We also provide compelling results on the SVHN and TinyImageNet datasets. Our code is at https://github.com/wzekai99/DM-Improves-AT.
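For concreteness, below is a minimal PyTorch sketch of the kind of training step the abstract describes: $\ell_\infty$ PGD adversarial training with $\epsilon=8/255$ on batches that mix real CIFAR-10 images with diffusion-generated ones. This is an illustration, not the authors' implementation; the toy model, the 1:3 real-to-generated mixing ratio, and the helper names `pgd_attack` and `train_step` are assumptions for the sketch (the exact recipe is in the linked repository).

```python
# Hedged sketch (not the authors' code): L-inf PGD adversarial training on a
# batch mixing real and diffusion-generated images. Model, data, and the
# real:generated ratio are placeholders for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """L-inf PGD with a random start, projected back into the eps-ball around x."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0.0, 1.0)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()       # gradient-sign ascent step
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project onto the eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)             # keep a valid pixel range
    return x_adv.detach()

def train_step(model, opt, x_real, y_real, x_gen, y_gen):
    """One optimizer step on adversarial examples crafted for the mixed batch."""
    x = torch.cat([x_real, x_gen])
    y = torch.cat([y_real, y_gen])
    model.eval()                       # freeze BN stats while crafting the attack
    x_adv = pgd_attack(model, x, y)
    model.train()
    opt.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    opt.step()
    return loss.item()

if __name__ == "__main__":
    # Tiny stand-in classifier and random tensors in place of CIFAR-10 batches.
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
    x_real, y_real = torch.rand(8, 3, 32, 32), torch.randint(0, 10, (8,))
    x_gen, y_gen = torch.rand(24, 3, 32, 32), torch.randint(0, 10, (24,))  # assumed 1:3 mix
    print(train_step(model, opt, x_real, y_real, x_gen, y_gen))
```

The key design point the abstract rests on is the data source, not the attack: the generated images stand in for (or augment) external data, so a faster, lower-FID diffusion model directly enlarges the usable training set at fixed sampling cost.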
Author Information
Zekai Wang (Wuhan University)
Tianyu Pang (Sea AI Lab)
https://scholar.google.com/citations?user=wYDbtFsAAAAJ&hl=en
Chao Du (Sea AI Lab)
Min Lin (Sea AI Lab)
Weiwei Liu (Wuhan University)
Shuicheng Yan
More from the Same Authors
- 2021: Boosting Transferability of Targeted Adversarial Examples via Hierarchical Generative Networks
  Xiao Yang · Yinpeng Dong · Tianyu Pang
- 2022: Robustness Verification for Contrastive Learning
  Zekai Wang · Weiwei Liu
- 2022: $O(N^2)$ Universal Antisymmetry in Fermionic Neural Networks
  Tianyu Pang · Shuicheng Yan · Min Lin
- 2023 Poster: Improving Adversarial Robustness of Deep Equilibrium Models with Explicit Regulations Along the Neural Dynamics
  Zonghan Yang · Peng Li · Tianyu Pang · Yang Liu
- 2023 Poster: Delving into Noisy Label Detection with Clean Data
  Chenglin Yu · Xinsong Ma · Weiwei Liu
- 2023 Poster: DDGR: Continual Learning with Deep Diffusion-based Generative Replay
  Rui Gao · Weiwei Liu
- 2023 Oral: Delving into Noisy Label Detection with Clean Data
  Chenglin Yu · Xinsong Ma · Weiwei Liu
- 2023 Poster: Nonparametric Generative Modeling with Conditional Sliced-Wasserstein Flows
  Chao Du · Tianbo Li · Tianyu Pang · Shuicheng Yan · Min Lin
- 2023 Poster: Bag of Tricks for Training Data Extraction from Language Models
  Weichen Yu · Tianyu Pang · Qian Liu · Chao Du · Bingyi Kang · Yan Huang · Min Lin · Shuicheng Yan
- 2022 Poster: Robustness Verification for Contrastive Learning
  Zekai Wang · Weiwei Liu
- 2022 Poster: Robustness and Accuracy Could Be Reconcilable by (Proper) Definition
  Tianyu Pang · Min Lin · Xiao Yang · Jun Zhu · Shuicheng Yan
- 2022 Oral: Robustness Verification for Contrastive Learning
  Zekai Wang · Weiwei Liu
- 2022 Spotlight: Robustness and Accuracy Could Be Reconcilable by (Proper) Definition
  Tianyu Pang · Min Lin · Xiao Yang · Jun Zhu · Shuicheng Yan
- 2021 Workshop: A Blessing in Disguise: The Prospects and Perils of Adversarial Machine Learning
  Hang Su · Yinpeng Dong · Tianyu Pang · Eric Wong · Zico Kolter · Shuo Feng · Bo Li · Henry Liu · Dan Hendrycks · Francesco Croce · Leslie Rice · Tian Tian
- 2020 Poster: Adaptive Adversarial Multi-task Representation Learning
  Yuren Mao · Weiwei Liu · Xuemin Lin
- 2019 Poster: On the Spectral Bias of Neural Networks
  Nasim Rahaman · Aristide Baratin · Devansh Arpit · Felix Draxler · Min Lin · Fred Hamprecht · Yoshua Bengio · Aaron Courville
- 2019 Oral: On the Spectral Bias of Neural Networks
  Nasim Rahaman · Aristide Baratin · Devansh Arpit · Felix Draxler · Min Lin · Fred Hamprecht · Yoshua Bengio · Aaron Courville
- 2019 Poster: Improving Adversarial Robustness via Promoting Ensemble Diversity
  Tianyu Pang · Kun Xu · Chao Du · Ning Chen · Jun Zhu
- 2019 Poster: Sparse Extreme Multi-label Learning with Oracle Property
  Weiwei Liu · Xiaobo Shen
- 2019 Oral: Sparse Extreme Multi-label Learning with Oracle Property
  Weiwei Liu · Xiaobo Shen
- 2019 Oral: Improving Adversarial Robustness via Promoting Ensemble Diversity
  Tianyu Pang · Kun Xu · Chao Du · Ning Chen · Jun Zhu
- 2018 Poster: Max-Mahalanobis Linear Discriminant Analysis Networks
  Tianyu Pang · Chao Du · Jun Zhu
- 2018 Oral: Max-Mahalanobis Linear Discriminant Analysis Networks
  Tianyu Pang · Chao Du · Jun Zhu