Adversarial purification refers to a class of defense methods that remove adversarial perturbations using a generative model. These methods make no assumptions about the form of the attack or the classification model, and can thus defend pre-existing classifiers against unseen threats. However, their performance currently falls behind that of adversarial training methods. In this work, we propose DiffPure, which uses diffusion models for adversarial purification: given an adversarial example, we first diffuse it with a small amount of noise following a forward diffusion process, and then recover the clean image through a reverse generative process. To evaluate our method against strong adaptive attacks in an efficient and scalable way, we propose using the adjoint method to compute full gradients of the reverse generative process. Extensive experiments on three image datasets (CIFAR-10, ImageNet, and CelebA-HQ) with three classifier architectures (ResNet, WideResNet, and ViT) demonstrate that our method achieves state-of-the-art results, outperforming current adversarial training and adversarial purification methods, often by a large margin.
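For intuition, the purification step can be sketched in a few lines of PyTorch. The snippet below is a minimal illustration that assumes a hypothetical pretrained noise-prediction network model(x, t) and a discrete DDPM noise schedule betas; the paper itself works with continuous-time score-based diffusion, so this is an approximation of the idea rather than the authors' implementation.

import torch

def purify(x_adv, model, betas, t_star=100):
    """Diffuse x_adv for t_star forward steps, then run the reverse process back to t = 0."""
    alphas = 1.0 - betas                       # betas: (T,) forward noise schedule, T >= t_star
    alpha_bars = torch.cumprod(alphas, dim=0)

    # Forward diffusion in closed form: q(x_t | x_0) = N(sqrt(a_bar) x_0, (1 - a_bar) I)
    a_bar = alpha_bars[t_star - 1]
    x_t = a_bar.sqrt() * x_adv + (1.0 - a_bar).sqrt() * torch.randn_like(x_adv)

    # Reverse generative process: ancestral DDPM sampling from t_star down to 0
    for t in reversed(range(t_star)):
        t_batch = torch.full((x_t.shape[0],), t, device=x_t.device, dtype=torch.long)
        eps = model(x_t, t_batch)              # hypothetical noise-prediction network
        mean = (x_t - (1.0 - alphas[t]) / (1.0 - alpha_bars[t]).sqrt() * eps) / alphas[t].sqrt()
        x_t = mean + betas[t].sqrt() * torch.randn_like(x_t) if t > 0 else mean
    return x_t                                 # purified image for the unmodified classifier

A defended prediction is then simply classifier(purify(x_adv, model, betas)). To evaluate such a pipeline against adaptive attacks, the paper backpropagates through the reverse generative process using the adjoint method, which avoids storing every intermediate state of the sampling loop.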
Author Information
Weili Nie (NVIDIA)
Brandon Guo (Caltech)
Yujia Huang (California Institute of Technology)
Working on generative models, brain-model connections using DNNs, and high-speed DNN computing architectures.
Chaowei Xiao (University of Michigan)
Arash Vahdat (NVIDIA Research)
Animashree Anandkumar (Caltech and NVIDIA)
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Poster: Diffusion Models for Adversarial Purification »
  Thu. Jul 21st through Fri. Jul 22nd, Hall E #203
More from the Same Authors
- 2021: Improving Adversarial Robustness in 3D Point Cloud Classification via Self-Supervisions »
  Jiachen Sun · Yulong Cao · Christopher Choy · Zhiding Yu · Chaowei Xiao · Anima Anandkumar · Zhuoqing Morley Mao
- 2021: Auditing AI models for Verified Deployment under Semantic Specifications »
  Homanga Bharadhwaj · De-An Huang · Chaowei Xiao · Anima Anandkumar · Animesh Garg
- 2021: Delving into the Remote Adversarial Patch in Semantic Segmentation »
  Yulong Cao · Jiachen Sun · Chaowei Xiao · Qi Chen · Zhuoqing Morley Mao
- 2022: Physics-Informed Neural Operator for Learning Partial Differential Equations »
  Zongyi Li · Hongkai Zheng · Nikola Kovachki · David Jin · Haoxuan Chen · Burigede Liu · Kamyar Azizzadenesheli · Animashree Anandkumar
- 2023: LeanDojo: Theorem Proving with Retrieval-Augmented Language Models »
  Kaiyu Yang · Aidan Swope · Alexander Gu · Rahul Chalamala · Shixing Yu · Saad Godil · Ryan Prenger · Animashree Anandkumar
- 2023: Unsupervised Discovery of Steerable Factors in Graphs »
  Shengchao Liu · Chengpeng Wang · Weili Nie · Hanchen Wang · Jiarui Lu · Bolei Zhou · Jian Tang
- 2023: ChatGPT-powered Conversational Drug Editing Using Retrieval and Domain Feedback »
  Shengchao Liu · Jiongxiao Wang · Yijin Yang · Chengpeng Wang · Ling Liu · Hongyu Guo · Chaowei Xiao
- 2023: Panel Discussion »
  Chenlin Meng · Yang Song · Yilun Xu · Ricky T. Q. Chen · Charlotte Bunne · Arash Vahdat
- 2023 Workshop: New Frontiers in Learning, Control, and Dynamical Systems »
  Valentin De Bortoli · Charlotte Bunne · Guan-Horng Liu · Tianrong Chen · Maxim Raginsky · Pratik Chaudhari · Melanie Zeilinger · Animashree Anandkumar
- 2023 Poster: Fast Sampling of Diffusion Models via Operator Learning »
  Hongkai Zheng · Weili Nie · Arash Vahdat · Kamyar Azizzadenesheli · Anima Anandkumar
- 2023 Poster: A Critical Revisit of Adversarial Robustness in 3D Point Cloud Recognition with Diffusion-Driven Purification »
  Jiachen Sun · Jiongxiao Wang · Weili Nie · Zhiding Yu · Zhuoqing Morley Mao · Chaowei Xiao
- 2023 Poster: CodeIPPrompt: Intellectual Property Infringement Assessment of Code Language Models »
  Zhiyuan Yu · Yuhao Wu · Ning Zhang · Chenguang Wang · Yevgeniy Vorobeychik · Chaowei Xiao
- 2023 Poster: I$^2$SB: Image-to-Image Schrödinger Bridge »
  Guan-Horng Liu · Arash Vahdat · De-An Huang · Evangelos Theodorou · Weili Nie · Anima Anandkumar
- 2023 Poster: Loss-Guided Diffusion Models for Plug-and-Play Controllable Generation »
  Jiaming Song · Qinsheng Zhang · Hongxu Yin · Morteza Mardani · Ming-Yu Liu · Jan Kautz · Yongxin Chen · Arash Vahdat
- 2022 Poster: Langevin Monte Carlo for Contextual Bandits »
  Pan Xu · Hongkai Zheng · Eric Mazumdar · Kamyar Azizzadenesheli · Animashree Anandkumar
- 2022 Poster: Understanding The Robustness in Vision Transformers »
  Zhou Daquan · Zhiding Yu · Enze Xie · Chaowei Xiao · Animashree Anandkumar · Jiashi Feng · Jose M. Alvarez
- 2022 Spotlight: Understanding The Robustness in Vision Transformers »
  Zhou Daquan · Zhiding Yu · Enze Xie · Chaowei Xiao · Animashree Anandkumar · Jiashi Feng · Jose M. Alvarez
- 2022 Spotlight: Langevin Monte Carlo for Contextual Bandits »
  Pan Xu · Hongkai Zheng · Eric Mazumdar · Kamyar Azizzadenesheli · Animashree Anandkumar
- 2021: Contributed Talk-4. Auditing AI models for Verified Deployment under Semantic Specifications »
  Chaowei Xiao
- 2021: Contributed Talk-3. FERMI: Fair Empirical Risk Minimization Via Exponential Rényi Mutual Information »
  Chaowei Xiao
- 2021: Contributed Talk-2. Do Humans Trust Advice More if it Comes from AI? An Analysis of Human-AI Interactions »
  Chaowei Xiao
- 2021: Kai-Wei Chang. Societal Bias in Language Generation »
  Chaowei Xiao
- 2021: Invited Speaker: Animashree Anandkumar: Stability-aware reinforcement learning in dynamical systems »
  Animashree Anandkumar
- 2021: Contributed Talk-1. Machine Learning API Shift Assessments »
  Chaowei Xiao
- 2021: Nicolas Papernot. What Does it Mean for ML to be Trustworthy »
  Chaowei Xiao
- 2021: Olga Russakovsky. Revealing, Quantifying, Analyzing and Mitigating Bias in Visual Recognition »
  Chaowei Xiao
- 2021: Jun Zhu. Understand and Benchmark Adversarial Robustness of Deep Learning »
  Chaowei Xiao
- 2021: Anima Anandkumar. Opening remarks »
  Chaowei Xiao
- 2021 Workshop: Workshop on Socially Responsible Machine Learning »
  Chaowei Xiao · Animashree Anandkumar · Mingyan Liu · Dawn Song · Raquel Urtasun · Jieyu Zhao · Xueru Zhang · Cihang Xie · Xinyun Chen · Bo Li
- 2020: Q&A: Anima Anandkumar »
  Animashree Anandkumar · Jessica Forde
- 2020: Invited Talks: Anima Anandkumar »
  Animashree Anandkumar
- 2020 Poster: Undirected Graphical Models as Approximate Posteriors »
  Arash Vahdat · Evgeny Andriyash · William Macready
- 2020: Mentoring Panel: Doina Precup, Deborah Raji, Anima Anandkumar, Angjoo Kanazawa and Sinead Williamson (moderator) »
  Doina Precup · Inioluwa Raji · Angjoo Kanazawa · Sinead A Williamson · Animashree Anandkumar
- 2018 Poster: StrassenNets: Deep Learning with a Multiplication Budget »
  Michael Tschannen · Aran Khanna · Animashree Anandkumar
- 2018 Oral: StrassenNets: Deep Learning with a Multiplication Budget »
  Michael Tschannen · Aran Khanna · Animashree Anandkumar