While adversarial training is considered a standard defense against adversarial attacks on image classifiers, adversarial purification, which purifies attacked images into clean images with a standalone purification model, has shown promise as an alternative defense. Recently, an energy-based model (EBM) trained with Markov chain Monte Carlo (MCMC) has been highlighted as a purification model, where an attacked image is purified by running a long Markov chain using the gradients of the EBM. However, the practicality of adversarial purification with an EBM remains questionable because the number of MCMC steps required for such purification is too large. In this paper, we propose a novel adversarial purification method based on an EBM trained with denoising score matching (DSM). We show that an EBM trained with DSM can quickly purify attacked images within a few steps. We further introduce a simple yet effective randomized purification scheme that injects random noise into images before purification. The injected noise screens the adversarial perturbations imposed on the images and brings them into the regime where the EBM can denoise well. We show that our purification method is robust against various attacks and achieves state-of-the-art performance.
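The abstract's randomized purification scheme can be read as two steps: inject random noise, then take a few score-guided denoising steps. Below is a minimal sketch of that idea, assuming a hypothetical DSM-trained score network `score_fn(x, sigma)` that approximates the noisy-data score; the noise level, step size, and number of steps are illustrative placeholders, not the paper's settings.

```python
import torch

def randomized_purify(x_attacked, score_fn, noise_std=0.25, n_steps=5, step_size=0.1):
    """Sketch of randomized purification with a DSM-trained score model.

    x_attacked: adversarially perturbed images, shape (B, C, H, W), values in [0, 1].
    score_fn:   assumed interface s(x, sigma) ~ grad_x log p_sigma(x); not the authors' code.
    """
    # Step 1: inject random Gaussian noise to screen the (smaller) adversarial perturbation.
    x = x_attacked + noise_std * torch.randn_like(x_attacked)

    # Step 2: a short run of score-guided updates that move the noisy image
    # back toward the clean-data manifold (only a few steps, per the paper's claim).
    for _ in range(n_steps):
        x = x + step_size * score_fn(x, noise_std)
        x = x.clamp(0.0, 1.0)
    return x
```

The purified output would then be fed to the classifier as usual; the design rationale is that the injected noise dominates the adversarial perturbation, placing the input in exactly the noise regime the DSM-trained model was trained to denoise.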
Author Information
Jongmin Yoon (KAIST)
Sung Ju Hwang (KAIST, AITRICS)
Juho Lee (KAIST, AITRICS)
Related Events (a corresponding poster, oral, or spotlight)
- 2021 Poster: Adversarial Purification with Score-based Generative Models
  Thu. Jul 22nd, 04:00 -- 06:00 PM, Room: Virtual
More from the Same Authors
- 2023 Poster: Probabilistic Imputation for Time-series Classification with Missing Data
  SeungHyun Kim · Hyunsu Kim · Eunggu Yun · Hwangrae Lee · Jaehun Lee · Juho Lee
- 2023 Poster: Traversing Between Modes in Function Space for Fast Ensembling
  Eunggu Yun · Hyungi Lee · Giung Nam · Juho Lee
- 2023 Poster: Regularizing Towards Soft Equivariance Under Mixed Symmetries
  Hyunsu Kim · Hyungi Lee · Hongseok Yang · Juho Lee
- 2023 Poster: Scalable Set Encoding with Universal Mini-Batch Consistency and Unbiased Full Set Gradient Approximation
  Jeffrey Willette · Seanie Lee · Bruno Andreis · Kenji Kawaguchi · Juho Lee · Sung Ju Hwang
- 2022 Poster: Score-based Generative Modeling of Graphs via the System of Stochastic Differential Equations
  Jaehyeong Jo · Seul Lee · Sung Ju Hwang
- 2022 Spotlight: Score-based Generative Modeling of Graphs via the System of Stochastic Differential Equations
  Jaehyeong Jo · Seul Lee · Sung Ju Hwang
- 2022 Poster: Improving Ensemble Distillation With Weight Averaging and Diversifying Perturbation
  Giung Nam · Hyungi Lee · Byeongho Heo · Juho Lee
- 2022 Poster: Forget-free Continual Learning with Winning Subnetworks
  Haeyong Kang · Rusty Mina · Sultan Rizky Hikmawan Madjid · Jaehong Yoon · Mark Hasegawa-Johnson · Sung Ju Hwang · Chang Yoo
- 2022 Poster: Set Based Stochastic Subsampling
  Bruno Andreis · Seanie Lee · A. Tuan Nguyen · Juho Lee · Eunho Yang · Sung Ju Hwang
- 2022 Poster: Bitwidth Heterogeneous Federated Learning with Progressive Weight Dequantization
  Jaehong Yoon · Geon Park · Wonyong Jeong · Sung Ju Hwang
- 2022 Spotlight: Forget-free Continual Learning with Winning Subnetworks
  Haeyong Kang · Rusty Mina · Sultan Rizky Hikmawan Madjid · Jaehong Yoon · Mark Hasegawa-Johnson · Sung Ju Hwang · Chang Yoo
- 2022 Spotlight: Bitwidth Heterogeneous Federated Learning with Progressive Weight Dequantization
  Jaehong Yoon · Geon Park · Wonyong Jeong · Sung Ju Hwang
- 2022 Spotlight: Set Based Stochastic Subsampling
  Bruno Andreis · Seanie Lee · A. Tuan Nguyen · Juho Lee · Eunho Yang · Sung Ju Hwang
- 2022 Spotlight: Improving Ensemble Distillation With Weight Averaging and Diversifying Perturbation
  Giung Nam · Hyungi Lee · Byeongho Heo · Juho Lee
- 2021 Poster: Large-Scale Meta-Learning with Continual Trajectory Shifting
  JaeWoong Shin · Hae Beom Lee · Boqing Gong · Sung Ju Hwang
- 2021 Spotlight: Large-Scale Meta-Learning with Continual Trajectory Shifting
  JaeWoong Shin · Hae Beom Lee · Boqing Gong · Sung Ju Hwang
- 2021 Poster: Learning to Generate Noise for Multi-Attack Robustness
  Divyam Madaan · Jinwoo Shin · Sung Ju Hwang
- 2021 Spotlight: Learning to Generate Noise for Multi-Attack Robustness
  Divyam Madaan · Jinwoo Shin · Sung Ju Hwang
- 2021 Poster: Meta-StyleSpeech : Multi-Speaker Adaptive Text-to-Speech Generation
  Dongchan Min · Dong Bok Lee · Eunho Yang · Sung Ju Hwang
- 2021 Spotlight: Meta-StyleSpeech : Multi-Speaker Adaptive Text-to-Speech Generation
  Dongchan Min · Dong Bok Lee · Eunho Yang · Sung Ju Hwang
- 2021 Poster: Federated Continual Learning with Weighted Inter-client Transfer
  Jaehong Yoon · Wonyong Jeong · GiWoong Lee · Eunho Yang · Sung Ju Hwang
- 2021 Spotlight: Federated Continual Learning with Weighted Inter-client Transfer
  Jaehong Yoon · Wonyong Jeong · GiWoong Lee · Eunho Yang · Sung Ju Hwang
- 2020 Poster: Cost-Effective Interactive Attention Learning with Neural Attention Processes
  Jay Heo · Junhyeon Park · Hyewon Jeong · Kwang Joon Kim · Juho Lee · Eunho Yang · Sung Ju Hwang
- 2020 Poster: Meta Variance Transfer: Learning to Augment from the Others
  Seong-Jin Park · Seungju Han · Ji-won Baek · Insoo Kim · Juhwan Song · Hae Beom Lee · Jae-Joon Han · Sung Ju Hwang
- 2020 Poster: Self-supervised Label Augmentation via Input Transformations
  Hankook Lee · Sung Ju Hwang · Jinwoo Shin
- 2020 Poster: Adversarial Neural Pruning with Latent Vulnerability Suppression
  Divyam Madaan · Jinwoo Shin · Sung Ju Hwang
- 2019 Poster: Learning What and Where to Transfer
  Yunhun Jang · Hankook Lee · Sung Ju Hwang · Jinwoo Shin
- 2019 Oral: Learning What and Where to Transfer
  Yunhun Jang · Hankook Lee · Sung Ju Hwang · Jinwoo Shin
- 2018 Poster: Deep Asymmetric Multi-task Feature Learning
  Hae Beom Lee · Eunho Yang · Sung Ju Hwang
- 2018 Oral: Deep Asymmetric Multi-task Feature Learning
  Hae Beom Lee · Eunho Yang · Sung Ju Hwang