
Eliminating Adversarial Noise via Information Discard and Robust Representation Restoration
Dawei Zhou · Yukun Chen · Nannan Wang · Decheng Liu · Xinbo Gao · Tongliang Liu

Thu Jul 27 04:30 PM -- 06:00 PM (PDT) @ Exhibit Hall 1 #709

Deep neural networks (DNNs) are vulnerable to adversarial noise, and denoising-model-based defense is a major protection strategy. However, denoising models may fail, and even induce negative effects, in fully white-box scenarios. In this work, we start from the inherent properties of adversarial samples to break these limitations. Rather than solely learning a mapping from adversarial samples to natural samples, we aim to achieve denoising by destroying the spatial characteristics of adversarial noise while preserving the robust features of natural information. Motivated by this, we propose a defense based on information discard and robust representation restoration. Our method utilizes complementary masks to disrupt adversarial noise and guided denoising models to restore robust, predictive representations from the masked samples. Experimental results show that our method achieves competitive performance against white-box attacks and effectively reverses the negative effects of denoising models.
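To make the masking idea concrete, the following is a minimal, illustrative sketch (not the authors' implementation) of the complementary-masking step: two binary masks that are pixel-wise complements split an input into two views with disjoint kept pixels, disrupting the spatial structure of any additive adversarial perturbation. The guided denoising model that restores robust representations from each view is assumed and not shown here.

```python
import numpy as np

rng = np.random.default_rng(0)

def complementary_masks(shape, keep_prob=0.5):
    """Return a pair of 0/1 masks that are pixel-wise complements."""
    m = (rng.random(shape) < keep_prob).astype(np.float32)
    return m, 1.0 - m

# Stand-in for an input image (H x W x C); in practice this would be
# a potentially adversarial sample fed to the defense.
image = rng.random((32, 32, 3)).astype(np.float32)

m1, m2 = complementary_masks(image.shape[:2])
view1 = image * m1[..., None]  # keeps pixels the other view discards
view2 = image * m2[..., None]

# The two views partition the pixels: together they cover the image
# exactly once, but neither alone preserves the noise's spatial layout.
assert np.allclose(view1 + view2, image)
```

Each masked view would then be passed to a (guided) denoising model to restore predictive features; the keep probability and mask granularity are hypothetical choices made for illustration.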

Author Information

Dawei Zhou (Xidian University)
Yukun Chen (Xidian University)
Nannan Wang (Xidian University)
Decheng Liu (Xidian University)
Xinbo Gao (Chongqing University of Posts and Telecommunications)
Tongliang Liu (The University of Sydney)
