

Poster

Eliminating Adversarial Noise via Information Discard and Robust Representation Restoration

Dawei Zhou · Yukun Chen · Nannan Wang · Decheng Liu · Xinbo Gao · Tongliang Liu

Exhibit Hall 1 #709

Abstract:

Deep neural networks (DNNs) are vulnerable to adversarial noise. Defenses based on denoising models are a major protection strategy. However, denoising models may fail, and even induce negative effects, in fully white-box scenarios. In this work, we start from the inherent properties of adversarial samples to overcome these limitations. Rather than solely learning a mapping from adversarial samples to natural samples, we aim to achieve denoising by destroying the spatial characteristics of adversarial noise while preserving the robust features of natural information. Motivated by this, we propose a defense based on information discard and robust representation restoration. Our method utilizes complementary masks to disrupt adversarial noise and guided denoising models to restore robust, predictive representations from the masked samples. Experimental results show that our method performs competitively against white-box attacks and effectively reverses the negative effects of denoising models.
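To make the masking idea concrete, below is a minimal PyTorch sketch of the complementary-masking step described above. The patch size, keep ratio, denoiser architecture, and the averaging used to recombine the two masked views are all illustrative assumptions; the abstract does not specify these details, and the actual method should be taken from the paper itself.

```python
# Illustrative sketch only: mask granularity, denoiser design, and the
# recombination rule are assumptions, not the paper's specification.
import torch
import torch.nn as nn
import torch.nn.functional as F


def complementary_masks(x: torch.Tensor, patch: int = 8, keep_ratio: float = 0.5):
    """Build a random patch-level binary mask M and its complement 1 - M.

    Applying both masks to an input discards different parts of its
    spatial content, which disrupts the spatial structure of any
    additive adversarial noise.
    """
    b, _, h, w = x.shape
    gh, gw = h // patch, w // patch
    # One random keep/drop decision per patch, upsampled to pixel resolution.
    grid = (torch.rand(b, 1, gh, gw, device=x.device) < keep_ratio).float()
    m = F.interpolate(grid, size=(h, w), mode="nearest")
    return x * m, x * (1.0 - m)


class TinyDenoiser(nn.Module):
    """Stand-in for the guided denoising model (architecture assumed)."""

    def __init__(self, channels: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)


def restore(x_adv: torch.Tensor, denoiser: nn.Module) -> torch.Tensor:
    """Discard information via complementary masks, then restore a
    prediction-ready image by averaging the two denoised views."""
    view_a, view_b = complementary_masks(x_adv)
    return 0.5 * (denoiser(view_a) + denoiser(view_b))


if __name__ == "__main__":
    x = torch.rand(2, 3, 32, 32)   # placeholder adversarial batch
    out = restore(x, TinyDenoiser())
    print(out.shape)               # torch.Size([2, 3, 32, 32])
```

Because each mask removes spatial regions at random, the adversarial perturbation cannot survive intact in either masked view, while natural image structure remains recoverable by the denoiser trained to restore it.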
