

Poster

On Breaking Deep Generative Model-based Defenses and Beyond

Yanzhi Chen · Renjie Xie · Zhanxing Zhu

Keywords: [ Adversarial Examples ] [ Computer Vision ] [ Deep Generative Models ]


Abstract:

Deep neural networks have been proven to be vulnerable to so-called adversarial attacks. Recently, there have been efforts to defend against such attacks with deep generative models. These defenses often predict by inverting the deep generative model rather than by simple feedforward propagation, which makes them difficult to attack due to obfuscated gradients. In this work, we develop a new gradient approximation attack to break these defenses. The idea is to view the inversion phase as a dynamical system, through which we extract the gradient w.r.t. the input by tracing its recent trajectory. An amortized strategy is further developed to accelerate the attack. Experiments show that our attack breaks state-of-the-art defenses (e.g., Defense-GAN, ABS) much more effectively than existing attacks. Additionally, our empirical results provide insights into the weaknesses of deep generative model-based defenses.
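The core idea can be illustrated with a minimal sketch (not the authors' implementation; the generator interface, step counts, and learning rate below are illustrative assumptions). The defense inverts a generator G by gradient descent on the latent z and classifies the reconstruction G(z*); the attack recovers gradients w.r.t. the input by keeping the computation graph through only the last few inversion steps, i.e., the recent trajectory of the dynamical system.

```python
# Sketch of a trajectory-based gradient approximation attack on an
# inversion defense. Illustrative only; hyperparameters are assumptions.
import torch
import torch.nn.functional as F

def invert(generator, x, z_dim=64, n_steps=200, lr=0.05, unroll_last=5):
    """Invert the generator: find z minimizing ||G(z) - x||^2.

    The first (n_steps - unroll_last) descent steps run without graph
    tracking (cheap); the final unroll_last steps keep the graph so that
    gradients w.r.t. x can flow through the recent trajectory.
    Assumes generator maps a (batch, z_dim) latent to an image like x.
    """
    z = torch.randn(x.shape[0], z_dim, device=x.device)
    # Early phase: plain gradient descent on z, no graph retained.
    for _ in range(n_steps - unroll_last):
        z = z.detach().requires_grad_(True)
        loss = ((generator(z) - x.detach()) ** 2).sum()
        g, = torch.autograd.grad(loss, z)
        z = z - lr * g
    # Recent trajectory: retain the graph so d z* / d x is captured.
    z = z.detach().requires_grad_(True)
    for _ in range(unroll_last):
        loss = ((generator(z) - x) ** 2).sum()
        g, = torch.autograd.grad(loss, z, create_graph=True)
        z = z - lr * g
    return z

def attack_grad(generator, classifier, x, y):
    """Approximate gradient of the loss w.r.t. x through the inversion."""
    x = x.detach().clone().requires_grad_(True)
    z_star = invert(generator, x)
    logits = classifier(generator(z_star))
    loss = F.cross_entropy(logits, y)
    return torch.autograd.grad(loss, x)[0]
```

In an actual attack, this approximate gradient would drive a standard iterative procedure such as PGD; the amortized strategy mentioned in the abstract would further cut the cost of repeated inversions, though that acceleration is beyond this sketch.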
