
On Breaking Deep Generative Model-based Defenses and Beyond
Yanzhi Chen · Renjie Xie · Zhanxing Zhu

Thu Jul 16 06:00 AM -- 06:45 AM & Thu Jul 16 07:00 PM -- 07:45 PM (PDT)

Deep neural networks have been proven vulnerable to so-called adversarial attacks. Recently, there have been efforts to defend against such attacks with deep generative models. These defenses often predict by inverting the deep generative model rather than by simple feedforward propagation. Such defenses are difficult to attack due to obfuscated gradients. In this work, we develop a new gradient approximation attack to break these defenses. The idea is to view the inversion phase as a dynamical system, through which we extract the gradient w.r.t. the input by tracing its recent trajectory. An amortized strategy is further developed to accelerate the attack. Experiments show that our attack breaks state-of-the-art defenses (e.g., DefenseGAN, ABS) much more effectively than other attacks. Additionally, our empirical results provide insights for understanding the weaknesses of deep generative model-based defenses.
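The core idea above — treating the iterative inversion as a dynamical system and recovering the input gradient from its recent trajectory — can be illustrated on a toy linear "generator". This is a minimal sketch under simplifying assumptions (a linear map in place of a deep generative model, gradient-descent inversion of `||Az - x||^2`, and a hypothetical trajectory window `K`); it is not the authors' actual implementation. In this linear case the update `z_{t+1} = z_t - eta * A^T (A z_t - x)` has a known Jacobian recursion, so we can check the trajectory-traced gradient against the exact fixed-point Jacobian:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear "generator" g(z) = A z, a stand-in for a deep generative
# model. Orthonormal columns keep the toy inversion well-conditioned.
d_z, d_x = 4, 6
A, _ = np.linalg.qr(rng.normal(size=(d_x, d_z)))

x = rng.normal(size=d_x)
eta, T, K = 0.2, 200, 50  # step size, inversion steps, trajectory window

# Inversion phase as a dynamical system:
#   z_{t+1} = z_t - eta * A^T (A z_t - x)
z = np.zeros(d_z)
J = np.zeros((d_z, d_x))          # running estimate of dz/dx
M = np.eye(d_z) - eta * A.T @ A   # state-transition Jacobian dz_{t+1}/dz_t
for t in range(T):
    z = z - eta * (A.T @ (A @ z - x))
    # Trace only the last K steps of the trajectory:
    if t >= T - K:
        J = M @ J + eta * A.T     # dz_{t+1}/dx = M @ dz_t/dx + eta * A^T

# Exact Jacobian at the fixed point: z*(x) solves A^T A z = A^T x,
# so dz*/dx = (A^T A)^{-1} A^T.
J_exact = np.linalg.solve(A.T @ A, A.T)
print("max |J - J_exact| =", np.max(np.abs(J - J_exact)))
```

Because the recursion contracts at rate `(1 - eta)` here, tracing only the last `K` steps already recovers the fixed-point Jacobian to high accuracy; an attacker can then chain this approximate `dz*/dx` with the classifier's gradient to craft adversarial inputs despite the obfuscated-gradient inversion step.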

Author Information

Yanzhi Chen (University of Edinburgh)
Renjie Xie (Southeast University)
Zhanxing Zhu (Peking University)
