Although deep neural networks have achieved significant progress on various tasks, often enhanced by model ensembles, existing high-performance models can be vulnerable to adversarial attacks. Many efforts have been devoted to enhancing the robustness of individual networks and then constructing a straightforward ensemble, e.g., by directly averaging the outputs, which ignores the interaction among networks. This paper presents a new method that exploits the interaction among individual networks to improve the robustness of ensemble models. Technically, we define a new notion of ensemble diversity in the adversarial setting as the diversity among the non-maximal predictions of individual members, and present an adaptive diversity promoting (ADP) regularizer to encourage this diversity, which leads to globally better robustness for the ensemble by making adversarial examples difficult to transfer among individual members. Our method is computationally efficient and compatible with defense methods acting on individual networks. Empirical results on various datasets verify that our method can improve adversarial robustness while maintaining state-of-the-art accuracy on normal examples.
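To make the abstract's description concrete, below is a minimal sketch, assuming PyTorch, of how an ADP-style regularized training loss could be computed for an ensemble of K members. The function name adp_loss, the tensor shapes, and the alpha/beta values are illustrative assumptions rather than the authors' released implementation; the sketch only follows the abstract's recipe of combining individual cross-entropy losses with an ensemble-entropy term and a diversity term over normalized non-maximal predictions.

```python
import torch
import torch.nn.functional as F

def adp_loss(member_logits, y, alpha=2.0, beta=0.5, eps=1e-12):
    """Illustrative ADP-style ensemble loss (not the authors' code).

    member_logits: list of K tensors, each of shape (batch, num_classes).
    y: integer class labels of shape (batch,).
    """
    probs = [F.softmax(logits, dim=1) for logits in member_logits]   # member predictions
    ens = torch.stack(probs, dim=0).mean(dim=0)                      # averaged ensemble prediction

    # Shannon entropy of the ensemble prediction
    entropy = -(ens * torch.log(ens + eps)).sum(dim=1)

    # Non-maximal predictions: drop the entry at the true label, renormalize to unit norm
    batch, num_classes = ens.shape
    mask = F.one_hot(y, num_classes).bool()
    non_max = []
    for p in probs:
        nm = p[~mask].view(batch, num_classes - 1)
        non_max.append(nm / (nm.norm(dim=1, keepdim=True) + eps))
    M = torch.stack(non_max, dim=2)                                  # (batch, num_classes-1, K)

    # Diversity as the log-determinant of the Gram matrix of the K normalized vectors
    gram = torch.bmm(M.transpose(1, 2), M)                           # (batch, K, K)
    jitter = eps * torch.eye(len(probs), device=gram.device)
    log_det = torch.logdet(gram + jitter)

    # Sum of individual cross-entropy losses minus the ADP regularizer
    ce = sum(F.cross_entropy(logits, y) for logits in member_logits)
    adp = alpha * entropy.mean() + beta * log_det.mean()
    return ce - adp
```

The log-determinant of the Gram matrix measures the volume spanned by the members' non-maximal prediction vectors, so it is largest when those vectors are mutually orthogonal; subtracting the regularizer therefore pushes members to agree on the correct class while disagreeing on the remaining classes, which is the intended barrier to adversarial transferability.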
Author Information
Tianyu Pang (Tsinghua University)
Taufik Xu (Tsinghua University)
Chao Du (Tsinghua University)
Ning Chen (Tsinghua University)
Jun Zhu (Tsinghua University)
Related Events (a corresponding poster, oral, or spotlight)
- 2019 Oral: Improving Adversarial Robustness via Promoting Ensemble Diversity »
  Tue Jun 11th, 06:30 -- 06:35 PM, Grand Ballroom
More from the Same Authors
- 2020 Poster: Understanding and Stabilizing GANs' Training Dynamics Using Control Theory »
  Kun Xu · Chongxuan Li · Jun Zhu · Bo Zhang
- 2020 Poster: Variance Reduction and Quasi-Newton for Particle-Based Variational Inference »
  Michael Zhu · Chang Liu · Jun Zhu
- 2020 Poster: VFlow: More Expressive Generative Flows with Variational Data Augmentation »
  Jianfei Chen · Cheng Lu · Biqi Chenli · Jun Zhu · Tian Tian
- 2020 Poster: Nonparametric Score Estimators »
  Yuhao Zhou · Jiaxin Shi · Jun Zhu
- 2018 Poster: Message Passing Stein Variational Gradient Descent »
  Jingwei Zhuo · Chang Liu · Jiaxin Shi · Jun Zhu · Ning Chen · Bo Zhang
- 2018 Poster: Racing Thompson: an Efficient Algorithm for Thompson Sampling with Non-conjugate Priors »
  Yichi Zhou · Jun Zhu · Jingwei Zhuo
- 2018 Oral: Message Passing Stein Variational Gradient Descent »
  Jingwei Zhuo · Chang Liu · Jiaxin Shi · Jun Zhu · Ning Chen · Bo Zhang
- 2018 Oral: Racing Thompson: an Efficient Algorithm for Thompson Sampling with Non-conjugate Priors »
  Yichi Zhou · Jun Zhu · Jingwei Zhuo
- 2018 Poster: Max-Mahalanobis Linear Discriminant Analysis Networks »
  Tianyu Pang · Chao Du · Jun Zhu
- 2018 Poster: Adversarial Attack on Graph Structured Data »
  Hanjun Dai · Hui Li · Tian Tian · Xin Huang · Lin Wang · Jun Zhu · Le Song
- 2018 Oral: Max-Mahalanobis Linear Discriminant Analysis Networks »
  Tianyu Pang · Chao Du · Jun Zhu
- 2018 Oral: Adversarial Attack on Graph Structured Data »
  Hanjun Dai · Hui Li · Tian Tian · Xin Huang · Lin Wang · Jun Zhu · Le Song
- 2018 Poster: Stochastic Training of Graph Convolutional Networks with Variance Reduction »
  Jianfei Chen · Jun Zhu · Le Song
- 2018 Poster: A Spectral Approach to Gradient Estimation for Implicit Distributions »
  Jiaxin Shi · Shengyang Sun · Jun Zhu
- 2018 Oral: A Spectral Approach to Gradient Estimation for Implicit Distributions »
  Jiaxin Shi · Shengyang Sun · Jun Zhu
- 2018 Oral: Stochastic Training of Graph Convolutional Networks with Variance Reduction »
  Jianfei Chen · Jun Zhu · Le Song
- 2017 Poster: Identify the Nash Equilibrium in Static Games with Random Payoffs »
  Yichi Zhou · Jialian Li · Jun Zhu
- 2017 Talk: Identify the Nash Equilibrium in Static Games with Random Payoffs »
  Yichi Zhou · Jialian Li · Jun Zhu