Oral
Improving Adversarial Robustness via Promoting Ensemble Diversity
Tianyu Pang · Kun Xu · Chao Du · Ning Chen · Jun Zhu

Tue Jun 11th 11:30 -- 11:35 AM @ Grand Ballroom

Though deep neural networks have achieved significant progress on various tasks, often enhanced by model ensembles, existing high-performance models can be vulnerable to adversarial attacks. Many efforts have been devoted to enhancing the robustness of individual networks and then constructing a straightforward ensemble, e.g., by directly averaging the outputs, which ignores the interaction among networks. This paper presents a new method that explores the interaction among individual networks to improve robustness for ensemble models. Technically, we define a new notion of ensemble diversity in the adversarial setting as the diversity among non-maximal predictions of individual members, and present an adaptive diversity promoting (ADP) regularizer to encourage the diversity, which leads to globally better robustness for the ensemble by making adversarial examples difficult to transfer among individual members. Our method is computationally efficient and compatible with the defense methods acting on individual networks. Empirical results on various datasets verify that our method can improve adversarial robustness while maintaining state-of-the-art accuracy on normal examples.
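The abstract's two ingredients — an ensemble-entropy term and a diversity term over the members' non-maximal predictions — can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the function name `adp_regularizer`, the illustrative `alpha`/`beta` values, and the use of a Gram determinant to measure diversity among the normalized non-maximal prediction vectors are assumptions about how such a regularizer could be realized.

```python
import numpy as np

def adp_regularizer(probs, y, alpha=2.0, beta=0.5, eps=1e-12):
    """Sketch of an adaptive diversity promoting (ADP) term for one example.

    probs: (K, L) array, each row one ensemble member's predicted class
           probabilities over L classes; y: true-label index.
    alpha, beta: illustrative coefficients on the ensemble-entropy and
           log-determinant diversity terms (not tuned settings).
    """
    K, L = probs.shape
    # Shannon entropy of the averaged (ensemble) prediction.
    mean_p = probs.mean(axis=0)
    entropy = -np.sum(mean_p * np.log(mean_p + eps))
    # Non-maximal predictions: drop the true-class column, then
    # renormalize each member's remaining (L-1)-vector to unit L2 norm.
    non_max = np.delete(probs, y, axis=1)                           # (K, L-1)
    non_max = non_max / (np.linalg.norm(non_max, axis=1, keepdims=True) + eps)
    # Diversity as the Gram determinant of the K unit vectors: the
    # squared volume they span, maximal when mutually orthogonal.
    gram = non_max @ non_max.T                                      # (K, K)
    diversity = np.linalg.det(gram)
    return alpha * entropy + beta * np.log(diversity + eps)
```

In this sketch the regularizer is largest when the members agree on the true class yet disagree on their non-maximal predictions, which is the intuition behind making adversarial examples hard to transfer across members: a regularizer of this form would be added to (subtracted from) the ensemble's cross-entropy training loss.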

Author Information

Tianyu Pang (Tsinghua University)
Kun Xu (Tsinghua University)
Chao Du (Tsinghua University)
Ning Chen
Jun Zhu (Tsinghua University)
