Spotlight
Building Robust Ensembles via Margin Boosting
Dinghuai Zhang · Hongyang Zhang · Aaron Courville · Yoshua Bengio · Pradeep Ravikumar · Arun Sai Suggala

Thu Jul 21 11:10 AM -- 11:15 AM (PDT)

In the context of adversarial robustness, a single model usually lacks the capacity to defend against all possible adversarial attacks and consequently has sub-optimal robustness. An emerging line of work has therefore focused on learning an ensemble of neural networks to defend against adversarial attacks. In this work, we take a principled approach to building robust ensembles. We view this problem from the perspective of margin boosting and develop an algorithm for learning an ensemble with maximum margin. Through extensive empirical evaluation on benchmark datasets, we show that our algorithm not only outperforms existing ensembling techniques but also outperforms large models trained in an end-to-end fashion. An important byproduct of our work is a margin-maximizing cross-entropy (MCE) loss, a better alternative to the standard cross-entropy (CE) loss. Empirically, we show that replacing the CE loss in state-of-the-art adversarial training techniques with our MCE loss leads to significant performance improvements.
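To make the margin notion concrete: for a multiclass classifier, the margin of an example is the gap between the true-class score and the strongest competing class score, and margin-aware losses keep applying pressure until this gap is large. The sketch below (plain NumPy) computes that multiclass margin and contrasts standard CE with one common margin-aware construction that subtracts a target margin `tau` from the true-class logit before the softmax. The function names and the `tau`-shifted formulation are illustrative assumptions for exposition, not necessarily the paper's exact MCE definition.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def multiclass_margin(logits, y):
    """Margin f_y(x) - max_{y' != y} f_{y'}(x); positive iff correct."""
    n = logits.shape[0]
    true_score = logits[np.arange(n), y]
    masked = logits.copy()
    masked[np.arange(n), y] = -np.inf  # exclude the true class
    runner_up = masked.max(axis=1)
    return true_score - runner_up

def cross_entropy(logits, y):
    """Standard CE loss, -log p_y(x)."""
    p = softmax(logits)
    return -np.log(p[np.arange(logits.shape[0]), y])

def margin_ce(logits, y, tau=1.0):
    """Illustrative margin-aware CE (an assumption, not the paper's MCE):
    subtract a margin tau from the true-class logit, so the loss stays
    large until the true class beats the others by at least tau."""
    adjusted = logits.copy()
    adjusted[np.arange(logits.shape[0]), y] -= tau
    return cross_entropy(adjusted, y)
```

Because the shifted loss upper-bounds plain CE on the same example, minimizing it forces not just correctness but a separation of at least `tau` between the true class and its nearest rival.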

Author Information

Dinghuai Zhang (Mila, Meta)
Hongyang Zhang (University of Waterloo)
Aaron Courville (Université de Montréal)
Yoshua Bengio (Mila - Quebec AI Institute)
Pradeep Ravikumar (Carnegie Mellon University)
Arun Sai Suggala (Carnegie Mellon University)
