
 
Poster
Stratified Adversarial Robustness with Rejection
Jiefeng Chen · Jayaram Raghuram · Jihye Choi · Xi Wu · Yingyu Liang · Somesh Jha

Wed Jul 26 05:00 PM -- 06:30 PM (PDT) @ Exhibit Hall 1 #816
Event URL: https://github.com/jfc43/stratified-adv-rej

Recently, there has been emerging interest in adversarially training a classifier with a rejection option (also known as a selective classifier) to boost adversarial robustness. While rejection can incur a cost in many applications, existing studies typically associate zero cost with rejecting perturbed inputs, which can result in the rejection of numerous slightly-perturbed inputs that could be correctly classified. In this work, we study adversarially-robust classification with rejection in the stratified rejection setting, where the rejection cost is modeled by rejection loss functions monotonically non-increasing in the perturbation magnitude. We theoretically analyze the stratified rejection setting and propose a novel defense method -- Adversarial Training with Consistent Prediction-based Rejection (CPR) -- for building a robust selective classifier. Experiments on image datasets demonstrate that the proposed method significantly outperforms existing methods under strong adaptive attacks. For instance, on CIFAR-10, CPR reduces the total robust loss (for different rejection losses) by at least 7.3% under both seen and unseen attacks.
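To make the two key ideas in the abstract concrete, the toy sketch below illustrates (a) a rejection loss that is monotonically non-increasing in the perturbation magnitude, and (b) a consistency-based rejection rule in the spirit of CPR, which accepts a prediction only when it is stable under small perturbations. All names, thresholds, and the one-dimensional "classifier" are illustrative assumptions, not the authors' implementation (see the repository linked above for that).

```python
from typing import Optional

def rejection_loss(epsilon: float, alpha: float = 0.1) -> float:
    """Illustrative rejection loss: monotonically non-increasing in the
    perturbation magnitude epsilon. Rejecting a clean input (epsilon = 0)
    costs 1; rejecting inputs perturbed beyond alpha costs 0."""
    return max(0.0, 1.0 - epsilon / alpha)

def classify(x: float) -> int:
    """Toy binary classifier on the real line: predict by the sign of x."""
    return 1 if x >= 0 else 0

def predict_with_rejection(x: float, delta: float = 0.05) -> Optional[int]:
    """Consistency-based rejection rule (sketch): accept the prediction
    only if it agrees across small perturbations of the input; otherwise
    reject, signalled here by returning None."""
    preds = {classify(x + d) for d in (-delta, 0.0, delta)}
    return preds.pop() if len(preds) == 1 else None
```

For example, `predict_with_rejection(1.0)` returns a confident label because the prediction is unchanged under the probed perturbations, while an input near the decision boundary (e.g. `x = 0.0`) yields inconsistent predictions and is rejected. The monotone `rejection_loss` captures the stratified setting's premise that rejecting a clean input should cost more than rejecting a heavily perturbed one.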

Author Information

Jiefeng Chen (University of Wisconsin-Madison)
Jayaram Raghuram (University of Wisconsin, Madison)
Jihye Choi (University of Wisconsin-Madison)
Xi Wu (Google)

Completed my PhD in Computer Science from UW-Madison, advised by Jeffrey F. Naughton and Somesh Jha. Now a software engineer at Google. [Google PhD Fellow 2016 in privacy and security](https://ai.googleblog.com/2016/03/announcing-2016-google-phd-fellows-for.html).

Yingyu Liang (University of Wisconsin-Madison)
Somesh Jha (University of Wisconsin, Madison)
