Poster
Reinforcing Adversarial Robustness using Model Confidence Induced by Adversarial Training
Xi Wu · Wooyeong Jang · Jiefeng Chen · Lingjiao Chen · Somesh Jha

Thu Jul 12 09:15 AM -- 12:00 PM (PDT) @ Hall B #54
In this paper we study how to leverage \emph{confidence information} induced by adversarial training to reinforce the adversarial robustness of a given adversarially trained model. A natural measure of confidence is $\|F(\mathbf{x})\|_\infty$, i.e., how confident $F$ is about its prediction. We start by analyzing the adversarial training formulation proposed by Madry et al. We demonstrate that, under a variety of instantiations, even a somewhat good solution to their objective induces confidence that acts as a discriminator, distinguishing between correct and incorrect model predictions in a neighborhood of a point sampled from the underlying distribution. Based on this, we propose Highly Confident Near Neighbor (HCNN), a framework that combines confidence information and nearest neighbor search to reinforce the adversarial robustness of a base model. We give algorithms in this framework and perform a detailed empirical study. We report encouraging experimental results that support our analysis, and also discuss problems we observed with existing adversarial training.
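A minimal, hypothetical sketch of the two ingredients named in the abstract: the confidence measure $\|F(\mathbf{x})\|_\infty$ (the largest predicted probability) and an HCNN-style prediction that searches a small neighborhood of the input for a point on which the base model is most confident. The model callable `F`, the sampling scheme, `radius`, and `n_samples` are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def confidence(F, x):
    """Confidence of model F at input x, measured as ||F(x)||_inf,
    i.e. the largest entry of the predicted probability vector."""
    probs = F(x)  # F is assumed to return a probability vector over classes
    return float(np.max(probs))

def hcnn_predict(F, x, radius=0.1, n_samples=100, rng=None):
    """Illustrative HCNN-style prediction (assumption, not the authors' method):
    sample points in an L_inf ball of the given radius around x, keep the one
    on which F is most confident, and return F's prediction at that point."""
    rng = np.random.default_rng() if rng is None else rng
    best_x, best_conf = x, confidence(F, x)
    for _ in range(n_samples):
        candidate = x + rng.uniform(-radius, radius, size=x.shape)
        c = confidence(F, candidate)
        if c > best_conf:
            best_x, best_conf = candidate, c
    return int(np.argmax(F(best_x)))
```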

Author Information

Xi Wu (Google)

Completed my PhD in Computer Science from UW-Madison, advised by Jeffrey F. Naughton and Somesh Jha. Now a software engineer at Google. [Google PhD Fellow 2016 in privacy and security](https://ai.googleblog.com/2016/03/announcing-2016-google-phd-fellows-for.html).

Wooyeong Jang (University of Wisconsin - Madison)
Jiefeng Chen (University of Wisconsin-Madison)
Lingjiao Chen (University of Wisconsin-Madison)
Somesh Jha (University of Wisconsin, Madison)
