Modeling Adversarial Noise for Adversarial Training
Dawei Zhou · Nannan Wang · Bo Han · Tongliang Liu

Wed Jul 20 03:30 PM -- 05:30 PM (PDT) @ Hall E #319

Deep neural networks have been demonstrated to be vulnerable to adversarial noise, prompting the development of defenses against adversarial attacks. Motivated by the facts that adversarial noise contains well-generalizing features and that the relationship between adversarial data and natural data can help infer natural data and make reliable predictions, in this paper we model adversarial noise by learning the transition relationship between adversarial labels (i.e., the flipped labels used to generate adversarial data) and natural labels (i.e., the ground-truth labels of the natural data). Specifically, we introduce an instance-dependent transition matrix to relate adversarial labels and natural labels, which can be seamlessly embedded with the target model (enabling us to model stronger adaptive adversarial noise). Empirical evaluations demonstrate that our method can effectively improve adversarial accuracy.
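As a rough illustration of the core idea, the sketch below shows one way an instance-dependent transition matrix could map a model's natural-label distribution to an adversarial-label distribution. All names here (TransitionMatrixHead, adversarial_label_distribution, feature_dim) are hypothetical; the paper's actual architecture and training objective may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TransitionMatrixHead(nn.Module):
    """Hypothetical head predicting an instance-dependent transition
    matrix T(x) that relates natural labels to adversarial labels."""

    def __init__(self, feature_dim: int, num_classes: int):
        super().__init__()
        self.num_classes = num_classes
        # One logit per (natural class i -> adversarial class j) entry.
        self.fc = nn.Linear(feature_dim, num_classes * num_classes)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        logits = self.fc(features).view(-1, self.num_classes, self.num_classes)
        # Softmax over the last axis makes each row a distribution
        # P(adversarial label = j | natural label = i, x).
        return F.softmax(logits, dim=-1)

def adversarial_label_distribution(p_natural: torch.Tensor,
                                   transition: torch.Tensor) -> torch.Tensor:
    """P(adv = j | x) = sum_i P(natural = i | x) * T_ij(x)."""
    # p_natural: (B, C); transition: (B, C, C) -> returns (B, C).
    return torch.bmm(p_natural.unsqueeze(1), transition).squeeze(1)

# Toy usage with random features and predictions.
if __name__ == "__main__":
    B, D, C = 4, 128, 10
    head = TransitionMatrixHead(D, C)
    features = torch.randn(B, D)
    p_natural = F.softmax(torch.randn(B, C), dim=-1)
    p_adv = adversarial_label_distribution(p_natural, head(features))
    print(p_adv.shape, p_adv.sum(dim=1))  # (4, 10); each row sums to ~1
```

In training, such a head would presumably be learned jointly with the target model, e.g., by fitting the adversarial-label distribution to the flipped labels of generated adversarial examples while the natural predictions fit the ground-truth labels; the exact objective used in the paper may differ.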

Author Information

Dawei Zhou (Xidian University)
Nannan Wang (Xidian University)
Bo Han (Hong Kong Baptist University)
Tongliang Liu (The University of Sydney)
