Oral
Transferable Clean-Label Poisoning Attacks on Deep Neural Nets
Chen Zhu · W. Ronny Huang · Hengduo Li · Gavin Taylor · Christoph Studer · Tom Goldstein

Wed Jun 12 12:05 PM -- 12:10 PM (PDT) @ Grand Ballroom

In this paper, we explore clean-label poisoning attacks on neural networks where the attacker has access to neither the network's outputs nor its parameters. We consider the transfer learning setting, in which the network is initialized from a model pre-trained on one dataset and only its last layer is re-trained on the targeted dataset. The goal is to make the re-trained model classify a chosen target image into a target class. To achieve this, we generate multiple poison images from the target class by adding small perturbations to clean images. The poisons are crafted so that, in feature space, the target image lies inside their convex hull; any linear classifier that assigns all of the poisons to the target class is then guaranteed to misclassify the target image into that class when this containment is perfectly satisfied.
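A minimal sketch of this idea in PyTorch is below; it is not the authors' released code, and the feature extractor, hyperparameters, and helper names are illustrative assumptions. It perturbs base images from the target class so that the target's feature vector is pulled inside the convex hull of the poisons' features, parameterizing the convex coefficients with a softmax so they stay on the simplex.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

# Assumed setup: a frozen penultimate-layer feature extractor from a
# pre-trained network stands in for the victim's feature space.
device = "cuda" if torch.cuda.is_available() else "cpu"
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).to(device).eval()
phi = torch.nn.Sequential(*list(backbone.children())[:-1])  # drop the linear head
for p in phi.parameters():
    p.requires_grad_(False)

def features(x):
    return phi(x).flatten(1)  # (batch, feature_dim)

def craft_poisons(target, bases, eps=8 / 255, steps=250, lr=0.01):
    """Perturb base images (drawn from the target class) so that the
    target's feature vector lies inside the convex hull of the poisons'
    feature vectors. All names and hyperparameters are illustrative."""
    delta = torch.zeros_like(bases, requires_grad=True)
    logits = torch.zeros(len(bases), requires_grad=True, device=bases.device)
    opt = torch.optim.Adam([delta, logits], lr=lr)
    f_t = features(target.unsqueeze(0)).detach()
    for _ in range(steps):
        c = F.softmax(logits, dim=0)                     # convex coefficients
        f_p = features(bases + delta)                    # poison features
        combo = (c.unsqueeze(1) * f_p).sum(0, keepdim=True)
        loss = F.mse_loss(combo, f_t)                    # pull the hull onto the target
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():                            # keep perturbations small and valid
            delta.clamp_(-eps, eps)
            delta.copy_((bases + delta).clamp(0, 1) - bases)
    return (bases + delta).detach()
```

In this sketch, inserting the returned poisons into the training set with target-class labels and re-training only the final linear layer would push the decision boundary so that the target image falls on the target-class side.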

Author Information

Chen Zhu (University of Maryland)
W. Ronny Huang (University of Maryland and EY LLP)
Hengduo Li (University of Maryland, College Park)
Gavin Taylor (United States Naval Academy)
Christoph Studer (Cornell University)
Tom Goldstein (University of Maryland)
