Poster
Transferable Clean-Label Poisoning Attacks on Deep Neural Nets
Chen Zhu · W. Ronny Huang · Hengduo Li · Gavin Taylor · Christoph Studer · Tom Goldstein

Wed Jun 12 06:30 PM -- 09:00 PM (PDT) @ Pacific Ballroom #68

In this paper, we explore clean-label poisoning attacks on deep convolutional networks without access to the network's output, architecture, or parameters. Our goal is to ensure that after injecting the poisons into the training data, a model with unknown architecture and parameters trained on that data will misclassify the target image into a specific class. To achieve this goal, we generate multiple poison images from the base class by adding small perturbations that cause the poison images to trap the target image within their convex polytope in feature space. We also demonstrate that applying Dropout when crafting the poisons and enforcing this objective in multiple layers enhances transferability, enabling attacks against both the transfer learning and end-to-end training settings. We demonstrate transferable attack success rates of over 50% while poisoning only 1% of the training set.
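As a rough sketch of the convex-polytope objective described above (illustrative only: the function names, tensor shapes, and the perturbation radius `eps` are assumptions, not the authors' released code), the per-layer loss measures how far the target's feature lies from the convex hull of the poisons' features, with an l-infinity projection keeping each poison visually close to its base image:

```python
import torch

def convex_polytope_loss(poison_feats, target_feat, coeffs):
    # poison_feats: (k, d) substitute-network features of the k poison images
    # target_feat:  (d,)   feature of the target image
    # coeffs:       (k,)   convex weights, assumed nonnegative and summing to 1
    combo = coeffs @ poison_feats  # a point inside the poisons' convex polytope
    # Squared distance from the target feature, normalized by its squared norm
    return (combo - target_feat).pow(2).sum() / target_feat.pow(2).sum()

def project_linf(poisons, bases, eps):
    # Clip each poison back into an l_inf ball of radius eps around its
    # base image, so the perturbation stays small (the clean-label constraint).
    return bases + (poisons - bases).clamp(-eps, eps)
```

Per the abstract, this loss would be summed over features extracted at several layers of a substitute network, with Dropout active while crafting, and the poisons re-projected onto the eps-ball after each gradient step.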

Author Information

Chen Zhu (University of Maryland)
W. Ronny Huang (University of Maryland and EY LLP)
Hengduo Li (University of Maryland, College Park)
Gavin Taylor (United States Naval Academy)
Christoph Studer (Cornell University)
Tom Goldstein (University of Maryland)
