In a poisoning attack, an adversary who controls a small fraction of the training data selects that data with the aim of inducing a model that misbehaves in a particular way. We consider poisoning attacks against convex machine learning models and propose an efficient poisoning attack designed to induce a model specified by the adversary. Unlike previous model-targeted poisoning attacks, our attack comes with provable convergence to any attainable target model. We also provide a lower bound on the minimum number of poisoning points needed to achieve a given target model. Our method uses online convex optimization and finds poisoning points incrementally, which provides more flexibility than previous attacks that require an a priori assumption about the number of poisoning points. Our attack is the first model-targeted poisoning attack that provides provable convergence for convex models. In our experiments, it either exceeds or matches state-of-the-art attacks in terms of attack success rate and distance to the target model.
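The incremental attack loop described in the abstract can be illustrated with a simplified sketch (not the paper's exact algorithm): at each step the attacker retrains the convex model on clean plus poisoned data, then greedily adds the candidate point on which the current model's loss most exceeds the target model's loss, pulling subsequent retraining toward the adversary-chosen parameters. The synthetic data, the target model `theta_target`, the candidate pool, and the greedy search are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

def log_loss(theta, X, y):
    # mean logistic loss, labels in {0, 1}
    p = sigmoid(X @ theta)
    return -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))

def train(X, y, steps=500, lr=0.5):
    # plain gradient descent on the (convex) logistic loss
    theta = np.zeros(X.shape[1])
    for _ in range(steps):
        theta -= lr * X.T @ (sigmoid(X @ theta) - y) / len(y)
    return theta

# clean 2-D data: two Gaussian blobs (illustrative)
X = np.vstack([rng.normal(-1, 0.5, (50, 2)), rng.normal(1, 0.5, (50, 2))])
y = np.concatenate([np.zeros(50), np.ones(50)])

theta_target = np.array([0.5, -0.5])  # hypothetical adversary-chosen model
cands = rng.uniform(-2, 2, (400, 2))  # bounded feasible poisoning points

Xp, yp = X.copy(), y.copy()
for _ in range(30):  # add at most 30 poisoning points, one per iteration
    theta = train(Xp, yp)
    if np.linalg.norm(theta - theta_target) < 0.1:
        break
    # greedy step: pick the (point, label) where the induced model's
    # loss most exceeds the target model's loss
    best, best_gap = None, -np.inf
    for x in cands:
        for lbl in (0.0, 1.0):
            gap = (log_loss(theta, x[None, :], np.array([lbl]))
                   - log_loss(theta_target, x[None, :], np.array([lbl])))
            if gap > best_gap:
                best_gap, best = gap, (x, lbl)
    Xp = np.vstack([Xp, best[0]])
    yp = np.append(yp, best[1])

dist_clean = np.linalg.norm(train(X, y) - theta_target)
dist_pois = np.linalg.norm(train(Xp, yp) - theta_target)
print(f"distance to target: clean={dist_clean:.2f}, poisoned={dist_pois:.2f}")
```

Because the loop adds points one at a time and retrains between additions, the attacker can stop as soon as the induced model is close enough to the target, rather than committing to a poisoning budget up front.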
Author Information
Fnu Suya (University of Virginia)
Saeed Mahloujifar (Princeton University)
Anshuman Suri (University of Virginia)
David Evans (University of Virginia)
Yuan Tian (University of Virginia)
Related Events (a corresponding poster, oral, or spotlight)
- 2021 Spotlight: Model-Targeted Poisoning Attacks with Provable Convergence »
  Fri. Jul 23rd 01:25 -- 01:30 AM
More from the Same Authors
- 2021 : Formalizing Distribution Inference Risks »
  Anshuman Suri · David Evans
- 2022 : Memorization in NLP Fine-tuning Methods »
  FatemehSadat Mireshghallah · Archit Uniyal · Tianhao Wang · David Evans · Taylor Berg-Kirkpatrick
- 2023 : When Can Linear Learners be Robust to Indiscriminate Poisoning Attacks? »
  Fnu Suya · Xiao Zhang · Yuan Tian · David Evans
- 2020 Poster: Learning Adversarially Robust Representations via Worst-Case Mutual Information Maximization »
  Sicheng Zhu · Xiao Zhang · David Evans
- 2019 Workshop: Workshop on the Security and Privacy of Machine Learning »
  Nicolas Papernot · Florian Tramer · Bo Li · Dan Boneh · David Evans · Somesh Jha · Percy Liang · Patrick McDaniel · Jacob Steinhardt · Dawn Song