We study indiscriminate poisoning for linear learners, where an adversary injects a few crafted examples into the training data with the goal of forcing the induced model to incur higher test error. Inspired by the observation that linear learners on some datasets are able to resist the best known attacks even without any defenses, we investigate whether datasets can be inherently robust to indiscriminate poisoning attacks on linear learners. For theoretical Gaussian distributions, we rigorously characterize the behavior of an optimal poisoning attack, defined as the poisoning strategy that attains the maximum risk of the induced model at a given poisoning budget. Our results prove that linear learners can indeed be robust to indiscriminate poisoning if the class-wise data distributions are well-separated with low variance and the constraint set containing all permissible poisoning points is small. These findings largely explain the drastic variation in the empirical performance of state-of-the-art poisoning attacks across benchmark datasets, taking an important first step towards understanding why some learning tasks are vulnerable to data poisoning attacks.
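The threat model described above can be made concrete with a small simulation. The following is a minimal sketch, assuming a scikit-learn logistic regression as the linear learner, two synthetic Gaussian classes, a 3% poisoning budget, and a simple label-flipping heuristic clipped to a box constraint set; these choices are illustrative assumptions and not the paper's optimal attack. It compares the victim's test error before and after the poisoned points are injected.

```python
# Hypothetical sketch of the indiscriminate poisoning threat model: an attacker
# injects a small fraction of label-flipped points (clipped to a box constraint
# set C) into the training data of a linear learner, and we compare the victim's
# test error before and after poisoning. Illustration only, not the paper's attack.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Two Gaussian classes (mirroring the theoretical Gaussian setting).
def sample(n, mean, label):
    X = rng.normal(loc=mean, scale=1.0, size=(n, 2))
    y = np.full(n, label)
    return X, y

X_pos, y_pos = sample(500, +2.0, +1)
X_neg, y_neg = sample(500, -2.0, -1)
X_train = np.vstack([X_pos[:400], X_neg[:400]])
y_train = np.concatenate([y_pos[:400], y_neg[:400]])
X_test = np.vstack([X_pos[400:], X_neg[400:]])
y_test = np.concatenate([y_pos[400:], y_neg[400:]])

# Poisoning budget: 3% of the clean training set; points constrained to a box C.
eps = 0.03
n_poison = int(eps * len(X_train))
C_low, C_high = -5.0, 5.0  # constraint set for permissible poisoning points

# Simple heuristic attack: copy clean points, flip their labels, clip to C.
idx = rng.choice(len(X_train), size=n_poison, replace=False)
X_poison = np.clip(X_train[idx], C_low, C_high)
y_poison = -y_train[idx]

def test_error(X_tr, y_tr):
    clf = LogisticRegression().fit(X_tr, y_tr)
    return 1.0 - clf.score(X_test, y_test)

err_clean = test_error(X_train, y_train)
err_poisoned = test_error(np.vstack([X_train, X_poison]),
                          np.concatenate([y_train, y_poison]))
print(f"clean test error:    {err_clean:.3f}")
print(f"poisoned test error: {err_poisoned:.3f}")
```

In this toy setting the class means are well-separated with low variance and the constraint set is modest, so the poisoned test error typically barely moves, in line with the paper's claim; shrinking the class separation or enlarging the box makes the same heuristic far more damaging.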
Author Information
Fnu Suya (University of Virginia)
Xiao Zhang (CISPA Helmholtz Center for Information Security)
Yuan Tian (University of Virginia)
David Evans (University of Virginia)
Related Events (a corresponding poster, oral, or spotlight)
- 2023 : When Can Linear Learners be Robust to Indiscriminate Poisoning Attacks?
More from the Same Authors
- 2021 : Formalizing Distribution Inference Risks
  Anshuman Suri · David Evans
- 2022 : Memorization in NLP Fine-tuning Methods
  FatemehSadat Mireshghallah · Archit Uniyal · Tianhao Wang · David Evans · Taylor Berg-Kirkpatrick
- 2023 : Provably Robust Cost-Sensitive Learning via Randomized Smoothing
  Yuan Xin · Michael Backes · Xiao Zhang
- 2021 Poster: Model-Targeted Poisoning Attacks with Provable Convergence
  Fnu Suya · Saeed Mahloujifar · Anshuman Suri · David Evans · Yuan Tian
- 2021 Spotlight: Model-Targeted Poisoning Attacks with Provable Convergence
  Fnu Suya · Saeed Mahloujifar · Anshuman Suri · David Evans · Yuan Tian
- 2020 Poster: Learning Adversarially Robust Representations via Worst-Case Mutual Information Maximization
  Sicheng Zhu · Xiao Zhang · David Evans
- 2019 Workshop: Workshop on the Security and Privacy of Machine Learning
  Nicolas Papernot · Florian Tramer · Bo Li · Dan Boneh · David Evans · Somesh Jha · Percy Liang · Patrick McDaniel · Jacob Steinhardt · Dawn Song