

Spotlight

Model-Targeted Poisoning Attacks with Provable Convergence

Fnu Suya · Saeed Mahloujifar · Anshuman Suri · David Evans · Yuan Tian


Abstract:

In a poisoning attack, an adversary who controls a small fraction of the training data attempts to select that data so that the induced model misbehaves in a particular way. We consider poisoning attacks against convex machine learning models and propose an efficient poisoning attack designed to induce a model specified by the adversary. Unlike previous model-targeted poisoning attacks, our attack comes with provable convergence to any attainable target model. We also provide a lower bound on the minimum number of poisoning points needed to achieve a given target model. Our method uses online convex optimization and finds poisoning points incrementally. This provides more flexibility than previous attacks, which require an a priori assumption about the number of poisoning points. Our attack is the first model-targeted poisoning attack that provides provable convergence for convex models. In our experiments, it either exceeds or matches state-of-the-art attacks in terms of attack success rate and distance to the target model.
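The following is a minimal sketch of the incremental, model-targeted poisoning loop described in the abstract, not the authors' implementation. It assumes a convex learner (scikit-learn's LogisticRegression stands in for the victim model), a hypothetical pre-trained target_model supplied by the adversary, and a finite candidate_pool of feasible poisoning points; the helper names (log_loss_per_point, incremental_poisoning, the loss-gap selection rule) are illustrative assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression

def log_loss_per_point(model, X, y):
    """Per-example logistic loss under a fitted binary classifier."""
    p = np.clip(model.predict_proba(X)[:, 1], 1e-12, 1 - 1e-12)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def incremental_poisoning(X_clean, y_clean, target_model, candidate_pool,
                          max_points=100, tol=1e-3):
    """Greedily add poisoning points that the currently induced model fits
    worse than the target model does, stopping once the loss gap is small.
    This mirrors the incremental, online-convex-optimization flavor of the
    attack described in the abstract (a sketch, not the exact algorithm)."""
    X_poison = np.empty((0, X_clean.shape[1]))
    y_poison = np.empty((0,), dtype=y_clean.dtype)
    Xc, yc = candidate_pool
    for _ in range(max_points):
        # Retrain the victim model on clean data plus poisoning points so far.
        induced = LogisticRegression().fit(
            np.vstack([X_clean, X_poison]),
            np.concatenate([y_clean, y_poison]))
        # Pick the candidate with the largest loss gap between the induced
        # model and the adversary's target model.
        gap = (log_loss_per_point(induced, Xc, yc)
               - log_loss_per_point(target_model, Xc, yc))
        best = int(np.argmax(gap))
        if gap[best] <= tol:  # induced model is already close to the target
            break
        X_poison = np.vstack([X_poison, Xc[best:best + 1]])
        y_poison = np.concatenate([y_poison, yc[best:best + 1]])
    return X_poison, y_poison

Because points are chosen one at a time, the attacker does not need to fix the number of poisoning points in advance; the loop simply runs until the induced model is close enough to the target or a budget is exhausted.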
