Just How Toxic is Data Poisoning? A Unified Benchmark for Backdoor and Data Poisoning Attacks
Avi Schwarzschild · Micah Goldblum · Arjun Gupta · John P Dickerson · Tom Goldstein

Tue Jul 20 07:30 PM -- 07:35 PM (PDT)

Data poisoning and backdoor attacks manipulate training data in order to cause models to fail during inference. A recent survey of industry practitioners found that data poisoning is the number one concern among threats ranging from model stealing to adversarial attacks. However, it remains unclear exactly how dangerous poisoning methods are and which ones are most effective, since these methods, even ones with identical objectives, have not been tested in consistent or realistic settings. We observe that data poisoning and backdoor attacks are highly sensitive to variations in the testing setup. Moreover, we find that existing methods may not generalize to realistic settings. While these existing works serve as valuable prototypes for data poisoning, we apply rigorous tests to determine the extent to which we should fear them. In order to promote fair comparison in future work, we develop standardized benchmarks for data poisoning and backdoor attacks.
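To make the threat model concrete, here is a minimal sketch of the kind of backdoor data poisoning the abstract describes: an attacker stamps a small trigger patch onto a fraction of the training images and relabels them to a target class, so that a model trained on the corrupted set misclassifies any triggered input at inference time. The function name, patch size, and poisoning rate are illustrative assumptions, not the paper's specific attack.

```python
import numpy as np

def poison_with_trigger(images, labels, target_label, rate=0.1,
                        patch_value=1.0, seed=0):
    """Illustrative backdoor poisoning: stamp a 3x3 trigger patch onto
    a random fraction of images and relabel them to the target class.
    (Hypothetical sketch; not the benchmark's actual attack code.)"""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(rate * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Write the trigger into the bottom-right corner of each poisoned image.
    images[idx, -3:, -3:] = patch_value
    # Flip the labels of poisoned samples to the attacker's target class.
    labels[idx] = target_label
    return images, labels, idx
```

A defender evaluating such an attack would train on the returned `(images, labels)` and then measure accuracy both on clean test data and on triggered test data, which is exactly the kind of evaluation the benchmark standardizes.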

Author Information

Avi Schwarzschild (University of Maryland)
Micah Goldblum (University of Maryland)
Arjun Gupta (University of Maryland College Park)
John P Dickerson (University of Maryland)
Tom Goldstein (University of Maryland)
