Poster
Certified Robustness to Label-Flipping Attacks via Randomized Smoothing
Elan Rosenfeld · Ezra Winston · Pradeep Ravikumar · Zico Kolter

Tue Jul 14 09:00 AM -- 09:45 AM & Tue Jul 14 08:00 PM -- 08:45 PM (PDT)

Machine learning algorithms are known to be susceptible to data poisoning attacks, in which an adversary manipulates the training data to degrade the performance of the resulting classifier. In this work, we propose a strategy for building linear classifiers that are certifiably robust against a strong variant of label flipping, where each test example is targeted independently. In other words, for each test point, our classifier includes a certification that its prediction would be the same had some number of training labels been changed adversarially. Our approach leverages randomized smoothing, a technique that has previously been used to guarantee, with high probability, test-time robustness to adversarial manipulation of the input to a classifier. We derive a variant that provides a deterministic, analytical bound, sidestepping the probabilistic certificates that traditionally result from the sampling subprocedure. Further, we obtain these certified bounds with minimal additional runtime complexity over standard classification and with no assumptions on the train or test distributions. We generalize our results to the multi-class case, providing the first multi-class classification algorithm that is certifiably robust to label-flipping attacks.
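To make the smoothing construction concrete, the Python sketch below implements the generic Monte Carlo variant for a binary linear classifier: train a least-squares classifier many times on independently flipped labels, take the majority vote, and certify the vote with a deliberately loose total-variation argument. The flip rate q, the Hoeffding confidence bound, and the helper names (train_linear, smoothed_predict) are illustrative assumptions; the paper's actual certificate is deterministic and analytical, and avoids this sampling loop entirely.

    import numpy as np

    # Monte Carlo sketch of randomized smoothing over training labels for a
    # binary linear classifier. The flip rate q, the Hoeffding confidence
    # bound, and the loose total-variation certificate are illustrative
    # assumptions; they are NOT the paper's deterministic analytical bound.

    def train_linear(X, y, reg=1e-3):
        """Ridge-regularized least squares on labels in {-1, +1}."""
        d = X.shape[1]
        return np.linalg.solve(X.T @ X + reg * np.eye(d), X.T @ y)

    def smoothed_predict(X, y, x_test, q=0.45, n_samples=2000, alpha=1e-3, seed=0):
        """Majority vote over classifiers trained on independently flipped
        labels; returns the smoothed prediction and a certified number of
        adversarial label flips (holds with probability >= 1 - alpha)."""
        rng = np.random.default_rng(seed)
        votes = 0
        for _ in range(n_samples):
            flips = rng.random(len(y)) < q            # flip each label w.p. q
            w = train_linear(X, np.where(flips, -y, y))
            votes += int(x_test @ w >= 0)
        p_hat = votes / n_samples
        pred = 1 if p_hat >= 0.5 else -1
        p_top = max(p_hat, 1.0 - p_hat)
        # Hoeffding lower confidence bound on the top-class probability.
        p_low = p_top - np.sqrt(np.log(1.0 / alpha) / (2.0 * n_samples))
        # Changing r training labels shifts the noisy-label distribution by
        # total variation at most 1 - (2q)^r, so the vote is preserved
        # whenever p_low - 1/2 > 1 - (2q)^r.
        margin = p_low - 0.5
        if margin <= 0 or q >= 0.5:
            return pred, 0
        radius = int(np.floor(np.log(1.0 - margin) / np.log(2.0 * q)))
        if (2.0 * q) ** radius <= 1.0 - margin:       # break a boundary tie
            radius -= 1
        return pred, max(radius, 0)

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        X = rng.normal(size=(200, 5))
        y = np.sign(X[:, 0] + 0.1 * rng.normal(size=200))
        pred, r = smoothed_predict(X, y, X[0], q=0.45)
        print(f"prediction: {pred:+d}, certified label flips: {r}")

The choice of q matters in this sketch: each changed training label moves the noisy-label distribution by total variation at most 1 - 2q, so a noisier training signal buys a larger certified radius at the cost of a weaker base classifier. The certificate above is also only probabilistic, which is exactly the limitation the paper's deterministic, analytical bound removes.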

Author Information

Elan Rosenfeld (Carnegie Mellon University)
Ezra Winston (Carnegie Mellon University)
Pradeep Ravikumar (Carnegie Mellon University)
Zico Kolter (Carnegie Mellon University / Bosch Center for AI)
