Certified Robustness to Label-Flipping Attacks via Randomized Smoothing

Elan Rosenfeld · Ezra Winston · Pradeep Ravikumar · Zico Kolter

Keywords: [ Robust Statistics and Machine Learning ] [ Supervised Learning ] [ Trustworthy Machine Learning ] [ Algorithms ]


Machine learning algorithms are known to be susceptible to data poisoning attacks, where an adversary manipulates the training data to degrade the performance of the resulting classifier. In this work, we propose a strategy for building linear classifiers that are certifiably robust against a strong variant of label flipping, where each test example is targeted independently. In other words, for each test point, our classifier includes a certification that its prediction would be the same had some number of training labels been changed adversarially. Our approach leverages randomized smoothing, a technique that has previously been used to guarantee---with high probability---test-time robustness to adversarial manipulation of the input to a classifier. We derive a variant which provides a deterministic, analytical bound, sidestepping the probabilistic certificates that traditionally result from the sampling subprocedure. Further, we obtain these certified bounds with minimal additional runtime complexity over standard classification and no assumptions on the train or test distributions. We generalize our results to the multi-class case, providing the first multi-class classification algorithm that is certifiably robust to label-flipping attacks.
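To make the smoothing mechanism concrete, the following is a minimal Monte-Carlo sketch of randomized smoothing over training labels for a binary linear classifier. Everything here is an illustrative assumption, not the paper's method: the nearest-centroid linear rule, the flip probability `q`, and the toy data are hypothetical choices for the example, and where this sketch estimates the vote fraction by sampling, the paper instead computes the relevant probabilities exactly, which is what yields a deterministic rather than probabilistic certificate.

```python
import random

def centroid_classifier(X, y):
    """Fit a simple linear rule: classify by the nearer class centroid.
    (Stand-in for any base linear classifier; purely illustrative.)"""
    pos = [x for x, lab in zip(X, y) if lab == 1]
    neg = [x for x, lab in zip(X, y) if lab == -1]
    mp = [sum(c) / len(pos) for c in zip(*pos)]  # centroid of +1 class
    mn = [sum(c) / len(neg) for c in zip(*neg)]  # centroid of -1 class
    w = [a - b for a, b in zip(mp, mn)]
    b = -0.5 * sum(a * a - c * c for a, c in zip(mp, mn))  # midpoint bias
    return lambda x: 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1

def smoothed_predict(X, y, x_test, q=0.1, n=500, seed=0):
    """Randomized smoothing over label noise: flip each training label
    independently with probability q, retrain the base classifier on the
    noisy labels, and return the majority-vote prediction at x_test along
    with the winning vote fraction p_top. A large p_top is what a
    certificate would convert into a number of tolerable label flips."""
    rng = random.Random(seed)
    plus = total = 0
    for _ in range(n):
        noisy = [lab if rng.random() >= q else -lab for lab in y]
        if len(set(noisy)) < 2:
            continue  # degenerate draw: one class vanished entirely
        total += 1
        if centroid_classifier(X, noisy)(x_test) == 1:
            plus += 1
    pred = 1 if plus >= total - plus else -1
    p_top = max(plus, total - plus) / total
    return pred, p_top

# Toy, linearly separable data (hypothetical example data).
X_train = [[0.0, 0.0], [0.0, 1.0], [1.0, 0.0],
           [5.0, 5.0], [5.0, 6.0], [6.0, 5.0]]
y_train = [-1, -1, -1, 1, 1, 1]
pred, p_top = smoothed_predict(X_train, y_train, [5.5, 5.5], q=0.1)
```

On this toy data the smoothed vote is near-unanimous for the +1 class: flipping any single training label barely moves either centroid, so the prediction at the test point is stable under most noise draws. Intuitively, the higher the vote fraction `p_top`, the more adversarial label flips the prediction can provably withstand.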
