

Poster

Run-off Election: Improved Provable Defense against Data Poisoning Attacks

Keivan Rezaei · Kiarash Banihashem · Atoosa Malemir Chegini · Soheil Feizi

Exhibit Hall 1 #103
[ PDF ] [ Poster ]

Abstract: In data poisoning attacks, an adversary tries to change a model's prediction by adding, modifying, or removing samples in the training data. Recently, *ensemble-based* approaches for obtaining *provable* defenses against data poisoning have been proposed, where predictions are made by taking a majority vote across multiple base models. In this work, we show that relying solely on the majority vote in ensemble defenses is wasteful, as it does not effectively use the information available in the logits layers of the base models. Instead, we propose *Run-Off Election (ROE)*, a novel aggregation method based on a two-round election across the base models: in the first round, models vote for their preferred class, and then a second, *run-off* election is held between the top two classes from the first round. Based on this approach, we propose DPA+ROE and FA+ROE, which apply ROE to the Deep Partition Aggregation (DPA) and Finite Aggregation (FA) defenses from prior work. We evaluate our methods on MNIST, CIFAR-10, and GTSRB and obtain improvements in certified accuracy of up to $3\%$-$4\%$. Moreover, by applying ROE to a boosted version of DPA, we gain improvements of around $12\%$-$27\%$ compared to the current state-of-the-art, establishing **a new state-of-the-art** in (pointwise) certified robustness against data poisoning. In many cases, our approach outperforms the state-of-the-art even when using 32 times less computational power.
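The abstract describes ROE only at a high level. As a rough illustration, the two-round vote could be sketched as below, assuming each base model exposes per-class logits; the function name, use of raw logits for the run-off, and the absence of tie-breaking and certification logic are all assumptions, not the authors' implementation.

```python
import numpy as np

def run_off_election(logits):
    """Two-round (Run-Off Election) aggregation over base-model outputs.

    logits: array of shape (n_models, n_classes), one row per base model.
    Returns the index of the predicted class.
    """
    logits = np.asarray(logits)
    n_classes = logits.shape[1]

    # Round 1: each base model votes for its top class.
    first_round_votes = np.bincount(logits.argmax(axis=1), minlength=n_classes)

    # Keep the two classes with the most first-round votes.
    top_two = np.argsort(first_round_votes)[-2:]

    # Round 2 (run-off): each model votes for whichever finalist
    # it assigns the higher logit.
    run_off_votes = np.bincount(logits[:, top_two].argmax(axis=1), minlength=2)
    return int(top_two[run_off_votes.argmax()])
```

In the paper's setting, the certified radius is then derived from how many base models an adversary would need to flip to change the outcome of either round; that analysis is not captured by this sketch.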
