

Poster

Adversarial Risk and the Dangers of Evaluating Against Weak Attacks

Jonathan Uesato · Brendan O'Donoghue · Pushmeet Kohli · Aäron van den Oord

Hall B #132

Abstract:

This paper investigates recently proposed approaches for defending against adversarial examples and evaluating adversarial robustness. We motivate adversarial risk as an objective for achieving models robust to worst-case inputs. We then frame commonly used attacks and evaluation metrics as defining a tractable surrogate objective to the true adversarial risk. This suggests that models may optimize this surrogate rather than the true adversarial risk. We formalize this notion as obscurity to an adversary, and develop tools and heuristics for identifying obscured models and designing transparent models. We demonstrate that this is a significant problem in practice by repurposing gradient-free optimization techniques into adversarial attacks, which we use to decrease the accuracy of several recently proposed defenses to near zero. Our hope is that our formulations and results will help researchers to develop more powerful defenses.
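The abstract's key move is replacing the inner maximization in the adversarial risk, E_{(x,y)}[ max_{||x'-x|| <= eps} loss(x', y) ], with the loss at a point found by a concrete attack; a weak attack therefore yields an over-optimistic surrogate. The gradient-free technique the paper repurposes is SPSA (simultaneous perturbation stochastic approximation). Below is a minimal NumPy sketch of an SPSA-based L-infinity attack for illustration only: the function name `spsa_attack`, the `loss_fn` interface, the step sizes, and the PGD-style sign update are all assumptions of this sketch, not the authors' implementation (which may use a different optimizer on the SPSA gradient estimates).

```python
import numpy as np

def spsa_attack(loss_fn, x, epsilon, steps=100, delta=0.01, lr=0.01,
                num_samples=32):
    """Gradient-free L-infinity attack via SPSA (illustrative sketch).

    loss_fn: assumed black-box function mapping a batch of inputs, shape
             (n, *x.shape), to per-example losses, shape (n,); higher
             loss means "more adversarial". No gradients are required.
    Returns a perturbed input within epsilon of x, clipped to [0, 1].
    """
    x = np.asarray(x, dtype=np.float64)
    x_adv = x.copy()
    for _ in range(steps):
        # Random +/-1 (Rademacher) directions for the two-sided estimator.
        v = np.random.choice([-1.0, 1.0], size=(num_samples,) + x.shape)
        # Two-sided finite differences of the loss along each direction.
        f_plus = loss_fn(x_adv[None] + delta * v)
        f_minus = loss_fn(x_adv[None] - delta * v)
        coef = (f_plus - f_minus) / (2.0 * delta)            # shape (n,)
        coef = coef.reshape((num_samples,) + (1,) * x.ndim)
        # SPSA gradient estimate, averaged over sampled directions.
        grad_est = np.mean(coef * v, axis=0)
        # Sign-ascent step, then project back onto the epsilon-ball.
        x_adv = x_adv + lr * np.sign(grad_est)
        x_adv = np.clip(x_adv, x - epsilon, x + epsilon)     # L-inf projection
        x_adv = np.clip(x_adv, 0.0, 1.0)                     # valid input range
    return x_adv
```

Because the estimator only queries `loss_fn`, an attack like this is unaffected by defenses that obscure or mask gradients, which is what lets it expose "obscured" models that appear robust under gradient-based evaluation.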
