

Poster

Randomized Smoothing of All Shapes and Sizes

Greg Yang · Tony Duan · J. Edward Hu · Hadi Salman · Ilya Razenshteyn · Jerry Li

Keywords: [ Adversarial Examples ] [ Robust Statistics and Machine Learning ]


Abstract: Randomized smoothing is the current state-of-the-art defense with provable robustness against ℓ₂ adversarial attacks. Many works have devised new randomized smoothing schemes for other metrics, such as ℓ₁ or ℓ∞; however, substantial effort was needed to derive each new guarantee. This raises the question: can we find a general theory for randomized smoothing? We propose a novel framework for devising and analyzing randomized smoothing schemes, and validate its effectiveness in practice. Our theoretical contributions are: (1) we show that, for an appropriate notion of "optimal", the optimal smoothing distributions for any "nice" norm have level sets given by the norm's *Wulff Crystal*; (2) we propose two novel and complementary methods for deriving provably robust radii for any smoothing distribution; and (3) we show fundamental limits of current randomized smoothing techniques via the theory of *Banach space cotypes*. By combining (1) and (2), we significantly improve the state-of-the-art certified accuracy in ℓ₁ on standard datasets. Meanwhile, we show using (3) that with only label statistics under random input perturbations, randomized smoothing cannot achieve nontrivial certified accuracy against perturbations of ℓp-norm Ω(min(1, d^{1/p − 1/2})) when the input dimension d is large. We provide code at github.com/tonyduan/rs4a.
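To make the ℓ₂ baseline concrete, here is a minimal sketch of the standard Gaussian randomized-smoothing certificate that the abstract builds on (the smoothed classifier predicts by majority vote under Gaussian noise, and the certified radius is σ·Φ⁻¹(p_A)). The base classifier below is a hypothetical toy stand-in, not one of the paper's models, and a real certificate would use a confidence lower bound on p_A rather than a point estimate:

```python
import numpy as np
from statistics import NormalDist

def base_classifier(x):
    # Toy stand-in for a trained model: classify by the sign
    # of the first coordinate. (Hypothetical, for illustration.)
    return int(x[0] > 0)

def certify_l2(x, sigma=0.5, n_samples=10_000, seed=0):
    """Monte Carlo estimate of the smoothed prediction and its
    l2 certified radius r = sigma * Phi^{-1}(p_A)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    noise = rng.normal(scale=sigma, size=(n_samples, x.size))
    votes = np.bincount(
        [base_classifier(x + eps) for eps in noise], minlength=2
    )
    top = int(votes.argmax())
    # Point estimate of p_A, clipped away from 1.0 so the inverse
    # CDF is defined; a rigorous certificate would lower-bound p_A.
    p_a = min(votes[top] / n_samples, 1.0 - 1.0 / n_samples)
    radius = sigma * NormalDist().inv_cdf(p_a) if p_a > 0.5 else 0.0
    return top, radius
```

The paper's framework generalizes this recipe: other noise distributions (with Wulff-Crystal level sets) yield certificates in other norms, via the same predict-then-bound structure.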
