

Poster

Curse of Dimensionality on Randomized Smoothing for Certifiable Robustness

Aounon Kumar · Alexander Levine · Tom Goldstein · Soheil Feizi

Keywords: [ Adversarial Examples ] [ Deep Learning Theory ] [ Robust Statistics and Machine Learning ]


Abstract: Randomized smoothing, using just a simple isotropic Gaussian distribution, has been shown to produce good robustness guarantees against $\ell_2$-norm bounded adversaries. In this work, we show that extending the smoothing technique to defend against other attack models can be challenging, especially in the high-dimensional regime. In particular, for a vast class of i.i.d. smoothing distributions, we prove that the largest $\ell_p$-radius that can be certified decreases as $O(1/d^{\frac{1}{2}-\frac{1}{p}})$ with dimension $d$ for $p > 2$. Notably, for $p \geq 2$, this dependence on $d$ is no better than that of the $\ell_p$-radius that can be certified using isotropic Gaussian smoothing, essentially putting a matching lower bound on the robustness radius. When restricted to generalized Gaussian smoothing, these two bounds can be shown to be within a constant factor of each other in an asymptotic sense, establishing that Gaussian smoothing provides the best possible results, up to a constant factor, when $p \geq 2$. We present experimental results on CIFAR to validate our theory. For other smoothing distributions, such as a uniform distribution within an $\ell_1$ or an $\ell_\infty$-norm ball, we show upper bounds of the form $O(1/d)$ and $O(1/d^{1-\frac{1}{p}})$ respectively, which have an even worse dependence on $d$.
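As a rough illustration of the dimension dependence above, the following is a minimal sketch, not the authors' code, assuming SciPy is available and using illustrative parameter values. It computes the standard Gaussian-smoothing $\ell_2$ certificate $R = \sigma\,\Phi^{-1}(p_A)$ (Cohen et al., 2019) and the $\ell_p$ radius it implies via the norm inequality $\|x\|_2 \leq d^{1/2 - 1/p}\|x\|_p$ for $p \geq 2$, showing how the certified $\ell_p$ radius shrinks as the input dimension $d$ grows.

```python
# Minimal sketch (not the paper's code): how an l2 certificate from
# Gaussian randomized smoothing translates into an l_p certificate for
# p >= 2, and how that radius decays with the input dimension d.
from scipy.stats import norm

def l2_radius(sigma: float, p_lower: float) -> float:
    """Gaussian-smoothing l2 radius R = sigma * Phi^{-1}(p_lower), where
    p_lower is a lower bound on the smoothed top-class probability."""
    return sigma * norm.ppf(p_lower)

def lp_radius(sigma: float, p_lower: float, p: float, d: int) -> float:
    """For p >= 2, ||x||_2 <= d^(1/2 - 1/p) * ||x||_p, so an l2 radius R
    implies an l_p radius R / d^(1/2 - 1/p): the O(1/d^(1/2 - 1/p)) decay."""
    return l2_radius(sigma, p_lower) / d ** (0.5 - 1.0 / p)

# Illustrative parameters: sigma = 0.5, top-class probability bound 0.9.
for d in (32 * 32 * 3, 224 * 224 * 3):  # CIFAR- and ImageNet-sized inputs
    print(d, lp_radius(0.5, 0.9, p=float("inf"), d=d))
```

For $p = \infty$ the exponent is $1/2$, so the implied certified radius shrinks by roughly $\sqrt{d}$, matching the worst case of the decay stated in the abstract.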
