Certified Adversarial Robustness Under the Bounded Support Set
Yiwen Kou · Qinyuan Zheng · Yisen Wang

Wed Jul 20 03:30 PM -- 05:30 PM (PDT) @ Hall E #1012
Deep neural networks (DNNs) have revealed severe vulnerability to adversarial perturbations. Besides empirical adversarial training for robustness, the design of provably robust classifiers has attracted more and more attention. Randomized smoothing methods provide certified robustness in an architecture-agnostic way, and have been further extended to a provable robustness framework based on $f$-divergence. However, these methods cannot be applied to smoothing measures with a bounded support set, such as the uniform probability measure, due to the use of likelihood ratios in their certification procedures. In this paper, we generalize the $f$-divergence-based framework to a Wasserstein-distance-based and total-variation-distance-based framework that is the first able to analyze the robustness properties of bounded support set smoothing measures both theoretically and experimentally. By applying our methodology to uniform probability measures with support set on the $l_p$ ($p=1,2,\infty$ and general) ball, we prove negative certified robustness properties with respect to $l_q$ ($q=1, 2, \infty$) perturbations and present experimental results on the CIFAR-10 dataset with ResNet to validate our theory. It is also worth mentioning that our certification procedure costs only constant computation time.
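To make the setting concrete, below is a minimal sketch (not the paper's method) of randomized smoothing with a bounded-support measure: a base classifier is smoothed with uniform noise on an $l_\infty$ ball, and a simple total-variation-style bound is used in the one-dimensional case. The `classify` function, the 1-D toy example, and `tv_certified_radius` are all illustrative assumptions; the paper's actual certificates cover general $l_p$ support sets and $l_q$ perturbations.

```python
import numpy as np

def smoothed_predict(classify, x, radius, n_samples=1000, seed=0):
    """Monte-Carlo estimate of a uniform-smoothed classifier.

    classify: maps a point to an integer class label
    radius:   half-width of the uniform l_inf noise (bounded support set,
              so likelihood ratios diverge and f-divergence certificates
              do not directly apply -- the motivation for TV/Wasserstein
              based analysis)
    Returns (top_class, estimated_top_class_probability).
    """
    rng = np.random.default_rng(seed)
    noise = rng.uniform(-radius, radius, size=(n_samples,) + x.shape)
    votes = np.bincount([classify(x + d) for d in noise], minlength=2)
    top = int(np.argmax(votes))
    return top, votes[top] / n_samples

def tv_certified_radius(p_top, radius):
    # Illustrative 1-D bound: shifting uniform[-r, r] by delta gives
    # TV distance |delta| / (2 r), and the smoothed class probability
    # moves by at most the TV distance, so the prediction is stable
    # while p_top - TV > 1/2.
    return max(0.0, (p_top - 0.5) * 2 * radius)

# Toy 1-D base classifier: threshold at zero.
classify = lambda z: int(z.sum() > 0.0)

label, p_top = smoothed_predict(classify, np.array([0.3]), radius=1.0)
cert = tv_certified_radius(p_top, radius=1.0)
```

For the toy input 0.3 with uniform noise on [-1, 1], the true top-class probability is 0.65, so the Monte-Carlo estimate lands near that value and the closed-form bound is evaluated in constant time, mirroring the constant-cost certification claim.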

Author Information

Yiwen Kou (Peking University)
Qinyuan Zheng (Peking University)
Yisen Wang (Peking University)
