Spotlight
Certified Adversarial Robustness Under the Bounded Support Set
Yiwen Kou · Qinyuan Zheng · Yisen Wang
Deep neural networks (DNNs) have revealed severe vulnerability to adversarial perturbations. Beyond empirical adversarial training for robustness, the design of provably robust classifiers has attracted increasing attention. Randomized smoothing methods provide architecture-agnostic certified robustness and have been further extended to a provable robustness framework based on f-divergence. However, these methods cannot be applied to smoothing measures with a bounded support set, such as the uniform probability measure, because their certification relies on likelihood ratios. In this paper, we generalize the $f$-divergence-based framework to a Wasserstein-distance-based and total-variation-distance-based framework that is the first able to analyze the robustness properties of bounded support set smoothing measures both theoretically and experimentally. By applying our methodology to uniform probability measures with support set $l_p (p=1,2,\infty\text{ and general})$ ball, we prove negative certified robustness properties with respect to $l_q (q=1, 2, \infty)$ perturbations and present experimental results on the CIFAR-10 dataset with ResNet to validate our theory. It is also worth mentioning that our certification procedure costs only constant computation time.
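To make the bounded-support setting concrete, below is a minimal sketch of the prediction step of a smoothed classifier whose noise is drawn from a uniform measure on an $l_\infty$ ball, i.e., a bounded support set. This is not the paper's certification procedure; the names `base_classifier`, `radius`, and `num_samples` are illustrative assumptions.

```python
# Sketch: majority-vote prediction of a smoothed classifier
# g(x) = argmax_c P(f(x + u) = c), with u ~ Uniform on {u : ||u||_inf <= radius}.
# Assumes `base_classifier` is a torch.nn.Module mapping a batch of inputs to logits.
import torch

def smoothed_predict(base_classifier, x, radius=0.25, num_samples=1000, num_classes=10):
    counts = torch.zeros(num_classes, dtype=torch.long)
    with torch.no_grad():
        for _ in range(num_samples):
            # Uniform noise on the l_inf ball: each coordinate i.i.d. Uniform(-radius, radius).
            u = (torch.rand_like(x) * 2.0 - 1.0) * radius
            logits = base_classifier((x + u).unsqueeze(0))
            counts[logits.argmax(dim=1).item()] += 1
    return counts.argmax().item(), counts
```

Because the noise has bounded support, two nearby inputs can induce smoothing distributions that are not absolutely continuous with respect to each other, which is why likelihood-ratio-based certificates break down and distance-based (Wasserstein or total variation) arguments are used instead.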
Author Information
Yiwen Kou (Peking University)
Qinyuan Zheng (Peking University)
Yisen Wang (Peking University)
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Poster: Certified Adversarial Robustness Under the Bounded Support Set »
  Wed. Jul 20th through Thu the 21st, Room Hall E #1012
More from the Same Authors
- 2021 : Adversarial Interaction Attacks: Fooling AI to Misinterpret Human Intentions »
  Nodens Koren · Xingjun Ma · Qiuhong Ke · Yisen Wang · James Bailey
- 2021 : Demystifying Adversarial Training via A Unified Probabilistic Framework »
  Yisen Wang · Jiansheng Yang · Zhouchen Lin
- 2022 Poster: CerDEQ: Certifiable Deep Equilibrium Model »
  Mingjie Li · Yisen Wang · Zhouchen Lin
- 2022 Poster: G$^2$CN: Graph Gaussian Convolution Networks with Concentrated Graph Filters »
  Mingjie Li · Xiaojun Guo · Yifei Wang · Yisen Wang · Zhouchen Lin
- 2022 Poster: Optimization-Induced Graph Implicit Nonlinear Diffusion »
  Qi Chen · Yifei Wang · Yisen Wang · Jiansheng Yang · Zhouchen Lin
- 2022 Spotlight: CerDEQ: Certifiable Deep Equilibrium Model »
  Mingjie Li · Yisen Wang · Zhouchen Lin
- 2022 Spotlight: Optimization-Induced Graph Implicit Nonlinear Diffusion »
  Qi Chen · Yifei Wang · Yisen Wang · Jiansheng Yang · Zhouchen Lin
- 2022 Spotlight: G$^2$CN: Graph Gaussian Convolution Networks with Concentrated Graph Filters »
  Mingjie Li · Xiaojun Guo · Yifei Wang · Yisen Wang · Zhouchen Lin
- 2021 : Discussion Panel #1 »
  Hang Su · Matthias Hein · Liwei Wang · Sven Gowal · Jan Hendrik Metzen · Henry Liu · Yisen Wang
- 2021 Poster: GBHT: Gradient Boosting Histogram Transform for Density Estimation »
  Jingyi Cui · Hanyuan Hang · Yisen Wang · Zhouchen Lin
- 2021 Poster: Leveraged Weighted Loss for Partial Label Learning »
  Hongwei Wen · Jingyi Cui · Hanyuan Hang · Jiabin Liu · Yisen Wang · Zhouchen Lin
- 2021 Spotlight: GBHT: Gradient Boosting Histogram Transform for Density Estimation »
  Jingyi Cui · Hanyuan Hang · Yisen Wang · Zhouchen Lin
- 2021 Oral: Leveraged Weighted Loss for Partial Label Learning »
  Hongwei Wen · Jingyi Cui · Hanyuan Hang · Jiabin Liu · Yisen Wang · Zhouchen Lin
- 2021 Poster: Can Subnetwork Structure Be the Key to Out-of-Distribution Generalization? »
  Dinghuai Zhang · Kartik Ahuja · Yilun Xu · Yisen Wang · Aaron Courville
- 2021 Oral: Can Subnetwork Structure Be the Key to Out-of-Distribution Generalization? »
  Dinghuai Zhang · Kartik Ahuja · Yilun Xu · Yisen Wang · Aaron Courville