Poster
GSmooth: Certified Robustness against Semantic Transformations via Generalized Randomized Smoothing
Zhongkai Hao · Chengyang Ying · Yinpeng Dong · Hang Su · Jian Song · Jun Zhu
Certified defenses such as randomized smoothing have shown promise for building reliable machine learning systems against $\ell_p$ norm bounded attacks. However, existing methods are insufficient for, or unable to, provably defend against semantic transformations, especially those without closed-form expressions (such as defocus blur and pixelate), which are more common in practice and often unrestricted. To fill this gap, we propose generalized randomized smoothing (GSmooth), a unified theoretical framework for certifying robustness against general semantic transformations via a novel dimension augmentation strategy. Under the GSmooth framework, we present a scalable algorithm that uses a surrogate image-to-image network to approximate the complex transformation. The surrogate model provides a powerful tool for studying the properties of semantic transformations and certifying robustness. Experimental results on several datasets demonstrate the effectiveness of our approach for robustness certification against multiple kinds of semantic transformations and corruptions, which is not achievable by alternative baselines.
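To make the starting point concrete, the following is a minimal sketch of standard randomized smoothing (the $\ell_p$-certification technique that GSmooth generalizes), not the paper's GSmooth algorithm itself: the smoothed classifier predicts by majority vote over Gaussian-perturbed copies of the input. The `toy_classifier` and all parameter values are hypothetical, chosen only for illustration.

```python
import numpy as np

def smoothed_predict(base_classifier, x, sigma=0.25, n_samples=1000, seed=0):
    """Majority-vote prediction of the smoothed classifier
    g(x) = argmax_c P(f(x + eps) = c), with eps ~ N(0, sigma^2 I).
    This is the standard randomized-smoothing predictor, not GSmooth."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, sigma, size=(n_samples,) + x.shape)
    preds = np.array([base_classifier(x + e) for e in noise])
    counts = np.bincount(preds)          # votes per class label
    return int(np.argmax(counts))        # most frequent prediction

# Hypothetical base classifier: thresholds the mean pixel value.
def toy_classifier(x):
    return int(x.mean() > 0.5)

x = np.full((4, 4), 0.7)                 # toy "image" well inside class 1
print(smoothed_predict(toy_classifier, x))  # → 1
```

GSmooth extends this idea beyond additive $\ell_p$ noise: because transformations like defocus blur have no closed form, the paper replaces them with a learned surrogate image-to-image network and certifies through that surrogate via dimension augmentation.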
Author Information
Zhongkai Hao (Tsinghua University)
Chengyang Ying (Tsinghua University)
Yinpeng Dong (Tsinghua University)
Hang Su (Tsinghua University)
Jian Song (Tsinghua University)
Jun Zhu (Tsinghua University)
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Spotlight: GSmooth: Certified Robustness against Semantic Transformations via Generalized Randomized Smoothing
  Tue. Jul 19th 08:40 -- 08:45 PM, Room 327 - 329
More from the Same Authors
- 2022 Poster: Maximum Likelihood Training for Score-based Diffusion ODEs by High Order Denoising Score Matching
  Cheng Lu · Kaiwen Zheng · Fan Bao · Jianfei Chen · Chongxuan Li · Jun Zhu
- 2022 Spotlight: Maximum Likelihood Training for Score-based Diffusion ODEs by High Order Denoising Score Matching
  Cheng Lu · Kaiwen Zheng · Fan Bao · Jianfei Chen · Chongxuan Li · Jun Zhu
- 2022 Poster: Estimating the Optimal Covariance with Imperfect Mean in Diffusion Probabilistic Models
  Fan Bao · Chongxuan Li · Jiacheng Sun · Jun Zhu · Bo Zhang
- 2022 Spotlight: Estimating the Optimal Covariance with Imperfect Mean in Diffusion Probabilistic Models
  Fan Bao · Chongxuan Li · Jiacheng Sun · Jun Zhu · Bo Zhang
- 2021: Discussion Panel #1
  Hang Su · Matthias Hein · Liwei Wang · Sven Gowal · Jan Hendrik Metzen · Henry Liu · Yisen Wang
- 2021 Workshop: A Blessing in Disguise: The Prospects and Perils of Adversarial Machine Learning
  Hang Su · Yinpeng Dong · Tianyu Pang · Eric Wong · Zico Kolter · Shuo Feng · Bo Li · Henry Liu · Dan Hendrycks · Francesco Croce · Leslie Rice · Tian Tian
- 2021: Opening Remarks
  Hang Su
- 2021 Workshop: ICML Workshop on Theoretic Foundation, Criticism, and Application Trend of Explainable AI
  Quanshi Zhang · Tian Han · Lixin Fan · Zhanxing Zhu · Hang Su · Ying Nian Wu
- 2019 Poster: Scalable Training of Inference Networks for Gaussian-Process Models
  Jiaxin Shi · Mohammad Emtiyaz Khan · Jun Zhu
- 2019 Poster: Understanding and Accelerating Particle-Based Variational Inference
  Chang Liu · Jingwei Zhuo · Pengyu Cheng · RUIYI (ROY) ZHANG · Jun Zhu
- 2019 Oral: Understanding and Accelerating Particle-Based Variational Inference
  Chang Liu · Jingwei Zhuo · Pengyu Cheng · RUIYI (ROY) ZHANG · Jun Zhu
- 2019 Oral: Scalable Training of Inference Networks for Gaussian-Process Models
  Jiaxin Shi · Mohammad Emtiyaz Khan · Jun Zhu
- 2019 Poster: Understanding MCMC Dynamics as Flows on the Wasserstein Space
  Chang Liu · Jingwei Zhuo · Jun Zhu
- 2019 Oral: Understanding MCMC Dynamics as Flows on the Wasserstein Space
  Chang Liu · Jingwei Zhuo · Jun Zhu