Point cloud models with neural network architectures have achieved great success and are widely used in safety-critical applications, such as LiDAR-based recognition systems in autonomous vehicles. However, such models have been shown to be vulnerable to adversarial attacks that apply stealthy semantic transformations, such as rotation and tapering, to mislead model predictions. In this paper, we propose a transformation-specific smoothing framework, TPC, which provides tight and scalable robustness guarantees for point cloud models against semantic transformation attacks. We first categorize common 3D transformations into two categories: composable (e.g., rotation) and indirectly composable (e.g., tapering), and present generic robustness certification strategies for both. We then specify unique certification protocols for a range of specific semantic transformations and derive strong robustness guarantees. Extensive experiments on several common 3D transformations show that TPC significantly outperforms the state of the art. For example, our framework boosts the certified accuracy against twisting transformations along the z-axis (within ±20°) from 20.3% to 83.8%. Code and models are available at https://github.com/Qianhewu/Point-Cloud-Smoothing.
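To illustrate the core idea of smoothing over a composable transformation such as rotation, the following is a minimal sketch (not the authors' released implementation): a base point-cloud classifier is wrapped in a smoothed classifier that samples random z-axis rotation angles from a Gaussian smoothing distribution and returns the majority-vote label. The function names (`rotate_z`, `smoothed_predict`) and the choice of `sigma` are illustrative assumptions; the actual certification procedure in the paper additionally bounds the top-class probability to derive a robustness radius.

```python
import numpy as np

def rotate_z(points, theta):
    # Rotate an (N, 3) point cloud by angle theta (radians) about the z-axis.
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])
    return points @ R.T

def smoothed_predict(classifier, points, sigma=0.2, n_samples=100, rng=None):
    # Majority vote of the base classifier over rotations whose angles are
    # drawn from N(0, sigma^2) -- the transformation-specific smoothing
    # distribution for a composable transformation (rotation).
    rng = np.random.default_rng(rng)
    votes = {}
    for _ in range(n_samples):
        theta = rng.normal(0.0, sigma)
        label = classifier(rotate_z(points, theta))
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)
```

Because rotations about a fixed axis compose (a rotation of a rotated cloud is again a rotation), certifying the smoothed classifier against bounded rotations reduces to standard randomized-smoothing arguments over the angle; indirectly composable transformations like tapering require the additional machinery the paper develops.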
Author Information
Wenda Chu (Tsinghua University)
Linyi Li (UIUC)
Bo Li (UIUC)

Dr. Bo Li is an assistant professor in the Department of Computer Science at the University of Illinois Urbana-Champaign. She is the recipient of the IJCAI Computers and Thought Award, an Alfred P. Sloan Research Fellowship, AI's 10 to Watch, the NSF CAREER Award, the MIT Technology Review TR-35 Award, the Dean's Award for Excellence in Research, the C.W. Gear Outstanding Junior Faculty Award, the Intel Rising Star Award, the Symantec Research Labs Fellowship, the Rising Star Award, research awards from technology companies such as Amazon, Facebook, Intel, IBM, and eBay, and best-paper awards at several top machine learning and security conferences. Her research focuses on both theoretical and practical aspects of trustworthy machine learning, at the intersection of machine learning, security, privacy, and game theory. She has designed several scalable frameworks for trustworthy machine learning and privacy-preserving data publishing. Her work has been featured by major publications and media outlets such as Nature, Wired, Fortune, and The New York Times.
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Poster: TPC: Transformation-Specific Smoothing for Point Cloud Models »
  Tue, Jul 19 through Wed, Jul 20, Hall E #211
More from the Same Authors
- 2022 : Group Distributionally Robust Reinforcement Learning with Hierarchical Latent Variables »
  Mengdi Xu · Peide Huang · Visak Kumar · Jielin Qiu · Chao Fang · Kuan-Hui Lee · Xuewei Qi · Henry Lam · Bo Li · Ding Zhao
- 2022 : Paper 10: CausalAF: Causal Autoregressive Flow for Safety-Critical Scenes Generation »
  Wenhao Ding · Haohong Lin · Bo Li · Ding Zhao · Hitesh Arora
- 2023 Workshop: Federated Learning and Analytics in Practice: Algorithms, Systems, Applications, and Opportunities »
  Zheng Xu · Peter Kairouz · Bo Li · Tian Li · John Nguyen · Jianyu Wang · Shiqiang Wang · Ayfer Ozgur
- 2023 Workshop: Knowledge and Logical Reasoning in the Era of Data-driven Learning »
  Nezihe Merve Gürel · Bo Li · Theodoros Rekatsinas · Beliz Gunel · Alberto Sangiovanni-Vincentelli · Paroma Varma
- 2022 : Paper 15: On the Robustness of Safe Reinforcement Learning under Observational Perturbations »
  Zuxin Liu · Zhepeng Cen · Huan Zhang · Jie Tan · Bo Li · Ding Zhao
- 2022 Poster: Constrained Variational Policy Optimization for Safe Reinforcement Learning »
  Zuxin Liu · Zhepeng Cen · Vladislav Isenbaev · Wei Liu · Steven Wu · Bo Li · Ding Zhao
- 2022 Poster: Provable Domain Generalization via Invariant-Feature Subspace Recovery »
  Haoxiang Wang · Haozhe Si · Bo Li · Han Zhao
- 2022 Spotlight: Constrained Variational Policy Optimization for Safe Reinforcement Learning »
  Zuxin Liu · Zhepeng Cen · Vladislav Isenbaev · Wei Liu · Steven Wu · Bo Li · Ding Zhao
- 2022 Spotlight: Provable Domain Generalization via Invariant-Feature Subspace Recovery »
  Haoxiang Wang · Haozhe Si · Bo Li · Han Zhao
- 2022 Poster: How to Steer Your Adversary: Targeted and Efficient Model Stealing Defenses with Gradient Redirection »
  Mantas Mazeika · Bo Li · David Forsyth
- 2022 Poster: Adversarially Robust Models may not Transfer Better: Sufficient Conditions for Domain Transferability from the View of Regularization »
  Xiaojun Xu · Yibo Zhang · Evelyn Ma · Hyun Ho Son · Oluwasanmi Koyejo · Bo Li
- 2022 Poster: Understanding Gradual Domain Adaptation: Improved Analysis, Optimal Path and Beyond »
  Haoxiang Wang · Bo Li · Han Zhao
- 2022 Spotlight: How to Steer Your Adversary: Targeted and Efficient Model Stealing Defenses with Gradient Redirection »
  Mantas Mazeika · Bo Li · David Forsyth
- 2022 Spotlight: Adversarially Robust Models may not Transfer Better: Sufficient Conditions for Domain Transferability from the View of Regularization »
  Xiaojun Xu · Yibo Zhang · Evelyn Ma · Hyun Ho Son · Oluwasanmi Koyejo · Bo Li
- 2022 Spotlight: Understanding Gradual Domain Adaptation: Improved Analysis, Optimal Path and Beyond »
  Haoxiang Wang · Bo Li · Han Zhao
- 2022 Poster: Certifying Out-of-Domain Generalization for Blackbox Functions »
  Maurice Weber · Linyi Li · Boxin Wang · Zhikuan Zhao · Bo Li · Ce Zhang
- 2022 Poster: Double Sampling Randomized Smoothing »
  Linyi Li · Jiawei Zhang · Tao Xie · Bo Li
- 2022 Spotlight: Double Sampling Randomized Smoothing »
  Linyi Li · Jiawei Zhang · Tao Xie · Bo Li
- 2022 Spotlight: Certifying Out-of-Domain Generalization for Blackbox Functions »
  Maurice Weber · Linyi Li · Boxin Wang · Zhikuan Zhao · Bo Li · Ce Zhang
- 2021 : Discussion Panel #2 »
  Bo Li · Nicholas Carlini · Andrzej Banburski · Kamalika Chaudhuri · Will Xiao · Cihang Xie
- 2021 Workshop: A Blessing in Disguise: The Prospects and Perils of Adversarial Machine Learning »
  Hang Su · Yinpeng Dong · Tianyu Pang · Eric Wong · Zico Kolter · Shuo Feng · Bo Li · Henry Liu · Dan Hendrycks · Francesco Croce · Leslie Rice · Tian Tian
- 2021 Poster: Uncovering the Connections Between Adversarial Transferability and Knowledge Transferability »
  Kaizhao Liang · Yibo Zhang · Boxin Wang · Zhuolin Yang · Oluwasanmi Koyejo · Bo Li
- 2021 Poster: CRFL: Certifiably Robust Federated Learning against Backdoor Attacks »
  Chulin Xie · Minghao Chen · Pin-Yu Chen · Bo Li
- 2021 Poster: Progressive-Scale Boundary Blackbox Attack via Projective Gradient Estimation »
  Jiawei Zhang · Linyi Li · Huichen Li · Xiaolu Zhang · Shuang Yang · Bo Li
- 2021 Poster: Bridging Multi-Task Learning and Meta-Learning: Towards Efficient Training and Effective Adaptation »
  Haoxiang Wang · Han Zhao · Bo Li
- 2021 Spotlight: Progressive-Scale Boundary Blackbox Attack via Projective Gradient Estimation »
  Jiawei Zhang · Linyi Li · Huichen Li · Xiaolu Zhang · Shuang Yang · Bo Li
- 2021 Spotlight: Uncovering the Connections Between Adversarial Transferability and Knowledge Transferability »
  Kaizhao Liang · Yibo Zhang · Boxin Wang · Zhuolin Yang · Oluwasanmi Koyejo · Bo Li
- 2021 Spotlight: Bridging Multi-Task Learning and Meta-Learning: Towards Efficient Training and Effective Adaptation »
  Haoxiang Wang · Han Zhao · Bo Li
- 2021 Spotlight: CRFL: Certifiably Robust Federated Learning against Backdoor Attacks »
  Chulin Xie · Minghao Chen · Pin-Yu Chen · Bo Li
- 2021 Poster: Knowledge Enhanced Machine Learning Pipeline against Diverse Adversarial Attacks »
  Nezihe Merve Gürel · Xiangyu Qi · Luka Rimanic · Ce Zhang · Bo Li
- 2021 Spotlight: Knowledge Enhanced Machine Learning Pipeline against Diverse Adversarial Attacks »
  Nezihe Merve Gürel · Xiangyu Qi · Luka Rimanic · Ce Zhang · Bo Li
- 2020 Poster: Improving Robustness of Deep-Learning-Based Image Reconstruction »
  Ankit Raj · Yoram Bresler · Bo Li