3D point cloud data is increasingly used in safety-critical applications such as autonomous driving, so the robustness of 3D deep learning models against adversarial attacks is a major concern. In this paper, we systematically study the impact of various self-supervised learning proxy tasks on different architectures and threat models for 3D point clouds. Specifically, we study MLP-based (PointNet), convolution-based (DGCNN), and transformer-based (PCT) 3D architectures. Through comprehensive experiments, we demonstrate that appropriate self-supervision can significantly enhance robustness in 3D point cloud recognition, achieving considerable improvements over the standard adversarial training baseline. Our analysis reveals that local feature learning is desirable for adversarial robustness, since it limits the propagation of point-level input perturbations to the model's final output. This also explains the success of DGCNN and the jigsaw proxy task in achieving 3D robustness.
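As background for the jigsaw proxy task mentioned above: a common formulation partitions a point cloud into k x k x k voxels, shuffles the voxels, and trains the network to predict each point's original voxel index, which is a per-point classification problem that encourages local feature learning. The sketch below is an illustrative NumPy version of that idea, not the authors' code; the helper names `jigsaw_labels` and `shuffle_voxels` are assumptions.

```python
import numpy as np

def jigsaw_labels(points, k=3):
    """Assign each point the index of its voxel in a k x k x k grid.

    These per-point voxel indices serve as the self-supervised targets
    for the jigsaw proxy task.
    """
    # Normalize the cloud into the unit cube [0, 1).
    mins, maxs = points.min(axis=0), points.max(axis=0)
    unit = (points - mins) / (maxs - mins + 1e-9)
    # Quantize each coordinate into k bins -> integer voxel coordinates.
    cells = np.clip((unit * k).astype(int), 0, k - 1)
    # Flatten (x, y, z) voxel coordinates into one label in [0, k^3).
    return cells[:, 0] * k * k + cells[:, 1] * k + cells[:, 2]

def shuffle_voxels(points, labels, k=3, rng=None):
    """Displace each voxel's points to a randomly permuted voxel slot.

    Assumes `points` already lie roughly in the unit cube, so the
    voxel-center offsets are on the right scale.
    """
    rng = rng or np.random.default_rng(0)
    perm = rng.permutation(k ** 3)

    def center(idx):
        # Voxel-center coordinates for flattened voxel indices.
        return (np.stack([idx // (k * k), (idx // k) % k, idx % k],
                         axis=-1) + 0.5) / k

    # Move each point by the offset between its old and new voxel center.
    offset = center(perm[labels]) - center(labels)
    return points + offset
```

A model pretrained (or co-trained) to recover `jigsaw_labels` from the output of `shuffle_voxels` must reason about local neighborhoods, which is the property the abstract connects to adversarial robustness.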
Author Information
Jiachen Sun (University of Michigan)
Yulong Cao (University of Michigan, Ann Arbor)
Christopher Choy (NVIDIA)
Zhiding Yu (NVIDIA)
Zhiding Yu is a Senior Research Scientist at NVIDIA. Before joining NVIDIA in 2018, he received his Ph.D. in ECE from Carnegie Mellon University in 2017 and his M.Phil. in ECE from The Hong Kong University of Science and Technology in 2012. His research interests focus on deep representation learning, weakly/self-supervised learning, transfer learning, and deep structured prediction, with applications to vision and robotics problems.
Chaowei Xiao (University of Michigan, Ann Arbor)
Anima Anandkumar (NVIDIA/Caltech)
Zhuoqing Morley Mao (University of Michigan)
More from the Same Authors
- 2021 : Auditing AI models for Verified Deployment under Semantic Specifications
  Homanga Bharadhwaj · De-An Huang · Chaowei Xiao · Anima Anandkumar · Animesh Garg
- 2021 : Delving into the Remote Adversarial Patch in Semantic Segmentation
  Yulong Cao · Jiachen Sun · Chaowei Xiao · Qi Chen · Zhuoqing Morley Mao
- 2023 : ChatGPT-powered Conversational Drug Editing Using Retrieval and Domain Feedback
  Shengchao Liu · Jiongxiao Wang · Yijin Yang · Chengpeng Wang · Ling Liu · Hongyu Guo · Chaowei Xiao
- 2023 Poster: A Critical Revisit of Adversarial Robustness in 3D Point Cloud Recognition with Diffusion-Driven Purification
  Jiachen Sun · Jiongxiao Wang · Weili Nie · Zhiding Yu · Zhuoqing Morley Mao · Chaowei Xiao
- 2023 Poster: CodeIPPrompt: Intellectual Property Infringement Assessment of Code Language Models
  Zhiyuan Yu · Yuhao Wu · Ning Zhang · Chenguang Wang · Yevgeniy Vorobeychik · Chaowei Xiao
- 2022 Poster: Diffusion Models for Adversarial Purification
  Weili Nie · Brandon Guo · Yujia Huang · Chaowei Xiao · Arash Vahdat · Animashree Anandkumar
- 2022 Spotlight: Diffusion Models for Adversarial Purification
  Weili Nie · Brandon Guo · Yujia Huang · Chaowei Xiao · Arash Vahdat · Animashree Anandkumar
- 2022 Poster: Understanding The Robustness in Vision Transformers
  Zhou Daquan · Zhiding Yu · Enze Xie · Chaowei Xiao · Animashree Anandkumar · Jiashi Feng · Jose M. Alvarez
- 2022 Spotlight: Understanding The Robustness in Vision Transformers
  Zhou Daquan · Zhiding Yu · Enze Xie · Chaowei Xiao · Animashree Anandkumar · Jiashi Feng · Jose M. Alvarez
- 2021 : Contributed Talk-4. Auditing AI models for Verified Deployment under Semantic Specifications
  Chaowei Xiao
- 2021 : Contributed Talk-3. FERMI: Fair Empirical Risk Minimization Via Exponential Rényi Mutual Information
  Chaowei Xiao
- 2021 : Contributed Talk-2. Do Humans Trust Advice More if it Comes from AI? An Analysis of Human-AI Interactions
  Chaowei Xiao
- 2021 : Kai-Wei Chang. Societal Bias in Language Generation
  Chaowei Xiao
- 2021 : Contributed Talk-1. Machine Learning API Shift Assessments
  Chaowei Xiao
- 2021 : Nicolas Papernot. What Does it Mean for ML to be Trustworthy
  Chaowei Xiao
- 2021 : Olga Russakovsky. Revealing, Quantifying, Analyzing and Mitigating Bias in Visual Recognition
  Chaowei Xiao
- 2021 : Jun Zhu. Understand and Benchmark Adversarial Robustness of Deep Learning
  Chaowei Xiao
- 2021 : Anima Anandkumar. Opening remarks
  Chaowei Xiao
- 2021 Workshop: Workshop on Socially Responsible Machine Learning
  Chaowei Xiao · Animashree Anandkumar · Mingyan Liu · Dawn Song · Raquel Urtasun · Jieyu Zhao · Xueru Zhang · Cihang Xie · Xinyun Chen · Bo Li
- 2021 Poster: Image-Level or Object-Level? A Tale of Two Resampling Strategies for Long-Tailed Detection
  Nadine Chang · Zhiding Yu · Yu-Xiong Wang · Anima Anandkumar · Sanja Fidler · Jose Alvarez
- 2021 Spotlight: Image-Level or Object-Level? A Tale of Two Resampling Strategies for Long-Tailed Detection
  Nadine Chang · Zhiding Yu · Yu-Xiong Wang · Anima Anandkumar · Sanja Fidler · Jose Alvarez
- 2021 Poster: SECANT: Self-Expert Cloning for Zero-Shot Generalization of Visual Policies
  Jim Fan · Guanzhi Wang · De-An Huang · Zhiding Yu · Li Fei-Fei · Yuke Zhu · Anima Anandkumar
- 2021 Spotlight: SECANT: Self-Expert Cloning for Zero-Shot Generalization of Visual Policies
  Jim Fan · Guanzhi Wang · De-An Huang · Zhiding Yu · Li Fei-Fei · Yuke Zhu · Anima Anandkumar
- 2020 Poster: Automated Synthetic-to-Real Generalization
  Wuyang Chen · Zhiding Yu · Zhangyang "Atlas" Wang · Anima Anandkumar
- 2020 Poster: Angular Visual Hardness
  Beidi Chen · Weiyang Liu · Zhiding Yu · Jan Kautz · Anshumali Shrivastava · Animesh Garg · Anima Anandkumar