In modern image semantic segmentation models, large receptive fields are used to achieve better segmentation performance. Because directly using large convolution kernels is inefficient, techniques such as dilated convolution and attention have been developed to increase the receptive field of deep learning models. However, a large receptive field introduces a new attack vector for adversarial attacks on segmentation and object detection models. In this work, we demonstrate that a large receptive field exposes models to new risks. To show its serious consequences, we propose a new attack, the remote adversarial patch attack, which misleads the prediction results for a targeted object without directly accessing the object or adding adversarial perturbation to it. We conduct comprehensive experiments evaluating the attack on models with different receptive field sizes, reducing the mIoU by 30% to 100%. Finally, we apply our remote adversarial patch attack in a physical-world setting. We show that with the adversarial patch printed on the road, the attack is able to remove the target vehicle from the prediction at different positions that are not known in advance.
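The receptive-field growth the abstract refers to can be illustrated with the standard formula for stacked convolutions, where each layer adds (kernel − 1) × dilation × cumulative-stride pixels to the theoretical receptive field. The layer configurations below are hypothetical examples, not the architectures evaluated in the paper; this is a minimal sketch of why dilation enlarges the region a remote patch can influence.

```python
# Sketch: theoretical receptive field of a stack of convolutions.
# Layer specs (kernel, stride, dilation) are illustrative assumptions.
def receptive_field(layers):
    """Return the theoretical receptive field of stacked conv layers.

    layers: list of (kernel_size, stride, dilation) tuples.
    """
    r = 1  # receptive field size in input pixels
    j = 1  # cumulative stride ("jump") of the current feature map
    for k, s, d in layers:
        r += (k - 1) * d * j
        j *= s
    return r

plain = [(3, 1, 1)] * 4                        # four plain 3x3 convs
dilated = [(3, 1, 2 ** i) for i in range(4)]   # dilations 1, 2, 4, 8

print(receptive_field(plain))    # 9
print(receptive_field(dilated))  # 31
```

With the same depth and parameter count, exponentially increasing dilations more than triple the receptive field here, which is exactly the property that lets a patch placed far from the target object still reach its prediction.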
Author Information
Yulong Cao (University of Michigan, Ann Arbor)
Jiachen Sun (University of Michigan)
Chaowei Xiao (University of Michigan, Ann Arbor)
Qi Chen (University of California, Irvine)
Qi Alfred Chen is an Assistant Professor in the Department of Computer Science at the University of California, Irvine. His research interest is network and systems security, and his major research theme is addressing security challenges through systematic problem analysis and mitigation. His research has discovered and mitigated security problems in systems such as next-generation transportation systems, smartphone OSes, network protocols, DNS, GUI systems, and access control systems. Currently, his focus is on smart systems and IoT, including transportation and autonomous vehicle systems. His work has had high impact in both academia and industry, with over 10 top-tier conference papers, a DHS US-CERT alert, multiple CVEs, and over 50 news articles by major news media such as Fortune and BBC News. Chen received his Ph.D. from the University of Michigan in 2018.
Zhuoqing Morley Mao (University of Michigan)
More from the Same Authors
- 2021 : Improving Adversarial Robustness in 3D Point Cloud Classification via Self-Supervisions »
  Jiachen Sun · Yulong Cao · Christopher Choy · Zhiding Yu · Chaowei Xiao · Anima Anandkumar · Zhuoqing Morley Mao
- 2021 : Auditing AI models for Verified Deployment under Semantic Specifications »
  Homanga Bharadhwaj · De-An Huang · Chaowei Xiao · Anima Anandkumar · Animesh Garg
- 2023 Poster: CodeIPPrompt: Intellectual Property Infringement Assessment of Code Language Models »
  Zhiyuan Yu · Yuhao Wu · Ning Zhang · Chenguang Wang · Yevgeniy Vorobeychik · Chaowei Xiao
- 2023 Poster: A Critical Revisit of Adversarial Robustness in 3D Point Cloud Recognition with Diffusion-Driven Purification »
  Jiachen Sun · Jiongxiao Wang · Weili Nie · Zhiding Yu · Chaowei Xiao · Zhuoqing Morley Mao
- 2022 Poster: Diffusion Models for Adversarial Purification »
  Weili Nie · Brandon Guo · Yujia Huang · Chaowei Xiao · Arash Vahdat · Animashree Anandkumar
- 2022 Spotlight: Diffusion Models for Adversarial Purification »
  Weili Nie · Brandon Guo · Yujia Huang · Chaowei Xiao · Arash Vahdat · Animashree Anandkumar
- 2022 Poster: Understanding The Robustness in Vision Transformers »
  Zhou Daquan · Zhiding Yu · Enze Xie · Chaowei Xiao · Animashree Anandkumar · Jiashi Feng · Jose M. Alvarez
- 2022 Spotlight: Understanding The Robustness in Vision Transformers »
  Zhou Daquan · Zhiding Yu · Enze Xie · Chaowei Xiao · Animashree Anandkumar · Jiashi Feng · Jose M. Alvarez
- 2021 : Contributed Talk-4. Auditing AI models for Verified Deployment under Semantic Specifications »
  Chaowei Xiao
- 2021 : Contributed Talk-3. FERMI: Fair Empirical Risk Minimization Via Exponential Rényi Mutual Information »
  Chaowei Xiao
- 2021 : Contributed Talk-2. Do Humans Trust Advice More if it Comes from AI? An Analysis of Human-AI Interactions »
  Chaowei Xiao
- 2021 : Kai-Wei Chang. Societal Bias in Language Generation »
  Chaowei Xiao
- 2021 : Contributed Talk-1. Machine Learning API Shift Assessments »
  Chaowei Xiao
- 2021 : Nicolas Papernot. What Does it Mean for ML to be Trustworthy »
  Chaowei Xiao
- 2021 : Olga Russakovsky. Revealing, Quantifying, Analyzing and Mitigating Bias in Visual Recognition »
  Chaowei Xiao
- 2021 : Jun Zhu. Understand and Benchmark Adversarial Robustness of Deep Learning »
  Chaowei Xiao
- 2021 : Anima Anandkumar. Opening remarks »
  Chaowei Xiao
- 2021 Workshop: Workshop on Socially Responsible Machine Learning »
  Chaowei Xiao · Animashree Anandkumar · Mingyan Liu · Dawn Song · Raquel Urtasun · Jieyu Zhao · Xueru Zhang · Cihang Xie · Xinyun Chen · Bo Li