
Workshop on Socially Responsible Machine Learning

Delving into the Remote Adversarial Patch in Semantic Segmentation

Yulong Cao · Jiachen Sun · Chaowei Xiao · Qi Chen · Zhuoqing Morley Mao


In modern image semantic segmentation models, a large receptive field is used to achieve better segmentation performance. Because directly using large convolution kernels is inefficient, techniques such as dilated convolution and attention have been developed to increase the receptive field of deep learning models. However, a large receptive field also introduces a new attack vector for adversarial attacks on segmentation and object detection models. In this work, we demonstrate that a large receptive field exposes these models to new risks. To show the serious consequences, we propose a new attack, the remote adversarial patch attack, which misleads the prediction results for a targeted object without directly accessing the object or adding adversarial perturbation onto it. We conduct comprehensive experiments evaluating the attack on models with different receptive field sizes, reducing the mIoU by 30% to 100%. Finally, we apply the remote adversarial patch attack in a physical-world setting and show that, with the adversarial patch printed on the road, it can remove a target vehicle at positions that are not known in advance.
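The abstract's premise is that dilated convolution enlarges the receptive field without enlarging the kernel. A minimal sketch of that effect, using the standard receptive-field recurrence for stacked convolutions (the `receptive_field` helper and the layer configurations are illustrative, not from the paper):

```python
def receptive_field(layers):
    """Receptive field (in input pixels) of stacked conv layers.

    layers: list of (kernel_size, stride, dilation) tuples.
    A dilated kernel of size k spans d*(k-1)+1 input pixels.
    """
    rf, jump = 1, 1  # jump = cumulative stride in input coordinates
    for k, s, d in layers:
        k_eff = d * (k - 1) + 1        # effective (dilated) kernel extent
        rf += (k_eff - 1) * jump       # each layer widens the field
        jump *= s
    return rf

# Three plain 3x3 convs, stride 1: receptive field 7.
plain = receptive_field([(3, 1, 1)] * 3)
# Same depth with dilations 1, 2, 4: receptive field 15.
dilated = receptive_field([(3, 1, 1), (3, 1, 2), (3, 1, 4)])
print(plain, dilated)  # 7 15
```

The dilated stack sees more than twice as many input pixels at the same depth and parameter count, which is exactly the growth that lets a patch placed far from the target object influence its prediction.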
