
Certified robustness against adversarial patch attacks via randomized cropping
Wan-Yi Lin · Fatemeh Sheikholeslami · Jinghao Shi · Leslie Rice · Zico Kolter

This paper proposes a certifiable defense against adversarial patch attacks on image classification. Our approach classifies random crops of the input image independently and takes the majority vote over the crops' predicted classes as the image-level prediction. Leveraging the fact that a patch attack can only influence a bounded number of pixels in the image, we derive certified robustness bounds for the classifier. Our method is particularly effective when realistic transformations are applied to the adversarial patch, such as affine transformations. Such transformations occur naturally when an adversarial patch is physically introduced in a scene. Our method improves upon the current state of the art in defending against patch attacks on CIFAR10 and ImageNet, both in certified accuracy and inference time.
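The cropping-and-voting scheme described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the crop size, number of crops, and the plug-in `classifier` are all assumptions, and the returned vote margin only conveys the intuition behind the certificate (a bounded patch can corrupt only a bounded number of crops, so a large enough margin cannot be flipped).

```python
import numpy as np

def crop_vote_classify(image, classifier, crop_hw, num_crops=25, num_classes=10, seed=0):
    """Majority vote over independently classified random crops.

    `classifier` maps a crop array of shape (H_c, W_c, C) to an integer
    class label. Crop size and count here are illustrative choices,
    not the settings used in the paper.
    """
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    ch, cw = crop_hw
    votes = np.zeros(num_classes, dtype=int)
    for _ in range(num_crops):
        # Sample a uniformly random crop location and classify the crop.
        top = int(rng.integers(0, h - ch + 1))
        left = int(rng.integers(0, w - cw + 1))
        votes[classifier(image[top:top + ch, left:left + cw])] += 1
    order = np.argsort(votes)
    top1, top2 = int(order[-1]), int(order[-2])
    # Certification intuition: a patch of bounded size can overlap (and
    # thus corrupt) at most some number k of crops; if the vote margin
    # between the top two classes exceeds 2k, the majority cannot flip.
    margin = int(votes[top1] - votes[top2])
    return top1, margin
```

For example, with a classifier that always predicts class 0, every crop votes for class 0 and the margin equals the number of crops, so the (trivial) prediction would be certified for any patch small enough to touch fewer than half the crops.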

Author Information

Wan-Yi Lin (Robert Bosch LLC)
Fatemeh Sheikholeslami (Bosch Center for AI)
Jinghao Shi (Carnegie Mellon University)
Leslie Rice (Carnegie Mellon University)
Zico Kolter (Carnegie Mellon University / Bosch Center for AI)
