

Oral

LaVAN: Localized and Visible Adversarial Noise

Danny Karmon · Daniel Zoran · Yoav Goldberg

Abstract:

Most works on adversarial examples for deep-learning based image classifiers use noise that, while small, covers the entire image. We explore the case where the noise is allowed to be visible but confined to a small, localized patch of the image, without covering any of the main object(s) in the image. We show that it is possible to generate localized adversarial noises that cover only 2% of the pixels in the image, none of them over the main object, and that are transferable across images and locations, and successfully fool a state-of-the-art Inception v3 model with very high success rates.
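For concreteness, the optimization the abstract describes can be sketched as follows: hold the network and the image fixed, and optimize only the pixels of a small patch pasted at a fixed location so that the classifier's prediction flips. Below is a minimal PyTorch sketch, assuming a pretrained torchvision Inception v3; the patch size, location, target class, learning rate, and plain cross-entropy loss are illustrative stand-ins, not the paper's exact setup, and input normalization is omitted for brevity.

```python
import torch
import torch.nn.functional as F
from torchvision import models

device = "cuda" if torch.cuda.is_available() else "cpu"

# Frozen, pretrained Inception v3 (ImageNet); expects 299x299 inputs.
model = models.inception_v3(weights="IMAGENET1K_V1").to(device).eval()
for p in model.parameters():
    p.requires_grad_(False)

# Placeholder image; in practice this would be a real, preprocessed photo.
image = torch.rand(1, 3, 299, 299, device=device)
target = torch.tensor([123], device=device)  # illustrative target class

# A square patch covering ~2% of the pixels: sqrt(0.02) * 299 ~= 42 px per
# side, placed in a corner so it does not overlap the main object.
side, top, left = 42, 8, 8
patch = torch.rand(1, 3, side, side, device=device, requires_grad=True)
optimizer = torch.optim.Adam([patch], lr=0.05)

for step in range(500):
    adv = image.clone()
    # Paste the patch; gradients flow only into the patch pixels.
    adv[:, :, top:top + side, left:left + side] = patch
    loss = F.cross_entropy(model(adv), target)  # push toward target class
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        patch.clamp_(0.0, 1.0)  # keep the patch a valid, visible image region
```

Note that, unlike standard adversarial perturbations, no norm constraint is imposed on the patch itself: it is fully visible, and the locality constraint (a fixed small region) replaces the usual imperceptibility constraint.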
