Poster
LaVAN: Localized and Visible Adversarial Noise
Danny Karmon · Daniel Zoran · Yoav Goldberg

Thu Jul 12 09:15 AM -- 12:00 PM (PDT) @ Hall B #116

Most works on adversarial examples for deep-learning-based image classifiers use noise that, while small, covers the entire image. We explore the case where the noise is allowed to be visible but confined to a small, localized patch of the image, without covering any of the main object(s) in the image. We show that it is possible to generate localized adversarial noise that covers only 2% of the pixels in the image, none of them over the main object, that is transferable across images and locations, and that fools a state-of-the-art Inception v3 model with very high success rates.
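The sketch below illustrates the general idea of optimizing a visible, localized patch rather than image-wide noise. It is not the authors' code or their exact loss: the patch location, patch size (roughly 2% of a 299x299 input), learning rate, iteration count, target class index, and the placeholder image are all illustrative assumptions, and input normalization is omitted for brevity.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Minimal sketch of localized adversarial noise: optimize only a small patch
# of the image so the classifier predicts a chosen target class.
# All hyperparameters below are illustrative assumptions, not the paper's.

model = models.inception_v3(weights=models.Inception_V3_Weights.DEFAULT).eval()
for p in model.parameters():
    p.requires_grad_(False)

image = torch.rand(1, 3, 299, 299)   # placeholder input image
target = torch.tensor([281])         # hypothetical target class index
y, x, s = 10, 10, 42                 # patch corner and side; 42*42 is ~2% of 299*299 pixels

patch = torch.rand(1, 3, s, s, requires_grad=True)
opt = torch.optim.Adam([patch], lr=0.05)

for _ in range(200):
    adv = image.clone()
    adv[:, :, y:y+s, x:x+s] = patch.clamp(0, 1)  # paste visible noise into the patch region only
    loss = F.cross_entropy(model(adv), target)    # push the prediction toward the target class
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because gradients flow only through the patch region, the rest of the image is left untouched; transferability across images and locations, as described in the abstract, would additionally require training the patch over multiple images and placements.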

Author Information

Danny Karmon (Bar Ilan University)
Daniel Zoran (DeepMind)
Yoav Goldberg (Bar Ilan University)
