Oral
Adversarial camera stickers: A physical camera-based attack on deep learning systems
Juncheng Li · Frank R. Schmidt · Zico Kolter

Tue Jun 11 11:35 AM -- 11:40 AM (PDT) @ Grand Ballroom

Recent work has thoroughly documented the susceptibility of deep learning systems to adversarial examples, but most such instances directly manipulate the digital input to a classifier. Although a smaller line of work has considered physical adversarial attacks, in all cases these involve manipulating the object of interest, i.e., putting a physical sticker on an object to misclassify it, or manufacturing an object specifically intended to be misclassified. In this work we consider an alternative question: is it possible to fool deep classifiers, over all perceived objects of a certain type, by physically manipulating the camera itself? We show that this is indeed possible: by placing a carefully crafted and mainly-translucent sticker over the lens of a camera, one can create universal perturbations of the observed images that are inconspicuous, yet reliably misclassify target objects as a different (targeted) class. To accomplish this, we propose an iterative procedure for both updating the attack perturbation (to make it adversarial for a given classifier), and the threat model itself (to ensure it is physically realizable). For example, we show that we can achieve physically-realizable attacks that fool ImageNet classifiers in a targeted fashion 49.6% of the time. This presents a new class of physically-realizable threat models to consider in the context of adversarially robust machine learning.
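The abstract describes an alternating procedure: take gradient steps on the perturbation so it is adversarial for the classifier, then re-fit the perturbation model so it stays physically realizable as a translucent lens sticker. The following is a minimal, hypothetical PyTorch-style sketch of such an alternating loop; the translucent blending model (`apply_sticker`), the opacity cap in `project_to_physical`, and all parameter names are illustrative assumptions, not the authors' actual formulation.

```python
# Hypothetical sketch of an alternating "attack / physical-realizability" loop.
# The rendering model and projection step are assumptions for illustration only.
import torch
import torch.nn.functional as F

def apply_sticker(images, alpha, color):
    # Blend a translucent pattern into each image.
    # images: (N, 3, H, W) in [0, 1]; alpha: (1, 1, H, W) opacity map;
    # color: (1, 3, 1, 1) sticker color. Assumed alpha-compositing model.
    return (1 - alpha) * images + alpha * color

def attack_step(model, images, target_class, alpha, color, lr=0.01):
    # One gradient step on the perturbation parameters that pushes the
    # classifier toward the chosen target class (targeted attack).
    alpha = alpha.clone().requires_grad_(True)
    color = color.clone().requires_grad_(True)
    logits = model(apply_sticker(images, alpha.clamp(0, 1), color.clamp(0, 1)))
    targets = torch.full((images.size(0),), target_class, dtype=torch.long)
    loss = F.cross_entropy(logits, targets)
    loss.backward()
    with torch.no_grad():
        alpha -= lr * alpha.grad.sign()
        color -= lr * color.grad.sign()
    return alpha.detach(), color.detach()

def project_to_physical(alpha, color, max_opacity=0.3):
    # Stand-in for the "update the threat model" step: constrain the pattern
    # to something a printed translucent sticker could plausibly produce
    # (here, simply capping opacity and clamping color to valid values).
    return alpha.clamp(0, max_opacity), color.clamp(0, 1)

# Alternating loop (hypothetical usage, given a pretrained `model` and a
# batch of images of the source class):
# for _ in range(num_iters):
#     alpha, color = attack_step(model, batch, target_class, alpha, color)
#     alpha, color = project_to_physical(alpha, color)
```

In this sketch the perturbation is universal (shared across the whole batch rather than per-image), which mirrors the abstract's goal of misclassifying all perceived objects of a given type through a single physical sticker.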

Author Information

Juncheng Li (Carnegie Mellon University)
Frank R. Schmidt (Robert Bosch GmbH)
Zico Kolter (Carnegie Mellon University / Bosch Center for AI)
