

Talk in Workshop: New Frontiers in Adversarial Machine Learning

Robust physical perturbation attacks and defenses for deep learning visual classifiers

Atul Prakash


Abstract:

Deep neural networks are increasingly used in safety-critical situations such as autonomous driving. Our prior work at CVPR 2018 showed that robust physical adversarial examples can be crafted that fool state-of-the-art vision classifiers for domains such as traffic signs. Unfortunately, crafting those attacks still required manual selection of appropriate masks and white-box access to the model being tested for robustness. We describe a recently developed system called GRAPHITE that can be a useful aid in automatically generating candidates for robust physical perturbation attacks. GRAPHITE can generate attacks in not only white-box but also black-box hard-label scenarios. In hard-label black-box scenarios, GRAPHITE is able to find successful small-patch attacks with an average of only 566 queries for 92.2% of victim-target pairs on the GTSRB dataset, about one to three orders of magnitude fewer queries than previously reported hard-label black-box attacks on similar datasets. We discuss potential implications of GRAPHITE as a helpful tool for developing and evaluating defenses against robust physical perturbation attacks. For instance, GRAPHITE is also able to find successful attacks using perturbations that modify small areas of the input image against PatchGuard, a recently proposed defense against patch-based attacks.
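To make the hard-label black-box setting concrete, the sketch below illustrates an attacker who can only observe the model's top-1 label (no confidence scores) and must budget queries while searching for a small adversarial patch. This is a minimal toy baseline under assumed names and parameters (random patch placement, a stand-in classifier), not the GRAPHITE algorithm itself.

```python
import numpy as np

def hard_label_query(classify, image, query_counter):
    """Query the victim model; only the predicted label (no scores) is observed."""
    query_counter[0] += 1
    return classify(image)

def random_patch_attack(classify, image, target_label, patch_size=8,
                        max_queries=1000, seed=0):
    """Illustrative hard-label black-box search for a small adversarial patch.

    Repeatedly places a randomly colored square patch at a random location and
    queries the model until it outputs the target label or the query budget is
    exhausted. A toy baseline for the threat model, not the GRAPHITE method.
    """
    rng = np.random.default_rng(seed)
    h, w, _ = image.shape
    queries = [0]
    while queries[0] < max_queries:
        x = rng.integers(0, h - patch_size)
        y = rng.integers(0, w - patch_size)
        color = rng.random(3)
        candidate = image.copy()
        candidate[x:x + patch_size, y:y + patch_size] = color
        if hard_label_query(classify, candidate, queries) == target_label:
            return candidate, queries[0]
    return None, queries[0]

if __name__ == "__main__":
    # Stand-in "model": labels an image by which horizontal third has the highest mean.
    def toy_classifier(img):
        thirds = np.array_split(img, 3, axis=0)
        return int(np.argmax([t.mean() for t in thirds]))

    img = np.zeros((32, 32, 3))
    adv, used = random_patch_attack(toy_classifier, img, target_label=2)
    print("success:", adv is not None, "queries used:", used)
```

Query-efficient systems like GRAPHITE replace this naive random search with far more structured mask selection and perturbation optimization, which is how the reported query counts stay in the hundreds rather than the tens of thousands.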

Bio:

Atul Prakash is a Professor of Computer Science and Engineering at the University of Michigan, Ann Arbor, with research interests in computer security and privacy. He received a Bachelor of Technology in Electrical Engineering from IIT Delhi, India, and a Ph.D. in Computer Science from the University of California, Berkeley. His recent research includes security analysis of emerging IoT software stacks, mobile payment infrastructure in India, and the vulnerability of deep learning classifiers to physical perturbations. At the University of Michigan, he has served as Director of the Software Systems Lab, led the creation of the new Data Science undergraduate program, and is currently serving as Associate Chair of the CSE Division.
