Poster in Workshop: A Blessing in Disguise: The Prospects and Perils of Adversarial Machine Learning
Examining the Human Perceptibility of Black-Box Adversarial Attacks on Face Recognition
Benjamin Spetter-Goldstein · Nataniel Ruiz · Sarah Bargal
Abstract:
The modern open internet contains billions of public pictures of human faces, especially on social media websites used by half the world's population. In this context, Face Recognition (FR) systems can match faces to specific names and identities, raising serious privacy concerns. Adversarial attacks are a promising way to grant users privacy from FR systems by disrupting their ability to recognize faces. Yet such attacks can be perceptible to human observers, especially under the more challenging black-box threat model. In the literature, the justification for the imperceptibility of such attacks hinges on bounding metrics such as $\ell_p$ norms. However, little research examines how well these norms align with human perception. By measuring both the effectiveness of recent black-box attacks in the face recognition setting and their corresponding human perceptibility through survey data, we demonstrate the trade-offs in perceptibility that arise as attacks become more aggressive. We also show that the $\ell_2$ norm and other metrics do not correlate linearly with human perceptibility, making these norms suboptimal measures of adversarial attack perceptibility.
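As a rough illustration of the bounding metrics the abstract refers to, below is a minimal sketch of computing $\ell_p$ norms of the perturbation between a clean image and an adversarial one. The array shapes, the random "attack" noise, and the `lp_norms` helper are hypothetical stand-ins for illustration only, not the attacks or data studied in the paper.

```python
import numpy as np

def lp_norms(original: np.ndarray, adversarial: np.ndarray) -> dict:
    """Compute common l_p norms of the perturbation between two images.

    Both inputs are float arrays in [0, 1] with identical shapes,
    e.g. (H, W, 3) face crops. Illustrative stand-ins, not the paper's data.
    """
    delta = (adversarial - original).ravel()
    return {
        "l0": int(np.count_nonzero(delta)),      # number of changed pixel values
        "l2": float(np.linalg.norm(delta)),      # Euclidean size of the change
        "linf": float(np.abs(delta).max()),      # largest single change
    }

# Hypothetical usage: a clean face crop and a noisily perturbed copy
# (random noise stands in for an actual adversarial perturbation).
rng = np.random.default_rng(0)
clean = rng.random((112, 112, 3))
adv = np.clip(clean + rng.normal(scale=0.01, size=clean.shape), 0.0, 1.0)
print(lp_norms(clean, adv))
```

Note that two perturbations with identical $\ell_2$ norms can look very different to a human observer: one intuition is that energy concentrated in a smooth region such as a cheek tends to be far more visible than the same energy spread across textured regions like hair, which is consistent with the paper's finding that such norms fail to track perceptibility linearly.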