

Poster

Constrained Gradient Descent: A Powerful and Principled Evasion Attack Against Neural Networks

Weiran Lin · Keane Lucas · Lujo Bauer · Michael Reiter · Mahmood Sharif

Hall E #403

Keywords: [ APP: Computer Vision ] [ DL: Robustness ]


Abstract: We propose new, more efficient targeted white-box attacks against deep neural networks. Our attacks better align with the attacker's goal: (1) tricking a model to assign higher probability to the target class than to any other class, while (2) staying within an $\epsilon$-distance of the attacked input. First, we demonstrate a loss function that explicitly encodes (1) and show that Auto-PGD finds more attacks with it. Second, we propose a new attack method, Constrained Gradient Descent (CGD), using a refinement of our loss function that captures both (1) and (2). CGD seeks to satisfy both attacker objectives---misclassification and bounded $\ell_{p}$-norm---in a principled manner, as part of the optimization, instead of via ad hoc post-processing techniques (e.g., projection or clipping). We show that CGD is more successful on CIFAR10 (0.9--4.2\%) and ImageNet (8.6--13.6\%) than state-of-the-art attacks while consuming less time (11.4--18.8\%). Statistical tests confirm that our attack outperforms others against leading defenses on different datasets and values of $\epsilon$.
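
For intuition, the following is a minimal PyTorch sketch of a CGD-style targeted attack under stated assumptions: the misclassification objective (1) is modeled as a logit-margin term, the $\epsilon$-constraint (2) enters the loss as a soft $\ell_\infty$-violation penalty, and alpha, steps, and lam are hypothetical hyperparameters introduced here for illustration; the paper's exact loss and update rule may differ.

    import torch

    def cgd_style_attack(model, x, target, eps=8/255, alpha=1/255, steps=50, lam=1.0):
        """Hedged sketch of a CGD-style targeted attack (not the paper's exact method).

        The loss combines (1) a logit-margin term that rewards the target class
        outscoring every other class and (2) a soft penalty on l_inf-budget
        violation, so both objectives are handled inside the optimization
        rather than by projecting or clipping the perturbation after each step.
        """
        x_adv = x.clone().detach().requires_grad_(True)
        for _ in range(steps):
            logits = model(x_adv)
            # (1) Margin: largest non-target logit minus the target logit;
            # zero (success) once the target class has the highest logit.
            z_t = logits.gather(1, target.unsqueeze(1)).squeeze(1)
            z_other = logits.scatter(1, target.unsqueeze(1), float('-inf')).amax(dim=1)
            margin = (z_other - z_t).clamp(min=0)
            # (2) Soft penalty for exceeding the eps budget (illustrative;
            # the paper's constraint handling may differ).
            violation = (x_adv - x).abs().sub(eps).clamp(min=0).flatten(1).sum(1)
            loss = (margin + lam * violation).sum()
            grad, = torch.autograd.grad(loss, x_adv)
            with torch.no_grad():
                x_adv -= alpha * grad.sign()
                x_adv.clamp_(0, 1)  # stay in the valid pixel range
        return x_adv.detach()

The design point this illustrates: while the perturbation is within budget, the violation term is zero and the update is driven purely by the margin; once the budget is exceeded, the penalty gradient pushes the perturbation back, so no projection step is needed to enforce the $\epsilon$-constraint.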
