

Poster in Workshop: A Blessing in Disguise: The Prospects and Perils of Adversarial Machine Learning

Uncovering Universal Features: How Adversarial Training Improves Adversarial Transferability

Jacob M Springer · Melanie Mitchell · Garrett T Kenyon


Abstract:

Adversarial examples for neural networks are known to be transferable: examples optimized to be misclassified by a “source” network are often misclassified by other “destination” networks. Here, we show that training the source network to be “slightly robust”---that is, robust to small-magnitude adversarial examples---substantially improves the transferability of targeted attacks, even between architectures as different as convolutional neural networks and transformers. In fact, we show that these adversarial examples transfer representation-layer (penultimate-layer) features substantially better than adversarial examples generated with non-robust networks. We argue that this result supports a non-intuitive hypothesis: slightly robust networks exhibit universal features---ones that tend to overlap with the features learned by all other networks trained on the same dataset. This suggests that the features of a single slightly-robust neural network may be useful for deriving insight into the features of every non-robust neural network trained on the same distribution.
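
The abstract does not specify the attack used to generate the transferable examples; the sketch below is a minimal illustration, assuming a standard targeted PGD attack in PyTorch against a "slightly robust" source model. The function name `targeted_pgd`, the models `source_model` and `destination_model`, and all hyperparameter values are hypothetical, not the authors' exact settings.

```python
import torch
import torch.nn.functional as F

def targeted_pgd(source_model, x, target_class, eps=8/255, alpha=2/255, steps=40):
    """Generate targeted adversarial examples against a (slightly robust) source
    model with projected gradient descent. Hyperparameters are illustrative only."""
    target = torch.full((x.size(0),), target_class, dtype=torch.long, device=x.device)
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(source_model(x_adv), target)
        grad, = torch.autograd.grad(loss, x_adv)
        # Descend the targeted loss so the source model predicts `target_class`.
        x_adv = x_adv.detach() - alpha * grad.sign()
        # Project back into the eps-ball around the clean input and the valid pixel range.
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    return x_adv.detach()

# Hypothetical usage: `source_model` is trained with small-epsilon adversarial training
# ("slightly robust"); targeted transferability is then the fraction of x_adv that a
# separately trained `destination_model` classifies as `target_class`.
```

In this setting, the paper's claim is that making `source_model` slightly robust (rather than standard-trained) raises the rate at which such targeted examples fool an independently trained destination network.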
