

Poster in Workshop: A Blessing in Disguise: The Prospects and Perils of Adversarial Machine Learning

On Success and Simplicity: A Second Look at Transferable Targeted Attacks

Zhengyu Zhao · Zhuoran Liu · Martha Larson


Abstract:

Achieving transferability of targeted attacks is reputed to be remarkably difficult, and state-of-the-art approaches are resource-intensive because they require training target-specific model(s) on additional data. In this work, we find, however, that simple transferable attacks, which require neither additional data nor model training, can achieve surprisingly high targeted transferability. This insight has been overlooked mainly because of the widespread practice of unreasonably restricting attack optimization to only a few iterations. In particular, we identify, for the first time, the state-of-the-art performance of a simple logit loss. Our investigation covers a wide range of transfer settings, including three new, realistic settings: ensemble transfer with little model similarity, transfer to low-ranked target classes, and transfer to the real-world Google Cloud Vision API. Results in these new settings demonstrate that the commonly adopted, easy settings cannot fully reveal the actual properties of different attacks and can lead to misleading comparisons. Overall, our analysis aims to inspire a more meaningful evaluation of targeted transferability.
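To make the idea concrete, the following is a minimal sketch of the kind of simple attack the abstract describes: an iterative, L_inf-bounded targeted perturbation on a white-box surrogate, driven by the logit loss (the negative logit of the target class) and run for many iterations. The choice of surrogate model, step size, budget, and iteration count here are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torchvision.models as models

def logit_attack(surrogate, x, target, eps=16/255, alpha=2/255, n_iter=300):
    """Iterative targeted attack that maximizes the target-class logit."""
    x_adv = x.clone().detach()
    for _ in range(n_iter):
        x_adv.requires_grad_(True)
        logits = surrogate(x_adv)
        # Logit loss: negative logit of the target class (no softmax / cross-entropy).
        loss = -logits.gather(1, target.unsqueeze(1)).sum()
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Gradient descent on the loss == gradient ascent on the target logit.
        x_adv = x_adv.detach() - alpha * grad.sign()
        # Project back into the eps-ball around x and the valid pixel range.
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps).clamp(0, 1)
    return x_adv

if __name__ == "__main__":
    surrogate = models.resnet50(pretrained=True).eval()  # illustrative surrogate
    x = torch.rand(1, 3, 224, 224)    # placeholder input batch in [0, 1]
    target = torch.tensor([207])      # arbitrary target class index
    x_adv = logit_attack(surrogate, x, target)
```

A full implementation would typically combine this loss with standard transfer-enhancing techniques (e.g., input diversity, momentum), which are omitted here for brevity.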
