Poster in Workshop: A Blessing in Disguise: The Prospects and Perils of Adversarial Machine Learning

Towards Transferable Adversarial Perturbations with Minimum Norm

Fangcheng Liu · Chao Zhang · Hongyang Zhang


Abstract:

Transfer-based adversarial examples constitute one of the most important classes of black-box attacks. Prior work in this direction often requires a fixed but large perturbation radius to reach a good transfer success rate. In this work, we propose a geometry-aware framework to generate transferable adversarial perturbations with minimum norm for each input. Analogous to model selection in statistical machine learning, we leverage a validation model to select the optimal perturbation budget for each image. Extensive experiments verify the effectiveness of our framework in improving the image quality of the crafted adversarial examples. The methodology is the foundation of our entry to the CVPR'21 Security AI Challenger: Unrestricted Adversarial Attacks on ImageNet, in which we ranked 1st out of 1,559 teams and surpassed the runner-up submissions by 4.59% and 23.91% in terms of final score and average image quality level, respectively.
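The abstract only sketches the idea, but the per-image budget selection can be illustrated with a short, hedged example. The snippet below is a minimal sketch, not the authors' method: it assumes plain L-infinity PGD on a surrogate model as the base attack (the paper's geometry-aware attack is not reproduced here), a hypothetical fixed grid of candidate radii, and a held-out validation model used to pick the smallest radius at which the crafted perturbation still transfers. All function and variable names (`pgd_attack`, `minimal_transferable_perturbation`, `surrogate`, `validation_model`, `budgets`) are illustrative assumptions; inputs are assumed to be image batches in [0, 1] and both models in eval mode.

```python
import torch
import torch.nn.functional as F


def pgd_attack(model, x, y, eps, steps=10):
    """Standard L-inf PGD on the surrogate model (illustrative base attack)."""
    delta = torch.zeros_like(x, requires_grad=True)
    alpha = 2.5 * eps / steps  # common step-size heuristic
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()       # ascend the loss
            delta.clamp_(-eps, eps)                  # project to the L-inf ball
            delta.copy_((x + delta).clamp(0, 1) - x) # keep images in [0, 1]
        delta.grad.zero_()
    return delta.detach()


def minimal_transferable_perturbation(surrogate, validation_model, x, y,
                                      budgets=(2 / 255, 4 / 255, 8 / 255, 16 / 255)):
    """Return the perturbation crafted at the smallest candidate radius that
    also fools the held-out validation model; fall back to the largest radius
    if none of them transfers. The radius grid is a made-up example."""
    chosen = None
    for eps in sorted(budgets):
        delta = pgd_attack(surrogate, x, y, eps)
        with torch.no_grad():
            fooled = validation_model(x + delta).argmax(dim=1) != y
        chosen = delta
        if fooled.all():  # transfers at this radius, no need to grow the budget
            break
    return chosen
```

In this sketch the validation model plays the role of a held-out set in ordinary model selection: the perturbation radius is treated as a per-image hyperparameter chosen by a second model's verdict, rather than as a single large radius fixed for the whole dataset.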
