
Dual-Path Distillation: A Unified Framework to Improve Black-Box Attacks
Yonggang Zhang · Ya Li · Tongliang Liu · Xinmei Tian

Thu Jul 16 05:00 PM -- 05:45 PM & Fri Jul 17 04:00 AM -- 04:45 AM (PDT)

We study the problem of constructing black-box adversarial attacks, where no information about the target model is revealed except the feedback it returns for queried inputs. To obtain sufficient knowledge for crafting adversarial examples, previous methods query the target model with inputs perturbed along different searching directions. However, these methods suffer from poor query efficiency because the searching directions are sampled randomly. To mitigate this issue, we formulate the goal of mounting efficient attacks as an optimization problem in which the adversary tries to fool the target model with a limited number of queries. Under such settings, the adversary has to select appropriate searching directions to reduce the number of model queries. By solving the efficient-attack problem, we find that knowledge must be distilled along two paths: the path of the adversarial examples and the path of the searching directions. We therefore propose a novel framework, dual-path distillation, which uses the feedback knowledge not only to craft adversarial examples but also to alter the searching directions. Experimental results suggest that our framework significantly increases query efficiency.
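The random-direction baseline the abstract critiques can be sketched as a zeroth-order gradient estimator: each finite-difference probe along a randomly sampled direction costs one model query, which is why uninformed directions make attacks query-hungry. This is a minimal illustrative sketch, not the paper's method; the function names, the linear toy "model", and all parameters are assumptions chosen for clarity.

```python
import math
import random

def estimate_gradient(loss_fn, x, num_queries=100, sigma=0.01, rng=None):
    """Estimate the gradient of a black-box loss via random searching directions.

    Every call to loss_fn is one query to the (black-box) model; randomly
    sampled directions waste many such queries, which is the inefficiency
    the dual-path framework targets.
    """
    rng = rng or random.Random(0)
    dim = len(x)
    grad = [0.0] * dim
    base = loss_fn(x)  # one query for the unperturbed input
    for _ in range(num_queries):
        # Sample a random unit-norm searching direction u.
        u = [rng.gauss(0.0, 1.0) for _ in range(dim)]
        norm = math.sqrt(sum(v * v for v in u))
        u = [v / norm for v in u]
        # Finite difference along u: one more query per direction.
        delta = (loss_fn([xi + sigma * ui for xi, ui in zip(x, u)]) - base) / sigma
        for i in range(dim):
            grad[i] += delta * u[i]
    return [g / num_queries for g in grad]

# Illustrative black-box "model": a linear loss whose true gradient is w.
w = [1.0, -2.0, 0.5, 3.0]
g = estimate_gradient(lambda x: sum(wi * xi for wi, xi in zip(w, x)),
                      [0.0] * len(w), num_queries=200)
```

With the linear toy loss, the estimate points in roughly the same direction as the true gradient `w`, but only after hundreds of queries; replacing the random sampling of `u` with directions informed by past feedback is, at a high level, what the searching-direction path of the proposed framework distills.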

Author Information

Yonggang Zhang (University of Science and Technology of China)
Ya Li (IFLYTEK Research)
Tongliang Liu (The University of Sydney)
Xinmei Tian (USTC)
