Adversarial Attacks and Defenses for Non-Parametric Two-Sample Tests
Xilie Xu · Jingfeng Zhang · Feng Liu · Masashi Sugiyama · Mohan Kankanhalli

Wed Jul 20 11:25 AM -- 11:30 AM (PDT)

Non-parametric two-sample tests (TSTs), which judge whether two sets of samples are drawn from the same distribution, have been widely used in the analysis of critical data. People tend to employ TSTs as trusted basic tools and rarely doubt their reliability. This paper systematically uncovers the failure mode of non-parametric TSTs through adversarial attacks and then proposes corresponding defense strategies. First, we theoretically show that the distributional shift induced by an adversary can be upper-bounded, which guarantees the attack's invisibility. Furthermore, we theoretically find that the adversary can also degrade the lower bound of a TST's test power, which enables us to iteratively minimize the test criterion in order to search for adversarial pairs. To enable TST-agnostic attacks, we propose an ensemble attack (EA) framework that jointly minimizes the different types of test criteria. Second, to robustify TSTs, we propose a max-min optimization that iteratively generates adversarial pairs to train the deep kernels. Extensive experiments on both simulated and real-world datasets validate the adversarial vulnerability of non-parametric TSTs and the effectiveness of our proposed defense.
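The core attack idea — perturbing one sample set within a small norm ball by gradient descent on a test criterion so the test can no longer distinguish the two distributions — can be sketched in a few lines. The sketch below is illustrative only, assuming a Gaussian-kernel MMD as the test criterion and an L-infinity constraint for "invisibility"; the function names (`gaussian_mmd2`, `attack_tst`) and hyperparameters are hypothetical, and the paper's actual EA framework and deep-kernel defense are more involved.

```python
import numpy as np

def gaussian_mmd2(X, Y, sigma=1.0):
    """Biased squared-MMD estimate with a Gaussian kernel (a common TST criterion)."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

def mmd2_grad_Y(X, Y, sigma=1.0):
    """Analytic gradient of the biased squared MMD with respect to the samples in Y."""
    m, n = len(X), len(Y)
    s2 = sigma ** 2
    diff_yy = Y[:, None, :] - Y[None, :, :]                  # (n, n, d)
    k_yy = np.exp(-(diff_yy ** 2).sum(-1) / (2 * s2))        # (n, n)
    diff_yx = Y[:, None, :] - X[None, :, :]                  # (n, m, d)
    k_yx = np.exp(-(diff_yx ** 2).sum(-1) / (2 * s2))        # (n, m)
    g_yy = -(2 / n ** 2) * (k_yy[..., None] * diff_yy).sum(1) / s2
    g_yx = (2 / (m * n)) * (k_yx[..., None] * diff_yx).sum(1) / s2
    return g_yy + g_yx

def attack_tst(X, Y, eps=0.5, steps=100, lr=50.0):
    """Gradient descent on the MMD criterion, keeping the perturbation inside an
    L-infinity ball of radius eps so the change to Y stays small."""
    delta = np.zeros_like(Y)
    for _ in range(steps):
        delta -= lr * mmd2_grad_Y(X, Y + delta)
        delta = np.clip(delta, -eps, eps)
    return Y + delta

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 2))          # samples from P
Y = rng.standard_normal((100, 2)) + 1.0    # samples from Q (mean-shifted)
before = gaussian_mmd2(X, Y)
after = gaussian_mmd2(X, attack_tst(X, Y))
print(f"MMD^2 before: {before:.4f}, after attack: {after:.4f}")
```

After the bounded perturbation, the MMD statistic drops well below its original value, so a test that thresholds this criterion is far more likely to (wrongly) accept that the two samples come from the same distribution — the failure mode the abstract describes.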

Author Information

Xilie Xu (National University of Singapore)
Jingfeng Zhang (RIKEN)
Feng Liu (The University of Melbourne)
Masashi Sugiyama (RIKEN / The University of Tokyo)
Mohan Kankanhalli (National University of Singapore)
