
Adversarial Attack and Defense for Non-Parametric Two-Sample Tests
Xilie Xu · Jingfeng Zhang · Feng Liu · Masashi Sugiyama · Mohan Kankanhalli

Wed Jul 20 03:30 PM -- 05:30 PM (PDT) @ Hall E #1010

Non-parametric two-sample tests (TSTs), which judge whether two sets of samples are drawn from the same distribution, have been widely used in the analysis of critical data. People tend to employ TSTs as trusted basic tools and rarely have any doubt about their reliability. This paper systematically uncovers the failure mode of non-parametric TSTs through adversarial attacks and then proposes corresponding defense strategies. First, we theoretically show that an adversary can upper-bound the distributional shift, which guarantees the attack's invisibility. Furthermore, we theoretically find that the adversary can also degrade the lower bound of a TST's test power, which enables us to iteratively minimize the test criterion in order to search for adversarial pairs. To enable TST-agnostic attacks, we propose an ensemble attack (EA) framework that jointly minimizes the different types of test criteria. Second, to robustify TSTs, we propose a max-min optimization that iteratively generates adversarial pairs to train the deep kernels. Extensive experiments on both simulated and real-world datasets validate the adversarial vulnerabilities of non-parametric TSTs and the effectiveness of our proposed defense. Source code is available at https://github.com/GodXuxilie/Robust-TST.git.
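The core attack idea in the abstract — iteratively minimizing a test criterion while keeping the perturbation invisible — can be illustrated with a minimal sketch. This is not the paper's implementation (see the linked repository for that); it assumes a plain Gaussian-kernel MMD as the test criterion and uses projected gradient descent on the second sample set within an L-infinity ball of radius `eps`:

```python
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    # Pairwise Gaussian (RBF) kernel matrix between rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd2(X, Y, sigma=1.0):
    # Biased estimate of squared MMD: the test criterion being attacked.
    return (gaussian_kernel(X, X, sigma).mean()
            + gaussian_kernel(Y, Y, sigma).mean()
            - 2.0 * gaussian_kernel(X, Y, sigma).mean())

def mmd2_grad_Y(X, Y, sigma=1.0):
    # Analytic gradient of the biased MMD^2 estimate w.r.t. each row of Y.
    n, m = len(X), len(Y)
    Kyy = gaussian_kernel(Y, Y, sigma)           # (m, m)
    Kxy = gaussian_kernel(X, Y, sigma)           # (n, m)
    diff_yy = Y[:, None, :] - Y[None, :, :]      # y_i - y_j
    diff_xy = X[:, None, :] - Y[None, :, :]      # x_i - y_j
    g_yy = (2.0 / m ** 2) * (Kyy[:, :, None] * diff_yy).sum(0) / sigma ** 2
    g_xy = (-2.0 / (n * m)) * (Kxy[:, :, None] * diff_xy).sum(0) / sigma ** 2
    return g_yy + g_xy

def attack(X, Y, eps=0.5, lr=0.1, steps=100, sigma=1.0):
    # Projected gradient descent on the test criterion: perturb Y so the
    # test can no longer tell X and Y apart, while the projection onto an
    # L-infinity ball of radius eps keeps the shift small ("invisible").
    Y_adv = Y.copy()
    for _ in range(steps):
        Y_adv = Y_adv - lr * mmd2_grad_Y(X, Y_adv, sigma)
        Y_adv = Y + np.clip(Y_adv - Y, -eps, eps)  # invisibility constraint
    return Y_adv
```

On samples drawn from two genuinely different distributions, the attacked set `attack(X, Y)` yields a much smaller MMD statistic than the clean `Y`, pushing the test toward a false "same distribution" verdict. The paper's ensemble attack extends this by minimizing several criteria jointly, and its defense trains deep kernels against exactly such adversarial pairs.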

Author Information

Xilie Xu (National University of Singapore)
Jingfeng Zhang (RIKEN)
Feng Liu (The University of Melbourne)

I am a machine learning researcher with research interests in hypothesis testing and trustworthy machine learning. I am currently an Assistant Professor in Statistics (Data Science) at the School of Mathematics and Statistics, The University of Melbourne, Australia. I am also one of the co-directors of the Trustworthy Machine Learning and Reasoning (TMLR) Lab. In addition, I am a Visiting Scientist at RIKEN-AIP, Japan, and a Visiting Fellow at the DeSI Lab, Australian Artificial Intelligence Institute, University of Technology Sydney. I was the recipient of the Australian Laureate postdoctoral fellowship. I received my Ph.D. degree in computer science at the University of Technology Sydney in 2020, advised by Dist. Prof. Jie Lu and Prof. Guangquan Zhang. I was a research intern at RIKEN-AIP, working on the robust domain adaptation project with Prof. Masashi Sugiyama, Dr. Gang Niu and Dr. Bo Han. I visited the Gatsby Computational Neuroscience Unit at UCL and worked on the hypothesis testing project with Prof. Arthur Gretton, Dr. Danica J. Sutherland and Dr. Wenkai Xu. I have received the Outstanding Paper Award of NeurIPS (2022), the Outstanding Reviewer Award of NeurIPS (2021), the Outstanding Reviewer Award of ICLR (2021), and the UTS-FEIT HDR Research Excellence Award (2019). My publications appear mainly in high-quality journals and conferences, such as Nature Communications, IEEE-TPAMI, IEEE-TNNLS, IEEE-TFS, NeurIPS, ICML, ICLR, KDD, IJCAI, and AAAI. I have served as a senior program committee (SPC) member for IJCAI and ECAI, and as a program committee (PC) member for NeurIPS, ICML, ICLR, AISTATS, ACML, AAAI, and others. I also serve as a reviewer for many academic journals, such as JMLR, IEEE-TPAMI, IEEE-TNNLS, and IEEE-TFS.

Masashi Sugiyama (RIKEN / The University of Tokyo)
Mohan Kankanhalli (National University of Singapore)
