Traditional supervised learning aims to train a classifier in the closed-set world, where training and test samples share the same label space. In this paper, we target a more challenging and realistic setting: open-set learning (OSL), where test samples may come from classes unseen during training. Although researchers have designed many methods from an algorithmic perspective, few methods provide generalization guarantees on their ability to achieve consistent performance across different training samples drawn from the same distribution. Motivated by transfer learning and probably approximately correct (PAC) theory, we make a bold attempt to study OSL by proving its generalization error: given training samples of size n, the estimation error converges at rate O_p(1/√n). This is the first study to provide a generalization bound for OSL, which we obtain by theoretically investigating the risk of the target classifier on unknown classes. Based on our theory, a novel algorithm, called auxiliary open-set risk (AOSR), is proposed to address the OSL problem. Experiments verify the efficacy of AOSR. The code is available at github.com/AnjinLiu/OpensetLearningAOSR.
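The O_p(1/√n) rate in the abstract is the usual Monte Carlo convergence of an empirical risk to its expectation. The sketch below is not the paper's AOSR algorithm; it is a minimal, self-contained illustration of that rate for a fixed classifier whose per-sample 0-1 loss is Bernoulli (the true risk p = 0.3 and trial count are assumed for illustration).

```python
import numpy as np

rng = np.random.default_rng(0)

def deviation(n, trials=2000, p=0.3):
    """Monte Carlo estimate of E|R_hat - R| for a fixed classifier whose
    per-sample 0-1 loss is Bernoulli(p): draw `trials` training sets of
    size n and average the absolute gap between empirical and true risk."""
    losses = rng.binomial(1, p, size=(trials, n))
    return np.abs(losses.mean(axis=1) - p).mean()

for n in [100, 400, 1600, 6400]:
    dev = deviation(n)
    # sqrt(n) * deviation should stay roughly constant if the rate is 1/sqrt(n)
    print(f"n={n:5d}  E|R_hat - R| ~ {dev:.4f}  sqrt(n)*dev ~ {np.sqrt(n) * dev:.3f}")
```

If the estimation error indeed scales as O_p(1/√n), the last column stays roughly flat while the raw deviation shrinks by half each time n quadruples.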
Author Information
Zhen Fang (University of Technology Sydney)
Jie Lu (University of Technology Sydney)
Anjin Liu (University of Technology Sydney)
Feng Liu (University of Technology Sydney)
I am a machine learning researcher with research interests in hypothesis testing and trustworthy machine learning. I am currently an Assistant Professor in Statistics (Data Science) at the School of Mathematics and Statistics, The University of Melbourne, Australia. I am also one of the co-directors of the Trustworthy Machine Learning and Reasoning (TMLR) Lab. In addition, I am a Visiting Scientist at RIKEN-AIP, Japan, and a Visiting Fellow at the DeSI Lab, Australian Artificial Intelligence Institute, University of Technology Sydney. I was the recipient of the Australian Laureate postdoctoral fellowship.

I received my Ph.D. degree in computer science at the University of Technology Sydney in 2020, advised by Dist. Prof. Jie Lu and Prof. Guangquan Zhang. I was a research intern at RIKEN-AIP, working on the robust domain adaptation project with Prof. Masashi Sugiyama, Dr. Gang Niu and Dr. Bo Han. I visited the Gatsby Computational Neuroscience Unit at UCL and worked on the hypothesis testing project with Prof. Arthur Gretton, Dr. Danica J. Sutherland and Dr. Wenkai Xu.

I have received the Outstanding Paper Award of NeurIPS (2022), the Outstanding Reviewer Award of NeurIPS (2021), the Outstanding Reviewer Award of ICLR (2021), and the UTS-FEIT HDR Research Excellence Award (2019). My publications are mainly distributed in high-quality journals and conferences, such as Nature Communications, IEEE-TPAMI, IEEE-TNNLS, IEEE-TFS, NeurIPS, ICML, ICLR, KDD, IJCAI, and AAAI. I have served as a senior program committee (SPC) member for IJCAI and ECAI, and as a program committee (PC) member for NeurIPS, ICML, ICLR, AISTATS, ACML, AAAI, and others. I also serve as a reviewer for many academic journals, such as JMLR, IEEE-TPAMI, IEEE-TNNLS, and IEEE-TFS.
Guangquan Zhang (University of Technology Sydney)
Related Events (a corresponding poster, oral, or spotlight)
- 2021 Poster: Learning Bounds for Open-Set Learning
  Tue. Jul 20th, 04:00 -- 06:00 PM, Virtual Room
More from the Same Authors
- 2022 Poster: Adversarial Attack and Defense for Non-Parametric Two-Sample Tests
  Xilie Xu · Jingfeng Zhang · Feng Liu · Masashi Sugiyama · Mohan Kankanhalli
- 2022 Spotlight: Adversarial Attack and Defense for Non-Parametric Two-Sample Tests
  Xilie Xu · Jingfeng Zhang · Feng Liu · Masashi Sugiyama · Mohan Kankanhalli
- 2022 Poster: Fast and Reliable Evaluation of Adversarial Robustness with Minimum-Margin Attack
  Ruize Gao · Jiongxiao Wang · Kaiwen Zhou · Feng Liu · Binghui Xie · Gang Niu · Bo Han · James Cheng
- 2022 Spotlight: Fast and Reliable Evaluation of Adversarial Robustness with Minimum-Margin Attack
  Ruize Gao · Jiongxiao Wang · Kaiwen Zhou · Feng Liu · Binghui Xie · Gang Niu · Bo Han · James Cheng
- 2021 Poster: Maximum Mean Discrepancy Test is Aware of Adversarial Attacks
  Ruize Gao · Feng Liu · Jingfeng Zhang · Bo Han · Tongliang Liu · Gang Niu · Masashi Sugiyama
- 2021 Spotlight: Maximum Mean Discrepancy Test is Aware of Adversarial Attacks
  Ruize Gao · Feng Liu · Jingfeng Zhang · Bo Han · Tongliang Liu · Gang Niu · Masashi Sugiyama
- 2020 Poster: Learning Deep Kernels for Non-Parametric Two-Sample Tests
  Feng Liu · Wenkai Xu · Jie Lu · Guangquan Zhang · Arthur Gretton · D.J. Sutherland