
Learning Bounds for Open-Set Learning
Zhen Fang · Jie Lu · Anjin Liu · Feng Liu · Guangquan Zhang

Tue Jul 20 09:00 AM -- 11:00 AM (PDT) @ Virtual

Traditional supervised learning aims to train a classifier in the closed-set world, where training and test samples share the same label space. In this paper, we target a more challenging and realistic setting: open-set learning (OSL), where there exist test samples from classes that are unseen during training. Although researchers have designed many methods from an algorithmic perspective, few methods provide generalization guarantees on their ability to achieve consistent performance across different training samples drawn from the same distribution. Motivated by transfer learning and probably approximately correct (PAC) theory, we make a bold attempt to study OSL by proving its generalization error: given training samples of size n, the estimation error converges at order O_p(1/√n). This is the first study to provide a generalization bound for OSL, which we do by theoretically investigating the risk of the target classifier on unknown classes. Based on our theory, a novel algorithm, called auxiliary open-set risk (AOSR), is proposed to address the OSL problem. Experiments verify the efficacy of AOSR. The code is available at github.com/AnjinLiu/OpensetLearningAOSR.
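The O_p(1/√n) rate stated in the abstract is the standard parametric rate at which estimation error shrinks with sample size. The sketch below is illustrative only and is not the paper's bound: it uses the simplest possible estimator (the empirical mean of Bernoulli samples) to show the 1/√n scaling, so a 100-fold increase in n should shrink the error by roughly a factor of 10. All names here are our own.

```python
import numpy as np

# Illustrative sketch of the O_p(1/sqrt(n)) rate, NOT the paper's bound.
# We measure the average absolute error of the empirical mean of n
# Bernoulli(0.5) draws, repeated over many trials.
rng = np.random.default_rng(0)

def mean_estimation_error(n, trials=2000):
    """Average |empirical mean - true mean| over repeated experiments."""
    samples = rng.random((trials, n)) < 0.5  # Bernoulli(0.5) draws
    return np.abs(samples.mean(axis=1) - 0.5).mean()

err_small = mean_estimation_error(100)
err_large = mean_estimation_error(10_000)

# With 100x more samples, the error should drop by about sqrt(100) = 10.
print(err_small / err_large)  # close to 10, consistent with 1/sqrt(n)
```

The same qualitative behavior is what a generalization bound of order O_p(1/√n) guarantees for the gap between empirical and true risk, though the constants in the paper's bound depend on the open-set setting.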

Author Information

Zhen Fang (University of Technology Sydney)
Jie Lu (University of Technology Sydney)
Anjin Liu (University of Technology Sydney)
Feng Liu (University of Technology Sydney)
Guangquan Zhang (University of Technology Sydney)
