Poster

Revisiting Consistency Regularization for Deep Partial Label Learning

Dong-Dong Wu · Deng-Bao Wang · Min-Ling Zhang

Hall E #333

Keywords: [ DL: Self-Supervised Learning ] [ APP: Computer Vision ] [ MISC: General Machine Learning Techniques ] [ MISC: Unsupervised and Semi-supervised Learning ] [ MISC: Representation Learning ] [ MISC: Supervised Learning ] [ MISC: Transfer, Multitask and Meta-learning ] [ DL: Algorithms ] [ DL: Other Representation Learning ] [ DL: Robustness ] [ Deep Learning ]

Wed 20 Jul 3:30 p.m. PDT — 5:30 p.m. PDT
 
Spotlight presentation: Deep Learning/APP:Computer Vision
Wed 20 Jul 10:15 a.m. PDT — 11:45 a.m. PDT

Abstract:

Partial label learning (PLL), which refers to the classification task where each training instance is ambiguously annotated with a set of candidate labels, has recently been studied in the deep learning paradigm. Despite advances in the recent deep PLL literature, existing methods (e.g., methods based on self-training or contrastive learning) suffer from either ineffectiveness or inefficiency. In this paper, we revisit a simple idea, namely consistency regularization, which has been shown effective in the traditional PLL literature, to guide the training of deep models. Towards this goal, we propose a new regularized training framework for PLL that performs supervised learning on non-candidate labels and employs consistency regularization on candidate labels. We instantiate the regularization term by matching the outputs of multiple augmentations of an instance to a conformal label distribution, which can be adaptively inferred in closed form. Experiments on benchmark datasets demonstrate the superiority of the proposed method compared with other state-of-the-art methods.
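To make the two-part objective described in the abstract concrete, here is a minimal PyTorch-style sketch. The function and variable names (pll_consistency_loss, views, candidate_mask) and the particular averaging-and-renormalizing form of the conformal target are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch of the regularized PLL objective from the abstract:
# a supervised term on non-candidate labels plus a consistency term that
# matches augmented views to a conformal target over the candidate set.
import torch
import torch.nn.functional as F

def pll_consistency_loss(model, views, candidate_mask, eps=1e-8):
    """views: list of differently augmented batches of the same instances.
    candidate_mask: (batch, num_classes) binary mask of candidate labels."""
    probs = [F.softmax(model(v), dim=1) for v in views]

    # Supervised part: push probability mass away from non-candidate labels.
    non_cand = 1.0 - candidate_mask
    sup = -(non_cand * torch.log(1.0 - probs[0] + eps)).sum(dim=1).mean()

    # Conformal target (one possible closed form): average predictions over
    # augmentations, restrict them to the candidate set, and renormalize.
    with torch.no_grad():
        avg = torch.stack(probs).mean(dim=0) * candidate_mask
        target = avg / (avg.sum(dim=1, keepdim=True) + eps)

    # Consistency part: match every augmented view to the conformal target.
    cons = sum(F.kl_div(torch.log(p + eps), target, reduction="batchmean")
               for p in probs) / len(probs)
    return sup + cons
```

In this sketch the conformal target is detached from the computation graph, so it acts as a fixed regression target for each view at the current step; how the two terms are weighted and how the target is actually derived should be taken from the paper itself.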
