Keywords: [ DL: Self-Supervised Learning ] [ APP: Computer Vision ] [ MISC: General Machine Learning Techniques ] [ MISC: Unsupervised and Semi-supervised Learning ] [ MISC: Representation Learning ] [ MISC: Supervised Learning ] [ MISC: Transfer, Multitask and Meta-learning ] [ DL: Algorithms ] [ DL: Other Representation Learning ] [ DL: Robustness ] [ Deep Learning ]
Partial label learning (PLL), which refers to the classification task where each training instance is ambiguously annotated with a set of candidate labels, has recently been studied in the deep learning paradigm. Despite advances in the recent deep PLL literature, existing methods (e.g., methods based on self-training or contrastive learning) suffer from either ineffectiveness or inefficiency. In this paper, we revisit a simple idea, namely consistency regularization, which has been shown effective in the traditional PLL literature, to guide the training of deep models. Towards this goal, we propose a new regularized training framework for PLL that performs supervised learning on non-candidate labels and employs consistency regularization on candidate labels. We instantiate the regularization term by matching the outputs of multiple augmentations of an instance to a conformal label distribution, which can be adaptively inferred in closed form. Experiments on benchmark datasets demonstrate the superiority of the proposed method over other state-of-the-art methods.
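The abstract's two-term objective can be illustrated with a minimal sketch. This is an assumption-laden toy version, not the authors' implementation: the supervised term pushes probability mass off non-candidate labels, and the consistency term matches each augmented view's prediction to a target distribution over the candidate set. Here the "conformal" target is assumed, for illustration, to be the augmentation-averaged prediction restricted to candidate labels and renormalized; the function names `conformal_target` and `pll_loss` are hypothetical.

```python
import numpy as np

def conformal_target(probs_augs, candidate_mask):
    """Closed-form target over candidate labels (illustrative assumption):
    average the predicted probabilities across augmentations, zero out
    non-candidate labels, and renormalize to a distribution."""
    avg = probs_augs.mean(axis=0) * candidate_mask  # (C,)
    return avg / avg.sum()

def pll_loss(probs_augs, candidate_mask, eps=1e-12):
    """Toy PLL objective: supervised loss on non-candidate labels plus
    consistency regularization on candidate labels.

    probs_augs:     (A, C) softmax outputs for A augmentations of one instance
    candidate_mask: (C,) 1.0 for candidate labels, 0.0 otherwise
    """
    # Supervised term on non-candidate labels: drive their probability to zero
    # (negative-learning-style cross-entropy against "not this class").
    non_cand = 1.0 - candidate_mask
    sup = -(non_cand * np.log(1.0 - probs_augs + eps)).sum(axis=1).mean()

    # Consistency term: cross-entropy of every augmented view's prediction
    # against the shared conformal target distribution.
    target = conformal_target(probs_augs, candidate_mask)
    cons = -(target * np.log(probs_augs + eps)).sum(axis=1).mean()

    return sup + cons
```

In practice the target would be recomputed each step from the current model's predictions, so the label distribution over candidates is refined adaptively as training proceeds.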