One Coin Has Two Sides: Single-Positive Multi-Label Learning from Salient Annotations
Abstract
Single-Positive Multi-Label Learning (SPML) studies learning from incomplete supervision, where each instance is annotated with only one positive label despite potentially belonging to multiple categories. While existing methods assume the annotated labels are randomly distributed, real-world annotations are often biased toward the most salient category. We formalize this realistic scenario as Salient Single-Positive Multi-Label Learning (SalSPML). Such salient annotation bias poses a challenge to conventional SPML methods, as the missing labels often correspond to less salient and harder-to-recognize categories. Fortunately, we find that salient annotations are typically more representative and informative. Motivated by this insight, we propose Prototype-Guided Rejection for Salient Annotation (PiSA), which constructs reliable class-wise prototypes from salient labels and leverages them to guide embedding learning for non-salient label recognition. We theoretically demonstrate that SalSPML is harder than Random SPML due to irreducible annotation bias, and that under SalSPML, more accurate prototypes facilitate false-negative label detection. Experiments on multiple benchmarks, together with two newly constructed real-world SalSPML datasets, demonstrate that PiSA consistently outperforms existing methods, achieving an average mAP improvement of 3.16\%. Our code is available in the supplementary materials.