Poster

Discriminative Complementary-Label Learning with Weighted Loss

Yi Gao · Min-Ling Zhang

Virtual

Keywords: [ Probabilistic Methods ] [ MCMC ] [ Supervised Learning ] [ Algorithms ]


Abstract: Complementary-label learning (CLL) deals with the weak supervision scenario where each training instance is associated with one \emph{complementary} label, which specifies the class label that the instance does \emph{not} belong to. Given a training instance $\bm{x}$, existing CLL approaches aim at modeling the \emph{generative} relationship between the complementary label $\bar{y}$, i.e. $P(\bar{y} \mid \bm{x})$, and the ground-truth label $y$, i.e. $P(y \mid \bm{x})$. Nonetheless, as the ground-truth label is not directly accessible for complementarily labeled training instances, strong generative assumptions may not hold for real-world CLL tasks. In this paper, we derive a simple and theoretically sound \emph{discriminative} model of $P(\bar{y} \mid \bm{x})$, which naturally leads to a risk estimator with an estimation error bound at an $\mathcal{O}(\sqrt{1/n})$ convergence rate. Accordingly, a practical CLL approach is proposed by further introducing a weighted loss to the empirical risk to maximize the predictive gap between the potential ground-truth label and the complementary label. Extensive experiments clearly validate the effectiveness of the proposed discriminative complementary-label learning approach.
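For concreteness, the sketch below shows one way the two ingredients described in the abstract (a discriminative model of $P(\bar{y} \mid \bm{x})$ and a weighted loss that enlarges the gap between the potential ground-truth label and the complementary label) could be instantiated in PyTorch. It is an illustrative sketch, not the authors' exact method: modeling the complementary posterior as a softmax over negated logits, the hinge-style gap penalty, and the names `discriminative_cll_loss`, `gap_weight`, and `margin` are all assumptions made for this example.

```python
import torch
import torch.nn.functional as F

def discriminative_cll_loss(logits, comp_labels, gap_weight=1.0, margin=0.5):
    """Illustrative discriminative complementary-label loss (assumed form,
    not necessarily the paper's exact formulation).

    logits:      (batch, num_classes) raw classifier outputs.
    comp_labels: (batch,) complementary labels, i.e. classes each
                 instance does NOT belong to.
    """
    # Discriminative model of P(y_bar | x): a softmax over negated logits,
    # so classes the classifier scores low become likely complements.
    log_q = F.log_softmax(-logits, dim=1)
    nll = F.nll_loss(log_q, comp_labels)

    # Weighted term encouraging a predictive gap between the most
    # confident (potential ground-truth) class and the complementary label.
    p = F.softmax(logits, dim=1)
    p_comp = p.gather(1, comp_labels.unsqueeze(1)).squeeze(1)
    gap = p.max(dim=1).values - p_comp
    gap_penalty = F.relu(margin - gap).mean()  # hinge on the gap (assumed form)

    return nll + gap_weight * gap_penalty

# Usage: logits = model(x); loss = discriminative_cll_loss(logits, y_bar)
```

Modeling $P(\bar{y} \mid \bm{x})$ as a softmax over $-\bm{f}(\bm{x})$ ties low classification scores directly to high complementary probability, without invoking a generative assumption such as the canonical uniform one, $P(\bar{y} \mid \bm{x}) = \frac{1}{K-1}\sum_{y \neq \bar{y}} P(y \mid \bm{x})$, which the abstract argues may not hold in real-world CLL tasks.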
