Poster
Discriminative Complementary-Label Learning with Weighted Loss
Yi Gao · Min-Ling Zhang
Virtual
Keywords: [ Probabilistic Methods ] [ MCMC ] [ Supervised Learning ] [ Algorithms ]
Abstract:
Complementary-label learning (CLL) deals with the weak supervision scenario where each training instance is associated with one \emph{complementary} label, which specifies a class label that the instance does \emph{not} belong to. Given a training instance $\bm{x}$, existing CLL approaches aim at modeling the \emph{generative} relationship between the complementary label $\bar{y}$, i.e. $P(\bar{y} \mid \bm{x})$, and the ground-truth label $y$, i.e. $P(y \mid \bm{x})$. Nonetheless, as the ground-truth label is not directly accessible for a complementarily labeled training instance, strong generative assumptions may not hold for real-world CLL tasks. In this paper, we derive a simple and theoretically sound \emph{discriminative} model towards $P(\bar{y} \mid \bm{x})$, which naturally leads to a risk estimator with an estimation error bound at the $\mathcal{O}(1/\sqrt{n})$ convergence rate. Accordingly, a practical CLL approach is proposed by further introducing a weighted loss into the empirical risk to maximize the predictive gap between the potential ground-truth label and the complementary label. Extensive experiments clearly validate the effectiveness of the proposed discriminative complementary-label learning approach.
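To make the discriminative idea concrete, the sketch below models the complementary-label posterior $P(\bar{y} \mid \bm{x})$ directly from classifier logits and scores an observed complementary label by its negative log-likelihood. This is only an illustrative assumption for exposition, not the paper's exact model or loss: it uses a softmax over the \emph{negated} logits (so a class the model scores highly as ground truth receives low complementary probability), and the `weight` parameter merely stands in for a generic per-instance weighting.

```python
import math

def comp_posterior(logits):
    """Illustrative discriminative model of P(y_bar = j | x): a softmax over
    the negated logits, so classes with high ground-truth scores receive low
    complementary probability. (An assumption for this sketch, not the
    paper's exact formulation.)"""
    m = max(-z for z in logits)                      # shift for numerical stability
    exps = [math.exp(-z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def complementary_nll(logits, comp_label, weight=1.0):
    """Per-instance negative log-likelihood of the observed complementary
    label under the model above. `weight` is a hypothetical stand-in for a
    per-instance weighting of the empirical risk."""
    return -weight * math.log(comp_posterior(logits)[comp_label])
```

Under this sketch, a classifier that assigns a high ordinary score to the complementary class incurs a large loss, so minimizing the (weighted) loss widens the predictive gap between plausible ground-truth labels and the observed complementary label.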