Poster

Learning with Partial-Label and Unlabeled Data: A Uniform Treatment for Supervision Redundancy and Insufficiency

Yangfan Liu · JIAQI LYU · Xin Geng · Ning Xu


Abstract:

One major challenge in weakly supervised learning is learning from inexact supervision, ranging from partial labels (PLs) with \emph{redundant} information to the extreme of unlabeled data with \emph{insufficient} information. While recent work has made significant strides in specific inexact supervision contexts, supervision forms typically \emph{coexist} in complex combinations. This is exemplified in \emph{semi-supervised partial label learning}, where PLs act as the exclusive supervision in a semi-supervised setting. Current strategies addressing combined inexact scenarios are usually composite, which can lead to incremental solutions that essentially replicate existing methods. In this paper, we propose a novel approach to \emph{uniformly} tackle both label redundancy and insufficiency, derived from a mutual information-based perspective. We design a label channel that facilitates dynamic label exchange within the candidate set, identifying potentially true labels and filtering out likely incorrect ones, thereby minimizing error accumulation. Experimental results demonstrate the superiority of our method over approaches that directly integrate existing state-of-the-art PL and semi-supervised learning methods. Furthermore, our extended experiments on partial-complementary-label learning underscore the flexibility of our uniform treatment in managing diverse supervision scenarios.