Class-Prior Perturbation-Robust Regularization for Imbalanced Unreliable Partial Label Learning
Abstract
Imbalanced Unreliable Partial Label Learning (I-UPLL) is a challenging weakly supervised learning setting in which severe class imbalance and unreliable candidate labels jointly degrade model performance. Revisiting existing approaches to imbalanced learning, we observe that most of them fundamentally rely on an estimate of the class prior to guide balancing operations such as re-sampling, pseudo-label generation, and logit adjustment. Under I-UPLL, however, obtaining stable and accurate prior estimates early in training is often unrealistic because partial labels are ambiguous and unreliable, so the model rapidly converges to a suboptimal solution. To address this issue, we propose CLAPOR, a novel CLAss-PriOr perturbation-Robust regularization framework that avoids dependence on accurate prior estimation altogether. Specifically, the proposed regularization trains the model under deliberately perturbed class priors sampled from a Dirichlet distribution that deviates from the current estimated prior. This design encourages consistent performance under prior uncertainty and naturally preserves attention to minority classes. Extensive experiments on benchmark datasets demonstrate the effectiveness of CLAPOR across various I-UPLL settings.
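The core idea of training under perturbed priors can be illustrated with a minimal sketch. This is not the paper's exact formulation: the function names, the concentration parameter, and the use of standard logit adjustment as the downstream balancing operation are all assumptions for illustration; only the Dirichlet perturbation of an estimated prior comes from the abstract.

```python
import numpy as np

def sample_perturbed_prior(estimated_prior, concentration=5.0, rng=None):
    """Draw a perturbed class prior from a Dirichlet distribution centered
    on the current estimate. Smaller `concentration` means larger deviation
    from the estimate (hypothetical parameterization, not the paper's)."""
    rng = np.random.default_rng() if rng is None else rng
    alpha = concentration * np.asarray(estimated_prior, dtype=float)
    return rng.dirichlet(alpha)

def prior_adjusted_log_probs(logits, prior, tau=1.0):
    """Standard logit adjustment: add tau * log(prior) to the logits before
    the softmax, one common way a class prior enters the training loss."""
    adjusted = logits + tau * np.log(prior)
    adjusted = adjusted - adjusted.max(axis=-1, keepdims=True)  # stability
    exp = np.exp(adjusted)
    return np.log(exp / exp.sum(axis=-1, keepdims=True))

# Toy usage: a heavily imbalanced 3-class estimate, one perturbed draw.
est_prior = np.array([0.7, 0.2, 0.1])
rng = np.random.default_rng(0)
perturbed = sample_perturbed_prior(est_prior, rng=rng)
logits = np.array([[2.0, 0.5, -1.0]])
log_probs = prior_adjusted_log_probs(logits, perturbed)
```

In this reading, each training step would use a fresh perturbed draw instead of the point estimate, so the model is penalized for relying on any one prior being exact; that is the sense in which the regularization is "perturbation-robust".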