When Generalized Zero-Shot Learning Meets PU Learning: A Plug-and-Play Framework for Seen-Class Bias Mitigation
Abstract
Generalized Zero-Shot Learning (GZSL) suffers from a severe seen-class bias, a challenge stemming from the label incompleteness inherent in the mixed test distribution, which contains both seen and unseen classes. To address this, we propose PUFE, a unified plug-and-play framework that recasts GZSL inference as a Positive-Unlabeled (PU) learning task by treating seen-class samples as positive examples and the mixed test data as unlabeled. Serving as a seamless post-processing module, PUFE constructs a PU classifier in the semantic space, jointly estimating the seen-class posterior and the labeling propensity via Maximum Likelihood Estimation (MLE) within a dual-head network. We further introduce an adaptive prototype calibration strategy that leverages high-confidence pseudo-instances, identified by the PU classifier, to explicitly align semantic prototypes with the underlying test distribution. Extensive experiments demonstrate that PUFE consistently mitigates seen-class bias and significantly boosts the performance of various embedding-based baselines, yielding gains of up to 11.2 percentage points in the harmonic mean.