Reliable Confidence Alignment for Generalized Category Discovery
Abstract
Generalized Category Discovery (GCD) requires models to categorize an unlabeled pool containing both known and novel classes under sparse supervision. We identify a systemic confidence bias inherent in existing parametric methods: while entropy regularization prevents class collapse, it indiscriminately suppresses predictive certainty on all unlabeled instances. This bias drives a distributional wedge between labeled and unlabeled samples of the same category, forcing models to sacrifice stability on known classes to achieve plasticity for new ones. To resolve this, we propose Reliable Confidence Alignment (RCA), a plug-and-play framework grounded in Evidential Deep Learning. RCA first establishes high-certainty anchors on labeled data using a Reliable Anchor for Certainty (RAC) module. We then introduce Cross-view Confidence Alignment (CCA) to propagate this grounded reliability to the unlabeled discovery set. In doing so, RCA captures the fine-grained geometry of the probability simplex, effectively calibrating the model's epistemic uncertainty. Extensive evaluations on coarse- and fine-grained benchmarks demonstrate that RCA rectifies the confidence landscape, significantly mitigating performance decay on known classes without compromising novel-class discovery.
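Since the abstract grounds RCA in Evidential Deep Learning, a brief sketch of the standard EDL output head may help orient the reader. The snippet below follows the common Dirichlet-based formulation (non-negative evidence via softplus, uncertainty as K/S); the function names and activation choice are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def evidential_output(logits: torch.Tensor):
    """Standard EDL head: map logits to Dirichlet parameters.

    Illustrative sketch, not RCA's exact module. Returns the expected
    class probabilities and a per-sample epistemic uncertainty score.
    """
    # Non-negative evidence; softplus is one common choice in EDL.
    evidence = F.softplus(logits)
    alpha = evidence + 1.0                       # Dirichlet parameters
    strength = alpha.sum(dim=-1, keepdim=True)   # total evidence S
    probs = alpha / strength                     # expected probabilities
    num_classes = logits.shape[-1]
    uncertainty = num_classes / strength         # epistemic uncertainty K/S
    return probs, uncertainty

# A confident prediction yields a larger evidence mass (lower uncertainty)
# than a near-flat prediction over the same three classes.
logits = torch.tensor([[4.0, 0.5, 0.5],
                       [0.1, 0.1, 0.1]])
probs, u = evidential_output(logits)
```

Under this formulation, a high-certainty anchor corresponds to a sample with large total evidence S (small K/S), which is the kind of fine-grained simplex geometry the abstract refers to.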