CURE: Consistency-under-Unified Semantic Regularization for Generalized Category Discovery
Abstract
Generalized Category Discovery (GCD) aims to learn semantically structured representations for discovering novel categories in unlabeled data using supervision from known classes. Most existing methods rely on self-supervised contrastive learning (CL) with consistency and uniformity objectives. We identify an inherent optimization conflict between these objectives: while uniformity enforces global feature dispersion, this dispersion can hinder the formation of the class-discriminative, semantically coherent structures that consistency promotes. To address this conflict, we propose a two-stage framework that decouples representation learning from self-contrastive regularization. The first stage learns category-anchored representations aligned with known class prototypes, while the second stage extends the representation space to novel categories via a consistency objective enhanced with unified semantic regularization. We further introduce a Semantic Exploration Energy mechanism to capture shared semantics across categories and mitigate the information loss caused by prototype orthogonalization. The resulting framework, termed Consistency-under-Unified Semantic Regularization (CURE), achieves state-of-the-art performance on multiple benchmarks and substantially reduces the performance gap between known and novel categories.
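To make the consistency/uniformity tension concrete, the following is a minimal NumPy sketch of the two contrastive objectives commonly used in this setting: an alignment (consistency) term that pulls augmented views of the same sample together, and a uniformity term that pushes all features apart on the unit hypersphere. Function names and hyperparameters (`alpha`, `t`) are illustrative, not taken from the paper.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Project features onto the unit hypersphere."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def alignment_loss(z1, z2, alpha=2):
    """Consistency: positive pairs (two views of one sample) should be close."""
    return np.mean(np.linalg.norm(z1 - z2, axis=1) ** alpha)

def uniformity_loss(z, t=2.0):
    """Uniformity: log of the average Gaussian potential over all distinct pairs.
    Minimizing it spreads features over the sphere -- including features that
    share semantics, which is the source of the conflict with consistency."""
    sq_dists = np.sum((z[:, None, :] - z[None, :, :]) ** 2, axis=-1)
    n = z.shape[0]
    off_diag = ~np.eye(n, dtype=bool)
    return np.log(np.mean(np.exp(-t * sq_dists[off_diag])))

# Illustrative usage: two noisy "views" of the same batch of features.
rng = np.random.default_rng(0)
z1 = l2_normalize(rng.normal(size=(128, 16)))
z2 = l2_normalize(z1 + 0.1 * rng.normal(size=(128, 16)))
print("alignment:", alignment_loss(z1, z2))
print("uniformity:", uniformity_loss(z1))
```

Driving the uniformity term down disperses all pairs, including same-class ones, which is the dispersion-versus-coherence conflict the abstract identifies.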