DiL: Discrete-anchored Representation Alignment for Semi-Supervised Continual Learning
Abstract
Leveraging the unlabeled stream is crucial yet challenging in Semi-Supervised Continual Learning (SSCL) under continual class expansion. Existing SSCL methods typically enforce dense pseudo-label consistency and indiscriminate distillation on unlabeled data, which can reinforce errors and intensify base–novel interference. To address these issues, we propose Discrete-anchored Incremental Learning (DiL), which grounds continual updates on reliable discrete anchors that remain stable under noisy pseudo-labels. DiL introduces Discrete Contrastive Distillation (DCD), which discretizes the distillation pathway and performs anchor-referenced selective distillation to curb error reinforcement. Meanwhile, Class-Aware Channel-Chunked Encoding (CACE) learns channel-chunked representations and exploits the confusion matrix induced by the discrete anchors to separate novel classes from confusable base classes. Extensive experiments on multiple datasets show that DiL achieves state-of-the-art performance across diverse SSCL protocols.
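The abstract does not specify the exact form of DCD's anchor-referenced selective distillation, so the following PyTorch sketch is only one plausible reading, not the paper's method: distill on an unlabeled sample only when its teacher feature's nearest discrete anchor agrees with its pseudo-label, which filters out the noisy samples most likely to reinforce errors. All names here (anchor_selective_distill, anchors, tau) are hypothetical.

```python
import torch
import torch.nn.functional as F

def anchor_selective_distill(student_feats, teacher_feats, anchors,
                             pseudo_labels, tau=0.1):
    """Hypothetical sketch of anchor-referenced selective distillation.

    student_feats, teacher_feats: (B, D) feature batches.
    anchors: (C, D) discrete class anchors (one per class, assumed fixed).
    pseudo_labels: (B,) pseudo-labels for the unlabeled batch.
    """
    # Normalize so dot products are cosine similarities.
    s = F.normalize(student_feats, dim=1)
    t = F.normalize(teacher_feats, dim=1)
    a = F.normalize(anchors, dim=1)

    # Assign each teacher feature to its nearest discrete anchor.
    anchor_ids = (t @ a.T).argmax(dim=1)

    # Selective step: keep only samples whose anchor assignment
    # agrees with the pseudo-label; the rest are treated as unreliable.
    mask = anchor_ids == pseudo_labels
    if not mask.any():
        return torch.zeros((), device=student_feats.device)

    # Discretized distillation pathway: match the student's distribution
    # over anchors to the teacher's on the selected samples only.
    teacher_logits = (t[mask] @ a.T) / tau
    student_logits = (s[mask] @ a.T) / tau
    return F.kl_div(F.log_softmax(student_logits, dim=1),
                    F.softmax(teacher_logits, dim=1),
                    reduction="batchmean")

if __name__ == "__main__":
    # Toy usage with random features: 8 samples, 16-dim, 5 classes.
    B, D, C = 8, 16, 5
    loss = anchor_selective_distill(torch.randn(B, D), torch.randn(B, D),
                                    torch.randn(C, D),
                                    torch.randint(0, C, (B,)))
    print(loss.item())
```

The key design point in this reading is that the anchors, rather than dense teacher logits, mediate the distillation signal, so a sample can only transfer knowledge through a discrete class hypothesis it has already been checked against.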