Breaking the Synthetic-Real Domain Shortcut for Training-Free Generative Replay-based Class Incremental Learning
Abstract
Class-incremental learning (CIL) requires models to continuously acquire new knowledge while avoiding catastrophic forgetting of old classes. While exemplar replay is effective, it raises privacy and storage concerns. Generative replay has therefore emerged as a viable alternative, synthesizing old-class data with frozen pretrained text-to-image (T2I) models and requiring no extra generator training. However, we observe that directly mixing synthetic old-class data with real new-class data during incremental training causes significant performance degradation. This issue stems from a "domain shortcut": the model learns to rely on domain-discriminative features that separate synthetic from real images, rather than on semantic class cues. To address this, we propose DREAM (Domain-Regularized Exemplar-free Alignment Model), which uses a training-free generator to synthesize old-class data and eliminates the domain shortcut via subspace rectification and orthogonal projection, while reinforcing semantic alignment through real-anchored prototype regularization. Extensive experiments on four datasets demonstrate that DREAM outperforms existing exemplar-free CIL methods and achieves state-of-the-art performance.