On the Power of Statistics in Class-Incremental Learning with Pretrained Models
Abstract
Recent class-incremental learning (CIL) methods built on large pretrained vision models have shown that strong performance can be retained even under strict data-access constraints. This raises a fundamental question: which properties of pretrained representations make such retention possible in the class-incremental setting? In this work, we show that class-level feature statistics play a central role in enabling effective CIL under strong pretraining. When the visual backbone is frozen, maintaining simple class-wise statistics, such as prototypes and low-order distributional information, can recover a substantial fraction of the performance achieved by static joint training across diverse benchmarks. We make this observation explicit through deliberately minimal reference points built on frozen CLIP representations. In particular, we demonstrate that competitive performance can be obtained even without continual training, by performing inference directly from accumulated class-level statistics. Our findings suggest that class-level statistics constitute an important and previously underemphasized component of recent CIL approaches based on pretrained models, and they offer a complementary perspective for understanding the strong empirical performance of these methods.
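To make the abstract's core idea concrete, the sketch below illustrates one way inference can proceed directly from accumulated class-level statistics, with no continual training. It is a minimal illustration, not the paper's actual reference points: the `StatisticsClassifier` name, the Mahalanobis-distance decision rule with a shared shrunk covariance, and the random features standing in for frozen CLIP embeddings are all assumptions made here for exposition.

```python
import numpy as np

class StatisticsClassifier:
    """Accumulates per-class feature statistics from a frozen backbone and
    classifies by Mahalanobis distance to class prototypes (illustrative)."""

    def __init__(self, feat_dim, ridge=1e-3):
        self.feat_dim = feat_dim
        self.ridge = ridge                   # shrinkage for covariance inversion
        self.means = {}                      # class id -> prototype (mean feature)
        self.scatter = np.zeros((feat_dim, feat_dim))  # shared within-class scatter
        self.count = 0

    def add_class(self, class_id, feats):
        """feats: (n, d) frozen-backbone features for one newly arrived class."""
        mu = feats.mean(axis=0)
        self.means[class_id] = mu            # first-order statistic: prototype
        centered = feats - mu
        self.scatter += centered.T @ centered  # second-order (low-order) statistic
        self.count += len(feats)

    def predict(self, feats):
        """Classify (m, d) features by Mahalanobis distance to all prototypes."""
        cov = self.scatter / max(self.count - len(self.means), 1)
        prec = np.linalg.inv(cov + self.ridge * np.eye(self.feat_dim))
        ids = list(self.means)
        protos = np.stack([self.means[c] for c in ids])   # (C, d)
        diff = feats[:, None, :] - protos[None, :, :]     # (m, C, d)
        dists = np.einsum('mcd,de,mce->mc', diff, prec, diff)
        return np.array(ids)[dists.argmin(axis=1)]

# Usage: classes arrive one at a time; only their statistics are stored.
rng = np.random.default_rng(0)
clf = StatisticsClassifier(feat_dim=512)
for cls in range(4):                          # incremental class arrival
    feats = rng.normal(loc=cls, size=(100, 512))
    clf.add_class(cls, feats)
preds = clf.predict(rng.normal(loc=2, size=(5, 512)))
```

Note that nothing here is trained with gradients: the classifier is fully determined by the stored prototypes and the accumulated scatter, which is the sense in which inference is performed "directly from accumulated class-level statistics."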