HypCL: Adapting CLIP in Hyperbolic Space for Continual Learning
Abstract
Recently, vision-language models (e.g., CLIP) have been increasingly adopted for continual learning to mitigate catastrophic forgetting. However, existing CLIP-based methods typically freeze the backbone to preserve pre-trained knowledge, which limits the model's ability to learn discriminative features for downstream tasks. In this paper, we introduce HypCL, a parameter-efficient framework that adapts CLIP in hyperbolic space for continual learning. Our key insight is that the exponentially expanding capacity of hyperbolic geometry naturally accommodates a growing class space and promotes stronger inter-class separation. Specifically, HypCL learns task-specific hyperbolic transformations and employs a lightweight task-weighting mechanism to aggregate these transformations across tasks. To fully exploit the enhanced feature separability afforded by hyperbolic geometry, HypCL maintains class prototypes computed from the adapted features, which serve as stable anchors for calibrating predictions during inference. Extensive experiments on standard class-incremental benchmarks demonstrate that HypCL consistently outperforms existing CLIP-based continual learning methods.