Parameter-Masked Decoupled Optimization for Cross-Domain Class-Incremental Learning
Abstract
Cross-domain class-incremental learning (CD-CIL) requires models to continually acquire new classes across shifting domains while retaining previously learned knowledge. Existing approaches often entangle what to update with how to update, leading to unstable adaptation and severe forgetting under domain shift. Inspired by the hippocampal learning mechanism that separates rapid adaptation from stable consolidation, we propose Parameter-Masked Decoupled Optimization (PMDO), which disentangles what knowledge is adapted from how learning proceeds. Specifically, we introduce a domain-aware knowledge decoupler that selectively adapts domain-relevant shared parameters, constraining incremental updates while preserving prior representations. To regulate how learning proceeds, we further design a stability-aware trajectory regulation scheme that steers optimization along transferable, stable trajectories, thereby reducing interference across domain transitions. As a result, PMDO enables effective cross-domain adaptation while mitigating catastrophic forgetting and maintaining long-term learnability. Extensive experiments on multiple benchmarks demonstrate the effectiveness of PMDO and its superiority over state-of-the-art methods.
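The abstract specifies PMDO only at a high level. As a rough, hypothetical sketch of the parameter-masking idea it names (not the authors' actual method), the snippet below zeroes the gradients of all but a selected subset of shared parameters, so that what is adapted is fixed by a mask while how updates proceed is left to the optimizer; the magnitude-based selection criterion, the keep_ratio value, and all function names are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of parameter-masked updates: only a selected subset of
# shared parameters is adapted for the new domain ("what" to update), while
# the optimizer step itself ("how" to update) is left unchanged.

def select_masks(model: nn.Module, keep_ratio: float = 0.2) -> dict:
    """Mark the top-|w| fraction of each tensor as adaptable.
    Magnitude is a stand-in criterion; the paper's decoupler is domain-aware."""
    masks = {}
    for name, p in model.named_parameters():
        k = max(1, int(keep_ratio * p.numel()))
        threshold = torch.topk(p.detach().abs().flatten(), k).values.min()
        masks[name] = (p.detach().abs() >= threshold).float()
    return masks

def masked_step(model: nn.Module, optimizer, loss, masks: dict) -> None:
    """Backprop, then zero the gradients of non-selected entries so only the
    masked subset is adapted; frozen entries preserve prior representations."""
    optimizer.zero_grad()
    loss.backward()
    for name, p in model.named_parameters():
        if p.grad is not None:
            p.grad.mul_(masks[name])
    optimizer.step()

# Toy usage on random data.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
masks = select_masks(model, keep_ratio=0.2)

x, y = torch.randn(8, 16), torch.randint(0, 4, (8,))
loss = nn.functional.cross_entropy(model(x), y)
masked_step(model, optimizer, loss, masks)
```

Note that with plain SGD (no momentum or weight decay), a zeroed gradient leaves the corresponding entry untouched; a stateful optimizer such as Adam would also need its moment buffers masked to keep frozen entries truly frozen.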