SABER: Continual Learning with Representation Conflict Management
Xuandi Luo ⋅ Huaidong Zhang ⋅ Yi Xie ⋅ Shengfeng He
Abstract
Continual learning seeks to develop models capable of acquiring new tasks sequentially while retaining prior knowledge. A central challenge in this setting is managing inherent knowledge conflicts that arise as overlapping or contradictory information is introduced across tasks. While parameter-efficient fine-tuning (PEFT) techniques, particularly those based on Low-Rank Adaptation (LoRA), have shown promise by reducing interference through parameter isolation or modular architectures, they often treat conflict as something to avoid rather than address directly. In this work, we propose $\underline{S}$ubspace-$\underline{A}$ligned $\underline{B}$alanc$\underline{e}$d $\underline{R}$ecomposition (SABER), a novel method that reframes continual learning as a problem of structured conflict management. SABER introduces a unified subspace alignment framework to support shared task representations, decomposes task-specific knowledge into orthogonal components to preserve distinct information, and recomposes them using an energy-aware balancing mechanism that coordinates contributions without compromising stability. Extensive experiments across multiple continual learning benchmarks show that SABER matches or surpasses state-of-the-art methods, offering a principled approach that directly addresses the root cause of forgetting by managing representational conflict.
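To make the three stages of the abstract concrete, the following is a minimal sketch of one plausible reading of the pipeline: per-task updates (e.g., LoRA deltas $B_tA_t$) are aligned to a shared subspace, the residual task-specific parts are kept mutually orthogonal, and the pieces are recombined with energy-proportional weights. All function names, ranks, and weighting choices here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def saber_sketch(task_deltas, shared_rank=4, eps=1e-12):
    """Illustrative sketch of subspace-aligned, energy-balanced recomposition.
    task_deltas: list of (d_out, d_in) per-task weight updates (e.g., LoRA B @ A).
    Returns a single recomposed update combining shared and task-specific parts.
    All steps are assumptions for exposition, not the paper's code."""
    # 1. Unified subspace alignment: a shared basis from the principal
    #    directions of all task updates pooled together.
    pooled = np.concatenate(task_deltas, axis=1)          # (d_out, T * d_in)
    U, _, _ = np.linalg.svd(pooled, full_matrices=False)
    shared = U[:, :shared_rank]                           # (d_out, shared_rank)

    # 2. Orthogonal task-specific components: project each update off the
    #    shared basis and off previously kept components, so distinct task
    #    information stays disjoint.
    specifics, kept_dirs = [], [shared]
    for delta in task_deltas:
        residual = delta.astype(float).copy()
        for basis in kept_dirs:
            residual -= basis @ (basis.T @ residual)
        specifics.append(residual)
        u, _, _ = np.linalg.svd(residual, full_matrices=False)
        kept_dirs.append(u[:, :shared_rank])              # top residual directions

    # 3. Energy-aware recomposition: weight each component by its relative
    #    Frobenius energy so no single task's contribution dominates the merge.
    energy = np.array([np.linalg.norm(s) ** 2 for s in specifics])
    weights = energy / (energy.sum() + eps)

    shared_part = shared @ (shared.T @ np.mean(task_deltas, axis=0))
    return shared_part + sum(w * s for w, s in zip(weights, specifics))
```

The energy normalization in step 3 is what keeps a single high-magnitude task update from overwriting the others; any bounded weighting scheme (e.g., softmax over energies) would serve the same balancing role in this sketch.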