Less Is More in Federated Continual Learning: RieSelect for Conflict-Aware Layer Selection in LLMs
Wenqi Qiu ⋅ Yipeng Zhou ⋅ Lin Zhu ⋅ Laizhong Cui
Abstract
Federated continual learning (FCL) of large language models on edge devices is constrained by a communication--stability--plasticity trilemma. We reveal a less-is-more phenomenon: beyond a moderate layer upload ratio, stability loss offsets saturated plasticity gains, so overall continual performance no longer improves. Moreover, layer-wise conflict is heavy-tailed and concentrates in a few layers; denser uplink increasingly includes these layers, which disproportionately drives forgetting and motivates selective sparse communication. Therefore, we introduce RieSelect, which treats stability as staying within a Fisher-metric safe basin around historical solutions. Under this safe-basin constraint, we derive a layer-wise conflict score and a closed-form certified safe step size for finite local updates, and formulate selective uplink as a knapsack-based utility--risk selection, balancing plasticity gains against stability risks. Extensive experiments show that, under a per-round uplink budget, RieSelect achieves the best performance across task orders. Beyond this matched-budget setting, under standard communication protocols, RieSelect improves average accuracy by 18.99–28.14 points while reducing total uplink by 53–115$\times$.
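The knapsack-based utility--risk selection described in the abstract can be sketched as a standard 0/1 knapsack over layers: each layer carries a plasticity utility, a conflict-derived stability risk, and an uplink cost, and the client uploads the subset maximizing net benefit within the round's budget. This is an illustrative sketch only; the paper's actual utility and conflict scores, the risk weight `lam`, and the budget units are not specified here, and all names and numbers below are hypothetical.

```python
def select_layers(utilities, risks, sizes, budget, lam=1.0):
    """Hypothetical 0/1 knapsack over layers: maximize the summed
    net score (utility - lam * risk) of uploaded layers subject to
    a total uplink size budget."""
    scores = [u - lam * r for u, r in zip(utilities, risks)]
    # dp[b] = (best total score, chosen layer set) using at most b budget
    dp = [(0.0, frozenset())] * (budget + 1)
    for i, s in enumerate(scores):
        if s <= 0:
            continue  # skip layers whose stability risk outweighs the gain
        new_dp = dp[:]  # copy so each layer is used at most once
        for b in range(sizes[i], budget + 1):
            cand_score = dp[b - sizes[i]][0] + s
            if cand_score > new_dp[b][0]:
                new_dp[b] = (cand_score, dp[b - sizes[i]][1] | {i})
        dp = new_dp
    return sorted(dp[budget][1])

# Example: a high-conflict layer (index 2) is excluded even though its
# raw utility is largest, matching the selective-uplink intuition.
chosen = select_layers(utilities=[3, 2, 4], risks=[1, 1, 5],
                       sizes=[2, 2, 3], budget=4, lam=1.0)
```

Here the third layer's conflict score makes its net contribution negative, so the selection spends the budget on the two low-conflict layers instead.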