Cross-task Calibration for Asynchronous Federated Continual Learning
Yichen Li ⋅ Haozhao Wang ⋅ Hang Su ⋅ Yulong Li ⋅ Xiaoquan Yi ⋅ Yankai Jiang ⋅ Chuang Zhao ⋅ Imran Razzak ⋅ Ruixuan Li
Abstract
Federated Continual Learning (FCL) aims to empower distributed devices to learn a sequence of tasks over time. However, existing FCL research largely relies on the impractical assumption that new tasks arrive synchronously, overlooking asynchronous user behavior and system latencies and forcing more efficient clients to endure costly inactivity. This practical necessity gives rise to Asynchronous Federated Continual Learning (AFCL), in which the server constantly receives a mixture of updates from clients at different time steps, leading to catastrophic task drift that corrupts the global model and prevents effective learning. In this paper, we introduce C$^2$-AFCL, a novel Cross-task Calibration framework that is the first to tackle task drift at the semantic level in the AFCL setting. Its core is a two-stage orthogonal calibration mechanism. First, intra-client calibration uses task-aware caches to mitigate variance from local client drift. Second, and more critically, inter-task interference calibration dynamically estimates an interference subspace from historical task knowledge; new updates are orthogonally projected to isolate and remove components that conflict with this subspace, preserving previous knowledge while learning new tasks. Extensive experiments show that C$^2$-AFCL significantly outperforms existing methods, demonstrating robust and efficient learning in dynamic federated environments.
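The inter-task interference calibration described above relies on a standard operation: projecting an update onto the orthogonal complement of a subspace. The following minimal NumPy sketch illustrates that operation only; the function name `calibrate_update` and the representation of the interference subspace as a list of direction vectors are hypothetical and not the paper's actual implementation.

```python
import numpy as np

def calibrate_update(update, interference_dirs):
    """Remove the components of a client update that lie in an
    estimated interference subspace (illustrative sketch).

    update: 1-D parameter-update vector of shape (d,).
    interference_dirs: list of 1-D vectors of shape (d,) spanning
        the interference subspace estimated from historical tasks.
    """
    # Orthonormalize the interference directions via QR decomposition.
    basis = np.stack(interference_dirs, axis=1)  # shape (d, k)
    Q, _ = np.linalg.qr(basis)
    # Project the update onto the interference subspace...
    projection = Q @ (Q.T @ update)
    # ...and subtract it, leaving only the orthogonal component.
    return update - projection
```

For example, if the interference subspace is spanned by the first coordinate axis, the calibrated update keeps the remaining coordinates untouched while zeroing the conflicting component.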