

Poster in Workshop: Federated Learning and Analytics in Practice: Algorithms, Systems, Applications, and Opportunities

Tackling the Data Heterogeneity in Asynchronous Federated Learning with Cached Update Calibration

Yujia Wang · Yuanpu Cao · Jingcheng Wu · Ruoyu Chen · Jinghui Chen


Abstract:
Asynchronous federated learning, which enables local clients to send their model updates to the server asynchronously without waiting for others, has recently emerged for its improved efficiency and scalability over traditional synchronous federated learning. In this paper, we study how asynchronous delay affects the convergence of asynchronous federated learning under non-i.i.d. data distributed across clients. We first analyze the convergence of a general asynchronous federated learning framework in a practical nonconvex stochastic optimization setting. Our result suggests that the asynchronous delay can significantly slow down convergence, especially when data heterogeneity is high. To further improve the convergence of asynchronous federated learning with heterogeneous data distributions, we then propose a novel asynchronous federated learning method with cached update calibration. In particular, the server caches the latest update from each client and reuses these cached updates to calibrate the global update at each round. We theoretically prove the convergence acceleration of our proposed method under nonconvex stochastic settings and empirically demonstrate its superior performance compared to standard asynchronous federated learning. Moreover, we extend our method with a memory-friendly adaptation in which the server only maintains a quantized cached update for each client to reduce server storage overhead.
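The cached update calibration described in the abstract can be illustrated with a short sketch. The Python snippet below is a hypothetical illustration, not the authors' implementation: it assumes model parameters are flat NumPy vectors, that each client sends a model delta, and that the calibration rule averages the per-client cached updates while substituting the arriving client's fresh update for its stale cache. The names (CalibratedAsyncServer, receive_update) and the single server learning rate are invented for this example.

import numpy as np

class CalibratedAsyncServer:
    """Hypothetical server-side sketch of asynchronous FL with cached update calibration."""

    def __init__(self, dim: int, num_clients: int, server_lr: float = 1.0):
        self.w = np.zeros(dim)                     # global model parameters
        self.lr = server_lr                        # server learning rate
        self.m = num_clients
        self.cache = np.zeros((num_clients, dim))  # latest update seen from each client
        self.cache_mean = np.zeros(dim)            # (1/m) * sum of cached updates

    def receive_update(self, cid: int, delta: np.ndarray) -> None:
        """Apply one asynchronous update `delta` arriving from client `cid`."""
        # Calibrated global direction: the average of all cached updates,
        # corrected by swapping client cid's stale cache for its fresh delta.
        v = self.cache_mean + (delta - self.cache[cid]) / self.m
        self.w += self.lr * v
        # Refresh this client's cache and the running mean of caches.
        self.cache_mean += (delta - self.cache[cid]) / self.m
        self.cache[cid] = delta.copy()

In this sketch, clients that have not reported recently still contribute through their cached updates, which is one way the calibration can offset the bias introduced by asynchronous delay under heterogeneous data. The memory-friendly variant mentioned in the abstract would store a quantized version of each row of `cache` instead of the full-precision vector; the quantization scheme itself is not specified here.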
