FedQueue: Queue-Aware Federated Learning for Cross-Facility HPC Training
Yijiang Li ⋅ Emon Dey ⋅ Zilinghan Li ⋅ Krishnan Raghavan ⋅ Ravi Madduri ⋅ Kibaek Kim
Abstract
Federated learning~(FL) across multiple HPC facilities faces stochastic \emph{admission delays} from batch schedulers that can dominate wall-clock training time. Synchronous FL suffers from severe stragglers, while asynchronous FL accumulates stale updates when queues spike. We propose \fedqueue{}, a queue-aware FL protocol that incorporates scheduler delays directly into training and aggregation: it (i) predicts per-facility queue delays online to budget local work, (ii) applies cutoff-based admission that buffers late arrivals to bound staleness, and (iii) performs staleness-aware aggregation to stabilize heterogeneous local workloads. We prove convergence for non-convex objectives at a rate of $\mathcal{O}(1/\sqrt{R})$ under bounded staleness, and show that the admission controls yield bounded staleness with high probability even under queue-prediction error. A real-world cross-facility deployment of \fedqueue{} achieves a 20.5\% improvement over baseline algorithms. Controlled queue simulations demonstrate robust gains over the baselines, including an approximately 34\% reduction in the time to reach a target accuracy under high queue variance and non-IID partitions.
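To make the aggregation step concrete, the following is a minimal Python sketch of cutoff-based admission combined with staleness-aware weighting. The polynomial decay $\alpha(\tau) = (1+\tau)^{-a}$, the decay exponent, the cutoff handling, and all function names are illustrative assumptions for exposition, not necessarily the paper's exact rule.

```python
import numpy as np

def staleness_weight(tau: int, a: float = 0.5) -> float:
    """Polynomial staleness decay (an assumed form; the paper's
    exact weighting may differ): alpha(tau) = (1 + tau)^(-a)."""
    return (1.0 + tau) ** (-a)

def aggregate(global_model: np.ndarray,
              updates: list[tuple[np.ndarray, int]],
              cutoff: int,
              lr: float = 1.0) -> np.ndarray:
    """Cutoff-based admission plus staleness-aware averaging.

    updates: list of (delta, tau) pairs, where delta is a client's
             model update and tau its staleness in rounds.
    cutoff:  updates staler than this are held back (simply skipped
             here; the protocol buffers them for a later round).
    """
    admitted = [(d, t) for d, t in updates if t <= cutoff]
    if not admitted:
        return global_model  # nothing admitted this round
    weights = np.array([staleness_weight(t) for _, t in admitted])
    weights /= weights.sum()  # normalize over admitted updates only
    step = sum(w * d for w, (d, _) in zip(weights, admitted))
    return global_model + lr * step
```

Under this assumed rule, fresher updates (small $\tau$) dominate the aggregate, while admitted-but-stale updates are down-weighted rather than discarded, which is one way to realize the bounded-staleness behavior the abstract describes.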