SPARe: Stacked Parallelism with Adaptive Reordering for Fault-Tolerant LLM Pretraining Systems with 100k+ GPUs
Jin Lee ⋅ Zhonghao Chen ⋅ Xuhang He ⋅ Robert Underwood ⋅ Bogdan Nicolae ⋅ Franck Cappello ⋅ Xiaoyi Lu ⋅ Sheng Di ⋅ Zheng Zhang
Abstract
In large-scale LLM pretraining systems with $100\mathrm{k}+$ GPUs, failures become the norm rather than the exception, and restart costs can dominate wall-clock training time. However, existing fault-tolerance mechanisms are largely unprepared for this restart-dominant regime. To address this challenge, we propose SPARe—Stacked Parallelism with Adaptive Reordering—a fault-tolerance framework that masks node failures during gradient synchronization by stacking redundant data shards across parallelism groups and adaptively reordering execution. SPARe achieves availability comparable to traditional replication while maintaining a near-constant computation overhead of only $2\sim3\times$, even under high redundancy where traditional replication's overhead would grow linearly. We derive closed-form expressions for the endurable failure count and computation overhead, validate them via SimGrid-based discrete-event simulation, and jointly optimize redundancy and checkpointing to minimize training time. At extreme scale with up to $600\mathrm{k}$ GPUs, SPARe reduces time-to-train by $40\sim50\%$ compared to traditional replication.