Machine learning (ML) training algorithms often possess an inherent self-correcting behavior due to their iterative-convergent nature. Recent systems exploit this property to achieve adaptability and efficiency in unreliable computing environments by relaxing the consistency of execution and allowing calculation errors to be self-corrected during training. However, the behavior of such systems is only well understood for specific types of calculation errors, such as those caused by staleness, reduced precision, or asynchronicity, and for specific algorithms, such as stochastic gradient descent. In this paper, we develop a general framework to quantify the effects of calculation errors on iterative-convergent algorithms. We then use this framework to derive a worst-case upper bound on the cost of arbitrary perturbations to model parameters during training and to design new strategies for checkpoint-based fault tolerance. Our system, SCAR, reduces the cost of partial failures by 78%–95% compared with traditional checkpoint-based fault tolerance across a variety of ML models and training algorithms, providing near-optimal performance in recovering from failures.
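As a minimal, self-contained illustration of the self-correcting behavior the abstract refers to (this is not the SCAR system itself, and the problem setup, learning rate, and perturbation step below are all hypothetical choices), the following Python sketch runs SGD on a synthetic least-squares problem, injects an arbitrary perturbation into half of the model parameters mid-training, and shows that continued iteration drives the loss back down:

```python
# Illustrative sketch only: SGD on a synthetic least-squares problem
# recovering from an arbitrary mid-training parameter perturbation.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear regression: minimize (1/2n) * ||X w - y||^2.
n, d = 1000, 10
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.01 * rng.normal(size=n)

def loss(w):
    r = X @ w - y
    return 0.5 * np.mean(r ** 2)

w = np.zeros(d)
lr = 0.05
for step in range(1, 401):
    i = rng.integers(0, n, size=32)              # mini-batch indices
    grad = X[i].T @ (X[i] @ w - y[i]) / len(i)   # stochastic gradient
    w -= lr * grad
    if step == 200:
        # Simulate a partial failure: overwrite half the parameters
        # with noise (an "arbitrary perturbation" to the model).
        w[: d // 2] = rng.normal(scale=10.0, size=d // 2)
        print(f"step {step}: perturbed, loss = {loss(w):.4f}")
    if step % 100 == 0:
        print(f"step {step}: loss = {loss(w):.4f}")
```

In this toy setting the loss spikes at the perturbation and then contracts again over subsequent iterations; the paper's contribution is to bound the cost of such perturbations in general and to use that bound to design checkpoint-based recovery strategies.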
Aurick Qiao (Petuum, Inc. and Carnegie Mellon University)
Bryon Aragam (Carnegie Mellon University)
Bingjing Zhang (Petuum, Inc.)
Eric Xing (Petuum, Inc. and Carnegie Mellon University)
Related Events
2019 Oral: Fault Tolerance in Iterative-Convergent Machine Learning
Tue Jun 11th, 4:40–5:00 PM, Grand Ballroom