JanusPipe: Efficient Pipeline Parallel Training for Machine Learning Interatomic Potentials
Hongyu Wang ⋅ Weijian Liu ⋅ Mingzhen Li ⋅ Hongtao Xu ⋅ Yan Wang ⋅ Weile Jia ⋅ Guangming Tan
Abstract
Discovering atom-level phenomena requires molecular dynamics (MD) simulations with ab initio accuracy. Machine learning interatomic potentials (MLIPs) enable stable, high-accuracy MD simulations, and their models exhibit scaling-law trends similar to large language models. However, the lack of a scalable and efficient distributed training system for conservative MLIPs hinders such scaling. The reason is that conservative MLIPs inherently follow a double-backward execution pattern: forces are obtained as gradients of the predicted energy during the forward pass, so training requires differentiating through a backward pass. This pattern creates a mismatch with existing distributed training systems, especially pipeline parallelism. We therefore present JanusPipe, an efficient 3D-parallel (PP/DP/GP) training system tailored for conservative MLIPs. It integrates SymFold to enable memory-efficient pipeline parallelism for conservative MLIPs, and WaveK to reduce pipeline bubbles by balancing the four-phase compute time. Experimental results on 32 GPUs show that JanusPipe improves throughput by $1.51\times$ and $1.45\times$ on average over 1F1B and Hanayo, respectively.
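To make the double-backward pattern concrete, the following minimal sketch (not the paper's system; a toy JAX example with a hypothetical pairwise energy model and parameter `theta`) shows why force-matching training differentiates through a gradient computation: forces are themselves gradients of the energy with respect to atomic positions, so the loss gradient with respect to model parameters is a second-order derivative.

```python
import jax
import jax.numpy as jnp

# Hypothetical toy energy model: an inverse-square pairwise potential
# scaled by a single learnable parameter `theta` (stand-in for an MLIP).
def energy(theta, positions):
    diffs = positions[:, None, :] - positions[None, :, :]
    # Add identity to the squared-distance matrix to avoid division by
    # zero on the diagonal (self-interactions).
    r2 = jnp.sum(diffs ** 2, axis=-1) + jnp.eye(positions.shape[0])
    return theta * jnp.sum(1.0 / r2)

# Conservative forces: the negative gradient of energy w.r.t. positions,
# computed during the model's "forward" pass.
def forces(theta, positions):
    return -jax.grad(energy, argnums=1)(theta, positions)

# Force-matching loss. Differentiating this w.r.t. `theta` must propagate
# through the force computation, i.e. through a gradient -- this is the
# double-backward execution pattern the abstract refers to.
def loss(theta, positions, target_forces):
    return jnp.mean((forces(theta, positions) - target_forces) ** 2)

grad_loss = jax.grad(loss)  # second-order differentiation w.r.t. theta
```

In a pipeline-parallel setting, this extra backward-through-backward step is what breaks the usual forward/backward stage scheduling that systems like 1F1B assume.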