Sophia: A Scalable Stochastic Second-order Optimizer for Language Model Pre-training
Hong Liu · Zhiyuan Li · David Hall · Percy Liang · Tengyu Ma
Event URL: https://openreview.net/forum?id=WAujq2apRW

Given the massive cost of language model pre-training, a non-trivial improvement of the optimization algorithm would lead to a material reduction in the time and cost of training. Adam and its variants have been state-of-the-art for years, while more sophisticated second-order (Hessian-based) optimizers often incur too much per-step overhead. In this paper, we propose Sophia, Second-order Clipped Stochastic Optimization, a simple scalable second-order optimizer that uses a light-weight estimate of the diagonal Hessian as the pre-conditioner. The update is the moving average of the gradients divided by the moving average of the estimated Hessian, followed by element-wise clipping. The clipping controls the worst-case update size and tames the negative impact of non-convexity and the rapid change of the Hessian along the trajectory. Sophia only estimates the diagonal Hessian every handful of iterations, which incurs negligible average per-step time and memory overhead. On language modeling with GPT-2 models of sizes ranging from 125M to 770M, Sophia achieves a 2x speed-up compared with Adam in the number of steps, total compute, and wall-clock time.
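The update rule described in the abstract can be sketched as follows. This is a minimal NumPy illustration of the general recipe (EMA of gradients, EMA of an estimated diagonal Hessian refreshed only occasionally, element-wise clipping), not the authors' implementation: the function name `sophia_step`, the hyperparameter values, and the way the Hessian estimate is passed in are all illustrative assumptions.

```python
import numpy as np

def sophia_step(theta, grad, hess_diag_est, state, lr=0.05,
                beta1=0.96, beta2=0.99, rho=0.04, eps=1e-12):
    """One Sophia-style update, sketched from the abstract (illustrative).

    hess_diag_est may be None on steps where the diagonal Hessian is not
    re-estimated; the abstract says it is refreshed only every handful of
    iterations, with the EMA carried in between.
    """
    m, h = state
    m = beta1 * m + (1 - beta1) * grad                 # EMA of gradients
    if hess_diag_est is not None:
        h = beta2 * h + (1 - beta2) * hess_diag_est    # EMA of Hessian diagonal
    # Pre-conditioned step with element-wise clipping to [-rho, rho]:
    # the clip bounds the worst-case per-coordinate update size.
    update = np.clip(m / np.maximum(h, eps), -rho, rho)
    return theta - lr * update, (m, h)
```

On a toy diagonal quadratic (where the exact diagonal Hessian is known), iterating this step drives the parameters toward the minimum while each coordinate moves at most `lr * rho` per step.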

Author Information

Hong Liu (Stanford University)
Zhiyuan Li (Computer Science Department, Stanford University)
David Hall (Stanford University)
Percy Liang (Stanford University)
Tengyu Ma (Stanford University)