On the Linear Speedup Analysis of Communication Efficient Momentum SGD for Distributed Non-Convex Optimization
Hao Yu · Rong Jin · Sen Yang

Wed Jun 12th 11:20 -- 11:25 AM @ Room 103

Recent developments in large-scale distributed machine learning applications, e.g., deep neural networks, benefit enormously from advances in distributed non-convex optimization techniques, e.g., distributed Stochastic Gradient Descent (SGD). A series of recent works study the linear speedup property of distributed SGD variants with reduced communication. The linear speedup property enables us to scale out computing capability by adding more computing nodes to the system. Reduced communication complexity is desirable because communication overhead is often the performance bottleneck in distributed systems. Momentum methods are increasingly adopted for training machine learning models, since they often converge faster and generalize better; for example, many practitioners use distributed SGD with momentum to train deep neural networks on big data. However, it remains unclear whether any distributed momentum SGD possesses the same linear speedup property as distributed SGD while also enjoying reduced communication complexity. This paper fills this gap between practice and theory by considering a distributed communication-efficient momentum SGD method and proving its linear speedup property.
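The abstract does not spell out the algorithm, but a standard template for communication-efficient distributed momentum SGD is local momentum SGD with periodic model averaging: each worker takes several local momentum steps and the workers only communicate (average parameters and momentum buffers) every few iterations. The sketch below simulates this template on a toy quadratic objective; the function name, hyperparameters, and the choice to average momentum buffers are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def distributed_momentum_sgd(grad_fn, x0, n_workers=4, steps=200,
                             sync_period=8, lr=0.05, beta=0.9, seed=0):
    """Simulate communication-efficient momentum SGD (assumed template):
    each worker runs local momentum SGD, and all workers average their
    models and momentum buffers every `sync_period` iterations instead
    of communicating at every step."""
    rng = np.random.default_rng(seed)
    x = np.tile(np.asarray(x0, dtype=float), (n_workers, 1))  # per-worker parameters
    v = np.zeros_like(x)                                      # per-worker momentum buffers
    for t in range(steps):
        for w in range(n_workers):
            # stochastic gradient: true gradient plus simulated sampling noise
            g = grad_fn(x[w]) + 0.1 * rng.standard_normal(x[w].shape)
            v[w] = beta * v[w] + g        # heavy-ball momentum update
            x[w] = x[w] - lr * v[w]
        if (t + 1) % sync_period == 0:
            # communication round: all-reduce (here, a plain average)
            x[:] = x.mean(axis=0)
            v[:] = v.mean(axis=0)
    return x.mean(axis=0)

# Toy objective f(x) = 0.5 * ||x - target||^2, whose gradient is x - target.
target = np.array([1.0, -2.0])
x_final = distributed_momentum_sgd(lambda x: x - target, x0=np.zeros(2))
```

With communication only every `sync_period` steps, the number of communication rounds drops by that factor, while averaging across workers reduces gradient-noise variance, which is the intuition behind the linear speedup claim.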

Author Information

Hao Yu (Alibaba Group (US) Inc)
Rong Jin (Alibaba Group)
Sen Yang
