A standard approach in large-scale machine learning is distributed stochastic gradient training, which requires computing aggregated stochastic gradients over multiple nodes of a network. Communication is a major bottleneck in such applications, and in recent years compressed stochastic gradient methods such as QSGD (quantized SGD) and sparse SGD have been proposed to reduce it. It was also shown that error compensation can be combined with compression to achieve better convergence, in a scheme where each node compresses its local stochastic gradient and broadcasts the result to all other nodes over the network in a single pass. However, such a single-pass broadcast approach is not realistic in many practical implementations. For example, under the popular parameter-server model for distributed learning, the worker nodes need to send the compressed local gradients to the parameter server, which performs the aggregation. The parameter server then has to compress the aggregated stochastic gradient again before sending it back to the worker nodes. In this work, we provide a detailed analysis of this two-pass communication model, with error-compensated compression on both the worker nodes and the parameter server. We show that the error-compensated stochastic gradient algorithm admits three very nice properties: 1) it is compatible with an \emph{arbitrary} compression technique; 2) it achieves a better convergence rate than non-error-compensated stochastic gradient methods such as QSGD and sparse SGD; 3) it admits linear speedup with respect to the number of workers. An empirical study is also conducted to validate our theoretical results.
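The two-pass (worker-side and server-side) error-compensated compression described in the abstract can be sketched as follows. This is a minimal single-process NumPy simulation, not the authors' implementation: the top-k compressor, the function names, and the averaging convention are illustrative assumptions, since the analysis applies to an arbitrary compressor.

```python
import numpy as np

def topk_compress(v, k):
    """Keep the k largest-magnitude entries of v and zero out the rest
    (one example of a compressor; the scheme allows any compressor)."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def doublesqueeze_step(grads, worker_err, server_err, k):
    """One round of double-pass error-compensated compression.

    grads: list of local stochastic gradients, one per worker.
    worker_err: per-worker error memory carried across rounds.
    server_err: the parameter server's error memory.
    Returns the twice-compressed aggregate that every worker applies,
    plus the updated error memories.
    """
    compressed = []
    for i, g in enumerate(grads):
        corrected = g + worker_err[i]        # add residual from the last round
        c = topk_compress(corrected, k)      # first pass: compress on the worker
        worker_err[i] = corrected - c        # remember what the compressor dropped
        compressed.append(c)
    agg = sum(compressed) / len(compressed)  # server-side aggregation
    agg_corrected = agg + server_err
    out = topk_compress(agg_corrected, k)    # second pass: compress on the server
    server_err = agg_corrected - out         # server remembers its residual too
    return out, worker_err, server_err
```

Carrying the residuals forward is what makes the method compatible with an arbitrary compressor: information dropped in one round is re-injected in the next, so the compression error does not accumulate across iterations.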
Author Information
Hanlin Tang (University of Rochester)
Chen Yu (University of Rochester)
Xiangru Lian (University of Rochester)
Tong Zhang (Tencent AI Lab)

Tong Zhang is a professor of Computer Science and Mathematics at the Hong Kong University of Science and Technology. His research interests are machine learning, big data, and their applications. He obtained a BA in Mathematics and Computer Science from Cornell University and a PhD in Computer Science from Stanford University. Before joining HKUST, Tong Zhang was a professor at Rutgers University, and previously worked at IBM and Yahoo as a research scientist, at Baidu as the director of the Big Data Lab, and at Tencent as the founding director of its AI Lab. Tong Zhang is an ASA fellow and an IMS fellow, has served as chair or area chair at major machine learning conferences such as NIPS, ICML, and COLT, and has served as an associate editor for top machine learning journals such as PAMI, JMLR, and the Machine Learning Journal.
Ji Liu (Kwai Seattle AI lab, University of Rochester)
Ji Liu is an Assistant Professor in Computer Science, Electrical and Computer Engineering, and the Goergen Institute for Data Science at the University of Rochester (UR). He received his Ph.D. in Computer Science from the University of Wisconsin-Madison. His research interests focus on distributed optimization and machine learning. He also has rich experience in various data analytics applications in healthcare, bioinformatics, social networks, computer vision, etc. His recent research focuses on asynchronous parallel optimization, sparse learning (compressed sensing) theory and algorithms, structural model estimation, online learning, abnormal event detection, feature/pattern extraction, etc. He has published more than 40 papers in top CS journals and conferences, including JMLR, SIOPT, TPAMI, TIP, TKDD, NIPS, ICML, UAI, SIGKDD, ICCV, CVPR, ECCV, AAAI, IJCAI, ACM MM, etc. He won the Best Paper honorable mention award at SIGKDD 2010 and the Best Student Paper award at UAI 2015.
Related Events (a corresponding poster, oral, or spotlight)
-
2019 Poster: DoubleSqueeze: Parallel Stochastic Gradient Descent with Double-pass Error-Compensated Compression »
Fri. Jun 14th 01:30 -- 04:00 AM Room Pacific Ballroom #99
More from the Same Authors
-
2021 : Efficient Exploration by HyperDQN in Deep Reinforcement Learning »
Ziniu Li · Yingru Li · Hao Liang · Tong Zhang -
2023 Poster: Beyond Uniform Lipschitz Condition in Differentially Private Optimization »
Rudrajit Das · Satyen Kale · Zheng Xu · Tong Zhang · Sujay Sanghavi -
2023 Poster: What is Essential for Unseen Goal Generalization of Offline Goal-conditioned RL? »
Rui Yang · Yong LIN · Xiaoteng Ma · Hao Hu · Chongjie Zhang · Tong Zhang -
2023 Poster: Learning in POMDPs is Sample-Efficient with Hindsight Observability »
Jonathan Lee · Alekh Agarwal · Christoph Dann · Tong Zhang -
2023 Poster: Generalized Polyak Step Size for First Order Optimization with Momentum »
Xiaoyu Wang · Mikael Johansson · Tong Zhang -
2023 Poster: On the Convergence of Federated Averaging with Cyclic Client Participation »
Yae Jee Cho · PRANAY SHARMA · Gauri Joshi · Zheng Xu · Satyen Kale · Tong Zhang -
2023 Poster: Weakly Supervised Disentangled Generative Causal Representation Learning »
Xinwei Shen · Furui Liu · Hanze Dong · Qing Lian · Zhitang Chen · Tong Zhang -
2023 Poster: Corruption-Robust Algorithms with Uncertainty Weighting for Nonlinear Contextual Bandits and Markov Decision Processes »
Chenlu Ye · Wei Xiong · Quanquan Gu · Tong Zhang -
2022 Poster: A Self-Play Posterior Sampling Algorithm for Zero-Sum Markov Games »
Wei Xiong · Han Zhong · Chengshuai Shi · Cong Shen · Tong Zhang -
2022 Poster: Pessimistic Minimax Value Iteration: Provably Efficient Equilibrium Learning from Offline Datasets »
Han Zhong · Wei Xiong · Jiyuan Tan · Liwei Wang · Tong Zhang · Zhaoran Wang · Zhuoran Yang -
2022 Spotlight: Pessimistic Minimax Value Iteration: Provably Efficient Equilibrium Learning from Offline Datasets »
Han Zhong · Wei Xiong · Jiyuan Tan · Liwei Wang · Tong Zhang · Zhaoran Wang · Zhuoran Yang -
2022 Spotlight: A Self-Play Posterior Sampling Algorithm for Zero-Sum Markov Games »
Wei Xiong · Han Zhong · Chengshuai Shi · Cong Shen · Tong Zhang -
2022 Poster: Benefits of Overparameterized Convolutional Residual Networks: Function Approximation under Smoothness Constraint »
Hao Liu · Minshuo Chen · Siawpeng Er · Wenjing Liao · Tong Zhang · Tuo Zhao -
2022 Spotlight: Benefits of Overparameterized Convolutional Residual Networks: Function Approximation under Smoothness Constraint »
Hao Liu · Minshuo Chen · Siawpeng Er · Wenjing Liao · Tong Zhang · Tuo Zhao -
2022 Poster: A Theoretical Analysis on Independence-driven Importance Weighting for Covariate-shift Generalization »
Renzhe Xu · Xingxuan Zhang · Zheyan Shen · Tong Zhang · Peng Cui -
2022 Poster: Sparse Invariant Risk Minimization »
Xiao Zhou · Yong LIN · Weizhong Zhang · Tong Zhang -
2022 Poster: Model Agnostic Sample Reweighting for Out-of-Distribution Learning »
Xiao Zhou · Yong LIN · Renjie Pi · Weizhong Zhang · Renzhe Xu · Peng Cui · Tong Zhang -
2022 Poster: Probabilistic Bilevel Coreset Selection »
Xiao Zhou · Renjie Pi · Weizhong Zhang · Yong LIN · Zonghao Chen · Tong Zhang -
2022 Spotlight: A Theoretical Analysis on Independence-driven Importance Weighting for Covariate-shift Generalization »
Renzhe Xu · Xingxuan Zhang · Zheyan Shen · Tong Zhang · Peng Cui -
2022 Spotlight: Probabilistic Bilevel Coreset Selection »
Xiao Zhou · Renjie Pi · Weizhong Zhang · Yong LIN · Zonghao Chen · Tong Zhang -
2022 Spotlight: Model Agnostic Sample Reweighting for Out-of-Distribution Learning »
Xiao Zhou · Yong LIN · Renjie Pi · Weizhong Zhang · Renzhe Xu · Peng Cui · Tong Zhang -
2022 Spotlight: Sparse Invariant Risk Minimization »
Xiao Zhou · Yong LIN · Weizhong Zhang · Tong Zhang -
2021 Poster: Streaming Bayesian Deep Tensor Factorization »
Shikai Fang · Zheng Wang · Zhimeng Pan · Ji Liu · Shandian Zhe -
2021 Spotlight: Streaming Bayesian Deep Tensor Factorization »
Shikai Fang · Zheng Wang · Zhimeng Pan · Ji Liu · Shandian Zhe -
2021 Poster: DouZero: Mastering DouDizhu with Self-Play Deep Reinforcement Learning »
Daochen Zha · Jingru Xie · Wenye Ma · Sheng Zhang · Xiangru Lian · Xia Hu · Ji Liu -
2021 Poster: 1-bit Adam: Communication Efficient Large-Scale Training with Adam's Convergence Speed »
Hanlin Tang · Shaoduo Gan · Ammar Ahmad Awan · Samyam Rajbhandari · Conglong Li · Xiangru Lian · Ji Liu · Ce Zhang · Yuxiong He -
2021 Spotlight: 1-bit Adam: Communication Efficient Large-Scale Training with Adam's Convergence Speed »
Hanlin Tang · Shaoduo Gan · Ammar Ahmad Awan · Samyam Rajbhandari · Conglong Li · Xiangru Lian · Ji Liu · Ce Zhang · Yuxiong He -
2021 Spotlight: DouZero: Mastering DouDizhu with Self-Play Deep Reinforcement Learning »
Daochen Zha · Jingru Xie · Wenye Ma · Sheng Zhang · Xiangru Lian · Xia Hu · Ji Liu -
2021 Town Hall: Town Hall »
John Langford · Marina Meila · Tong Zhang · Le Song · Stefanie Jegelka · Csaba Szepesvari -
2020 Poster: Guided Learning of Nonconvex Models through Successive Functional Gradient Optimization »
Rie Johnson · Tong Zhang -
2019 Poster: Distributed Learning over Unreliable Networks »
Chen Yu · Hanlin Tang · Cedric Renggli · Simon Kassing · Ankit Singla · Dan Alistarh · Ce Zhang · Ji Liu -
2019 Oral: Distributed Learning over Unreliable Networks »
Chen Yu · Hanlin Tang · Cedric Renggli · Simon Kassing · Ankit Singla · Dan Alistarh · Ce Zhang · Ji Liu -
2019 Poster: Grid-Wise Control for Multi-Agent Reinforcement Learning in Video Game AI »
Lei Han · Peng Sun · Yali Du · Jiechao Xiong · Qing Wang · Xinghai Sun · Han Liu · Tong Zhang -
2019 Oral: Grid-Wise Control for Multi-Agent Reinforcement Learning in Video Game AI »
Lei Han · Peng Sun · Yali Du · Jiechao Xiong · Qing Wang · Xinghai Sun · Han Liu · Tong Zhang -
2019 Tutorial: Causal Inference and Stable Learning »
Tong Zhang · Peng Cui -
2018 Poster: An Algorithmic Framework of Variable Metric Over-Relaxed Hybrid Proximal Extra-Gradient Method »
Li Shen · Peng Sun · Yitong Wang · Wei Liu · Tong Zhang -
2018 Poster: Candidates vs. Noises Estimation for Large Multi-Class Classification Problem »
Lei Han · Yiheng Huang · Tong Zhang -
2018 Poster: Fully Decentralized Multi-Agent Reinforcement Learning with Networked Agents »
Kaiqing Zhang · Zhuoran Yang · Han Liu · Tong Zhang · Tamer Basar -
2018 Oral: An Algorithmic Framework of Variable Metric Over-Relaxed Hybrid Proximal Extra-Gradient Method »
Li Shen · Peng Sun · Yitong Wang · Wei Liu · Tong Zhang -
2018 Oral: Fully Decentralized Multi-Agent Reinforcement Learning with Networked Agents »
Kaiqing Zhang · Zhuoran Yang · Han Liu · Tong Zhang · Tamer Basar -
2018 Oral: Candidates vs. Noises Estimation for Large Multi-Class Classification Problem »
Lei Han · Yiheng Huang · Tong Zhang -
2018 Poster: Graphical Nonconvex Optimization via an Adaptive Convex Relaxation »
Qiang Sun · Kean Ming Tan · Han Liu · Tong Zhang -
2018 Poster: Composite Functional Gradient Learning of Generative Adversarial Models »
Rie Johnson · Tong Zhang -
2018 Poster: Error Compensated Quantized SGD and its Applications to Large-scale Distributed Optimization »
Jiaxiang Wu · Weidong Huang · Junzhou Huang · Tong Zhang -
2018 Oral: Graphical Nonconvex Optimization via an Adaptive Convex Relaxation »
Qiang Sun · Kean Ming Tan · Han Liu · Tong Zhang -
2018 Oral: Composite Functional Gradient Learning of Generative Adversarial Models »
Rie Johnson · Tong Zhang -
2018 Oral: Error Compensated Quantized SGD and its Applications to Large-scale Distributed Optimization »
Jiaxiang Wu · Weidong Huang · Junzhou Huang · Tong Zhang -
2018 Poster: Safe Element Screening for Submodular Function Minimization »
Weizhong Zhang · Bin Hong · Lin Ma · Wei Liu · Tong Zhang -
2018 Poster: End-to-end Active Object Tracking via Reinforcement Learning »
Wenhan Luo · Peng Sun · Fangwei Zhong · Wei Liu · Tong Zhang · Yizhou Wang -
2018 Poster: Asynchronous Decentralized Parallel Stochastic Gradient Descent »
Xiangru Lian · Wei Zhang · Ce Zhang · Ji Liu -
2018 Poster: $D^2$: Decentralized Training over Decentralized Data »
Hanlin Tang · Xiangru Lian · Ming Yan · Ce Zhang · Ji Liu -
2018 Oral: $D^2$: Decentralized Training over Decentralized Data »
Hanlin Tang · Xiangru Lian · Ming Yan · Ce Zhang · Ji Liu -
2018 Oral: End-to-end Active Object Tracking via Reinforcement Learning »
Wenhan Luo · Peng Sun · Fangwei Zhong · Wei Liu · Tong Zhang · Yizhou Wang -
2018 Oral: Safe Element Screening for Submodular Function Minimization »
Weizhong Zhang · Bin Hong · Lin Ma · Wei Liu · Tong Zhang -
2018 Oral: Asynchronous Decentralized Parallel Stochastic Gradient Descent »
Xiangru Lian · Wei Zhang · Ce Zhang · Ji Liu -
2017 Poster: ZipML: Training Linear Models with End-to-End Low Precision, and a Little Bit of Deep Learning »
Hantian Zhang · Jerry Li · Kaan Kara · Dan Alistarh · Ji Liu · Ce Zhang -
2017 Poster: Projection-free Distributed Online Learning in Networks »
Wenpeng Zhang · Peilin Zhao · Wenwu Zhu · Steven Hoi · Tong Zhang -
2017 Talk: ZipML: Training Linear Models with End-to-End Low Precision, and a Little Bit of Deep Learning »
Hantian Zhang · Jerry Li · Kaan Kara · Dan Alistarh · Ji Liu · Ce Zhang -
2017 Talk: Projection-free Distributed Online Learning in Networks »
Wenpeng Zhang · Peilin Zhao · Wenwu Zhu · Steven Hoi · Tong Zhang -
2017 Poster: Efficient Distributed Learning with Sparsity »
Jialei Wang · Mladen Kolar · Nati Srebro · Tong Zhang -
2017 Poster: On The Projection Operator to A Three-view Cardinality Constrained Set »
Haichuan Yang · Shupeng Gui · Chuyang Ke · Daniel Stefankovic · Ryohei Fujimaki · Ji Liu -
2017 Talk: Efficient Distributed Learning with Sparsity »
Jialei Wang · Mladen Kolar · Nati Srebro · Tong Zhang -
2017 Talk: On The Projection Operator to A Three-view Cardinality Constrained Set »
Haichuan Yang · Shupeng Gui · Chuyang Ke · Daniel Stefankovic · Ryohei Fujimaki · Ji Liu