Poster
signSGD: Compressed Optimisation for Non-Convex Problems
Jeremy Bernstein · Yu-Xiang Wang · Kamyar Azizzadenesheli · Anima Anandkumar

Wed Jul 11 09:15 AM -- 12:00 PM (PDT) @ Hall B #72
Training large neural networks requires distributing learning across multiple workers, where the cost of communicating gradients can be a significant bottleneck. signSGD alleviates this problem by transmitting just the sign of each minibatch stochastic gradient. We prove that it can get the best of both worlds: compressed gradients and SGD-level convergence rate. The relative $\ell_1/\ell_2$ geometry of gradients, noise and curvature informs whether signSGD or SGD is theoretically better suited to a particular problem. On the practical side, we find that the momentum counterpart of signSGD is able to match the accuracy and convergence speed of Adam on deep ImageNet models. We extend our theory to the distributed setting, where the parameter server uses majority vote to aggregate gradient signs from each worker, enabling 1-bit compression of worker-server communication in both directions. Using a theorem by Gauss, we prove that majority vote can achieve the same reduction in variance as full-precision distributed SGD. Thus, there is great promise for sign-based optimisation schemes to achieve fast communication and fast convergence. Code to reproduce experiments can be found at https://github.com/jxbz/signSGD.
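A minimal sketch of the two ideas described above: each worker transmits only the sign of its stochastic gradient, and the parameter server aggregates by element-wise majority vote, so communication is 1 bit per coordinate in each direction. This is an illustrative toy example (a noisy quadratic objective, made-up names such as stochastic_grad, and an arbitrary learning rate), not the authors' released code; see the linked GitHub repository for that.

import numpy as np

rng = np.random.default_rng(0)
dim, num_workers, lr, steps = 10, 5, 0.05, 200
target = rng.normal(size=dim)     # minimiser of the toy objective
x = np.zeros(dim)                 # shared parameter vector

def stochastic_grad(x):
    """Gradient of 0.5 * ||x - target||^2 plus Gaussian noise (one 'minibatch')."""
    return (x - target) + rng.normal(scale=0.5, size=x.shape)

for step in range(steps):
    # Each worker computes a stochastic gradient and sends only its sign.
    worker_signs = np.stack([np.sign(stochastic_grad(x)) for _ in range(num_workers)])
    # The parameter server takes an element-wise majority vote over the signs
    # and broadcasts the result, again just 1 bit per coordinate.
    vote = np.sign(worker_signs.sum(axis=0))
    x -= lr * vote

print("distance to optimum:", np.linalg.norm(x - target))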

Author Information

Jeremy Bernstein (Caltech)
Yu-Xiang Wang (UC Santa Barbara)

Yu-Xiang Wang is the Eugene Aas Assistant Professor of Computer Science at UCSB. He runs the Statistical Machine Learning lab and co-founded the UCSB Center for Responsible Machine Learning. He also holds a visiting position at Amazon Web Services. Yu-Xiang's research interests include statistical theory and methodology, differential privacy, reinforcement learning, online learning, and deep learning.

Kamyar Azizzadenesheli (UC Irvine/Caltech)

Kamyar Azizzadenesheli is a graduate student in the TensorLab at the University of California, Irvine, supervised by Prof. Anima Anandkumar. Currently, he is a visiting researcher at Caltech, hosted by Anima, working with machine learning and control researchers in the CMS department and at the Center for Autonomous Systems and Technologies (CAST), in close collaboration with Prof. Yisong Yue, Prof. Soon-Jo Chung, and Prof. Joel W. Burdick. He is a former visiting researcher at Stanford University, hosted by Prof. Emma Brunskill, and a former researcher at the Simons Institute, UC Berkeley. He is also a former guest researcher at INRIA, France, where he worked with the SequeL team hosted by Dr. Alessandro Lazaric, and a former visitor at the Microsoft Research labs in New England and New York.

Anima Anandkumar (Amazon AI & Caltech)

Anima Anandkumar is a Bren Professor at Caltech and Director of ML Research at NVIDIA. She was previously a Principal Scientist at Amazon Web Services. She is passionate about designing principled AI algorithms and applying them to interdisciplinary domains. She has received several honors, including the IEEE Fellowship, the Alfred P. Sloan Fellowship, the NSF CAREER Award, Young Investigator Awards from the DoD, VentureBeat's "Women in AI" award, the NYTimes GoodTech award, and faculty fellowships from Microsoft, Google, Facebook, and Adobe. She is part of the World Economic Forum's Expert Network. She has appeared in the PBS Frontline documentary "Amazon Empire" and has given keynotes in many forums, including TEDx, KDD, ICLR, and ACM. Anima received her BTech from the Indian Institute of Technology Madras and her PhD from Cornell University, did postdoctoral research at MIT, and held an assistant professorship at the University of California, Irvine.
