Communication overhead is one of the key challenges hindering the scalability of distributed optimization algorithms for training large neural networks. In recent years, a great deal of research has sought to reduce communication cost by compressing the gradient vector or by using local updates with periodic model averaging. In this paper, we advocate the use of redundancy as a route to communication-efficient distributed stochastic algorithms for non-convex optimization. In particular, we show, both theoretically and empirically, that by properly infusing redundancy into the training data alongside model averaging, the number of communication rounds can be reduced significantly. More precisely, we show that redundancy reduces the residual error of local averaging, so the same level of accuracy is reached with fewer communication rounds than previous algorithms require. Empirical studies on the CIFAR10, CIFAR100, and ImageNet datasets in a distributed environment complement our theoretical results; they show that our algorithms offer additional benefits, including tolerance to failures and greater gradient diversity.
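To make the idea concrete, below is a minimal NumPy sketch of local SGD with periodic model averaging in which each worker's data shard is augmented with redundant samples drawn from the other workers' shards. The objective (least squares), the `redundancy` fraction, and all hyperparameters here are illustrative assumptions, not the paper's exact algorithm or constants; the paper's analysis covers general non-convex objectives.

```python
# Sketch: local SGD with periodic averaging over redundant data shards.
# Illustrative only: the quadratic loss and `redundancy` knob are our own
# choices, not the authors' exact construction.
import numpy as np

rng = np.random.default_rng(0)
d, n, K = 10, 4000, 4            # dimension, samples, number of workers
tau, rounds, lr = 20, 25, 0.05   # local steps per round, communication rounds, step size
redundancy = 0.25                # fraction of extra samples copied from other shards

# Synthetic least-squares data: y = X w* + noise.
X = rng.normal(size=(n, d))
w_star = rng.normal(size=d)
y = X @ w_star + 0.1 * rng.normal(size=n)

# Partition the data, then infuse redundancy: each worker also stores a
# random subset of the other workers' samples.
shards = np.array_split(rng.permutation(n), K)
local_idx = []
for k in range(K):
    others = np.concatenate([shards[j] for j in range(K) if j != k])
    extra = rng.choice(others, size=int(redundancy * len(shards[k])), replace=False)
    local_idx.append(np.concatenate([shards[k], extra]))

w = np.zeros(d)                  # shared model after each averaging round
for r in range(rounds):
    local_models = []
    for k in range(K):
        wk = w.copy()
        for _ in range(tau):     # tau local SGD steps between communications
            batch = rng.choice(local_idx[k], size=32)
            grad = X[batch].T @ (X[batch] @ wk - y[batch]) / len(batch)
            wk -= lr * grad
        local_models.append(wk)
    w = np.mean(local_models, axis=0)   # one communication round: average models
    loss = 0.5 * np.mean((X @ w - y) ** 2)
    print(f"round {r+1:2d}  loss {loss:.4f}")
```

Raising `redundancy` makes the workers' local objectives more similar, which shrinks the drift between local models accumulated over the `tau` local steps; this is the mechanism by which the same accuracy is reached in fewer averaging rounds.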
Author Information
Farzin Haddadpour (Pennsylvania State University)
Mohammad Mahdi Kamani (Pennsylvania State University)
Mehrdad Mahdavi (Pennsylvania State University)
Viveck Cadambe (Pennsylvania State University)
Related Events (a corresponding poster, oral, or spotlight)
- 2019 Oral: Trading Redundancy for Communication: Speeding up Distributed SGD for Non-convex Optimization
  Wed. Jun 12th 06:35 -- 06:40 PM, Room 103
More from the Same Authors
- 2019 : Poster Session I
  Stark Draper · Mehmet Aktas · Basak Guler · Hongyi Wang · Venkata Gandikota · Hyegyeong Park · Jinhyun So · Lev Tauz · hema venkata krishna giri Narra · Zhifeng Lin · Mohammadali Maddahali · Yaoqing Yang · Sanghamitra Dutta · Amirhossein Reisizadeh · Jianyu Wang · Eren Balevi · Siddharth Jain · Paul McVay · Michael Rudow · Pedro Soto · Jun Li · Adarsh Subramaniam · Umut Demirhan · Vipul Gupta · Deniz Oktay · Leighton P Barnes · Johannes Ballé · Farzin Haddadpour · Haewon Jeong · Rong-Rong Chen · Mohammad Fahim
- 2019 : Targeted Meta-Learning for Critical Incident Detection in Weather Data
  Mohammad Mahdi Kamani · Sadegh Farhang · Mehrdad Mahdavi · James Wang
- 2019 : Networking Lunch (provided) + Poster Session
  Abraham Stanway · Alex Robson · Aneesh Rangnekar · Ashesh Chattopadhyay · Ashley Pilipiszyn · Benjamin LeRoy · Bolong Cheng · Ce Zhang · Chaopeng Shen · Christian Schroeder · Christian Clough · Clement DUHART · Clement Fung · Cozmin Ududec · Dali Wang · David Dao · di wu · Dimitrios Giannakis · Dino Sejdinovic · Doina Precup · Duncan Watson-Parris · Gege Wen · George Chen · Gopal Erinjippurath · Haifeng Li · Han Zou · Herke van Hoof · Hillary A Scannell · Hiroshi Mamitsuka · Hongbao Zhang · Jaegul Choo · James Wang · James Requeima · Jessica Hwang · Jinfan Xu · Johan Mathe · Jonathan Binas · Joonseok Lee · Kalai Ramea · Kate Duffy · Kevin McCloskey · Kris Sankaran · Lester Mackey · Letif Mones · Loubna Benabbou · Lynn Kaack · Matthew Hoffman · Mayur Mudigonda · Mehrdad Mahdavi · Michael McCourt · Mingchao Jiang · Mohammad Mahdi Kamani · Neel Guha · Niccolo Dalmasso · Nick Pawlowski · Nikola Milojevic-Dupont · Paulo Orenstein · Pedram Hassanzadeh · Pekka Marttinen · Ramesh Nair · Sadegh Farhang · Samuel Kaski · Sandeep Manjanna · Sasha Luccioni · Shuby Deshpande · Soo Kim · Soukayna Mouatadid · Sunghyun Park · Tao Lin · Telmo Felgueira · Thomas Hornigold · Tianle Yuan · Tom Beucler · Tracy Cui · Volodymyr Kuleshov · Wei Yu · yang song · Ydo Wexler · Yoshua Bengio · Zhecheng Wang · Zhuangfang Yi · Zouheir Malki