Distributed learning aims to compute high-quality models by training over scattered data. This covers a range of scenarios, from computer clusters to mobile agents. One of the main challenges is then to deal with heterogeneous machines and unreliable communications. In this setting, we propose and analyze a flexible asynchronous optimization algorithm for solving nonsmooth learning problems. Unlike most existing methods, our algorithm adjusts to various levels of communication costs, machines' computational power, and data distribution evenness. We prove that the algorithm converges linearly with a fixed learning rate that depends neither on communication delays nor on the number of machines. Although long communication delays may slow down performance, no delay can break convergence.
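The abstract describes an asynchronous proximal-gradient scheme: workers compute gradient steps on local data shards from possibly stale parameters, while a master combines their contributions and applies a proximal step for the nonsmooth term. The sketch below is a minimal single-process simulation of that idea for an l1-regularized least-squares problem; it is not the paper's exact algorithm, and the delay model, the averaging rule, and all names (async_prox_grad, soft_threshold, max_delay) are illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def async_prox_grad(A_list, b_list, lam=0.1, step=None,
                    n_iters=500, max_delay=5, seed=0):
    """Simulated asynchronous proximal-gradient loop (illustrative sketch).

    Each "worker" i holds a data shard (A_list[i], b_list[i]) and, when it
    reports back, contributes a gradient step computed at a possibly stale
    copy of the parameters.  The master averages the workers' latest
    contributions and applies the prox of the l1 regularizer.
    """
    rng = np.random.default_rng(seed)
    M = len(A_list)
    d = A_list[0].shape[1]
    if step is None:
        # conservative fixed step from the largest local Lipschitz constant
        L = max(np.linalg.norm(A.T @ A, 2) / A.shape[0] for A in A_list)
        step = 1.0 / L

    x = np.zeros(d)                       # master (combined) parameters
    contrib = np.zeros((M, d))            # latest contribution of each worker
    stale = [x.copy() for _ in range(M)]  # parameter copy each worker last read

    for _ in range(n_iters):
        i = rng.integers(M)               # a random worker finishes its step
        A, b = A_list[i], b_list[i]
        grad = A.T @ (A @ stale[i] - b) / A.shape[0]
        contrib[i] = stale[i] - step * grad
        # master combines contributions and applies the prox of lam * ||.||_1
        x = soft_threshold(contrib.mean(axis=0), step * lam)
        # the worker only occasionally reads back the master point,
        # which mimics communication delays
        if rng.integers(max_delay) == 0:
            stale[i] = x.copy()
    return x

# toy usage: two workers with synthetic least-squares shards
rng = np.random.default_rng(1)
A_list = [rng.standard_normal((50, 10)) for _ in range(2)]
x_true = np.zeros(10); x_true[:3] = 1.0
b_list = [A @ x_true + 0.01 * rng.standard_normal(50) for A in A_list]
x_hat = async_prox_grad(A_list, b_list)
```

The key point mirrored here is that the step size is fixed from the local smoothness constants only, not from the delays: stale reads slow progress but do not require tuning the learning rate down.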
Author Information
Konstantin Mishchenko (King Abdullah University of Science & Technology (KAUST))
Franck Iutzeler (Univ. Grenoble Alpes)
Jérôme Malick (CNRS)
Massih-Reza Amini (Univ. Grenoble Alpes)
Related Events (a corresponding poster, oral, or spotlight)
- 2018 Poster: A Delay-tolerant Proximal-Gradient Algorithm for Distributed Learning
  Thu. Jul 12th 04:15 -- 07:00 PM, Room Hall B #155
More from the Same Authors
- 2023 Poster: Multi-Agent Online Optimization with Delays: Asynchronicity, Adaptivity, and Optimism
  Yu-Guan Hsieh · Franck Iutzeler · Jérôme Malick · Panayotis Mertikopoulos
- 2021: Regularized Newton Method with Global O(1/k^2) Convergence
  Konstantin Mishchenko
- 2020 Poster: Adaptive Gradient Descent without Descent
  Yura Malitsky · Konstantin Mishchenko