

Poster

Manifold Identification for Ultimately Communication-Efficient Distributed Optimization

Yu-Sheng Li · Wei-Lin Chiang · Ching-pei Lee

Keywords: [ Large Scale Learning and Big Data ] [ Parallel and Distributed Learning ] [ Sparsity and Compressed Sensing ] [ Optimization - Large Scale, Parallel and Distributed ]


Abstract:

This work proposes a progressive manifold identification approach for distributed optimization, with sound theoretical justifications, that greatly reduces both the number of communication rounds and the bytes communicated per round for partly-smooth regularized problems such as L1- and group-LASSO-regularized ones. Our two-stage method first uses an inexact proximal quasi-Newton method to iteratively identify a sequence of low-dimensional manifolds in which a final solution lies, restricting the model update to the current manifold. This gradually lowers the per-round communication cost from the order of the problem dimension to that of the manifold that contains a solution and on which the problem is smooth. After identifying this manifold, we take superlinearly convergent truncated semismooth Newton steps, computed by preconditioned conjugate gradient, to sharply reduce the number of communication rounds by improving the convergence rate from the existing linear or sublinear rates to a superlinear one. Experiments show that our method can be two orders of magnitude better in communication cost and an order of magnitude faster in running time than the state of the art.
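To make the manifold-identification idea concrete, below is a minimal sketch (not the authors' implementation) of how, for an L1-regularized problem, a single proximal step identifies the active manifold as the support of the soft-thresholded iterate. The function names, step size, regularization weight, and toy dimensions are all illustrative assumptions.

```python
import numpy as np

def soft_threshold(z, tau):
    # Proximal operator of tau * ||.||_1 (soft-thresholding).
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def identify_manifold(w, grad, step, lam):
    # One proximal (gradient) step; for the L1 regularizer the active
    # manifold is determined by the support of the resulting point,
    # and the problem restricted to that manifold is smooth.
    w_new = soft_threshold(w - step * grad, step * lam)
    support = np.flatnonzero(w_new)  # coordinates spanning the manifold
    return w_new, support

# Toy illustration: most coordinates are thresholded to zero, so later
# rounds only need the |support|-dimensional restricted update.
rng = np.random.default_rng(0)
d = 1000
w = rng.normal(size=d)
grad = rng.normal(size=d)
w_new, support = identify_manifold(w, grad, step=0.1, lam=15.0)
print(f"ambient dimension: {d}, manifold dimension: {support.size}")
```

In a distributed setting, only the coordinates indexed by the identified support would need to be exchanged in subsequent rounds, which is the source of the per-round communication savings the abstract describes.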
