Poster
Gradient Primal-Dual Algorithm Converges to Second-Order Stationary Solution for Nonconvex Distributed Optimization Over Networks
Mingyi Hong · Meisam Razaviyayn · Jason Lee

Fri Jul 13 09:15 AM -- 12:00 PM (PDT) @ Hall B #4

In this work, we study two first-order primal-dual algorithms, the Gradient Primal-Dual Algorithm (GPDA) and the Gradient Alternating Direction Method of Multipliers (GADMM), for solving a class of linearly constrained non-convex optimization problems. We show that with random initialization of the primal and dual variables, both algorithms compute second-order stationary solutions (ss2) with probability one. This is the first result showing that a primal-dual algorithm can find ss2 using only first-order information; it also extends existing results for first-order, primal-only algorithms. An important implication is that our result yields the first global convergence guarantee to ss2 for two classes of unconstrained distributed non-convex learning problems over multi-agent networks.
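As a rough illustration of the primal-dual template analyzed above, the following is a minimal NumPy sketch of gradient descent-ascent on the augmented Lagrangian of min f(x) subject to Ax = b. The toy objective, stepsizes (alpha, beta), and penalty rho are illustrative assumptions, not the paper's exact GPDA update or stepsize conditions; in particular, the paper ties the dual stepsize to the penalty parameter, whereas this sketch uses a separate small dual step.

import numpy as np

def gradient_primal_dual(grad_f, A, b, x0, lam0,
                         rho=4.0, alpha=0.02, beta=0.02, iters=20000):
    """Gradient descent-ascent sketch on the augmented Lagrangian
        L(x, lam) = f(x) + lam^T (Ax - b) + (rho/2) ||Ax - b||^2
    for min f(x) subject to Ax = b, with f smooth but possibly nonconvex.
    Not the paper's exact GPDA update; an illustrative first-order scheme.
    """
    x, lam = np.array(x0, dtype=float), np.array(lam0, dtype=float)
    for _ in range(iters):
        r = A @ x - b                                                 # constraint residual
        x = x - alpha * (grad_f(x) + A.T @ lam + rho * (A.T @ r))     # primal gradient descent
        lam = lam + beta * (A @ x - b)                                # dual gradient ascent
    return x, lam

# Toy nonconvex problem (an assumption for illustration):
# f(x) = (1/4) * sum_i (x_i^2 - 1)^2, constrained to x_1 + x_2 = 1.
# The constrained local minima are (1, 0) and (0, 1); (0.5, 0.5) is a strict saddle.
grad_f = lambda x: x * (x**2 - 1)
A, b = np.array([[1.0, 1.0]]), np.array([1.0])
rng = np.random.default_rng(0)                                        # random initialization
x, lam = gradient_primal_dual(grad_f, A, b,
                              rng.standard_normal(2), rng.standard_normal(1))
print(x, A @ x - b)  # x should land near (1, 0) or (0, 1), with residual near 0

In this toy run, random initialization lets the iterates avoid the strict saddle at (0.5, 0.5) and settle near a second-order stationary point, mirroring the probability-one claim in the abstract.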

Author Information

Mingyi Hong (University of Minnesota)
Meisam Razaviyayn (University of Southern California)
Jason Lee (University of Southern California)
