Poster
Multiplicative Weights Updates as a distributed constrained optimization algorithm: Convergence to second-order stationary points almost always
Ioannis Panageas · Georgios Piliouras · Xiao Wang
Pacific Ballroom #99
Keywords: [ Game Theory and Mechanism Design ] [ Non-convex Optimization ] [ Online Learning ]
Abstract:
Non-concave maximization has been the subject of much recent study in the optimization and machine learning communities, specifically in deep learning.
Recent papers ([Ge et al. 2015, Lee et al. 2017] and references therein) indicate that first-order methods work well and avoid saddle points. Results as in [Lee et al. 2017], however, are limited to the \textit{unconstrained} case or to cases where the critical points lie in the interior of the feasibility set, settings that fail to capture some of the most interesting applications. In this paper we focus on \textit{constrained} non-concave maximization. We analyze a variant of a well-established algorithm in machine learning called Multiplicative Weights Update (MWU) for the maximization problem $\max_{\mathbf{x} \in D} P(\mathbf{x})$, where $P$ is non-concave and twice continuously differentiable and $D$ is a product of simplices. We show that, for small enough stepsizes, MWU converges almost always to critical points that satisfy the second-order KKT conditions,
by combining techniques from dynamical systems with a recent connection between the Baum-Eagon inequality and MWU [Palaiopanos et al. 2017].
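To make the setting concrete, the sketch below is a minimal, illustrative implementation of a linear-variant multiplicative-weights step on a product of simplices, applied to a toy non-concave quadratic. The names `mwu_step`, `grad_P`, and the stepsize `eps` are placeholders introduced here for illustration; the exact update rule and stepsize conditions analyzed in the paper may differ.

```python
import numpy as np

def mwu_step(blocks, grad_P, eps=1e-2):
    """One linear-variant multiplicative-weights step on a product of simplices.

    blocks : list of 1-D probability vectors, one per simplex factor of D
    grad_P : callable returning the gradient of P, split into the same blocks
    eps    : stepsize (assumed small enough that all weights stay positive)

    Illustrative sketch only; not necessarily the exact update analyzed in the paper.
    """
    grads = grad_P(blocks)
    new_blocks = []
    for x, g in zip(blocks, grads):
        w = x * (1.0 + eps * g)          # multiplicative reweighting
        new_blocks.append(w / w.sum())   # renormalize back onto the simplex
    return new_blocks


# Toy usage: maximize the non-concave quadratic P(x) = x^T A x over one simplex.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((5, 5))
    A = (A + A.T) / 2                     # symmetric, generally indefinite => P non-concave
    grad_P = lambda blocks: [2 * A @ blocks[0]]
    blocks = [np.ones(5) / 5]             # start at the uniform distribution
    for _ in range(2000):
        blocks = mwu_step(blocks, grad_P, eps=1e-2)
    print(blocks[0])
```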