

Poster

First-Order Algorithms Converge Faster than O(1/k) on Convex Problems

Ching-pei Lee · Stephen Wright

Pacific Ballroom #207

Keywords: [ Convex Optimization ]


Abstract: It is well known that both gradient descent and stochastic coordinate descent achieve a global convergence rate of O(1/k) in the objective value, when applied to a scheme for minimizing a Lipschitz-continuously differentiable, unconstrained convex function. In this work, we improve this rate to o(1/k). We extend the result to proximal gradient and proximal coordinate descent on regularized problems to show similar o(1/k) convergence rates. The result is tight in the sense that a rate of O(1/k^{1+ϵ}) is not generally attainable for any ϵ>0, for any of these methods.
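
The o(1/k) claim can be illustrated numerically. Below is a minimal sketch, not taken from the paper: plain gradient descent with step size 1/L on the smooth, convex (but not strongly convex) one-dimensional function f(x) = x^4, chosen here purely for illustration. If the objective gap were exactly Θ(1/k), the quantity k·(f(x_k) − f*) would plateau at a positive constant; under an o(1/k) rate it should drift toward zero, which is what the printed values show.

```python
# Minimal sketch (illustrative choice, not the paper's experiment):
# gradient descent on f(x) = x^4, whose minimizer is x* = 0 with f* = 0.
# On the level set [-1, 1] of the starting point, f''(x) = 12 x^2 <= 12,
# so L = 12 is a Lipschitz constant for f' there, and the step size 1/L
# keeps all iterates inside that set.

def f(x):
    return x ** 4

def grad(x):
    return 4.0 * x ** 3

L = 12.0      # Lipschitz constant of f' on the level set of x_0 = 1
x = 1.0       # starting point

for k in range(1, 1_000_001):
    x -= grad(x) / L                      # gradient step with step size 1/L
    if k % 200_000 == 0:
        # k * (f(x_k) - f*) decaying toward 0 is consistent with o(1/k);
        # a plateau at a positive constant would indicate only O(1/k).
        print(f"k = {k:8d}   k * f(x_k) = {k * f(x):.3e}")
```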
