Poster
First-Order Algorithms Converge Faster than O(1/k) on Convex Problems
Ching-pei Lee · Stephen Wright
Pacific Ballroom #207
Keywords: [ Convex Optimization ]
Abstract:
It is well known that both gradient descent and
stochastic coordinate descent achieve a global convergence rate of
O(1/k) in the objective value, when applied to a scheme for
minimizing a Lipschitz-continuously differentiable, unconstrained
convex function. In this work, we improve this rate to o(1/k).
We extend the result to proximal gradient and proximal coordinate
descent on regularized problems to show similar o(1/k) convergence
rates. The result is tight in the sense that a rate of
O(1/k^{1+ε}) is not generally attainable for any
ε > 0, for any of these methods.
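For context, the two rates can be written out as follows. This is a sketch of the standard setting (gradient descent with step size 1/L on an L-smooth convex function f with a minimizer x*), not a statement taken from the poster page itself:

% Classical O(1/k) bound versus the o(1/k) claim of this work
% (assumes the standard step size 1/L and an L-smooth convex f).
\[
  f(x_k) - f(x^*) \;\le\; \frac{L\,\|x_0 - x^*\|^2}{2k} \;=\; O(1/k),
  \qquad\text{whereas}\qquad
  \lim_{k \to \infty} k\,\bigl(f(x_k) - f(x^*)\bigr) = 0,
  \ \text{i.e.}\ f(x_k) - f(x^*) = o(1/k).
\]

The tightness statement says that no bound of the form O(1/k^{1+ε}) with ε > 0 holds in general for these methods, so the little-o improvement cannot be strengthened to a faster polynomial rate.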