Continuous-time Lower Bounds for Gradient-based Algorithms
Michael Muehlebach · Michael Jordan
Keywords:
Convex Optimization
Large Scale Learning and Big Data
Non-convex Optimization
Optimization - Convex
2020 Poster
Abstract
This article derives lower bounds on the convergence rate of continuous-time gradient-based optimization algorithms. The algorithms are subjected to a time-normalization constraint that avoids a reparametrization of time in order to make the discussion of continuous-time convergence rates meaningful. We reduce the multi-dimensional problem to a single dimension, recover well-known lower bounds from the discrete-time setting, and provide insights into why these lower bounds occur. We further explicitly provide algorithms that achieve the proposed lower bounds, even when the function class under consideration includes certain non-convex functions.
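The following is a minimal illustrative sketch (not taken from the paper) of what continuous-time gradient-based dynamics look like: it numerically integrates a gradient-flow ODE and a heavy-ball-type ODE on a simple quadratic and compares the decay of the objective. The choice of ODEs, the matrix A, the damping constant b, and the time horizon are all assumptions made for illustration; the sketch does not implement the paper's time-normalization constraint or its lower-bound constructions.

```python
# Illustrative sketch only: two continuous-time gradient-based dynamics on a
# quadratic f(x) = 0.5 * x^T A x. All parameters below are assumed values.
import numpy as np
from scipy.integrate import solve_ivp

A = np.diag([1.0, 10.0])            # assumed (mildly ill-conditioned) quadratic
grad = lambda x: A @ x              # gradient of f(x) = 0.5 * x^T A x
f = lambda x: 0.5 * x @ A @ x
x0 = np.array([1.0, 1.0])

# Gradient flow:  x'(t) = -grad f(x(t))
def gradient_flow(t, x):
    return -grad(x)

# Heavy-ball-type dynamics:  x''(t) + b x'(t) + grad f(x(t)) = 0,
# written as a first-order system in (x, v); b is an assumed damping constant.
b = 2.0
def heavy_ball(t, z):
    x, v = z[:2], z[2:]
    return np.concatenate([v, -b * v - grad(x)])

T_end = 10.0
t_eval = np.linspace(0.0, T_end, 200)
sol_gf = solve_ivp(gradient_flow, (0.0, T_end), x0, t_eval=t_eval)
sol_hb = solve_ivp(heavy_ball, (0.0, T_end),
                   np.concatenate([x0, np.zeros(2)]), t_eval=t_eval)

print("f(x(T)), gradient flow:", f(sol_gf.y[:2, -1]))
print("f(x(T)), heavy ball   :", f(sol_hb.y[:2, -1]))
```

Comparing the two printed values gives a sense of how the choice of continuous-time dynamics affects the convergence rate, which is the quantity the paper's lower bounds constrain once time is suitably normalized.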