A Simple yet Universal Strategy for Online Convex Optimization

Lijun Zhang · Guanghui Wang · Jinfeng Yi · Tianbao Yang

Hall E #1315

Keywords: [ T: Optimization ] [ OPT: Convex ] [ OPT: First-order ] [ OPT: Optimization and Learning under Uncertainty ] [ T: Online Learning and Bandits ]


Recently, several universal methods have been proposed for online convex optimization; they attain minimax rates for multiple types of convex functions simultaneously. However, they need to design and optimize one surrogate loss for each type of function, making it difficult to exploit the structure of the problem and to utilize existing algorithms. In this paper, we propose a simple strategy for universal online convex optimization that avoids these limitations. The key idea is to construct a set of experts to process the original online functions, and to deploy a meta-algorithm over the linearized losses to aggregate the experts' predictions. Specifically, the meta-algorithm is required to yield a second-order bound with excess losses, so that it can leverage strong convexity and exponential concavity to control the meta-regret. In this way, our strategy inherits the theoretical guarantees of any expert designed for strongly convex functions and exponentially concave functions, up to a double logarithmic factor. As a result, we can plug in off-the-shelf online solvers as black-box experts to deliver problem-dependent regret bounds. For general convex functions, our strategy maintains minimax optimality and also achieves a small-loss bound.
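As a rough illustration of the two-layer construction described in the abstract, the following Python sketch combines two online-gradient-descent experts with a meta-algorithm that aggregates their predictions using linearized excess losses. All names here (OGDExpert, run, loss_grad), the step-size schedules, and the update rules are illustrative assumptions, not the paper's method; in particular, the plain Hedge-style meta-update below does not satisfy the second-order bound with excess losses that the paper requires (an Adapt-ML-Prod-type method would be used instead).

```python
import numpy as np

class OGDExpert:
    """Black-box expert: online gradient descent on the original losses,
    with a user-supplied step-size schedule (a callable t -> eta_t)."""
    def __init__(self, dim, step_size):
        self.x = np.zeros(dim)
        self.step_size = step_size
        self.t = 0

    def predict(self):
        return self.x

    def update(self, grad):
        self.t += 1
        self.x = self.x - self.step_size(self.t) * grad
        # Project back onto the unit ball (assumed feasible set).
        norm = np.linalg.norm(self.x)
        if norm > 1.0:
            self.x /= norm

def run(loss_grad, dim, T, eta_meta=0.5):
    """loss_grad: callable (t, x) -> gradient of the round-t loss f_t at x."""
    experts = [
        OGDExpert(dim, lambda t: 1.0 / np.sqrt(t)),  # tuning for general convex
        OGDExpert(dim, lambda t: 1.0 / t),           # tuning for strongly convex
    ]
    w = np.ones(len(experts)) / len(experts)
    x_t = np.zeros(dim)
    for t in range(1, T + 1):
        preds = np.array([e.predict() for e in experts])
        x_t = w @ preds                   # meta prediction: weighted average
        g_t = loss_grad(t, x_t)           # gradient of the original loss at x_t
        # Linearized excess losses <g_t, x_i - x_t> drive the meta-update.
        excess = preds @ g_t - g_t @ x_t
        w = w * np.exp(-eta_meta * excess)  # simplified Hedge-style update
        w /= w.sum()
        # Each expert processes the original function at its own iterate.
        for e in experts:
            e.update(loss_grad(t, e.predict()))
    return x_t

# Example: quadratic losses f_t(x) = ||x - u_t||^2 with drifting minimizers u_t.
rng = np.random.default_rng(0)
targets = rng.normal(scale=0.1, size=(100, 5))
x_final = run(lambda t, x: 2.0 * (x - targets[t - 1]), dim=5, T=100)
```

The sketch mirrors the division of labor the abstract describes: the experts see the original gradients at their own iterates, while the meta-algorithm only ever sees linear functions of the expert predictions, which is what allows off-the-shelf online solvers to be plugged in as black boxes.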