

Poster

Optimistic Online Mirror Descent for Bridging Stochastic and Adversarial Online Convex Optimization

Sijia Chen · Wei-Wei Tu · Peng Zhao · Lijun Zhang

Exhibit Hall 1 #722

Abstract: The Stochastically Extended Adversarial (SEA) model was introduced by Sachs et al. (2022) as an interpolation between stochastic and adversarial online convex optimization. Under the smoothness condition, they demonstrate that the expected regret of optimistic follow-the-regularized-leader (FTRL) depends on the cumulative stochastic variance $\sigma_{1:T}^2$ and the cumulative adversarial variation $\Sigma_{1:T}^2$ for convex functions. They also provide a slightly weaker bound based on the maximal stochastic variance $\sigma_{\max}^2$ and the maximal adversarial variation $\Sigma_{\max}^2$ for strongly convex functions. Inspired by their work, we investigate the theoretical guarantees of optimistic online mirror descent (OMD) for the SEA model. For convex and smooth functions, we obtain the same $\mathcal{O}(\sqrt{\sigma_{1:T}^2} + \sqrt{\Sigma_{1:T}^2})$ regret bound, without the convexity requirement of individual functions. For strongly convex and smooth functions, we establish an $\mathcal{O}(\min\{\log(\sigma_{1:T}^2 + \Sigma_{1:T}^2),\, (\sigma_{\max}^2 + \Sigma_{\max}^2)\log T\})$ bound, better than their $\mathcal{O}((\sigma_{\max}^2 + \Sigma_{\max}^2)\log T)$ result. For exp-concave and smooth functions, we achieve a new $\mathcal{O}(d\log(\sigma_{1:T}^2 + \Sigma_{1:T}^2))$ bound. Owing to the OMD framework, we further establish dynamic regret guarantees for convex and smooth functions, which are more favorable in non-stationary online scenarios.
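To make the algorithmic template concrete, below is a minimal sketch of optimistic OMD, assuming the Euclidean regularizer (so each mirror step reduces to a projected gradient step), a unit-ball domain, a fixed step size `eta`, and the common "last observed gradient" optimism hint. These choices, along with the toy `grad_oracle`, are illustrative assumptions, not the paper's setup, which works with general Bregman divergences and carefully tuned step sizes to obtain the bounds above.

```python
import numpy as np

def project_l2_ball(x, radius=1.0):
    # Euclidean projection onto an l2 ball (illustrative feasible domain).
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def optimistic_omd(grad_oracle, dim, num_rounds, eta=0.1):
    # Optimistic OMD with the Euclidean regularizer: the optimism hint M_t
    # is the last observed gradient, a standard choice when consecutive
    # losses (and hence gradients) vary slowly.
    x_hat = np.zeros(dim)   # auxiliary iterate maintained by OMD
    hint = np.zeros(dim)    # optimism hint M_t
    plays = []
    for t in range(num_rounds):
        # Optimistic step: commit to x_t using the hint, before the loss arrives.
        x_t = project_l2_ball(x_hat - eta * hint)
        plays.append(x_t)
        # Receive (possibly stochastic) gradient feedback, then update
        # the auxiliary iterate and refresh the hint.
        g_t = grad_oracle(t, x_t)
        x_hat = project_l2_ball(x_hat - eta * g_t)
        hint = g_t
    return plays

# Toy usage: SEA-style feedback where the round-t loss is a quadratic whose
# minimizer is a fixed point corrupted by zero-mean noise (stochastic part).
rng = np.random.default_rng(0)

def grad_oracle(t, x):
    u_t = np.array([0.5, -0.3]) + 0.05 * rng.standard_normal(2)
    return x - u_t

plays = optimistic_omd(grad_oracle, dim=2, num_rounds=200)
print("final play:", plays[-1])
```

When the hint $M_t$ is close to the realized gradient $g_t$ (small adversarial variation) and the stochastic noise is small, the two projected steps nearly cancel and the iterates stabilize, which is the intuition behind regret bounds that scale with $\sigma_{1:T}^2$ and $\Sigma_{1:T}^2$ rather than with $T$.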
