Dynamic Regret via Discounted-to-Dynamic Reduction with Applications to Curved Losses and Adam Optimizer
Yan-Feng Xie ⋅ Yu-Jie Zhang ⋅ Peng Zhao ⋅ Zhi-Hua Zhou
Abstract
We study dynamic regret minimization in non-stationary online learning, with a primary focus on follow-the-regularized-leader (FTRL) methods. FTRL is important for curved losses and for understanding adaptive algorithms, yet dynamic regret analyses of FTRL remain underexplored. To address this, we build on the discounted-to-dynamic reduction and present a modular way to obtain dynamic regret bounds. The reduction simplifies prior proofs and recovers optimal rates for online linear regression, and it provides new guarantees for online logistic regression, thereby covering two representative curved losses. Beyond online convex optimization, we apply the reduction to analyze the Adam optimizer, obtaining optimal convergence rates in stochastic, non-convex, and non-smooth settings. The reduction also enables a more fine-grained treatment of Adam with two discount parameters $(\beta_1,\beta_2)$, leading to new results for both clipped and clip-free variants.
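For reference, the standard Adam recursion with discount parameters $(\beta_1,\beta_2)$ reads as follows; this is a baseline sketch, and the clipped and clip-free variants analyzed in the paper modify this update. Here $g_t$ denotes the stochastic gradient, $\eta$ the step size, and $\epsilon$ a small stabilization constant; all operations are coordinate-wise.

% Standard Adam recursion (baseline sketch; the paper's clipped and
% clip-free variants modify this update).
\begin{align*}
  m_t &= \beta_1 m_{t-1} + (1-\beta_1)\, g_t, \\
  v_t &= \beta_2 v_{t-1} + (1-\beta_2)\, g_t^{2}, \\
  \hat{m}_t &= m_t / \big(1-\beta_1^{t}\big), \qquad
  \hat{v}_t = v_t / \big(1-\beta_2^{t}\big), \\
  x_{t+1} &= x_t - \eta\, \hat{m}_t \big/ \big(\sqrt{\hat{v}_t} + \epsilon\big).
\end{align*}

The parameters $(\beta_1,\beta_2)$ act as discount factors on the first- and second-moment estimates, which is the link exploited by the discounted-to-dynamic reduction.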