Poster
Optimistic Online Mirror Descent for Bridging Stochastic and Adversarial Online Convex Optimization
Sijia Chen · Wei-Wei Tu · Peng Zhao · Lijun Zhang
The Stochastically Extended Adversarial (SEA) model was introduced by Sachs et al. (2022) as an interpolation between stochastic and adversarial online convex optimization. Under the smoothness condition, they show that the expected regret of optimistic follow-the-regularized-leader (FTRL) depends on the cumulative stochastic variance $\sigma_{1:T}^2$ and the cumulative adversarial variation $\Sigma_{1:T}^2$ for convex functions, and they provide a slightly weaker bound in terms of the maximal stochastic variance $\sigma_{\max}^2$ and the maximal adversarial variation $\Sigma_{\max}^2$ for strongly convex functions. Inspired by their work, we investigate the theoretical guarantees of optimistic online mirror descent (OMD) for the SEA model. For convex and smooth functions, we obtain the same $\mathcal{O}(\sqrt{\sigma_{1:T}^2}+\sqrt{\Sigma_{1:T}^2})$ regret bound, without requiring the individual functions to be convex. For strongly convex and smooth functions, we establish an $\mathcal{O}(\min\{\log (\sigma_{1:T}^2+\Sigma_{1:T}^2), (\sigma_{\max}^2 + \Sigma_{\max}^2) \log T\})$ bound, which improves on their $\mathcal{O}((\sigma_{\max}^2 + \Sigma_{\max}^2) \log T)$ result. For exp-concave and smooth functions, we achieve a new $\mathcal{O}(d\log(\sigma_{1:T}^2+\Sigma_{1:T}^2))$ bound. Owing to the OMD framework, we further establish dynamic regret guarantees for convex and smooth functions, which are more favorable in non-stationary online scenarios.
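For readers unfamiliar with the algorithmic template the abstract analyzes, the following is a minimal sketch of optimistic online mirror descent with a Euclidean regularizer and the common last-gradient hint $M_t = g_{t-1}$. This is an illustrative instantiation only, not the paper's exact algorithm: the step size `eta`, the ball domain, the `project_ball` helper, and the choice of hint are all assumptions made for the sketch.

```python
import numpy as np

def project_ball(x, radius=1.0):
    # Euclidean projection onto the feasible domain {x : ||x|| <= radius}
    # (assumed domain for this sketch).
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def optimistic_omd(grad_fn, T, dim, eta=0.1, radius=1.0):
    """Optimistic OMD sketch with the Euclidean regularizer
    psi(x) = 0.5 * ||x||^2, so both mirror steps reduce to
    projected gradient steps. Uses the hint M_t = g_{t-1}."""
    x_hat = np.zeros(dim)   # auxiliary iterate \hat{x}_t
    hint = np.zeros(dim)    # optimistic hint M_t (zero for t = 1)
    plays = []
    for t in range(T):
        # Play x_t = argmin_x eta<M_t, x> + D_psi(x, x_hat_t)
        x = project_ball(x_hat - eta * hint, radius)
        plays.append(x)
        # Observe the (possibly stochastic) gradient at x_t
        g = grad_fn(t, x)
        # Update x_hat_{t+1} = argmin_x eta<g_t, x> + D_psi(x, x_hat_t)
        x_hat = project_ball(x_hat - eta * g, radius)
        hint = g  # next round's optimistic hint
    return plays
```

When the hint is accurate (small adversarial variation) the played point anticipates the next gradient, which is the mechanism behind the variance- and variation-dependent bounds discussed above.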
Author Information
Sijia Chen (Nanjing University)
Wei-Wei Tu (4Paradigm Inc.)
Peng Zhao (Nanjing University)
Lijun Zhang (Nanjing University)
More from the Same Authors
- 2022 : Optimal Rates of (Locally) Differentially Private Heavy-tailed Multi-Armed Bandits »
  Yulian Wu · Youming Tao · Peng Zhao · Di Wang
- 2023 Poster: Fast Rates in Time-Varying Strongly Monotone Games »
  Yu-Hu Yan · Peng Zhao · Zhi-Hua Zhou
- 2023 Poster: Not All Semantics are Created Equal: Contrastive Self-supervised Learning with Automatic Temperature Individualization »
  Zi-Hao Qiu · Quanqi Hu · Zhuoning Yuan · Denny Zhou · Lijun Zhang · Tianbao Yang
- 2023 Poster: Learning Unnormalized Statistical Models via Compositional Optimization »
  Wei Jiang · Jiayu Qin · Lingyu Wu · Changyou Chen · Tianbao Yang · Lijun Zhang
- 2023 Poster: Blockwise Stochastic Variance-Reduced Methods with Parallel Speedup for Multi-Block Bilevel Optimization »
  Quanqi Hu · Zi-Hao Qiu · Zhishuai Guo · Lijun Zhang · Tianbao Yang
- 2022 Poster: Provable Stochastic Optimization for Global Contrastive Learning: Small Batch Does Not Harm Performance »
  Zhuoning Yuan · Yuexin Wu · Zi-Hao Qiu · Xianzhi Du · Lijun Zhang · Denny Zhou · Tianbao Yang
- 2022 Poster: A Simple yet Universal Strategy for Online Convex Optimization »
  Lijun Zhang · Guanghui Wang · Jinfeng Yi · Tianbao Yang
- 2022 Oral: A Simple yet Universal Strategy for Online Convex Optimization »
  Lijun Zhang · Guanghui Wang · Jinfeng Yi · Tianbao Yang
- 2022 Spotlight: Provable Stochastic Optimization for Global Contrastive Learning: Small Batch Does Not Harm Performance »
  Zhuoning Yuan · Yuexin Wu · Zi-Hao Qiu · Xianzhi Du · Lijun Zhang · Denny Zhou · Tianbao Yang
- 2022 Poster: No-Regret Learning in Time-Varying Zero-Sum Games »
  Mengxiao Zhang · Peng Zhao · Haipeng Luo · Zhi-Hua Zhou
- 2022 Poster: Optimal Algorithms for Stochastic Multi-Level Compositional Optimization »
  Wei Jiang · Bokun Wang · Yibo Wang · Lijun Zhang · Tianbao Yang
- 2022 Poster: Large-scale Stochastic Optimization of NDCG Surrogates for Deep Learning with Provable Convergence »
  Zi-Hao Qiu · Quanqi Hu · Yongjian Zhong · Lijun Zhang · Tianbao Yang
- 2022 Spotlight: Large-scale Stochastic Optimization of NDCG Surrogates for Deep Learning with Provable Convergence »
  Zi-Hao Qiu · Quanqi Hu · Yongjian Zhong · Lijun Zhang · Tianbao Yang
- 2022 Spotlight: No-Regret Learning in Time-Varying Zero-Sum Games »
  Mengxiao Zhang · Peng Zhao · Haipeng Luo · Zhi-Hua Zhou
- 2022 Spotlight: Optimal Algorithms for Stochastic Multi-Level Compositional Optimization »
  Wei Jiang · Bokun Wang · Yibo Wang · Lijun Zhang · Tianbao Yang
- 2022 Poster: Dynamic Regret of Online Markov Decision Processes »
  Peng Zhao · Long-Fei Li · Zhi-Hua Zhou
- 2022 Spotlight: Dynamic Regret of Online Markov Decision Processes »
  Peng Zhao · Long-Fei Li · Zhi-Hua Zhou
- 2020 Poster: Projection-free Distributed Online Convex Optimization with $O(\sqrt{T})$ Communication Complexity »
  Yuanyu Wan · Wei-Wei Tu · Lijun Zhang
- 2020 Poster: Learning with Feature and Distribution Evolvable Streams »
  Zhen-Yu Zhang · Peng Zhao · Yuan Jiang · Zhi-Hua Zhou
- 2020 Poster: Stochastic Optimization for Non-convex Inf-Projection Problems »
  Yan Yan · Yi Xu · Lijun Zhang · Wang Xiaoyu · Tianbao Yang
- 2020 Affinity Workshop: New In ML »
  Zhen Xu · Sparkle Russell-Puleri · Zhengying Liu · Sinead A Williamson · Matthias W Seeger · Wei-Wei Tu · Samy Bengio · Isabelle Guyon
- 2019 Poster: Adaptive Regret of Convex and Smooth Functions »
  Lijun Zhang · Tie-Yan Liu · Zhi-Hua Zhou
- 2019 Oral: Adaptive Regret of Convex and Smooth Functions »
  Lijun Zhang · Tie-Yan Liu · Zhi-Hua Zhou
- 2019 Poster: Optimal Algorithms for Lipschitz Bandits with Heavy-tailed Rewards »
  Shiyin Lu · Guanghui Wang · Yao Hu · Lijun Zhang
- 2019 Oral: Optimal Algorithms for Lipschitz Bandits with Heavy-tailed Rewards »
  Shiyin Lu · Guanghui Wang · Yao Hu · Lijun Zhang
- 2018 Poster: Dynamic Regret of Strongly Adaptive Methods »
  Lijun Zhang · Tianbao Yang · rong jin · Zhi-Hua Zhou
- 2018 Oral: Dynamic Regret of Strongly Adaptive Methods »
  Lijun Zhang · Tianbao Yang · rong jin · Zhi-Hua Zhou
- 2017 Poster: A Richer Theory of Convex Constrained Optimization with Reduced Projections and Improved Rates »
  Tianbao Yang · Qihang Lin · Lijun Zhang
- 2017 Talk: A Richer Theory of Convex Constrained Optimization with Reduced Projections and Improved Rates »
  Tianbao Yang · Qihang Lin · Lijun Zhang