To cope with changing environments, recent developments in online learning have introduced the notions of adaptive regret and dynamic regret independently. In this paper, we establish an intrinsic connection between these two concepts by showing that the dynamic regret can be expressed in terms of the adaptive regret and the functional variation. This observation implies that strongly adaptive algorithms can be directly leveraged to minimize the dynamic regret. Building on it, we present a series of strongly adaptive algorithms that attain small dynamic regret for convex functions, exponentially concave functions, and strongly convex functions, respectively. To the best of our knowledge, this is the first time that exponential concavity has been utilized to upper bound the dynamic regret. Moreover, none of these adaptive algorithms requires prior knowledge of the functional variation, a significant advantage over previous specialized methods for minimizing dynamic regret.
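A rough sketch of the kind of decomposition the abstract alludes to (the interval construction and the constant $c$ below are schematic assumptions, not the paper's exact statement):

```latex
% Let f_1,\dots,f_T be the online losses, x_1,\dots,x_T the learner's
% decisions, u_1,\dots,u_T an arbitrary comparator sequence, and
% V_T(I) the functional variation of the losses over an interval I.
% For any partition I_1,\dots,I_k of the horizon [T], a bound of the
% following shape connects the two notions of regret:
\[
\underbrace{\sum_{t=1}^{T} f_t(x_t) - \sum_{t=1}^{T} f_t(u_t)}_{\text{dynamic regret}}
\;\le\;
\sum_{i=1}^{k} \Bigl(
  \underbrace{\mathrm{SA\text{-}Regret}(I_i)}_{\text{adaptive regret on } I_i}
  \;+\; c \, |I_i| \, V_T(I_i)
\Bigr).
\]
% Hence an algorithm whose strongly adaptive regret is small on every
% interval automatically enjoys small dynamic regret, without needing
% the functional variation V_T as an input.
```

Minimizing the right-hand side over partitions is what lets a single strongly adaptive algorithm adapt to an unknown amount of variation.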
Author Information
Lijun Zhang (Nanjing University)
Tianbao Yang (The University of Iowa)
Rong Jin (Alibaba Group)
Zhi-Hua Zhou (Nanjing University)
Related Events (a corresponding poster, oral, or spotlight)
- 2018 Oral: Dynamic Regret of Strongly Adaptive Methods »
  Fri Jul 13th 07:30 -- 07:50 AM, Room A5
More from the Same Authors
- 2020 Poster: Cost-effectively Identifying Causal Effects When Only Response Variable is Observable »
  Tian-Zuo Wang · Xi-Zhu Wu · Sheng-Jun Huang · Zhi-Hua Zhou
- 2020 Poster: Projection-free Distributed Online Convex Optimization with $O(\sqrt{T})$ Communication Complexity »
  Yuanyu Wan · Wei-Wei Tu · Lijun Zhang
- 2020 Poster: Communication-Efficient Distributed Stochastic AUC Maximization with Deep Neural Networks »
  Zhishuai Guo · Mingrui Liu · Zhuoning Yuan · Li Shen · Wei Liu · Tianbao Yang
- 2020 Poster: Learning with Feature and Distribution Evolvable Streams »
  Zhen-Yu Zhang · Peng Zhao · Yuan Jiang · Zhi-Hua Zhou
- 2020 Poster: Quadratically Regularized Subgradient Methods for Weakly Convex Optimization with Weakly Convex Constraints »
  Runchao Ma · Qihang Lin · Tianbao Yang
- 2020 Poster: Stochastic Optimization for Non-convex Inf-Projection Problems »
  Yan Yan · Yi Xu · Lijun Zhang · Wang Xiaoyu · Tianbao Yang
- 2019 Poster: Adaptive Regret of Convex and Smooth Functions »
  Lijun Zhang · Tie-Yan Liu · Zhi-Hua Zhou
- 2019 Oral: Adaptive Regret of Convex and Smooth Functions »
  Lijun Zhang · Tie-Yan Liu · Zhi-Hua Zhou
- 2019 Poster: On the Linear Speedup Analysis of Communication Efficient Momentum SGD for Distributed Non-Convex Optimization »
  Hao Yu · Rong Jin · Sen Yang
- 2019 Poster: On the Computation and Communication Complexity of Parallel SGD with Dynamic Batch Sizes for Stochastic Non-Convex Optimization »
  Hao Yu · Rong Jin
- 2019 Poster: Optimal Algorithms for Lipschitz Bandits with Heavy-tailed Rewards »
  Shiyin Lu · Guanghui Wang · Yao Hu · Lijun Zhang
- 2019 Poster: Stochastic Optimization for DC Functions and Non-smooth Non-convex Regularizers with Non-asymptotic Convergence »
  Yi Xu · Qi Qi · Qihang Lin · Rong Jin · Tianbao Yang
- 2019 Oral: Stochastic Optimization for DC Functions and Non-smooth Non-convex Regularizers with Non-asymptotic Convergence »
  Yi Xu · Qi Qi · Qihang Lin · Rong Jin · Tianbao Yang
- 2019 Oral: On the Computation and Communication Complexity of Parallel SGD with Dynamic Batch Sizes for Stochastic Non-Convex Optimization »
  Hao Yu · Rong Jin
- 2019 Oral: On the Linear Speedup Analysis of Communication Efficient Momentum SGD for Distributed Non-Convex Optimization »
  Hao Yu · Rong Jin · Sen Yang
- 2019 Oral: Optimal Algorithms for Lipschitz Bandits with Heavy-tailed Rewards »
  Shiyin Lu · Guanghui Wang · Yao Hu · Lijun Zhang
- 2019 Poster: Heterogeneous Model Reuse via Optimizing Multiparty Multiclass Margin »
  Xi-Zhu Wu · Song Liu · Zhi-Hua Zhou
- 2019 Poster: Katalyst: Boosting Convex Katayusha for Non-Convex Problems with a Large Condition Number »
  Zaiyi Chen · Yi Xu · Haoyuan Hu · Tianbao Yang
- 2019 Oral: Heterogeneous Model Reuse via Optimizing Multiparty Multiclass Margin »
  Xi-Zhu Wu · Song Liu · Zhi-Hua Zhou
- 2019 Oral: Katalyst: Boosting Convex Katayusha for Non-Convex Problems with a Large Condition Number »
  Zaiyi Chen · Yi Xu · Haoyuan Hu · Tianbao Yang
- 2018 Poster: Rectify Heterogeneous Models with Semantic Mapping »
  Han-Jia Ye · De-Chuan Zhan · Yuan Jiang · Zhi-Hua Zhou
- 2018 Poster: SADAGRAD: Strongly Adaptive Stochastic Gradient Methods »
  Zaiyi Chen · Yi Xu · Enhong Chen · Tianbao Yang
- 2018 Poster: Level-Set Methods for Finite-Sum Constrained Convex Optimization »
  Qihang Lin · Runchao Ma · Tianbao Yang
- 2018 Oral: Level-Set Methods for Finite-Sum Constrained Convex Optimization »
  Qihang Lin · Runchao Ma · Tianbao Yang
- 2018 Oral: SADAGRAD: Strongly Adaptive Stochastic Gradient Methods »
  Zaiyi Chen · Yi Xu · Enhong Chen · Tianbao Yang
- 2018 Oral: Rectify Heterogeneous Models with Semantic Mapping »
  Han-Jia Ye · De-Chuan Zhan · Yuan Jiang · Zhi-Hua Zhou
- 2018 Poster: Fast Stochastic AUC Maximization with $O(1/n)$-Convergence Rate »
  Mingrui Liu · Xiaoxuan Zhang · Zaiyi Chen · Xiaoyu Wang · Tianbao Yang
- 2018 Oral: Fast Stochastic AUC Maximization with $O(1/n)$-Convergence Rate »
  Mingrui Liu · Xiaoxuan Zhang · Zaiyi Chen · Xiaoyu Wang · Tianbao Yang
- 2017 Poster: A Unified View of Multi-Label Performance Measures »
  Xi-Zhu Wu · Zhi-Hua Zhou
- 2017 Talk: A Unified View of Multi-Label Performance Measures »
  Xi-Zhu Wu · Zhi-Hua Zhou
- 2017 Poster: Stochastic Convex Optimization: Faster Local Growth Implies Faster Global Convergence »
  Yi Xu · Qihang Lin · Tianbao Yang
- 2017 Poster: A Richer Theory of Convex Constrained Optimization with Reduced Projections and Improved Rates »
  Tianbao Yang · Qihang Lin · Lijun Zhang
- 2017 Talk: A Richer Theory of Convex Constrained Optimization with Reduced Projections and Improved Rates »
  Tianbao Yang · Qihang Lin · Lijun Zhang
- 2017 Talk: Stochastic Convex Optimization: Faster Local Growth Implies Faster Global Convergence »
  Yi Xu · Qihang Lin · Tianbao Yang