Recently, several universal methods have been proposed for online convex optimization, which attain minimax rates for multiple types of convex functions simultaneously. However, they need to design and optimize one surrogate loss for each type of function, making it difficult to exploit the structure of the problem and to utilize existing algorithms. In this paper, we propose a simple strategy for universal online convex optimization that avoids these limitations. The key idea is to construct a set of experts to process the original online functions, and to deploy a meta-algorithm over the linearized losses to aggregate predictions from the experts. Specifically, the meta-algorithm is required to yield a second-order bound in terms of the excess losses, so that it can leverage strong convexity and exponential concavity to control the meta-regret. In this way, our strategy inherits the theoretical guarantee of any expert designed for strongly convex or exponentially concave functions, up to a double logarithmic factor. As a result, we can plug in off-the-shelf online solvers as black-box experts to deliver problem-dependent regret bounds. For general convex functions, the strategy maintains minimax optimality and also achieves a small-loss bound.
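To make the two-layer structure concrete, below is a minimal sketch, not the paper's exact algorithm: two online-gradient-descent experts process the original functions, while a plain exponential-weights meta-algorithm aggregates their predictions using only the linearized losses. The names `OGDExpert`, `universal_oco`, and `grad_oracle` are illustrative, and the paper requires a meta-algorithm with a second-order regret bound (such as Adapt-ML-Prod), not the vanilla multiplicative update used here for brevity.

```python
# Illustrative sketch of the two-layer strategy; assumes decisions live in
# the Euclidean unit ball and losses are 1-Lipschitz. Not the paper's method.
import numpy as np

def project_ball(x, radius=1.0):
    """Euclidean projection onto the centered ball of the given radius."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

class OGDExpert:
    """Online gradient descent over the unit ball with a step-size schedule."""
    def __init__(self, dim, step):
        self.x = np.zeros(dim)
        self.step = step                      # step(t) -> learning rate at round t

    def predict(self):
        return self.x

    def update(self, grad, t):
        self.x = project_ball(self.x - self.step(t) * grad)

def universal_oco(grad_oracle, dim, T, eta=0.5):
    # Experts process the original functions: one schedule tuned for general
    # convex losses, one for losses assumed to be 1-strongly convex.
    experts = [
        OGDExpert(dim, step=lambda t: 1.0 / np.sqrt(t)),   # O(sqrt(T))-regret schedule
        OGDExpert(dim, step=lambda t: 1.0 / t),            # O(log T) if strongly convex
    ]
    w = np.full(len(experts), 1.0 / len(experts))          # meta weights
    x = np.zeros(dim)
    for t in range(1, T + 1):
        preds = [e.predict() for e in experts]
        x = sum(wi * p for wi, p in zip(w, preds))         # aggregated decision
        g = grad_oracle(x, t)                              # (sub)gradient of f_t at x
        # The meta-algorithm sees only the linearized losses <g, x_i>; the
        # excess loss <g, x - x_i> is the quantity the paper's second-order
        # bound is stated in terms of.
        lin = np.array([g @ p for p in preds])
        w = w * np.exp(-eta * (lin - lin.min()))           # multiplicative update
        w = w / w.sum()
        for e in experts:                                  # each expert runs on f_t
            e.update(grad_oracle(e.predict(), t), t)
    return x
```

As a usage example under these assumptions, `universal_oco(lambda x, t: 2.0 * (x - b[t - 1]), dim=5, T=1000)` runs the sketch on the quadratic losses f_t(x) = ||x - b_t||^2 for a sequence of targets b_t in the unit ball, a case where the strongly convex expert's schedule is the better fit.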
Author Information
Lijun Zhang (Nanjing University)
Guanghui Wang (Georgia Tech)
Jinfeng Yi (JD AI Research)
Tianbao Yang (The University of Iowa)
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Poster: A Simple yet Universal Strategy for Online Convex Optimization »
  Thu. Jul 21st through Fri. Jul 22nd, Hall E #1315
More from the Same Authors
- 2021: Fast Certified Robust Training with Short Warmup »
  Zhouxing Shi · Yihan Wang · Huan Zhang · Jinfeng Yi · Cho-Jui Hsieh
- 2023 Poster: Provable Multi-instance Deep AUC Maximization with Stochastic Pooling »
  Dixian Zhu · Bokun Wang · Zhi Chen · Yaxing Wang · Milan Sonka · Xiaodong Wu · Tianbao Yang
- 2023 Poster: Label Distributionally Robust Losses for Multi-class Classification: Consistency, Robustness and Adaptivity »
  Dixian Zhu · Yiming Ying · Tianbao Yang
- 2023 Poster: Generalization Analysis for Contrastive Representation Learning »
  Yunwen Lei · Tianbao Yang · Yiming Ying · Ding-Xuan Zhou
- 2023 Poster: Not All Semantics are Created Equal: Contrastive Self-supervised Learning with Automatic Temperature Individualization »
  Zi-Hao Qiu · Quanqi Hu · Zhuoning Yuan · Denny Zhou · Lijun Zhang · Tianbao Yang
- 2023 Poster: Learning Unnormalized Statistical Models via Compositional Optimization »
  Wei Jiang · Jiayu Qin · Lingyu Wu · Changyou Chen · Tianbao Yang · Lijun Zhang
- 2023 Poster: Blockwise Stochastic Variance-Reduced Methods with Parallel Speedup for Multi-Block Bilevel Optimization »
  Quanqi Hu · Zi-Hao Qiu · Zhishuai Guo · Lijun Zhang · Tianbao Yang
- 2023 Poster: FedAvg Converges to Zero Training Loss Linearly for Overparameterized Multi-Layer Neural Networks »
  Bingqing Song · Prashant Khanduri · Xinwei Zhang · Jinfeng Yi · Mingyi Hong
- 2023 Poster: FeDXL: Provable Federated Learning for Deep X-Risk Optimization »
  Zhishuai Guo · Rong Jin · Jiebo Luo · Tianbao Yang
- 2022 Poster: Provable Stochastic Optimization for Global Contrastive Learning: Small Batch Does Not Harm Performance »
  Zhuoning Yuan · Yuexin Wu · Zi-Hao Qiu · Xianzhi Du · Lijun Zhang · Denny Zhou · Tianbao Yang
- 2022 Spotlight: Provable Stochastic Optimization for Global Contrastive Learning: Small Batch Does Not Harm Performance »
  Zhuoning Yuan · Yuexin Wu · Zi-Hao Qiu · Xianzhi Du · Lijun Zhang · Denny Zhou · Tianbao Yang
- 2022 Poster: GraphFM: Improving Large-Scale GNN Training via Feature Momentum »
  Haiyang Yu · Limei Wang · Bokun Wang · Meng Liu · Tianbao Yang · Shuiwang Ji
- 2022 Poster: Understanding Clipping for Federated Learning: Convergence and Client-Level Differential Privacy »
  Xinwei Zhang · Xiangyi Chen · Mingyi Hong · Steven Wu · Jinfeng Yi
- 2022 Poster: Optimal Algorithms for Stochastic Multi-Level Compositional Optimization »
  Wei Jiang · Bokun Wang · Yibo Wang · Lijun Zhang · Tianbao Yang
- 2022 Poster: Large-scale Stochastic Optimization of NDCG Surrogates for Deep Learning with Provable Convergence »
  Zi-Hao Qiu · Quanqi Hu · Yongjian Zhong · Lijun Zhang · Tianbao Yang
- 2022 Poster: Finite-Sum Coupled Compositional Stochastic Optimization: Theory and Applications »
  Bokun Wang · Tianbao Yang
- 2022 Spotlight: GraphFM: Improving Large-Scale GNN Training via Feature Momentum »
  Haiyang Yu · Limei Wang · Bokun Wang · Meng Liu · Tianbao Yang · Shuiwang Ji
- 2022 Spotlight: Large-scale Stochastic Optimization of NDCG Surrogates for Deep Learning with Provable Convergence »
  Zi-Hao Qiu · Quanqi Hu · Yongjian Zhong · Lijun Zhang · Tianbao Yang
- 2022 Spotlight: Understanding Clipping for Federated Learning: Convergence and Client-Level Differential Privacy »
  Xinwei Zhang · Xiangyi Chen · Mingyi Hong · Steven Wu · Jinfeng Yi
- 2022 Spotlight: Finite-Sum Coupled Compositional Stochastic Optimization: Theory and Applications »
  Bokun Wang · Tianbao Yang
- 2022 Spotlight: Optimal Algorithms for Stochastic Multi-Level Compositional Optimization »
  Wei Jiang · Bokun Wang · Yibo Wang · Lijun Zhang · Tianbao Yang
- 2022 Poster: When AUC meets DRO: Optimizing Partial AUC for Deep Learning with Non-Convex Convergence Guarantee »
  Dixian Zhu · Gang Li · Bokun Wang · Xiaodong Wu · Tianbao Yang
- 2022 Spotlight: When AUC meets DRO: Optimizing Partial AUC for Deep Learning with Non-Convex Convergence Guarantee »
  Dixian Zhu · Gang Li · Bokun Wang · Xiaodong Wu · Tianbao Yang
- 2021 Poster: Stability and Generalization of Stochastic Gradient Methods for Minimax Problems »
  Yunwen Lei · Zhenhuan Yang · Tianbao Yang · Yiming Ying
- 2021 Oral: Stability and Generalization of Stochastic Gradient Methods for Minimax Problems »
  Yunwen Lei · Zhenhuan Yang · Tianbao Yang · Yiming Ying
- 2021 Poster: Federated Deep AUC Maximization for Heterogeneous Data with a Constant Communication Complexity »
  Zhuoning Yuan · Zhishuai Guo · Yi Xu · Yiming Ying · Tianbao Yang
- 2021 Spotlight: Federated Deep AUC Maximization for Heterogeneous Data with a Constant Communication Complexity »
  Zhuoning Yuan · Zhishuai Guo · Yi Xu · Yiming Ying · Tianbao Yang
- 2020 Poster: Communication-Efficient Distributed Stochastic AUC Maximization with Deep Neural Networks »
  Zhishuai Guo · Mingrui Liu · Zhuoning Yuan · Li Shen · Wei Liu · Tianbao Yang
- 2020 Poster: Quadratically Regularized Subgradient Methods for Weakly Convex Optimization with Weakly Convex Constraints »
  Runchao Ma · Qihang Lin · Tianbao Yang
- 2020 Poster: Stochastic Optimization for Non-convex Inf-Projection Problems »
  Yan Yan · Yi Xu · Lijun Zhang · Xiaoyu Wang · Tianbao Yang
- 2019 Poster: On the Convergence and Robustness of Adversarial Training »
  Yisen Wang · Xingjun Ma · James Bailey · Jinfeng Yi · Bowen Zhou · Quanquan Gu
- 2019 Poster: Optimal Algorithms for Lipschitz Bandits with Heavy-tailed Rewards »
  Shiyin Lu · Guanghui Wang · Yao Hu · Lijun Zhang
- 2019 Poster: Stochastic Optimization for DC Functions and Non-smooth Non-convex Regularizers with Non-asymptotic Convergence »
  Yi Xu · Qi Qi · Qihang Lin · Rong Jin · Tianbao Yang
- 2019 Oral: Stochastic Optimization for DC Functions and Non-smooth Non-convex Regularizers with Non-asymptotic Convergence »
  Yi Xu · Qi Qi · Qihang Lin · Rong Jin · Tianbao Yang
- 2019 Oral: Optimal Algorithms for Lipschitz Bandits with Heavy-tailed Rewards »
  Shiyin Lu · Guanghui Wang · Yao Hu · Lijun Zhang
- 2019 Oral: On the Convergence and Robustness of Adversarial Training »
  Yisen Wang · Xingjun Ma · James Bailey · Jinfeng Yi · Bowen Zhou · Quanquan Gu
- 2019 Poster: Katalyst: Boosting Convex Katayusha for Non-Convex Problems with a Large Condition Number »
  Zaiyi Chen · Yi Xu · Haoyuan Hu · Tianbao Yang
- 2019 Oral: Katalyst: Boosting Convex Katayusha for Non-Convex Problems with a Large Condition Number »
  Zaiyi Chen · Yi Xu · Haoyuan Hu · Tianbao Yang
- 2018 Poster: Dynamic Regret of Strongly Adaptive Methods »
  Lijun Zhang · Tianbao Yang · Rong Jin · Zhi-Hua Zhou
- 2018 Poster: SADAGRAD: Strongly Adaptive Stochastic Gradient Methods »
  Zaiyi Chen · Yi Xu · Enhong Chen · Tianbao Yang
- 2018 Poster: Level-Set Methods for Finite-Sum Constrained Convex Optimization »
  Qihang Lin · Runchao Ma · Tianbao Yang
- 2018 Oral: Level-Set Methods for Finite-Sum Constrained Convex Optimization »
  Qihang Lin · Runchao Ma · Tianbao Yang
- 2018 Oral: SADAGRAD: Strongly Adaptive Stochastic Gradient Methods »
  Zaiyi Chen · Yi Xu · Enhong Chen · Tianbao Yang
- 2018 Oral: Dynamic Regret of Strongly Adaptive Methods »
  Lijun Zhang · Tianbao Yang · Rong Jin · Zhi-Hua Zhou
- 2018 Poster: Fast Stochastic AUC Maximization with $O(1/n)$-Convergence Rate »
  Mingrui Liu · Xiaoxuan Zhang · Zaiyi Chen · Xiaoyu Wang · Tianbao Yang
- 2018 Oral: Fast Stochastic AUC Maximization with $O(1/n)$-Convergence Rate »
  Mingrui Liu · Xiaoxuan Zhang · Zaiyi Chen · Xiaoyu Wang · Tianbao Yang
- 2017 Poster: Stochastic Convex Optimization: Faster Local Growth Implies Faster Global Convergence »
  Yi Xu · Qihang Lin · Tianbao Yang
- 2017 Poster: A Richer Theory of Convex Constrained Optimization with Reduced Projections and Improved Rates »
  Tianbao Yang · Qihang Lin · Lijun Zhang
- 2017 Talk: A Richer Theory of Convex Constrained Optimization with Reduced Projections and Improved Rates »
  Tianbao Yang · Qihang Lin · Lijun Zhang
- 2017 Talk: Stochastic Convex Optimization: Faster Local Growth Implies Faster Global Convergence »
  Yi Xu · Qihang Lin · Tianbao Yang