Poster
A Richer Theory of Convex Constrained Optimization with Reduced Projections and Improved Rates
Tianbao Yang · Qihang Lin · Lijun Zhang
This paper focuses on convex constrained optimization problems, where the solution is subject to a convex inequality constraint. In particular, we aim at challenging problems for which both projection onto the constraint set and linear optimization under the inequality constraint are time-consuming, which renders both projected gradient methods and conditional gradient methods (a.k.a. the Frank-Wolfe algorithm) expensive. In this paper, we develop projection-reduced optimization algorithms for both smooth and non-smooth optimization with improved convergence rates under a certain regularity condition of the constraint function. We first present a general theory of optimization with only one projection. Its application to smooth optimization with only one projection yields $O(1/\epsilon)$ iteration complexity, which improves over the $O(1/\epsilon^2)$ iteration complexity established before for non-smooth optimization, and can be further reduced under strong convexity. Then we introduce a local error bound condition and develop faster algorithms for non-strongly convex optimization at the price of a logarithmic number of projections. In particular, we achieve an iteration complexity of $\widetilde O(1/\epsilon^{2(1-\theta)})$ for non-smooth optimization and $\widetilde O(1/\epsilon^{1-\theta})$ for smooth optimization, where $\theta\in(0,1]$ appearing in the local error bound condition characterizes the local growth rate of the objective function around the optimal solutions. Novel applications in solving the constrained $\ell_1$ minimization problem and a positive semi-definite constrained distance metric learning problem demonstrate that the proposed algorithms achieve significant speed-up compared with previous algorithms.
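To make the "only one projection" idea concrete, here is a minimal illustrative sketch (not the paper's exact algorithm, and without its improved rates): run (sub)gradient steps on the objective plus a penalty $\lambda\max(0, g(x))$ for the constraint violation, and project only the final averaged iterate onto the feasible set. The problem instance ($f(x)=\|Ax-b\|^2$ with the unit-ball constraint $g(x)=\|x\|_2^2-1\le 0$), the penalty weight, and the step sizes below are all illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)

def f_grad(x):
    # gradient of f(x) = ||Ax - b||^2
    return 2 * A.T @ (A @ x - b)

def g(x):
    # constraint g(x) = ||x||^2 - 1 <= 0 (feasible set: unit ball)
    return x @ x - 1.0

lam = 10.0                 # penalty weight (ad-hoc choice)
x = np.zeros(5)
avg = np.zeros(5)
T = 2000
for t in range(1, T + 1):
    eta = 0.005 / np.sqrt(t)          # diminishing step size
    grad = f_grad(x)
    if g(x) > 0:                      # subgradient of lam * max(0, g(x))
        grad = grad + lam * 2 * x
    x = x - eta * grad
    avg += (x - avg) / t              # running average of iterates

# the single projection: scale the averaged iterate onto the unit ball
nrm = np.linalg.norm(avg)
x_hat = avg / max(nrm, 1.0)
```

After this one projection `x_hat` is feasible by construction; the analysis in the paper shows how to bound the extra error such a scheme incurs relative to projecting at every iteration.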
Author Information
Tianbao Yang (Texas A&M University)
Qihang Lin (University of Iowa)
Lijun Zhang (Nanjing University)
Related Events (a corresponding poster, oral, or spotlight)
- 2017 Talk: A Richer Theory of Convex Constrained Optimization with Reduced Projections and Improved Rates »
  Mon. Aug 7th 06:24 -- 06:42 AM, Room Parkside 2
More from the Same Authors
- 2023 Poster: Provable Multi-instance Deep AUC Maximization with Stochastic Pooling »
  Dixian Zhu · Bokun Wang · Zhi Chen · Yaxing Wang · Milan Sonka · Xiaodong Wu · Tianbao Yang
- 2023 Poster: Label Distributionally Robust Losses for Multi-class Classification: Consistency, Robustness and Adaptivity »
  Dixian Zhu · Yiming Ying · Tianbao Yang
- 2023 Poster: Generalization Analysis for Contrastive Representation Learning »
  Yunwen Lei · Tianbao Yang · Yiming Ying · Ding-Xuan Zhou
- 2023 Poster: Not All Semantics are Created Equal: Contrastive Self-supervised Learning with Automatic Temperature Individualization »
  Zi-Hao Qiu · Quanqi Hu · Zhuoning Yuan · Denny Zhou · Lijun Zhang · Tianbao Yang
- 2023 Poster: Learning Unnormalized Statistical Models via Compositional Optimization »
  Wei Jiang · Jiayu Qin · Lingyu Wu · Changyou Chen · Tianbao Yang · Lijun Zhang
- 2023 Poster: Optimistic Online Mirror Descent for Bridging Stochastic and Adversarial Online Convex Optimization »
  Sijia Chen · Wei-Wei Tu · Peng Zhao · Lijun Zhang
- 2023 Poster: Blockwise Stochastic Variance-Reduced Methods with Parallel Speedup for Multi-Block Bilevel Optimization »
  Quanqi Hu · Zi-Hao Qiu · Zhishuai Guo · Lijun Zhang · Tianbao Yang
- 2023 Poster: FeDXL: Provable Federated Learning for Deep X-Risk Optimization »
  Zhishuai Guo · Rong Jin · Jiebo Luo · Tianbao Yang
- 2022 Poster: Provable Stochastic Optimization for Global Contrastive Learning: Small Batch Does Not Harm Performance »
  Zhuoning Yuan · Yuexin Wu · Zi-Hao Qiu · Xianzhi Du · Lijun Zhang · Denny Zhou · Tianbao Yang
- 2022 Poster: A Simple yet Universal Strategy for Online Convex Optimization »
  Lijun Zhang · Guanghui Wang · Jinfeng Yi · Tianbao Yang
- 2022 Oral: A Simple yet Universal Strategy for Online Convex Optimization »
  Lijun Zhang · Guanghui Wang · Jinfeng Yi · Tianbao Yang
- 2022 Spotlight: Provable Stochastic Optimization for Global Contrastive Learning: Small Batch Does Not Harm Performance »
  Zhuoning Yuan · Yuexin Wu · Zi-Hao Qiu · Xianzhi Du · Lijun Zhang · Denny Zhou · Tianbao Yang
- 2022 Poster: GraphFM: Improving Large-Scale GNN Training via Feature Momentum »
  Haiyang Yu · Limei Wang · Bokun Wang · Meng Liu · Tianbao Yang · Shuiwang Ji
- 2022 Poster: Optimal Algorithms for Stochastic Multi-Level Compositional Optimization »
  Wei Jiang · Bokun Wang · Yibo Wang · Lijun Zhang · Tianbao Yang
- 2022 Poster: Large-scale Stochastic Optimization of NDCG Surrogates for Deep Learning with Provable Convergence »
  Zi-Hao Qiu · Quanqi Hu · Yongjian Zhong · Lijun Zhang · Tianbao Yang
- 2022 Poster: Finite-Sum Coupled Compositional Stochastic Optimization: Theory and Applications »
  Bokun Wang · Tianbao Yang
- 2022 Spotlight: GraphFM: Improving Large-Scale GNN Training via Feature Momentum »
  Haiyang Yu · Limei Wang · Bokun Wang · Meng Liu · Tianbao Yang · Shuiwang Ji
- 2022 Spotlight: Large-scale Stochastic Optimization of NDCG Surrogates for Deep Learning with Provable Convergence »
  Zi-Hao Qiu · Quanqi Hu · Yongjian Zhong · Lijun Zhang · Tianbao Yang
- 2022 Spotlight: Finite-Sum Coupled Compositional Stochastic Optimization: Theory and Applications »
  Bokun Wang · Tianbao Yang
- 2022 Spotlight: Optimal Algorithms for Stochastic Multi-Level Compositional Optimization »
  Wei Jiang · Bokun Wang · Yibo Wang · Lijun Zhang · Tianbao Yang
- 2022 Poster: When AUC meets DRO: Optimizing Partial AUC for Deep Learning with Non-Convex Convergence Guarantee »
  Dixian Zhu · Gang Li · Bokun Wang · Xiaodong Wu · Tianbao Yang
- 2022 Spotlight: When AUC meets DRO: Optimizing Partial AUC for Deep Learning with Non-Convex Convergence Guarantee »
  Dixian Zhu · Gang Li · Bokun Wang · Xiaodong Wu · Tianbao Yang
- 2021 Poster: Stability and Generalization of Stochastic Gradient Methods for Minimax Problems »
  Yunwen Lei · Zhenhuan Yang · Tianbao Yang · Yiming Ying
- 2021 Oral: Stability and Generalization of Stochastic Gradient Methods for Minimax Problems »
  Yunwen Lei · Zhenhuan Yang · Tianbao Yang · Yiming Ying
- 2021 Poster: Federated Deep AUC Maximization for Heterogeneous Data with a Constant Communication Complexity »
  Zhuoning Yuan · Zhishuai Guo · Yi Xu · Yiming Ying · Tianbao Yang
- 2021 Spotlight: Federated Deep AUC Maximization for Heterogeneous Data with a Constant Communication Complexity »
  Zhuoning Yuan · Zhishuai Guo · Yi Xu · Yiming Ying · Tianbao Yang
- 2020 Poster: Projection-free Distributed Online Convex Optimization with $O(\sqrt{T})$ Communication Complexity »
  Yuanyu Wan · Wei-Wei Tu · Lijun Zhang
- 2020 Poster: Communication-Efficient Distributed Stochastic AUC Maximization with Deep Neural Networks »
  Zhishuai Guo · Mingrui Liu · Zhuoning Yuan · Li Shen · Wei Liu · Tianbao Yang
- 2020 Poster: Transparency Promotion with Model-Agnostic Linear Competitors »
  Hassan Rafique · Tong Wang · Qihang Lin · Arshia Singhani
- 2020 Poster: Quadratically Regularized Subgradient Methods for Weakly Convex Optimization with Weakly Convex Constraints »
  Runchao Ma · Qihang Lin · Tianbao Yang
- 2020 Poster: Stochastic Optimization for Non-convex Inf-Projection Problems »
  Yan Yan · Yi Xu · Lijun Zhang · Xiaoyu Wang · Tianbao Yang
- 2019 Poster: Adaptive Regret of Convex and Smooth Functions »
  Lijun Zhang · Tie-Yan Liu · Zhi-Hua Zhou
- 2019 Oral: Adaptive Regret of Convex and Smooth Functions »
  Lijun Zhang · Tie-Yan Liu · Zhi-Hua Zhou
- 2019 Poster: Optimal Algorithms for Lipschitz Bandits with Heavy-tailed Rewards »
  Shiyin Lu · Guanghui Wang · Yao Hu · Lijun Zhang
- 2019 Poster: Stochastic Optimization for DC Functions and Non-smooth Non-convex Regularizers with Non-asymptotic Convergence »
  Yi Xu · Qi Qi · Qihang Lin · Rong Jin · Tianbao Yang
- 2019 Oral: Stochastic Optimization for DC Functions and Non-smooth Non-convex Regularizers with Non-asymptotic Convergence »
  Yi Xu · Qi Qi · Qihang Lin · Rong Jin · Tianbao Yang
- 2019 Oral: Optimal Algorithms for Lipschitz Bandits with Heavy-tailed Rewards »
  Shiyin Lu · Guanghui Wang · Yao Hu · Lijun Zhang
- 2019 Poster: Katalyst: Boosting Convex Katyusha for Non-Convex Problems with a Large Condition Number »
  Zaiyi Chen · Yi Xu · Haoyuan Hu · Tianbao Yang
- 2019 Oral: Katalyst: Boosting Convex Katyusha for Non-Convex Problems with a Large Condition Number »
  Zaiyi Chen · Yi Xu · Haoyuan Hu · Tianbao Yang
- 2018 Poster: Dynamic Regret of Strongly Adaptive Methods »
  Lijun Zhang · Tianbao Yang · Rong Jin · Zhi-Hua Zhou
- 2018 Poster: SADAGRAD: Strongly Adaptive Stochastic Gradient Methods »
  Zaiyi Chen · Yi Xu · Enhong Chen · Tianbao Yang
- 2018 Poster: Level-Set Methods for Finite-Sum Constrained Convex Optimization »
  Qihang Lin · Runchao Ma · Tianbao Yang
- 2018 Oral: Level-Set Methods for Finite-Sum Constrained Convex Optimization »
  Qihang Lin · Runchao Ma · Tianbao Yang
- 2018 Oral: SADAGRAD: Strongly Adaptive Stochastic Gradient Methods »
  Zaiyi Chen · Yi Xu · Enhong Chen · Tianbao Yang
- 2018 Oral: Dynamic Regret of Strongly Adaptive Methods »
  Lijun Zhang · Tianbao Yang · Rong Jin · Zhi-Hua Zhou
- 2018 Poster: Fast Stochastic AUC Maximization with $O(1/n)$-Convergence Rate »
  Mingrui Liu · Xiaoxuan Zhang · Zaiyi Chen · Xiaoyu Wang · Tianbao Yang
- 2018 Oral: Fast Stochastic AUC Maximization with $O(1/n)$-Convergence Rate »
  Mingrui Liu · Xiaoxuan Zhang · Zaiyi Chen · Xiaoyu Wang · Tianbao Yang
- 2017 Poster: Stochastic Convex Optimization: Faster Local Growth Implies Faster Global Convergence »
  Yi Xu · Qihang Lin · Tianbao Yang
- 2017 Talk: Stochastic Convex Optimization: Faster Local Growth Implies Faster Global Convergence »
  Yi Xu · Qihang Lin · Tianbao Yang