Spotlight
Bilevel Optimization: Convergence Analysis and Enhanced Design
Kaiyi Ji · Junjie Yang · Yingbin Liang
Bilevel optimization has emerged as a powerful tool for many machine learning problems such as meta-learning, hyperparameter optimization, and reinforcement learning. In this paper, we investigate the nonconvex-strongly-convex bilevel optimization problem. For deterministic bilevel optimization, we provide a comprehensive convergence rate analysis for two popular algorithms based respectively on approximate implicit differentiation (AID) and iterative differentiation (ITD). For the AID-based method, we improve the previous convergence rate analysis order-wise via a more practical parameter selection and a warm-start strategy, and for the ITD-based method we establish the first theoretical convergence rate. Our analysis also provides a quantitative comparison between the ITD- and AID-based approaches. For stochastic bilevel optimization, we propose a novel algorithm named stocBiO, which features a sample-efficient hypergradient estimator built on efficient Jacobian- and Hessian-vector product computations. We provide a convergence rate guarantee for stocBiO and show that it improves order-wise on the best known computational complexities with respect to the condition number $\kappa$ and the target accuracy $\epsilon$. We further validate our theoretical results and demonstrate the efficiency of bilevel optimization algorithms through experiments on meta-learning and hyperparameter optimization.
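To make the hypergradient machinery in the abstract concrete, here is a minimal, deterministic sketch of AID-style hypergradient estimation; it is not the paper's stocBiO algorithm itself, which works with stochastic mini-batch estimates. The function names `f`, `g`, the step size `eta`, and the truncation level `K` are illustrative assumptions, and `x`, `y` are assumed to be flat arrays. The inverse-Hessian-vector product is approximated by a truncated Neumann series, so only Hessian- and Jacobian-vector products are ever formed (no explicit second-order matrices):

```python
import jax
import jax.numpy as jnp


def hypergradient(f, g, x, y, K=10, eta=0.1):
    """Approximate d/dx f(x, y*(x)), where y*(x) = argmin_y g(x, y)
    and g(x, .) is strongly convex.

    The inverse-Hessian-vector product [grad_yy g]^{-1} grad_y f is
    approximated by the truncated Neumann series
        eta * sum_{j=0}^{K-1} (I - eta * grad_yy g)^j grad_y f.
    """
    fx = jax.grad(f, argnums=0)(x, y)   # direct term: grad_x f(x, y)
    fy = jax.grad(f, argnums=1)(x, y)   # right-hand side: grad_y f(x, y)
    gy = jax.grad(g, argnums=1)         # grad_y g, reused for both vector products

    # Hessian-vector product v |-> grad_yy g(x, y) @ v (forward-over-reverse).
    hvp = lambda v: jax.jvp(lambda y_: gy(x, y_), (y,), (v,))[1]

    # Truncated Neumann series for v ~ [grad_yy g]^{-1} grad_y f.
    v, p = fy, fy
    for _ in range(K - 1):
        p = p - eta * hvp(p)
        v = v + p
    v = eta * v

    # Cross term: grad_xy g(x, y) @ v, as the gradient of a scalar product.
    cross = jax.grad(lambda x_: jnp.vdot(gy(x_, y), v))(x)
    return fx - cross
```

In an outer loop, `x` would be updated by gradient descent on this estimate after a few inner gradient steps on `g` refresh `y`; warm-starting those inner iterations from the previous solution is one of the ingredients behind the improved rates analyzed in the paper.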
Author Information
Kaiyi Ji (The Ohio State University)
Junjie Yang (The Ohio State University)
Yingbin Liang (The Ohio State University)
Related Events (a corresponding poster, oral, or spotlight)
- 2021 Poster: Bilevel Optimization: Convergence Analysis and Enhanced Design
  Wed. Jul 21st, 04:00 -- 06:00 PM, Room: Virtual
More from the Same Authors
- 2021: CRPO: A New Approach for Safe Reinforcement Learning with Convergence Guarantee
  Tengyu Xu · Yingbin Liang · Guanghui Lan
- 2023 Poster: Generalized-Smooth Nonconvex Optimization is As Efficient As Smooth Nonconvex Optimization
  Ziyi Chen · Yi Zhou · Yingbin Liang · Zhaosong Lu
- 2023 Poster: Theory on Forgetting and Generalization of Continual Learning
  Sen Lin · Peizhong Ju · Yingbin Liang · Ness Shroff
- 2023 Poster: Non-stationary Reinforcement Learning under General Function Approximation
  Songtao Feng · Ming Yin · Ruiquan Huang · Yu-Xiang Wang · Jing Yang · Yingbin Liang
- 2023 Poster: A Near-Optimal Algorithm for Safe Reinforcement Learning Under Instantaneous Hard Constraints
  Ming Shi · Yingbin Liang · Ness Shroff
- 2021 Poster: Doubly Robust Off-Policy Actor-Critic: Convergence and Optimality
  Tengyu Xu · Zhuoran Yang · Zhaoran Wang · Yingbin Liang
- 2021 Poster: CRPO: A New Approach for Safe Reinforcement Learning with Convergence Guarantee
  Tengyu Xu · Yingbin Liang · Guanghui Lan
- 2021 Spotlight: Doubly Robust Off-Policy Actor-Critic: Convergence and Optimality
  Tengyu Xu · Zhuoran Yang · Zhaoran Wang · Yingbin Liang
- 2021 Spotlight: CRPO: A New Approach for Safe Reinforcement Learning with Convergence Guarantee
  Tengyu Xu · Yingbin Liang · Guanghui Lan
- 2020 Poster: History-Gradient Aided Batch Size Adaptation for Variance Reduced Algorithms
  Kaiyi Ji · Zhe Wang · Bowen Weng · Yi Zhou · Wei Zhang · Yingbin Liang
- 2019 Poster: Improved Zeroth-Order Variance Reduced Algorithms and Analysis for Nonconvex Optimization
  Kaiyi Ji · Zhe Wang · Yi Zhou · Yingbin Liang
- 2019 Oral: Improved Zeroth-Order Variance Reduced Algorithms and Analysis for Nonconvex Optimization
  Kaiyi Ji · Zhe Wang · Yi Zhou · Yingbin Liang