Variance-reduced algorithms, although they achieve strong theoretical performance, can run slowly in practice due to the periodic gradient estimation with a large batch of data. Batch-size adaptation thus arises as a promising approach to accelerate such algorithms. However, existing schemes either apply a prescribed batch-size adaptation rule or exploit the information along the optimization path via additional backtracking and condition-verification steps. In this paper, we propose a novel scheme which eliminates the backtracking line search but still exploits the information along the optimization path by adapting the batch size via history stochastic gradients. We further show theoretically that such a scheme substantially reduces the overall complexity of the popular variance-reduced algorithms SVRG and SARAH/SPIDER for both conventional nonconvex optimization and reinforcement learning problems. To this end, we develop a new convergence analysis framework to handle the dependence of the batch size on history stochastic gradients. Extensive experiments validate the effectiveness of the proposed batch-size adaptation scheme.
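As a rough illustration of the idea, the sketch below runs mini-batch SGD on a least-squares problem and grows the batch size as a running average of recent squared stochastic-gradient norms shrinks. The specific rule (`B = c / hist`), the smoothing factor, and all constants here are hypothetical stand-ins, not the paper's actual adaptation scheme or its SVRG/SARAH variants.

```python
import numpy as np

# Synthetic least-squares problem: minimize 0.5/n * ||Ax - b||^2.
rng = np.random.default_rng(0)
n, d = 200, 5
A = rng.normal(size=(n, d))
x_star = rng.normal(size=d)
b = A @ x_star + 0.1 * rng.normal(size=n)

def stoch_grad(x, idx):
    # Stochastic gradient on a mini-batch indexed by idx.
    Ai, bi = A[idx], b[idx]
    return Ai.T @ (Ai @ x - bi) / len(idx)

x = np.zeros(d)
lr, c = 0.1, 1.0
hist = 1.0  # running average of squared stochastic-gradient norms
losses = []
for t in range(200):
    # Hypothetical history-based rule: batch size inversely
    # proportional to the recent gradient magnitude, capped at n.
    B = int(min(n, max(1, np.ceil(c / hist))))
    idx = rng.choice(n, size=B, replace=False)
    g = stoch_grad(x, idx)
    hist = 0.9 * hist + 0.1 * float(g @ g)
    x -= lr * g
    losses.append(0.5 * np.mean((A @ x - b) ** 2))
```

The intuition this toy rule mimics: far from a stationary point, stochastic gradients are large and small batches suffice; near convergence, gradients shrink and larger batches are needed to keep the variance in check, all without any backtracking line search.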
Author Information
Kaiyi Ji (The Ohio State University)
Zhe Wang (Ohio State University)
Bowen Weng (Ohio State University)
Yi Zhou (University of Utah)
Wei Zhang (Southern University of Science and Technology)
Yingbin LIANG (The Ohio State University)
More from the Same Authors
-
2021 : CRPO: A New Approach for Safe Reinforcement Learning with Convergence Guarantee »
Tengyu Xu · Yingbin LIANG · Guanghui Lan -
2023 Poster: Generalized-Smooth Nonconvex Optimization is As Efficient As Smooth Nonconvex Optimization »
Ziyi Chen · Yi Zhou · Yingbin LIANG · Zhaosong Lu -
2023 Poster: Theory on Forgetting and Generalization of Continual Learning »
Sen Lin · Peizhong Ju · Yingbin LIANG · Ness Shroff -
2023 Poster: Non-stationary Reinforcement Learning under General Function Approximation »
Songtao Feng · Ming Yin · Ruiquan Huang · Yu-Xiang Wang · Jing Yang · Yingbin LIANG -
2023 Poster: A Near-Optimal Algorithm for Safe Reinforcement Learning Under Instantaneous Hard Constraints »
Ming Shi · Yingbin LIANG · Ness Shroff -
2022 Poster: Sample and Communication-Efficient Decentralized Actor-Critic Algorithms with Finite-Time Analysis »
Ziyi Chen · Yi Zhou · Rong-Rong Chen · Shaofeng Zou -
2022 Spotlight: Sample and Communication-Efficient Decentralized Actor-Critic Algorithms with Finite-Time Analysis »
Ziyi Chen · Yi Zhou · Rong-Rong Chen · Shaofeng Zou -
2021 Poster: Doubly Robust Off-Policy Actor-Critic: Convergence and Optimality »
Tengyu Xu · Zhuoran Yang · Zhaoran Wang · Yingbin LIANG -
2021 Poster: CRPO: A New Approach for Safe Reinforcement Learning with Convergence Guarantee »
Tengyu Xu · Yingbin LIANG · Guanghui Lan -
2021 Spotlight: Doubly Robust Off-Policy Actor-Critic: Convergence and Optimality »
Tengyu Xu · Zhuoran Yang · Zhaoran Wang · Yingbin LIANG -
2021 Spotlight: CRPO: A New Approach for Safe Reinforcement Learning with Convergence Guarantee »
Tengyu Xu · Yingbin LIANG · Guanghui Lan -
2021 Poster: Bilevel Optimization: Convergence Analysis and Enhanced Design »
Kaiyi Ji · Junjie Yang · Yingbin LIANG -
2021 Spotlight: Bilevel Optimization: Convergence Analysis and Enhanced Design »
Kaiyi Ji · Junjie Yang · Yingbin LIANG -
2020 Poster: Understanding the Impact of Model Incoherence on Convergence of Incremental SGD with Random Reshuffle »
Shaocong Ma · Yi Zhou -
2019 Poster: Improved Zeroth-Order Variance Reduced Algorithms and Analysis for Nonconvex Optimization »
Kaiyi Ji · Zhe Wang · Yi Zhou · Yingbin LIANG -
2019 Oral: Improved Zeroth-Order Variance Reduced Algorithms and Analysis for Nonconvex Optimization »
Kaiyi Ji · Zhe Wang · Yi Zhou · Yingbin LIANG