Poster
Improved Zeroth-Order Variance Reduced Algorithms and Analysis for Nonconvex Optimization
Kaiyi Ji · Zhe Wang · Yi Zhou · Yingbin LIANG
Two types of zeroth-order stochastic algorithms have recently been designed for nonconvex optimization based on the first-order techniques SVRG and SARAH/SPIDER, respectively. This paper addresses several important issues that remain open in these methods. First, all existing SVRG-type zeroth-order algorithms suffer from worse function query complexity than either zeroth-order gradient descent (ZO-GD) or zeroth-order stochastic gradient descent (ZO-SGD). In this paper, we propose a new algorithm, ZO-SVRG-Coord-Rand, and develop a new analysis for the existing ZO-SVRG-Coord algorithm proposed by Liu et al. (2018b), and show that both ZO-SVRG-Coord-Rand and ZO-SVRG-Coord (under our new analysis) outperform other existing SVRG-type zeroth-order methods as well as ZO-GD and ZO-SGD. Second, the existing SPIDER-type algorithm SPIDER-SZO (Fang et al., 2018) has superior theoretical performance, but in practice it requires generating a large number of Gaussian random variables and uses a $\sqrt{\epsilon}$-level stepsize. In this paper, we develop a new algorithm, ZO-SPIDER-Coord, which is free of Gaussian variable generation and allows a large constant stepsize while maintaining the same convergence rate and query complexity, and we further show that ZO-SPIDER-Coord automatically achieves a linear convergence rate once the iterate enters a local Polyak-Łojasiewicz (PL) region, without restart or algorithmic modification.
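As an illustrative sketch (not code from the paper), the coordinate-wise gradient estimator that underlies the "Coord" algorithms above can be written in a few lines of Python. The function name coord_grad_estimate and the smoothing parameter mu below are our own illustrative choices, not notation from the paper:

import numpy as np

def coord_grad_estimate(f, x, mu=1e-4):
    """Coordinate-wise zeroth-order gradient estimator (illustrative sketch).

    Approximates the gradient of f at x with 2*d function queries,
    using a central finite difference along each coordinate direction.
    No Gaussian random directions are sampled.
    """
    d = x.shape[0]
    grad = np.zeros(d)
    for i in range(d):
        e_i = np.zeros(d)
        e_i[i] = 1.0  # i-th standard basis vector
        grad[i] = (f(x + mu * e_i) - f(x - mu * e_i)) / (2 * mu)
    return grad

# Example: on a quadratic f(x) = 0.5 x^T (A^T A) x, the true gradient
# is (A^T A) x, and the central-difference estimate recovers it.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
f = lambda x: 0.5 * x @ (A.T @ A) @ x
x0 = rng.standard_normal(5)
print(np.allclose(coord_grad_estimate(f, x0), A.T @ A @ x0, atol=1e-3))

Each estimate costs 2d function queries but involves no Gaussian sampling, which is the property the abstract highlights when contrasting ZO-SPIDER-Coord with SPIDER-SZO.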
Author Information
Kaiyi Ji (The Ohio State University)
Zhe Wang (Ohio State University)
Yi Zhou (Duke University)
Yingbin LIANG (The Ohio State University)
Related Events (a corresponding poster, oral, or spotlight)
- 2019 Oral: Improved Zeroth-Order Variance Reduced Algorithms and Analysis for Nonconvex Optimization
  Tue. Jun 11th, 06:20 -- 06:25 PM, Room 104
More from the Same Authors
- 2021 : CRPO: A New Approach for Safe Reinforcement Learning with Convergence Guarantee
  Tengyu Xu · Yingbin LIANG · Guanghui Lan
- 2023 Poster: Generalized-Smooth Nonconvex Optimization is As Efficient As Smooth Nonconvex Optimization
  Ziyi Chen · Yi Zhou · Yingbin LIANG · Zhaosong Lu
- 2023 Poster: Theory on Forgetting and Generalization of Continual Learning
  Sen Lin · Peizhong Ju · Yingbin LIANG · Ness Shroff
- 2023 Poster: Non-stationary Reinforcement Learning under General Function Approximation
  Songtao Feng · Ming Yin · Ruiquan Huang · Yu-Xiang Wang · Jing Yang · Yingbin LIANG
- 2023 Poster: A Near-Optimal Algorithm for Safe Reinforcement Learning Under Instantaneous Hard Constraints
  Ming Shi · Yingbin LIANG · Ness Shroff
- 2021 Poster: Doubly Robust Off-Policy Actor-Critic: Convergence and Optimality
  Tengyu Xu · Zhuoran Yang · Zhaoran Wang · Yingbin LIANG
- 2021 Poster: CRPO: A New Approach for Safe Reinforcement Learning with Convergence Guarantee
  Tengyu Xu · Yingbin LIANG · Guanghui Lan
- 2021 Spotlight: Doubly Robust Off-Policy Actor-Critic: Convergence and Optimality
  Tengyu Xu · Zhuoran Yang · Zhaoran Wang · Yingbin LIANG
- 2021 Spotlight: CRPO: A New Approach for Safe Reinforcement Learning with Convergence Guarantee
  Tengyu Xu · Yingbin LIANG · Guanghui Lan
- 2021 Poster: Bilevel Optimization: Convergence Analysis and Enhanced Design
  Kaiyi Ji · Junjie Yang · Yingbin LIANG
- 2021 Spotlight: Bilevel Optimization: Convergence Analysis and Enhanced Design
  Kaiyi Ji · Junjie Yang · Yingbin LIANG
- 2020 Poster: History-Gradient Aided Batch Size Adaptation for Variance Reduced Algorithms
  Kaiyi Ji · Zhe Wang · Bowen Weng · Yi Zhou · Wei Zhang · Yingbin LIANG