Recently, accelerated stochastic variance reduction algorithms such as Katyusha and ASVRG-ADMM have achieved faster convergence than non-accelerated methods such as SVRG and SVRG-ADMM. However, gaps remain between their oracle complexities and the corresponding lower bounds. To close these gaps, this paper proposes a novel Directly Accelerated stochastic Variance reductIon (DAVIS) algorithm with two snapshots for non-strongly convex (non-SC) unconstrained problems. Our theoretical results show that DAVIS achieves the optimal convergence rate $O(1/(nS^2))$ and the optimal gradient complexity $O(n+\sqrt{nL/\epsilon})$, matching the lower bound (here $n$ is the number of component functions and $S$ is the number of epochs). To the best of our knowledge, this is the first directly accelerated algorithm to attain the optimal lower bound, improving the convergence rate from $O(1/S^2)$ to $O(1/(nS^2))$. Moreover, we extend DAVIS and its theoretical results to non-SC problems with a structured regularizer, and prove that the proposed double-snapshot algorithm also attains the optimal convergence rate $O(1/(nS))$ and the optimal oracle complexity $O(n+L/\epsilon)$ for such problems; it is therefore at least a factor of $n/S$ faster than existing accelerated stochastic algorithms, where $n\gg S$ in general.
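The abstract's claims build on SVRG-style variance reduction; for context, below is a minimal sketch of the standard SVRG gradient estimator on a least-squares objective. It is not the DAVIS algorithm itself (DAVIS's direct acceleration and second snapshot are the paper's contribution and are not reproduced here), and the function name, objective, step size, and epoch length are all illustrative assumptions. As a side note, the claimed factor of $n/S$ in the regularized case is just the ratio of the two rates: $(1/S^2)/(1/(nS)) = n/S$.

```python
# Minimal SVRG sketch for context; NOT the DAVIS algorithm from this paper.
# Problem choice, step size, and epoch length are illustrative assumptions.
import numpy as np

def svrg_least_squares(A, b, eta=0.05, epochs=20, seed=0):
    """SVRG on f(x) = (1/2n) * ||Ax - b||^2."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(epochs):
        # Take a snapshot and compute one full gradient per epoch.
        snapshot = x.copy()
        mu = A.T @ (A @ snapshot - b) / n
        for _ in range(n):
            i = rng.integers(n)
            a_i = A[i]
            # Variance-reduced estimator: unbiased for the full gradient,
            # with variance shrinking as x and snapshot near the optimum.
            g = a_i * (a_i @ x - b[i]) - a_i * (a_i @ snapshot - b[i]) + mu
            x -= eta * g
    return x

# Illustrative usage on synthetic data:
# A = np.random.default_rng(1).standard_normal((200, 10))
# b = A @ np.ones(10)
# x_hat = svrg_least_squares(A, b)
```

Each epoch costs one full-gradient pass plus $n$ cheap stochastic steps, which is where the additive $n$ term in the $O(n+\sqrt{nL/\epsilon})$ and $O(n+L/\epsilon)$ complexities comes from.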
Author Information
Yuanyuan Liu (Xidian University)
Fanhua Shang (School of Artificial Intelligence, Xidian University)
Weixin An (Xidian University)
Hongying Liu (Key Lab. of Intelligent Perception and Image Understanding of Ministry of Education, School of Artificial Intelligence, Xidian University, China)
Zhouchen Lin (Peking University)
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Poster: Kill a Bird with Two Stones: Closing the Convergence Gaps in Non-Strongly Convex Optimization by Directly Accelerated SVRG with Double Compensation and Snapshots
  Wed. Jul 20th through Thu. Jul 21st, Room Hall E #716
More from the Same Authors
- 2021: Learned Interpretable Residual Extragradient ISTA for Sparse Coding
  · Connie Kong · Fanhua Shang
- 2021: Demystifying Adversarial Training via A Unified Probabilistic Framework
  Yisen Wang · Jiansheng Yang · Zhouchen Lin · Yifei Wang
- 2022 Poster: PDO-s3DCNNs: Partial Differential Operator Based Steerable 3D CNNs
  Zhengyang Shen · Tao Hong · Qi She · Jinwen Ma · Zhouchen Lin
- 2022 Spotlight: PDO-s3DCNNs: Partial Differential Operator Based Steerable 3D CNNs
  Zhengyang Shen · Tao Hong · Qi She · Jinwen Ma · Zhouchen Lin
- 2022 Poster: Restarted Nonconvex Accelerated Gradient Descent: No More Polylogarithmic Factor in the $O(\epsilon^{-7/4})$ Complexity
  Huan Li · Zhouchen Lin
- 2022 Poster: CerDEQ: Certifiable Deep Equilibrium Model
  Mingjie Li · Yisen Wang · Zhouchen Lin
- 2022 Poster: G$^2$CN: Graph Gaussian Convolution Networks with Concentrated Graph Filters
  Mingjie Li · Xiaojun Guo · Yifei Wang · Yisen Wang · Zhouchen Lin
- 2022 Poster: Optimization-Induced Graph Implicit Nonlinear Diffusion
  Qi Chen · Yifei Wang · Yisen Wang · Jiansheng Yang · Zhouchen Lin
- 2022 Spotlight: Restarted Nonconvex Accelerated Gradient Descent: No More Polylogarithmic Factor in the $O(\epsilon^{-7/4})$ Complexity
  Huan Li · Zhouchen Lin
- 2022 Spotlight: CerDEQ: Certifiable Deep Equilibrium Model
  Mingjie Li · Yisen Wang · Zhouchen Lin
- 2022 Spotlight: Optimization-Induced Graph Implicit Nonlinear Diffusion
  Qi Chen · Yifei Wang · Yisen Wang · Jiansheng Yang · Zhouchen Lin
- 2022 Spotlight: G$^2$CN: Graph Gaussian Convolution Networks with Concentrated Graph Filters
  Mingjie Li · Xiaojun Guo · Yifei Wang · Yisen Wang · Zhouchen Lin
- 2021 Poster: GBHT: Gradient Boosting Histogram Transform for Density Estimation
  Jingyi Cui · Hanyuan Hang · Yisen Wang · Zhouchen Lin
- 2021 Poster: Leveraged Weighted Loss for Partial Label Learning
  Hongwei Wen · Jingyi Cui · Hanyuan Hang · Jiabin Liu · Yisen Wang · Zhouchen Lin
- 2021 Spotlight: GBHT: Gradient Boosting Histogram Transform for Density Estimation
  Jingyi Cui · Hanyuan Hang · Yisen Wang · Zhouchen Lin
- 2021 Oral: Leveraged Weighted Loss for Partial Label Learning
  Hongwei Wen · Jingyi Cui · Hanyuan Hang · Jiabin Liu · Yisen Wang · Zhouchen Lin
- 2021 Poster: Uncertainty Principles of Encoding GANs
  Ruili Feng · Zhouchen Lin · Jiapeng Zhu · Deli Zhao · Jingren Zhou · Zheng-Jun Zha
- 2021 Spotlight: Uncertainty Principles of Encoding GANs
  Ruili Feng · Zhouchen Lin · Jiapeng Zhu · Deli Zhao · Jingren Zhou · Zheng-Jun Zha
- 2020 Poster: PDO-eConvs: Partial Differential Operator Based Equivariant Convolutions
  Zhengyang Shen · Lingshen He · Zhouchen Lin · Jinwen Ma
- 2020 Poster: Boosted Histogram Transform for Regression
  Yuchao Cai · Hanyuan Hang · Hanfang Yang · Zhouchen Lin
- 2020 Poster: Implicit Euler Skip Connections: Enhancing Adversarial Robustness via Numerical Stability
  Mingjie Li · Lingshen He · Zhouchen Lin
- 2020 Poster: Maximum-and-Concatenation Networks
  Xingyu Xie · Hao Kong · Jianlong Wu · Wayne Zhang · Guangcan Liu · Zhouchen Lin
- 2019 Poster: Differentiable Linearized ADMM
  Xingyu Xie · Jianlong Wu · Guangcan Liu · Zhisheng Zhong · Zhouchen Lin
- 2019 Oral: Differentiable Linearized ADMM
  Xingyu Xie · Jianlong Wu · Guangcan Liu · Zhisheng Zhong · Zhouchen Lin
- 2018 Poster: A Simple Stochastic Variance Reduced Algorithm with Fast Convergence Rates
  Kaiwen Zhou · Fanhua Shang · James Cheng
- 2018 Oral: A Simple Stochastic Variance Reduced Algorithm with Fast Convergence Rates
  Kaiwen Zhou · Fanhua Shang · James Cheng