Poster
Lower Bounds for Smooth Nonconvex Finite-Sum Optimization
Dongruo Zhou · Quanquan Gu
Smooth finite-sum optimization has been widely studied in both convex and nonconvex settings. However, existing lower bounds for finite-sum optimization are mostly limited to the setting where each component function is (strongly) convex, while lower bounds for nonconvex finite-sum optimization remain largely open. In this paper, we study lower bounds for smooth nonconvex finite-sum optimization, where the objective function is the average of $n$ nonconvex component functions. We prove tight lower bounds on the complexity of finding an $\epsilon$-suboptimal point and an $\epsilon$-approximate stationary point in different settings, over a wide regime of the smallest eigenvalue of the Hessian of the objective function (or of each component function). Given these lower bounds, we show that existing algorithms, including KatyushaX (Allen-Zhu, 2018), Natasha (Allen-Zhu, 2017), and StagewiseKatyusha (Yang et al., 2018), achieve the optimal Incremental First-order Oracle (IFO) complexity (i.e., number of IFO calls) for nonconvex finite-sum optimization, up to logarithmic factors. We also point out potential ways to further improve these complexity results, either by making stronger assumptions or through a sharper convergence analysis.
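To make the setting concrete, the following display is a minimal sketch in standard notation (not reproduced from the paper itself) of the finite-sum objective, the two accuracy criteria named above, and what a single IFO call returns; the symbol $F^*$ for the infimum of $F$ is introduced here only for this illustration.

$$F(x) \;=\; \frac{1}{n}\sum_{i=1}^{n} f_i(x), \qquad \text{each } f_i \text{ smooth and possibly nonconvex},$$
$$\text{$\epsilon$-suboptimal point: } F(x) - F^* \le \epsilon, \qquad \text{$\epsilon$-approximate stationary point: } \|\nabla F(x)\|_2 \le \epsilon,$$
$$\text{one IFO call: } (i, x) \;\mapsto\; \bigl(f_i(x), \nabla f_i(x)\bigr).$$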
Author Information
Dongruo Zhou (UCLA)
Quanquan Gu (University of California, Los Angeles)
Related Events (a corresponding poster, oral, or spotlight)
- 2019 Oral: Lower Bounds for Smooth Nonconvex Finite-Sum Optimization »
  Tue. Jun 11th 06:30 -- 06:35 PM, Room 104
More from the Same Authors
- 2021 : Benign Overfitting in Adversarially Robust Linear Classification »
  Jinghui Chen · Yuan Cao · Quanquan Gu
- 2021 : Risk Bounds for Over-parameterized Maximum Margin Classification on Sub-Gaussian Mixtures »
  Yuan Cao · Quanquan Gu · Mikhail Belkin
- 2021 : Nearly Minimax Optimal Regret for Learning Infinite-horizon Average-reward MDPs with Linear Function Approximation »
  Yue Wu · Dongruo Zhou · Quanquan Gu
- 2021 : Nearly Optimal Regret for Learning Adversarial MDPs with Linear Function Approximation »
  Jiafan He · Dongruo Zhou · Quanquan Gu
- 2021 : Nearly Minimax Optimal Reinforcement Learning for Discounted MDPs »
  Jiafan He · Dongruo Zhou · Quanquan Gu
- 2021 : Almost Optimal Algorithms for Two-player Markov Games with Linear Function Approximation »
  Zixiang Chen · Dongruo Zhou · Quanquan Gu
- 2022 : The Power and Limitation of Pretraining-Finetuning for Linear Regression under Covariate Shift »
  Jingfeng Wu · Difan Zou · Vladimir Braverman · Quanquan Gu · Sham Kakade
- 2022 Poster: Learning Stochastic Shortest Path with Linear Function Approximation »
  Yifei Min · Jiafan He · Tianhao Wang · Quanquan Gu
- 2022 Spotlight: Learning Stochastic Shortest Path with Linear Function Approximation »
  Yifei Min · Jiafan He · Tianhao Wang · Quanquan Gu
- 2022 Poster: Last Iterate Risk Bounds of SGD with Decaying Stepsize for Overparameterized Linear Regression »
  Jingfeng Wu · Difan Zou · Vladimir Braverman · Quanquan Gu · Sham Kakade
- 2022 Poster: On the Sample Complexity of Learning Infinite-horizon Discounted Linear Kernel MDPs »
  Yuanzhou Chen · Jiafan He · Quanquan Gu
- 2022 Oral: Last Iterate Risk Bounds of SGD with Decaying Stepsize for Overparameterized Linear Regression »
  Jingfeng Wu · Difan Zou · Vladimir Braverman · Quanquan Gu · Sham Kakade
- 2022 Spotlight: On the Sample Complexity of Learning Infinite-horizon Discounted Linear Kernel MDPs »
  Yuanzhou Chen · Jiafan He · Quanquan Gu
- 2022 Poster: Dimension-free Complexity Bounds for High-order Nonconvex Finite-sum Optimization »
  Dongruo Zhou · Quanquan Gu
- 2022 Spotlight: Dimension-free Complexity Bounds for High-order Nonconvex Finite-sum Optimization »
  Dongruo Zhou · Quanquan Gu
- 2021 : Stochastic Variance-Reduced High-order Optimization for Nonconvex Optimization »
  Quanquan Gu
- 2021 Workshop: Over-parameterization: Pitfalls and Opportunities »
  Yasaman Bahri · Quanquan Gu · Amin Karbasi · Hanie Sedghi
- 2021 Poster: On the Convergence of Hamiltonian Monte Carlo with Stochastic Gradients »
  Difan Zou · Quanquan Gu
- 2021 Spotlight: On the Convergence of Hamiltonian Monte Carlo with Stochastic Gradients »
  Difan Zou · Quanquan Gu
- 2021 Poster: Almost Optimal Anytime Algorithm for Batched Multi-Armed Bandits »
  Tianyuan Jin · Jing Tang · Pan Xu · Keke Huang · Xiaokui Xiao · Quanquan Gu
- 2021 Poster: MOTS: Minimax Optimal Thompson Sampling »
  Tianyuan Jin · Pan Xu · Jieming Shi · Xiaokui Xiao · Quanquan Gu
- 2021 Poster: Provably Efficient Reinforcement Learning for Discounted MDPs with Feature Mapping »
  Dongruo Zhou · Jiafan He · Quanquan Gu
- 2021 Poster: Logarithmic Regret for Reinforcement Learning with Linear Function Approximation »
  Jiafan He · Dongruo Zhou · Quanquan Gu
- 2021 Spotlight: Almost Optimal Anytime Algorithm for Batched Multi-Armed Bandits »
  Tianyuan Jin · Jing Tang · Pan Xu · Keke Huang · Xiaokui Xiao · Quanquan Gu
- 2021 Spotlight: Logarithmic Regret for Reinforcement Learning with Linear Function Approximation »
  Jiafan He · Dongruo Zhou · Quanquan Gu
- 2021 Spotlight: MOTS: Minimax Optimal Thompson Sampling »
  Tianyuan Jin · Pan Xu · Jieming Shi · Xiaokui Xiao · Quanquan Gu
- 2021 Spotlight: Provably Efficient Reinforcement Learning for Discounted MDPs with Feature Mapping »
  Dongruo Zhou · Jiafan He · Quanquan Gu
- 2021 Poster: Provable Robustness of Adversarial Training for Learning Halfspaces with Noise »
  Difan Zou · Spencer Frei · Quanquan Gu
- 2021 Poster: Agnostic Learning of Halfspaces with Gradient Descent via Soft Margins »
  Spencer Frei · Yuan Cao · Quanquan Gu
- 2021 Poster: Provable Generalization of SGD-trained Neural Networks of Any Width in the Presence of Adversarial Label Noise »
  Spencer Frei · Yuan Cao · Quanquan Gu
- 2021 Spotlight: Provable Robustness of Adversarial Training for Learning Halfspaces with Noise »
  Difan Zou · Spencer Frei · Quanquan Gu
- 2021 Oral: Agnostic Learning of Halfspaces with Gradient Descent via Soft Margins »
  Spencer Frei · Yuan Cao · Quanquan Gu
- 2021 Spotlight: Provable Generalization of SGD-trained Neural Networks of Any Width in the Presence of Adversarial Label Noise »
  Spencer Frei · Yuan Cao · Quanquan Gu
- 2020 Poster: A Finite-Time Analysis of Q-Learning with Neural Network Function Approximation »
  Pan Xu · Quanquan Gu
- 2020 Poster: Optimization Theory for ReLU Neural Networks Trained with Normalization Layers »
  Yonatan Dukler · Quanquan Gu · Guido Montufar
- 2020 Poster: Neural Contextual Bandits with UCB-based Exploration »
  Dongruo Zhou · Lihong Li · Quanquan Gu
- 2019 Poster: On the Convergence and Robustness of Adversarial Training »
  Yisen Wang · Xingjun Ma · James Bailey · Jinfeng Yi · Bowen Zhou · Quanquan Gu
- 2019 Oral: On the Convergence and Robustness of Adversarial Training »
  Yisen Wang · Xingjun Ma · James Bailey · Jinfeng Yi · Bowen Zhou · Quanquan Gu
- 2018 Poster: Fast and Sample Efficient Inductive Matrix Completion via Multi-Phase Procrustes Flow »
  Xiao Zhang · Simon Du · Quanquan Gu
- 2018 Poster: Continuous and Discrete-time Accelerated Stochastic Mirror Descent for Strongly Convex Functions »
  Pan Xu · Tianhao Wang · Quanquan Gu
- 2018 Oral: Fast and Sample Efficient Inductive Matrix Completion via Multi-Phase Procrustes Flow »
  Xiao Zhang · Simon Du · Quanquan Gu
- 2018 Oral: Continuous and Discrete-time Accelerated Stochastic Mirror Descent for Strongly Convex Functions »
  Pan Xu · Tianhao Wang · Quanquan Gu
- 2018 Poster: A Primal-Dual Analysis of Global Optimality in Nonconvex Low-Rank Matrix Recovery »
  Xiao Zhang · Lingxiao Wang · Yaodong Yu · Quanquan Gu
- 2018 Poster: Stochastic Variance-Reduced Hamilton Monte Carlo Methods »
  Difan Zou · Pan Xu · Quanquan Gu
- 2018 Oral: Stochastic Variance-Reduced Hamilton Monte Carlo Methods »
  Difan Zou · Pan Xu · Quanquan Gu
- 2018 Oral: A Primal-Dual Analysis of Global Optimality in Nonconvex Low-Rank Matrix Recovery »
  Xiao Zhang · Lingxiao Wang · Yaodong Yu · Quanquan Gu
- 2018 Poster: Stochastic Variance-Reduced Cubic Regularized Newton Method »
  Dongruo Zhou · Pan Xu · Quanquan Gu
- 2018 Poster: Covariate Adjusted Precision Matrix Estimation via Nonconvex Optimization »
  Jinghui Chen · Pan Xu · Lingxiao Wang · Jian Ma · Quanquan Gu
- 2018 Oral: Stochastic Variance-Reduced Cubic Regularized Newton Method »
  Dongruo Zhou · Pan Xu · Quanquan Gu
- 2018 Oral: Covariate Adjusted Precision Matrix Estimation via Nonconvex Optimization »
  Jinghui Chen · Pan Xu · Lingxiao Wang · Jian Ma · Quanquan Gu
- 2017 Poster: Uncertainty Assessment and False Discovery Rate Control in High-Dimensional Granger Causal Inference »
  Aditya Chaudhry · Pan Xu · Quanquan Gu
- 2017 Poster: High-Dimensional Variance-Reduced Stochastic Gradient Expectation-Maximization Algorithm »
  Rongda Zhu · Lingxiao Wang · Chengxiang Zhai · Quanquan Gu
- 2017 Poster: Robust Gaussian Graphical Model Estimation with Arbitrary Corruption »
  Lingxiao Wang · Quanquan Gu
- 2017 Talk: High-Dimensional Variance-Reduced Stochastic Gradient Expectation-Maximization Algorithm »
  Rongda Zhu · Lingxiao Wang · Chengxiang Zhai · Quanquan Gu
- 2017 Talk: Robust Gaussian Graphical Model Estimation with Arbitrary Corruption »
  Lingxiao Wang · Quanquan Gu
- 2017 Talk: Uncertainty Assessment and False Discovery Rate Control in High-Dimensional Granger Causal Inference »
  Aditya Chaudhry · Pan Xu · Quanquan Gu
- 2017 Poster: A Unified Variance Reduction-Based Framework for Nonconvex Low-Rank Matrix Recovery »
  Lingxiao Wang · Xiao Zhang · Quanquan Gu
- 2017 Talk: A Unified Variance Reduction-Based Framework for Nonconvex Low-Rank Matrix Recovery »
  Lingxiao Wang · Xiao Zhang · Quanquan Gu