Plenary Talk in Workshop: Beyond first-order methods in machine learning systems

Stochastic Variance-Reduced High-order Optimization for Nonconvex Optimization

Quanquan Gu


Abstract:

High-order optimization methods, such as cubic regularization methods, have attracted great interest in recent years due to their ability to better exploit the optimization landscape. To apply these methods to large-scale machine learning, it is of great interest to extend them to the stochastic optimization setting. In this talk, I will introduce a stochastic variance-reduced p-th-order method for finding first-order stationary points in nonconvex finite-sum optimization. Our algorithm achieves state-of-the-art complexity under several measures, including gradient and Hessian sample complexity. I will also present corresponding lower bounds that suggest the near-optimality of our algorithm in certain regimes.
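
To make the idea concrete, below is a minimal sketch of the p = 2 case: a stochastic variance-reduced cubic-regularized Newton method on a toy nonconvex finite-sum problem. This is an illustration of the general technique, not the speaker's exact algorithm; the toy objective, batch sizes, cubic penalty M, and the gradient-descent subproblem solver are all illustrative assumptions.

```python
# Sketch: variance-reduced cubic regularization (the p = 2 case of a
# p-th-order method) for nonconvex finite-sum optimization
#   min_x f(x) = (1/n) * sum_i f_i(x),
# with f_i(x) = 0.5*(a_i^T x - b_i)^2 + lam * sum_j x_j^2/(1+x_j^2),
# a least-squares loss plus a smooth nonconvex regularizer.
import numpy as np

rng = np.random.default_rng(0)
n, d, lam, M = 200, 5, 0.1, 10.0        # samples, dim, reg. weight, cubic penalty
A = rng.normal(size=(n, d))
b = rng.normal(size=n)

def grad(x, idx):
    """Average gradient of f_i over the index set idx."""
    r = A[idx] @ x - b[idx]
    return A[idx].T @ r / len(idx) + lam * 2 * x / (1 + x**2) ** 2

def hess(x, idx):
    """Average Hessian of f_i over the index set idx."""
    reg = lam * np.diag((2 - 6 * x**2) / (1 + x**2) ** 3)
    return A[idx].T @ A[idx] / len(idx) + reg

def cubic_step(g, H, M, iters=100, lr=0.02):
    """Approximately minimize the cubic model
    m(h) = g^T h + 0.5 h^T H h + (M/6)*||h||^3 by gradient descent."""
    h = np.zeros_like(g)
    for _ in range(iters):
        h -= lr * (g + H @ h + 0.5 * M * np.linalg.norm(h) * h)
    return h

x = rng.normal(size=d)
all_idx = np.arange(n)
for epoch in range(10):                  # outer loop: refresh the snapshot
    snap = x.copy()
    g_full = grad(snap, all_idx)         # full gradient at the snapshot
    H_full = hess(snap, all_idx)         # full Hessian at the snapshot
    for t in range(5):                   # inner loop: variance-reduced steps
        Sg = rng.choice(n, size=20, replace=False)   # gradient minibatch
        Sh = rng.choice(n, size=20, replace=False)   # Hessian minibatch
        # SVRG-style control variates for the gradient and Hessian estimates
        g = grad(x, Sg) - grad(snap, Sg) + g_full
        H = hess(x, Sh) - hess(snap, Sh) + H_full
        x = x + cubic_step(g, H, M)
    print(f"epoch {epoch}: ||grad f(x)|| = {np.linalg.norm(grad(x, all_idx)):.4f}")
```

The key point of the sketch is the pair of control-variate estimators: stale full-batch derivatives at a periodically refreshed snapshot correct cheap minibatch estimates of both the gradient and the Hessian, which is what drives down the gradient and Hessian sample complexities relative to naive subsampled cubic regularization.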