Poster
Sharpened Quasi-Newton Methods: Faster Superlinear Rate and Larger Local Convergence Neighborhood
Qiujiang Jin · Alec Koppel · Ketan Rajawat · Aryan Mokhtari

Tue Jul 19 03:30 PM -- 05:30 PM (PDT) @ Hall E #631
Non-asymptotic analysis of quasi-Newton methods has received a lot of attention recently. In particular, several works have established a non-asymptotic superlinear rate of $\mathcal{O}((1/\sqrt{t})^t)$ for the (classic) BFGS method by exploiting the fact that the error of its Newton direction approximation approaches zero. Moreover, a greedy variant of the BFGS method was recently proposed which accelerates the convergence of BFGS by directly approximating the Hessian matrix, instead of the Newton direction, and achieves a fast local quadratic convergence rate. Alas, the local quadratic convergence of Greedy-BFGS requires substantially more updates than BFGS needs to reach its local superlinear rate. This is due to the fact that Greedy-BFGS approximates the Hessian directly, so its Newton direction approximation may not be as accurate as that of BFGS. In this paper, we close this gap and present a novel BFGS method that has the best of both worlds. More precisely, it leverages the approximation ideas of both BFGS and Greedy-BFGS to properly approximate both the Newton direction and the Hessian matrix. Our theoretical results show that our method outperforms both BFGS and Greedy-BFGS in terms of convergence rate, while it reaches its quadratic convergence rate with fewer steps compared to Greedy-BFGS. Numerical experiments on various datasets also confirm our theoretical findings.
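For readers unfamiliar with the two update rules the abstract contrasts, the sketch below illustrates, on a toy quadratic, a classic BFGS update (a rank-two correction along the iterate displacement) and a greedy update in the style of Rodomanov and Nesterov (a correction along the coordinate direction where the approximation most overestimates the true Hessian). The function names and the naive composition of the two updates are illustrative assumptions, not the paper's exact sharpened method.

```python
import numpy as np

def bfgs_update(B, s, y):
    # Classic BFGS update of the Hessian approximation B: a rank-two
    # correction enforcing the secant condition B_new @ s = y.
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (y @ s)

def greedy_update(B, A):
    # Greedy-BFGS-style update: among coordinate directions e_i, pick the
    # one maximizing e_i^T B e_i / e_i^T A e_i (where B most overestimates
    # the true Hessian A), and correct B along that direction.
    u = np.zeros(B.shape[0])
    u[np.argmax(np.diag(B) / np.diag(A))] = 1.0
    Bu, Au = B @ u, A @ u
    return B - np.outer(Bu, Bu) / (u @ Bu) + np.outer(Au, Au) / (u @ Au)

# Toy demo on the quadratic f(x) = 0.5 * x^T A x, whose Hessian is A
# and whose gradient at x is A @ x.
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T + np.eye(5)            # true Hessian (symmetric positive definite)
B = np.trace(A) * np.eye(5)        # initial approximation dominating A
x = rng.standard_normal(5)

for _ in range(20):
    g = A @ x                      # gradient of the quadratic
    if np.linalg.norm(g) < 1e-12:  # stop before the updates degenerate
        break
    x_new = x - np.linalg.solve(B, g)            # quasi-Newton step
    s, y = x_new - x, A @ (x_new - x)            # displacement / gradient change
    B = greedy_update(bfgs_update(B, s, y), A)   # apply both corrections
    x = x_new

print(np.linalg.norm(x))           # near zero: rapid local convergence
```

The paper's sharpened method combines the two corrections more carefully than the naive composition above, which is what yields both a faster superlinear rate and a larger local convergence neighborhood; see the paper for the exact update and its analysis.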

Author Information

Qiujiang Jin (University of Texas at Austin)
Alec Koppel (JP Morgan Chase AI Research)
Ketan Rajawat (Indian Institute of Technology Kanpur)

Ketan Rajawat received his B.Tech and M.Tech degrees in Electrical Engineering from the Indian Institute of Technology (IIT) Kanpur, India, in 2007, and his Ph.D. degree in Electrical and Computer Engineering from the University of Minnesota, Minneapolis, MN, USA, in 2012. He is currently an Assistant Professor in the Department of Electrical Engineering, IIT Kanpur. His research interests are in the broad areas of signal processing, robotics, and communication networks, with particular emphasis on distributed optimization and online learning. His current research focuses on the development and analysis of distributed and asynchronous optimization algorithms, online convex optimization algorithms, stochastic optimization algorithms, and the application of these algorithms to problems in machine learning, communications, and smart grid systems. He is currently serving as an Associate Editor of the IEEE Communications Letters. He is also the recipient of the 2018 INSA Medal for Young Scientists and the 2019 INAE Young Engineer Award.

Aryan Mokhtari (UT Austin)
