Poster
Second-Order Optimization with Lazy Hessians
Nikita Doikov · El Mahdi Chayti · Martin Jaggi
We analyze Newton's method with lazy Hessian updates for solving general, possibly non-convex, optimization problems. We propose to reuse a previously seen Hessian for several iterations while computing new gradients at each step of the method. This significantly reduces the overall arithmetic complexity of second-order optimization schemes. Using the cubic regularization technique, we establish fast global convergence of our method to a second-order stationary point, even though the Hessian does not need to be updated at every iteration. For convex problems, we justify global and local superlinear rates for lazy Newton steps with quadratic regularization, which is easier to compute. The optimal frequency for updating the Hessian is once every $d$ iterations, where $d$ is the dimension of the problem. This provably improves the total arithmetic complexity of second-order algorithms by a factor of $\sqrt{d}$.
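To make the scheme concrete, below is a minimal sketch (not the authors' implementation) of Newton steps with lazy Hessian updates and cubic regularization. It assumes exact gradient and Hessian oracles `grad(x)` and `hess(x)`; the regularization constant `M`, the horizon `T`, and the simple bisection-based subproblem solver are illustrative assumptions, with the update period `m = d` being the schedule suggested by the abstract.

```python
# Sketch of lazy cubic Newton, assuming exact grad/Hessian oracles.
import numpy as np

def cubic_step(H, g, M, tol=1e-10, max_iter=200):
    """Approximately minimize  g @ h + 0.5 * h @ H @ h + (M / 6) * ||h||^3.

    The minimizer satisfies (H + (M/2) * r * I) h = -g with r = ||h||,
    so we bisect on r. This simple solver ignores the rare "hard case"
    of the cubic subproblem.
    """
    d = g.shape[0]
    I = np.eye(d)
    # smallest r that makes H + (M/2) r I positive definite
    lo = max(0.0, -2.0 * np.linalg.eigvalsh(H).min() / M) + 1e-12
    hi = lo + 1.0
    while np.linalg.norm(np.linalg.solve(H + 0.5 * M * hi * I, -g)) > hi:
        hi *= 2.0                       # grow the bracket until ||h(r)|| <= r
    for _ in range(max_iter):
        r = 0.5 * (lo + hi)
        h = np.linalg.solve(H + 0.5 * M * r * I, -g)
        if np.linalg.norm(h) > r:
            lo = r
        else:
            hi = r
        if hi - lo < tol:
            break
    return np.linalg.solve(H + 0.5 * M * hi * I, -g)

def lazy_cubic_newton(x0, grad, hess, M=10.0, m=None, T=100):
    """Run T iterations; the Hessian is recomputed only once every m steps,
    while a fresh gradient is used at every iteration."""
    x = np.asarray(x0, dtype=float)
    d = x.shape[0]
    m = d if m is None else m           # update the Hessian once per d steps
    H = None
    for t in range(T):
        if t % m == 0:                  # lazy Hessian update
            H = hess(x)
        g = grad(x)                     # new gradient every step
        x = x + cubic_step(H, g, M)
    return x
```

For the convex setting with quadratic regularization mentioned above, the same lazy loop applies, but the cubic subproblem would be replaced by a single linear solve of the form `(H + lam * I) h = -g` for a regularization parameter `lam`, which is cheaper to compute per step.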
Author Information
Nikita Doikov (EPFL)
El Mahdi Chayti (Swiss Federal Institute of Technology Lausanne)
Martin Jaggi (EPFL)
Related Events (a corresponding poster, oral, or spotlight)
- 2023 Oral: Second-Order Optimization with Lazy Hessians
  Fri. Jul 28th, 01:00 -- 01:08 AM, Room: Ballroom B
More from the Same Authors
- 2021: iFedAvg – Interpretable Data-Interoperability for Federated Learning
  David Roschewitz · Mary-Anne Hartley · Luca Corinzia · Martin Jaggi
- 2022: The Gap Between Continuous and Discrete Gradient Descent
  Amirkeivan Mohtashami · Martin Jaggi · Sebastian Stich
- 2023: Layerwise Linear Mode Connectivity
  Linara Adilova · Asja Fischer · Martin Jaggi
- 2023: Landmark Attention: Random-Access Infinite Context Length for Transformers
  Amirkeivan Mohtashami · Martin Jaggi
- 2023: 🎤 Fast Causal Attention with Dynamic Sparsity
  Daniele Paliotta · Matteo Pagliardini · Martin Jaggi · François Fleuret
- 2023 Poster: Special Properties of Gradient Descent with Large Learning Rates
  Amirkeivan Mohtashami · Martin Jaggi · Sebastian Stich
- 2023 Poster: Polynomial Preconditioning for Gradient Methods
  Nikita Doikov · Anton Rodomanov
- 2021: Exact Optimization of Conformal Predictors via Incremental and Decremental Learning (Spotlight #13)
  Giovanni Cherubin · Konstantinos Chatzikokolakis · Martin Jaggi
- 2021 Poster: Exact Optimization of Conformal Predictors via Incremental and Decremental Learning
  Giovanni Cherubin · Konstantinos Chatzikokolakis · Martin Jaggi
- 2021 Poster: Consensus Control for Decentralized Deep Learning
  Lingjing Kong · Tao Lin · Anastasiia Koloskova · Martin Jaggi · Sebastian Stich
- 2021 Poster: Quasi-global Momentum: Accelerating Decentralized Deep Learning on Heterogeneous Data
  Tao Lin · Sai Praneeth Reddy Karimireddy · Sebastian Stich · Martin Jaggi
- 2021 Spotlight: Exact Optimization of Conformal Predictors via Incremental and Decremental Learning
  Giovanni Cherubin · Konstantinos Chatzikokolakis · Martin Jaggi
- 2021 Spotlight: Quasi-global Momentum: Accelerating Decentralized Deep Learning on Heterogeneous Data
  Tao Lin · Sai Praneeth Reddy Karimireddy · Sebastian Stich · Martin Jaggi
- 2021 Spotlight: Consensus Control for Decentralized Deep Learning
  Lingjing Kong · Tao Lin · Anastasiia Koloskova · Martin Jaggi · Sebastian Stich
- 2021 Poster: Learning from History for Byzantine Robust Optimization
  Sai Praneeth Reddy Karimireddy · Lie He · Martin Jaggi
- 2021 Spotlight: Learning from History for Byzantine Robust Optimization
  Sai Praneeth Reddy Karimireddy · Lie He · Martin Jaggi
- 2020 Poster: Extrapolation for Large-batch Training in Deep Learning
  Tao Lin · Lingjing Kong · Sebastian Stich · Martin Jaggi
- 2020 Poster: Optimizer Benchmarking Needs to Account for Hyperparameter Tuning
  Prabhu Teja Sivaprasad · Florian Mai · Thijs Vogels · Martin Jaggi · François Fleuret
- 2020 Poster: A Unified Theory of Decentralized SGD with Changing Topology and Local Updates
  Anastasiia Koloskova · Nicolas Loizou · Sadra Boreiri · Martin Jaggi · Sebastian Stich
- 2020 Poster: Inexact Tensor Methods with Dynamic Accuracies
  Nikita Doikov · Yurii Nesterov
- 2019 Poster: Overcoming Multi-model Forgetting
  Yassine Benyahia · Kaicheng Yu · Kamil Bennani-Smires · Martin Jaggi · Anthony C. Davison · Mathieu Salzmann · Claudiu Musat
- 2019 Oral: Overcoming Multi-model Forgetting
  Yassine Benyahia · Kaicheng Yu · Kamil Bennani-Smires · Martin Jaggi · Anthony C. Davison · Mathieu Salzmann · Claudiu Musat
- 2019 Poster: Decentralized Stochastic Optimization and Gossip Algorithms with Compressed Communication
  Anastasiia Koloskova · Sebastian Stich · Martin Jaggi
- 2019 Poster: Error Feedback Fixes SignSGD and other Gradient Compression Schemes
  Sai Praneeth Reddy Karimireddy · Quentin Rebjock · Sebastian Stich · Martin Jaggi
- 2019 Oral: Decentralized Stochastic Optimization and Gossip Algorithms with Compressed Communication
  Anastasiia Koloskova · Sebastian Stich · Martin Jaggi
- 2019 Oral: Error Feedback Fixes SignSGD and other Gradient Compression Schemes
  Sai Praneeth Reddy Karimireddy · Quentin Rebjock · Sebastian Stich · Martin Jaggi
- 2018 Poster: On Matching Pursuit and Coordinate Descent
  Francesco Locatello · Anant Raj · Sai Praneeth Reddy Karimireddy · Gunnar Ratsch · Bernhard Schölkopf · Sebastian Stich · Martin Jaggi
- 2018 Oral: On Matching Pursuit and Coordinate Descent
  Francesco Locatello · Anant Raj · Sai Praneeth Reddy Karimireddy · Gunnar Ratsch · Bernhard Schölkopf · Sebastian Stich · Martin Jaggi
- 2018 Poster: A Distributed Second-Order Algorithm You Can Trust
  Celestine Mendler-Dünner · Aurelien Lucchi · Matilde Gargiani · Yatao Bian · Thomas Hofmann · Martin Jaggi
- 2018 Oral: A Distributed Second-Order Algorithm You Can Trust
  Celestine Mendler-Dünner · Aurelien Lucchi · Matilde Gargiani · Yatao Bian · Thomas Hofmann · Martin Jaggi
- 2017 Poster: Approximate Steepest Coordinate Descent
  Sebastian Stich · Anant Raj · Martin Jaggi
- 2017 Talk: Approximate Steepest Coordinate Descent
  Sebastian Stich · Anant Raj · Martin Jaggi