Talk by Peter Richtarik - Fast linear convergence of randomized BFGS
Peter Richtarik
Fri Jul 17 08:15 AM -- 09:00 AM (PDT)
Since quasi-Newton methods first appeared in the late 1950s, they have become one of the most widely used and efficient algorithmic paradigms for unconstrained optimization. Despite their immense practical success, there is little theory explaining why these methods are so efficient. We provide a semi-local convergence rate for the randomized BFGS method that can be significantly better than that of gradient descent, finally giving theoretical evidence for the method's superior empirical performance.
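To make the setting concrete, the sketch below shows the classical (deterministic) BFGS inverse-Hessian update applied to a strongly convex quadratic; the talk's randomized variant instead updates the Hessian estimate along randomly sketched directions, which this simplified example does not implement.

```python
import numpy as np

def bfgs_quadratic(A, b, x0, steps=10):
    """Classical BFGS with exact line search on f(x) = 0.5 x^T A x - b^T x."""
    x = np.asarray(x0, dtype=float)
    H = np.eye(len(b))               # inverse-Hessian estimate, starts as identity
    g = A @ x - b                    # gradient of the quadratic
    for _ in range(steps):
        if np.linalg.norm(g) < 1e-12:
            break
        d = -H @ g                   # quasi-Newton search direction
        t = -(g @ d) / (d @ (A @ d))  # exact line search step for a quadratic
        x_new = x + t * d
        g_new = A @ x_new - b
        s, y = x_new - x, g_new - g
        rho = 1.0 / (y @ s)          # curvature y^T s > 0 on a convex quadratic
        I = np.eye(len(b))
        # Standard BFGS update of the inverse-Hessian estimate
        H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
            + rho * np.outer(s, s)
        x, g = x_new, g_new
    return x

A = np.array([[3.0, 1.0], [1.0, 2.0]])   # symmetric positive definite
b = np.array([1.0, 1.0])
x = bfgs_quadratic(A, b, np.zeros(2))
# With exact line search, BFGS solves an n-dimensional quadratic in at most n steps
assert np.allclose(A @ x, b)
```

On a quadratic with exact line search, BFGS terminates in at most n iterations; the semi-local rates discussed in the talk concern the randomized variant on more general smooth strongly convex objectives.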
Author Information
Peter Richtarik (King Abdullah University of Science and Technology (KAUST) - University of Edinburgh, Scotland)
More from the Same Authors
- 2021: EF21: A New, Simpler, Theoretically Better, and Practically Faster Error Feedback
  Peter Richtarik · Ilyas Fatkhullin
- 2021 Workshop: International Workshop on Federated Learning for User Privacy and Data Confidentiality in Conjunction with ICML 2021 (FL-ICML'21)
  Nathalie Baracaldo · Olivia Choudhury · Gauri Joshi · Peter Richtarik · Praneeth Vepakomma · Shiqiang Wang · Han Yu
- 2020: Q&A with Peter Richtarik
  Peter Richtarik
- 2018 Poster: Randomized Block Cubic Newton Method
  Nikita Doikov · Peter Richtarik
- 2018 Oral: Randomized Block Cubic Newton Method
  Nikita Doikov · Peter Richtarik