Talk in Workshop: Beyond first order methods in machine learning systems

Talk by Peter Richtarik - Fast linear convergence of randomized BFGS



Abstract:

Since the late 1950s, when quasi-Newton methods first appeared, they have been among the most widely used and efficient algorithmic paradigms for unconstrained optimization. Despite their immense practical success, there is little theory explaining why these methods are so efficient. We provide a semi-local rate of convergence for the randomized BFGS method that can be significantly better than the rate of gradient descent, finally giving theoretical evidence supporting the method's superior empirical performance.
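
The abstract refers to the randomized BFGS method without stating its update rule. For context, here is a minimal sketch of one iteration in the sketch-and-project style associated with randomized block BFGS: take a quasi-Newton step along -B∇f(x), then refresh the inverse-Hessian estimate B so that it is consistent with a randomly sketched true Hessian. The Gaussian sketch, the function names, and the toy quadratic are illustrative assumptions, not the exact algorithm analyzed in the talk.

```python
# Minimal sketch of one randomized block BFGS iteration (sketch-and-project
# style). Illustrative only; not the exact method analyzed in the talk.
import numpy as np

def rbfgs_step(x, B, grad, hess, sketch_dim=2, rng=None):
    """Take a quasi-Newton step, then refresh the inverse-Hessian estimate B
    using a random sketch S of the true Hessian at the new point."""
    rng = np.random.default_rng() if rng is None else rng
    d = x.size
    x_new = x - B @ grad(x)                    # quasi-Newton step with current B
    H = hess(x_new)                            # assumes Hessian access is available
    S = rng.standard_normal((d, sketch_dim))   # Gaussian sketch (an assumption)
    G = np.linalg.solve(S.T @ H @ S, S.T)      # (S^T H S)^{-1} S^T; H assumed PD
    P = np.eye(d) - S @ (G @ H)                # projector I - S (S^T H S)^{-1} S^T H
    B_new = S @ G + P @ B @ P.T                # sketched block BFGS update of B
    return x_new, B_new

# Toy usage: minimize f(x) = 0.5 x^T A x - b^T x for a diagonal A.
rng = np.random.default_rng(0)
A = np.diag(np.arange(1.0, 11.0))
b = rng.standard_normal(10)
x, B = np.zeros(10), np.eye(10)
for _ in range(50):
    x, B = rbfgs_step(x, B, grad=lambda z: A @ z - b,
                      hess=lambda z: A, sketch_dim=3, rng=rng)
print(np.linalg.norm(A @ x - b))  # gradient norm should be close to zero
```

On this quadratic, with exact Hessian information and a fresh sketch each iteration, the estimate B approaches A^{-1} and the iterates converge linearly, which is the flavor of guarantee the abstract alludes to.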
