Talk

Stochastic Adaptive Quasi-Newton Methods for Minimizing Expected Values

Chaoxu Zhou · Wenbo Gao · Donald Goldfarb

Parkside 2

Abstract:

We propose a novel class of stochastic, adaptive methods for minimizing self-concordant functions that can be expressed as an expected value. These methods generate an estimate of the true objective function by taking the empirical mean over a sample drawn at each step, making the problem tractable. The use of adaptive step sizes eliminates the need for the user to supply a step size. Methods in this class include extensions of gradient descent (GD) and BFGS. We show that, given a suitable amount of sampling, the stochastic adaptive GD method attains linear convergence in expectation, and with further sampling, the stochastic adaptive BFGS method attains R-superlinear convergence. We present experiments showing that these methods compare favorably to SGD.
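To make the abstract's recipe concrete, here is a minimal illustrative sketch of the stochastic adaptive GD idea: at each iteration a fresh sample is drawn, the gradient is estimated by its empirical mean over that sample, and the step size is set adaptively from the sampled curvature rather than supplied by the user. This is not the authors' exact algorithm; the logistic-regression objective, the sample size `m`, and the damped step rule `t = 1/(1 + delta)` are assumptions chosen for illustration, loosely in the spirit of step sizes used for self-concordant functions.

```python
import numpy as np

def sample_batch(X, y, m, rng):
    """Draw a sample of size m (with replacement) from the data."""
    idx = rng.integers(0, X.shape[0], size=m)
    return X[idx], y[idx]

def logistic_grad(w, Xb, yb):
    """Empirical-mean gradient of the logistic loss over the sampled batch.

    Labels yb are assumed to be in {-1, +1}.
    """
    p = 1.0 / (1.0 + np.exp(-yb * (Xb @ w)))
    return -(Xb.T @ (yb * (1.0 - p))) / Xb.shape[0]

def stochastic_adaptive_gd(X, y, m=256, iters=500, seed=0):
    """Hypothetical sketch of a stochastic GD method with an adaptive step size."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        # Fresh sample each step; the empirical mean over it estimates the objective.
        Xb, yb = sample_batch(X, y, m, rng)
        g = logistic_grad(w, Xb, yb)

        # Adaptive damped step: estimate the curvature of the sampled objective
        # along the gradient direction and shrink the step when it is large.
        # This rule is a hypothetical stand-in, not the paper's step-size formula.
        s = 1.0 / (1.0 + np.exp(-(Xb @ w)))
        h = s * (1.0 - s)                          # per-example logistic curvature
        delta = np.sqrt((h * (Xb @ g) ** 2).mean())  # approx. sqrt(g' H g) on the batch
        t = 1.0 / (1.0 + delta)

        w -= t * g
    return w
```

For example, calling `stochastic_adaptive_gd(X, y)` on a dataset with labels in {-1, +1} runs 500 iterations without any user-chosen step size; the stochastic adaptive BFGS variant described in the abstract would additionally maintain a quasi-Newton approximation of the curvature, which is not sketched here.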
