Poster
Stochastic Adaptive Quasi-Newton Methods for Minimizing Expected Values
Chaoxu Zhou · Wenbo Gao · Donald Goldfarb

Tue Aug 08 01:30 AM -- 05:00 AM (PDT) @ Gallery #17

We propose a novel class of stochastic, adaptive methods for minimizing self-concordant functions that can be expressed as an expected value. These methods make the problem tractable by estimating the true objective as the empirical mean over a sample drawn at each step. The use of adaptive step sizes eliminates the need for the user to supply a step size. Methods in this class include extensions of gradient descent (GD) and BFGS. We show that, given a suitable amount of sampling, stochastic adaptive GD attains linear convergence in expectation, and with further sampling, stochastic adaptive BFGS attains R-superlinear convergence. We present experiments showing that these methods compare favorably to SGD.
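To make the abstract concrete, below is a minimal Python sketch of the kind of iteration it describes: the gradient is estimated by an empirical mean over a fresh sample at each step, the search direction comes from a BFGS-style inverse-Hessian approximation, and the step size is chosen adaptively rather than supplied by the user. The `sample_grad` interface, the fixed batch size, and the damped-Newton-style step size t = 1/(1 + δ) are illustrative assumptions, not the authors' exact method.

```python
import numpy as np

def stochastic_adaptive_bfgs(sample_grad, x0, batch_size=64, n_iters=100, seed=0):
    """Sketch of a stochastic BFGS iteration with an adaptive step size.

    sample_grad(x, rng, m) is a hypothetical interface returning the
    empirical-mean gradient over m freshly drawn samples.
    """
    rng = np.random.default_rng(seed)
    n = x0.size
    x = x0.astype(float).copy()
    H = np.eye(n)                                # inverse-Hessian approximation
    for _ in range(n_iters):
        g = sample_grad(x, rng, batch_size)      # empirical-mean gradient estimate
        d = -H @ g                               # quasi-Newton search direction
        delta = np.sqrt(g @ H @ g)               # local norm of the gradient
        t = 1.0 / (1.0 + delta)                  # adaptive step size (assumed form)
        x_new = x + t * d
        g_new = sample_grad(x_new, rng, batch_size)
        s, y = x_new - x, g_new - g
        sy = s @ y
        if sy > 1e-10:                           # curvature condition; skip update otherwise
            rho = 1.0 / sy
            V = np.eye(n) - rho * np.outer(s, y)
            H = V @ H @ V.T + rho * np.outer(s, s)   # standard BFGS inverse update
        x = x_new
    return x

# Toy expected-value objective: minimize E_{a,b}[(a @ x - b)^2 / 2], with the
# gradient estimated from a fresh mini-batch at every iteration.
dim = 5
x_true = np.arange(1.0, dim + 1)

def sample_grad(x, rng, m):
    A = rng.standard_normal((m, dim))
    b = A @ x_true + 0.1 * rng.standard_normal(m)
    return A.T @ (A @ x - b) / m

x_hat = stochastic_adaptive_bfgs(sample_grad, np.zeros(dim), batch_size=256, n_iters=200)
print(np.linalg.norm(x_hat - x_true))
```

Note that the convergence guarantees in the abstract depend on the amount of sampling ("given a suitable amount of sampling", "with further sampling"); the fixed batch size above is a simplification for illustration.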

Author Information

Chaoxu Zhou (Columbia University)
Wenbo Gao (Columbia University)
Donald Goldfarb (Columbia University)
