A Progressive Batching L-BFGS Method for Machine Learning
Raghu Bollapragada · Jorge Nocedal · Dheevatsa Mudigere · Hao-Jun M Shi · Ping Tak Peter Tang

Fri Jul 13 09:15 AM -- 12:00 PM (PDT) @ Hall B #5

The standard L-BFGS method relies on gradient approximations that are not dominated by noise, so that search directions are descent directions, the line search is reliable, and quasi-Newton updating yields useful quadratic models of the objective function. All of this appears to call for a full batch approach, but since small batch sizes give rise to faster algorithms with better generalization properties, L-BFGS is currently not considered an algorithm of choice for large-scale machine learning applications. One need not, however, choose between the two extremes represented by the full batch or highly stochastic regimes, and may instead follow a progressive batching approach in which the sample size increases during the course of the optimization. In this paper, we present a new version of the L-BFGS algorithm that combines three basic components (progressive batching, a stochastic line search, and stable quasi-Newton updating) and that performs well on training logistic regression and deep neural networks. We provide supporting convergence theory for the method.
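The following is a minimal sketch of the progressive-batching idea, not the authors' implementation: it combines the standard L-BFGS two-loop recursion with a sample size that grows over the iterations, and it pairs each curvature update (s, y) with gradients computed on the same batch for stability. The paper's stochastic line search and adaptive batch-size test are simplified here to a fixed step length and geometric batch growth; all function and parameter names (`progressive_batch_lbfgs`, `growth`, `curvature_eps`, etc.) are illustrative.

```python
import numpy as np

def two_loop_recursion(grad, s_list, y_list):
    """Standard L-BFGS two-loop recursion: returns an approximation to H^{-1} grad."""
    q = grad.copy()
    alphas = []
    for s, y in zip(reversed(s_list), reversed(y_list)):
        a = (s @ q) / (y @ s)
        alphas.append(a)
        q -= a * y
    if s_list:  # scale by gamma = s^T y / y^T y as the initial Hessian guess
        s, y = s_list[-1], y_list[-1]
        q *= (s @ y) / (y @ y)
    for (s, y), a in zip(zip(s_list, y_list), reversed(alphas)):
        b = (y @ q) / (y @ s)
        q += (a - b) * s
    return q

def progressive_batch_lbfgs(grad_fn, x0, n_samples, iters=50, m=10,
                            batch0=32, growth=1.1, lr=0.5, curvature_eps=1e-8):
    """Sketch: L-BFGS with a geometrically growing sample size.

    grad_fn(x, idx) must return the gradient estimated on the samples in idx.
    A fixed step length `lr` stands in for the paper's stochastic line search.
    """
    rng = np.random.default_rng(0)
    x = x0.copy()
    batch = batch0
    s_list, y_list = [], []
    g = grad_fn(x, rng.choice(n_samples, size=batch, replace=False))
    for _ in range(iters):
        d = -two_loop_recursion(g, s_list, y_list)
        x_new = x + lr * d
        # Progressive batching: enlarge the sample, capped at the full data set.
        batch = min(n_samples, int(np.ceil(batch * growth)))
        idx = rng.choice(n_samples, size=batch, replace=False)
        g_new = grad_fn(x_new, idx)
        # Stable updating: evaluate both gradients in the pair on the SAME batch.
        s, y = x_new - x, g_new - grad_fn(x, idx)
        if y @ s > curvature_eps * (s @ s):  # skip pairs with weak curvature
            s_list.append(s)
            y_list.append(y)
            if len(s_list) > m:  # keep only the m most recent pairs
                s_list.pop(0)
                y_list.pop(0)
        x, g = x_new, g_new
    return x
```

On a noiseless least-squares problem this sketch recovers the minimizer: early iterations behave like a cheap stochastic method, while later (near-full-batch) iterations enjoy quasi-Newton convergence.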

Author Information

Raghu Bollapragada (Northwestern University)
Jorge Nocedal (Northwestern University)
Dheevatsa Mudigere (Intel Labs)
Hao-Jun M Shi (Northwestern University)
Ping Tak Peter Tang (Intel Corporation)
