Poster
History-Gradient Aided Batch Size Adaptation for Variance Reduced Algorithms
Kaiyi Ji · Zhe Wang · Bowen Weng · Yi Zhou · Wei Zhang · Yingbin Liang

Thu Jul 16 06:00 AM -- 06:45 AM & Thu Jul 16 05:00 PM -- 05:45 PM (PDT)

Variance-reduced algorithms, although they achieve strong theoretical guarantees, can run slowly in practice due to the periodic gradient estimation over a large batch of data. Batch-size adaptation thus arises as a promising approach to accelerate such algorithms. However, existing schemes either apply a prescribed batch-size adaptation rule or exploit the information along the optimization path via additional backtracking and condition-verification steps. In this paper, we propose a novel scheme that eliminates backtracking line search but still exploits the information along the optimization path by adapting the batch size via history stochastic gradients. We further show theoretically that such a scheme substantially reduces the overall complexity of the popular variance-reduced algorithms SVRG and SARAH/SPIDER for both conventional nonconvex optimization and reinforcement learning problems. To this end, we develop a new convergence analysis framework that handles the dependence of the batch size on history stochastic gradients. Extensive experiments validate the effectiveness of the proposed batch-size adaptation scheme.
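To make the idea concrete, below is a minimal Python sketch of an SVRG-style loop whose snapshot batch size is adapted from the norms of history stochastic gradients. The specific rule here (batch size inversely proportional to the average squared norm of the previous epoch's stochastic gradients), as well as the names aba_svrg_sketch, grad_fn, and the constants c and min_batch, are illustrative assumptions, not the paper's exact adaptation rule or analysis.

```python
import numpy as np

def aba_svrg_sketch(grad_fn, x0, n, epochs=10, inner_steps=50,
                    lr=0.01, c=1.0, min_batch=8):
    """SVRG with a hypothetical history-gradient batch-size rule.

    grad_fn(x, idx) returns the average gradient over sample indices idx.
    """
    x = x0.copy()
    hist_sq_norms = []  # squared norms of last epoch's stochastic gradients
    for s in range(epochs):
        # Adapt the snapshot batch size from history gradients: large past
        # gradients (far from stationarity) allow a small batch; vanishing
        # gradients call for a larger batch to keep the variance low.
        if hist_sq_norms:
            avg_sq = np.mean(hist_sq_norms)
            batch = int(min(n, max(min_batch, c / (avg_sq + 1e-12))))
        else:
            batch = n  # first epoch: full batch, as in plain SVRG
        snap_idx = np.random.choice(n, size=batch, replace=False)
        x_snap = x.copy()
        g_snap = grad_fn(x_snap, snap_idx)  # (mini-)batch snapshot gradient
        hist_sq_norms = []
        for _ in range(inner_steps):
            i = np.random.randint(n)
            # Standard SVRG variance-reduced gradient estimator
            v = grad_fn(x, [i]) - grad_fn(x_snap, [i]) + g_snap
            hist_sq_norms.append(float(np.dot(v, v)))
            x -= lr * v
    return x
```

Unlike backtracking-based schemes, this loop adds no extra function evaluations or condition checks: the gradient norms it reuses are computed anyway during the inner updates.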

Author Information

Kaiyi Ji (The Ohio State University)
Zhe Wang (Ohio State University)
Bowen Weng (Ohio State University)
Yi Zhou (University of Utah)
Wei Zhang (Southern University of Science and Technology)
Yingbin Liang (The Ohio State University)