In this paper we study a family of variance reduction methods with randomized batch size: at each step, the algorithm first randomly chooses the batch size and then selects a batch of samples to perform a variance-reduced stochastic update. We establish the linear convergence rate of this framework for composite functions and show that the strategy achieving the best convergence rate per data access is to always choose a batch size of 1, which is equivalent to the SAGA algorithm. However, because of cache/disk I/O effects in computer architectures, the number of data accesses does not reflect the actual running time: 1) random memory access is much slower than sequential access, and 2) when the data are too large to fit into memory, disk seeks take even longer. After taking these effects into account, a batch size of 1 is no longer optimal, so we propose a new algorithm called SAGA++ and show theoretically how to compute the optimal average batch size. Our algorithm outperforms SAGA and other existing batch and stochastic solvers on real datasets. In addition, we conduct a precise analysis comparing different update rules for variance reduction methods, showing that SAGA++ converges faster than SVRG in theory.
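The framework described above can be sketched in code. The following is a minimal illustration of a SAGA-style variance-reduced update in which the batch size is drawn at random each step, applied to a simple least-squares problem; it is not the authors' SAGA++ implementation, and the batch-size distribution, step size, and function names are illustrative assumptions.

```python
import numpy as np

def saga_random_batch(A, b, steps=10000, eta=None,
                      batch_sizes=(1, 4), probs=(0.5, 0.5), seed=0):
    """SAGA-style update with a randomly drawn batch size each step
    (sketch only; batch_sizes/probs are illustrative, not from the paper).
    Minimizes (1/2n) * ||A x - b||^2, i.e., f_i(x) = 0.5*(a_i^T x - b_i)^2."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    if eta is None:
        # conservative step size based on the largest per-sample smoothness
        eta = 1.0 / (3.0 * np.max(np.sum(A * A, axis=1)))
    x = np.zeros(d)
    # gradient table: for least squares it suffices to store residuals
    # r_i = a_i^T x_stored - b_i, since grad f_i = a_i * r_i
    residuals = A @ x - b
    grad_avg = A.T @ residuals / n  # average of stored gradients
    for _ in range(steps):
        m = rng.choice(batch_sizes, p=probs)        # random batch size
        idx = rng.choice(n, size=m, replace=False)  # random batch
        new_r = A[idx] @ x - b[idx]
        # variance-reduced gradient estimate:
        # (1/m) * sum_{i in B} (grad f_i(x) - stored grad_i) + grad_avg
        g = A[idx].T @ (new_r - residuals[idx]) / m + grad_avg
        x -= eta * g
        # refresh the stored gradients and their running average
        grad_avg += A[idx].T @ (new_r - residuals[idx]) / n
        residuals[idx] = new_r
    return x
```

With `batch_sizes=(1,)` and `probs=(1.0,)` this reduces to plain SAGA; larger average batch sizes trade a slower per-access rate for more sequential (cache-friendly) memory access, which is the trade-off the paper optimizes.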
Author Information
Xuanqing Liu (University of California, Davis)
Cho-Jui Hsieh (University of California, Davis)
Related Events (a corresponding poster, oral, or spotlight)
-
2018 Oral: Fast Variance Reduction Method with Stochastic Batch Size »
Wed. Jul 11th 03:20 -- 03:30 PM Room A9
More from the Same Authors
-
2018 Poster: Towards Fast Computation of Certified Robustness for ReLU Networks »
Tsui-Wei Weng · Huan Zhang · Hongge Chen · Zhao Song · Cho-Jui Hsieh · Luca Daniel · Duane Boning · Inderjit Dhillon -
2018 Poster: SQL-Rank: A Listwise Approach to Collaborative Ranking »
Liwei Wu · Cho-Jui Hsieh · James Sharpnack (University of California, Davis) -
2018 Poster: Extreme Learning to Rank via Low Rank Assumption »
Minhao Cheng · Ian Davidson · Cho-Jui Hsieh -
2018 Oral: Towards Fast Computation of Certified Robustness for ReLU Networks »
Tsui-Wei Weng · Huan Zhang · Hongge Chen · Zhao Song · Cho-Jui Hsieh · Luca Daniel · Duane Boning · Inderjit Dhillon -
2018 Oral: Extreme Learning to Rank via Low Rank Assumption »
Minhao Cheng · Ian Davidson · Cho-Jui Hsieh -
2018 Oral: SQL-Rank: A Listwise Approach to Collaborative Ranking »
Liwei Wu · Cho-Jui Hsieh · James Sharpnack (University of California, Davis)
2017 Poster: Gradient Boosted Decision Trees for High Dimensional Sparse Output »
Si Si · Huan Zhang · Sathiya Keerthi · Dhruv Mahajan · Inderjit Dhillon · Cho-Jui Hsieh -
2017 Talk: Gradient Boosted Decision Trees for High Dimensional Sparse Output »
Si Si · Huan Zhang · Sathiya Keerthi · Dhruv Mahajan · Inderjit Dhillon · Cho-Jui Hsieh