Poster
Variance Reduced Training with Stratified Sampling for Forecasting Models
Yucheng Lu · Youngsuk Park · Lifan Chen · Yuyang Wang · Christopher De Sa · Dean Foster

Wed Jul 21 09:00 AM -- 11:00 AM (PDT)

In large-scale time series forecasting, one often encounters the situation where the temporal patterns of time series, while drifting over time, differ from one another within the same dataset. In this paper, we provably show that under such heterogeneity, training a forecasting model with commonly used stochastic optimizers (e.g., SGD) potentially suffers large variance in gradient estimation, and thus incurs long training time. We show that this issue can be efficiently alleviated via stratification, which allows the optimizer to sample from pre-grouped time series strata. To better trade off gradient variance against computational complexity, we further propose SCott (Stochastic Stratified Control Variate Gradient Descent), a variance-reduced SGD-style optimizer that utilizes stratified sampling via control variates. In theory, we provide a convergence guarantee for SCott on smooth non-convex objectives. Empirically, we evaluate SCott and other baseline optimizers on both synthetic and real-world time series forecasting problems, and demonstrate that SCott converges faster with respect to both iterations and wall clock time.
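The abstract's core idea, combining stratified sampling with a control variate, can be illustrated on a toy problem. The paper's exact SCott update is not reproduced here; the following is a minimal sketch under assumed simplifications (two equally sized strata, a scalar least-squares objective, and an SVRG-style anchor refreshed each epoch): sample one stratum, and correct its stochastic gradient with that stratum's gradient at the anchor point plus the full anchor gradient, which keeps the estimate unbiased while shrinking its variance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy heterogeneous data: two strata whose series behave very differently.
# We minimize the pooled objective f(w) = (1/n) * sum_i 0.5 * (a_i*w - b_i)^2.
strata = [
    (rng.normal(1.0, 0.1, 50), rng.normal(2.0, 0.1, 50)),    # stratum 0
    (rng.normal(-1.0, 0.1, 50), rng.normal(-3.0, 0.1, 50)),  # stratum 1
]

def stratum_grad(w, a, b):
    """Gradient of the stratum's average loss at w."""
    return np.mean(a * (a * w - b))

def stratified_cv_step(w, anchor_w, anchor_grads, lr=0.1):
    """One stratified control-variate step (illustrative, not the paper's
    exact SCott rule): sample a stratum uniformly, then correct its gradient
    using the same stratum's gradient at the anchor point."""
    k = rng.integers(len(strata))
    a, b = strata[k]
    g = (stratum_grad(w, a, b)
         - stratum_grad(anchor_w, a, b)   # control variate: cancels
         + np.mean(anchor_grads))         # stratum-level heterogeneity
    return w - lr * g

w = 0.0
for epoch in range(20):
    anchor_w = w  # refresh the anchor and its per-stratum gradients
    anchor_grads = [stratum_grad(anchor_w, a, b) for a, b in strata]
    for _ in range(10):
        w = stratified_cv_step(w, anchor_w, anchor_grads)

# Closed-form minimizer of the pooled least-squares objective, for comparison.
a_all = np.concatenate([a for a, _ in strata])
b_all = np.concatenate([b for _, b in strata])
w_star = np.dot(a_all, b_all) / np.dot(a_all, a_all)
```

Because the two strata have similar curvature but very different gradients, the control variate cancels most of the between-strata variance that plain SGD would suffer, and the iterate converges close to the pooled minimizer.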

Author Information

Yucheng Lu (Cornell University)
Youngsuk Park (Amazon Research)
Lifan Chen (Amazon)
Bernie Wang (AWS AI Labs)
Christopher De Sa (Cornell)
Dean Foster (Amazon)
