

SGD without Replacement: Sharper Rates for General Smooth Convex Functions

Dheeraj Nagaraj · Prateek Jain · Praneeth Netrapalli

Pacific Ballroom #202

Keywords: [ Large Scale Learning and Big Data ] [ Convex Optimization ]

Abstract: We study stochastic gradient descent {\em without replacement} (SGDo) for smooth convex functions. SGDo is widely observed to converge faster than true SGD, where each sample is drawn independently {\em with replacement} (Bottou, 2009), and hence is more popular in practice. But its convergence properties are not well understood, as sampling without replacement leads to coupling between iterates and gradients. By using the method of exchangeable pairs to bound the Wasserstein distance, we provide the first non-asymptotic results for SGDo when applied to {\em general smooth, strongly-convex} functions. In particular, we show that SGDo converges at a rate of $O(1/K^2)$ while SGD is known to converge at an $O(1/K)$ rate, where $K$ denotes the number of passes over the data and is required to be {\em large enough}. Existing results for SGDo in this setting require an additional {\em Hessian Lipschitz assumption} (Gurbuzbalaban et al., 2015; HaoChen and Sra, 2018). For {\em small} $K$, we show SGDo can achieve the same convergence rate as SGD for {\em general smooth, strongly-convex} functions. Existing results in this setting require $K=1$ and hold only for generalized linear models (Shamir, 2016). In addition, by careful analysis of the coupling, for both large and small $K$, we obtain better dependence on problem-dependent parameters such as the condition number.
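The distinction the abstract draws between the two sampling schemes can be made concrete with a small sketch (an illustration of the two samplers on a least-squares objective, not the paper's algorithm or analysis; the function names and the toy problem are assumptions for illustration):

```python
import numpy as np

def sgd_with_replacement(X, y, lr, num_passes, rng):
    """Vanilla SGD: each step draws an index i.i.d. uniformly,
    WITH replacement, so iterates and gradients stay independent."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(num_passes * n):
        i = rng.integers(n)
        grad = (X[i] @ w - y[i]) * X[i]  # per-sample least-squares gradient
        w -= lr * grad
    return w

def sgd_without_replacement(X, y, lr, num_passes, rng):
    """SGDo: each of the K passes uses a fresh random permutation,
    so every sample appears exactly once per pass. This couples the
    iterates to the gradients, which is what complicates the analysis."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(num_passes):
        for i in rng.permutation(n):
            grad = (X[i] @ w - y[i]) * X[i]
            w -= lr * grad
    return w
```

Here `num_passes` plays the role of $K$ in the abstract: SGDo touches each of the $n$ samples once per pass, while with-replacement SGD may revisit some samples and miss others within the same budget of $K \cdot n$ steps.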
