Oral
SGD without Replacement: Sharper Rates for General Smooth Convex Functions
Dheeraj Nagaraj · Prateek Jain · Praneeth Netrapalli
We study stochastic gradient descent {\em without replacement} (\sgdwor) for smooth convex functions. \sgdwor is widely observed to converge faster than true \sgd, where each sample is drawn independently {\em with replacement}~\cite{bottou2009curiously}, and hence is more popular in practice. However, its convergence properties are not well understood, as sampling without replacement leads to coupling between iterates and gradients. By using the method of exchangeable pairs to bound the Wasserstein distance, we provide the first non-asymptotic results for \sgdwor applied to {\em general smooth, strongly convex} functions. In particular, we show that \sgdwor converges at a rate of $O(1/K^2)$, while \sgd~is known to converge at a rate of $O(1/K)$, where $K$ denotes the number of passes over the data and is required to be {\em large enough}. Existing results for \sgdwor in this setting require an additional {\em Hessian Lipschitz assumption}~\cite{gurbuzbalaban2015random,haochen2018random}.
For {\em small} $K$, we show that \sgdwor achieves the same convergence rate as \sgd for {\em general smooth, strongly convex} functions. Existing results in this setting require $K=1$ and hold only for generalized linear models~\cite{shamir2016without}. In addition, through a careful analysis of the coupling, we obtain better dependence on problem-dependent parameters, such as the condition number, for both large and small $K$.
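The difference between the two sampling schemes can be sketched on a toy least-squares instance. This is a hypothetical illustration, not the paper's algorithm or experiments: the problem size, step size, and noise level are arbitrary choices, and with a constant step size one typically observes the without-replacement (random reshuffling) iterate landing closer to the minimizer, without this demonstrating the $O(1/K^2)$ rate itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic problem: least squares f(w) = (1/2n)||Aw - b||^2,
# which is smooth and strongly convex (w.h.p.) for Gaussian A with n >> d.
n, d = 200, 5
A = rng.normal(size=(n, d))
b = A @ rng.normal(size=d) + 0.5 * rng.normal(size=n)  # noisy labels
w_opt, *_ = np.linalg.lstsq(A, b, rcond=None)          # exact minimizer

def grad_i(w, i):
    # Gradient of the i-th component function (1/2)(a_i^T w - b_i)^2.
    return (A[i] @ w - b[i]) * A[i]

def sgd(epochs, lr, with_replacement):
    # One epoch = one pass over the data (n stochastic gradient steps),
    # so `epochs` plays the role of K in the abstract.
    w = np.zeros(d)
    for _ in range(epochs):
        if with_replacement:
            order = rng.integers(0, n, size=n)  # i.i.d. sampling: true SGD
        else:
            order = rng.permutation(n)          # random reshuffling: SGD w/o replacement
        for i in order:
            w = w - lr * grad_i(w, i)
    return w

err_with = np.linalg.norm(sgd(20, 0.05, with_replacement=True) - w_opt)
err_without = np.linalg.norm(sgd(20, 0.05, with_replacement=False) - w_opt)
print(f"with replacement:    {err_with:.4f}")
print(f"without replacement: {err_without:.4f}")
```

The only difference between the two runs is one line: an i.i.d. index draw versus a fresh permutation per pass. The coupling the abstract refers to arises in the second case, since within a pass each gradient depends on which samples have already been used.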
Author Information
Dheeraj Nagaraj (Massachusetts Institute of Technology)
Prateek Jain (Microsoft Research)
Praneeth Netrapalli (Microsoft Research)
Related Events (a corresponding poster, oral, or spotlight)
- 2019 Poster: SGD without Replacement: Sharper Rates for General Smooth Convex Functions »
  Thu. Jun 13th 01:30 -- 04:00 AM, Room Pacific Ballroom #202
More from the Same Authors
- 2020: Discussion Panel »
  Krzysztof Dembczynski · Prateek Jain · Alina Beygelzimer · Inderjit Dhillon · Anna Choromanska · Maryam Majzoubi · Yashoteja Prabhu · John Langford
- 2020 Poster: Soft Threshold Weight Reparameterization for Learnable Sparsity »
  Aditya Kusupati · Vivek Ramanujan · Raghav Somani · Mitchell Wortsman · Prateek Jain · Sham Kakade · Ali Farhadi
- 2020 Poster: Optimization and Analysis of the pAp@k Metric for Recommender Systems »
  Gaurush Hiranandani · Warut Vijitbenjaronk · Sanmi Koyejo · Prateek Jain
- 2020 Poster: DROCC: Deep Robust One-Class Classification »
  Sachin Goyal · Aditi Raghunathan · Moksh Jain · Harsha Vardhan Simhadri · Prateek Jain
- 2020 Poster: What is Local Optimality in Nonconvex-Nonconcave Minimax Optimization? »
  Chi Jin · Praneeth Netrapalli · Michael Jordan
- 2020 Poster: Efficient Domain Generalization via Common-Specific Low-Rank Decomposition »
  Vihari Piratla · Praneeth Netrapalli · Sunita Sarawagi
- 2018 Poster: Differentially Private Matrix Completion Revisited »
  Prateek Jain · Om Dipakbhai Thakkar · Abhradeep Thakurta
- 2018 Oral: Differentially Private Matrix Completion Revisited »
  Prateek Jain · Om Dipakbhai Thakkar · Abhradeep Thakurta
- 2017 Workshop: ML on a budget: IoT, Mobile and other tiny-ML applications »
  Manik Varma · Venkatesh Saligrama · Prateek Jain
- 2017 Poster: ProtoNN: Compressed and Accurate kNN for Resource-scarce Devices »
  Chirag Gupta · Arun Suggala · Ankit Goyal · Saurabh Goyal · Ashish Kumar · Bhargavi Paranjape · Harsha Vardhan Simhadri · Raghavendra Udupa · Manik Varma · Prateek Jain
- 2017 Poster: How to Escape Saddle Points Efficiently »
  Chi Jin · Rong Ge · Praneeth Netrapalli · Sham Kakade · Michael Jordan
- 2017 Talk: How to Escape Saddle Points Efficiently »
  Chi Jin · Rong Ge · Praneeth Netrapalli · Sham Kakade · Michael Jordan
- 2017 Talk: ProtoNN: Compressed and Accurate kNN for Resource-scarce Devices »
  Chirag Gupta · Arun Suggala · Ankit Goyal · Saurabh Goyal · Ashish Kumar · Bhargavi Paranjape · Harsha Vardhan Simhadri · Raghavendra Udupa · Manik Varma · Prateek Jain
- 2017 Poster: Recovery Guarantees for One-hidden-layer Neural Networks »
  Kai Zhong · Zhao Song · Prateek Jain · Peter Bartlett · Inderjit Dhillon
- 2017 Poster: Nearly Optimal Robust Matrix Completion »
  Yeshwanth Cherapanamjeri · Prateek Jain · Kartik Gupta
- 2017 Poster: Active Heteroscedastic Regression »
  Kamalika Chaudhuri · Prateek Jain · Nagarajan Natarajan
- 2017 Talk: Active Heteroscedastic Regression »
  Kamalika Chaudhuri · Prateek Jain · Nagarajan Natarajan
- 2017 Talk: Nearly Optimal Robust Matrix Completion »
  Yeshwanth Cherapanamjeri · Prateek Jain · Kartik Gupta
- 2017 Talk: Recovery Guarantees for One-hidden-layer Neural Networks »
  Kai Zhong · Zhao Song · Prateek Jain · Peter Bartlett · Inderjit Dhillon