Oral
Nonconvex Variance Reduced Optimization with Arbitrary Sampling
Samuel Horvath · Peter Richtarik

Tue Jun 11 11:35 AM -- 11:40 AM (PDT) @ Room 104

We provide the first importance sampling variants of variance-reduced algorithms for empirical risk minimization with non-convex loss functions. In particular, we analyze non-convex versions of SVRG, SAGA and SARAH. Our methods can speed up training by an order of magnitude compared to the state of the art on real datasets. Moreover, we improve upon the current mini-batch analysis of these methods by proposing importance sampling for mini-batches in this setting. Surprisingly, our approach can in some regimes lead to superlinear speedup with respect to the mini-batch size, which is not usually present in stochastic optimization. All the above results follow from a general analysis of the methods that works with arbitrary sampling, i.e., a fully general randomized strategy for selecting the subset of examples to be sampled in each iteration. Finally, we also perform a novel importance sampling analysis of SARAH in the convex setting.
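To illustrate the mechanics of importance sampling in a variance-reduced method, here is a minimal sketch, not the paper's exact algorithm: an SVRG-style loop where example i is drawn with probability proportional to its smoothness constant L_i, and the gradient estimator is reweighted by 1/(n p_i) to stay unbiased. The least-squares loss, the probabilities p_i ∝ ||a_i||^2, and all function names (run_svrg_is, grad_i) are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of SVRG with single-example importance sampling.
# Assumes f_i(w) = 0.5 * (a_i^T w - b_i)^2, so L_i = ||a_i||^2 and p_i ∝ L_i.
import numpy as np

def grad_i(w, A, b, i):
    """Gradient of the i-th loss f_i(w) = 0.5 * (a_i^T w - b_i)^2."""
    return A[i] * (A[i] @ w - b[i])

def run_svrg_is(A, b, eta=0.01, outer_iters=20, inner_iters=None, seed=0):
    rng = np.random.default_rng(seed)
    n, d = A.shape
    m = inner_iters or n                      # inner loop length
    L = np.sum(A * A, axis=1)                 # per-example smoothness L_i = ||a_i||^2
    p = L / L.sum()                           # importance sampling probabilities
    w = np.zeros(d)
    for _ in range(outer_iters):
        w_snap = w.copy()
        full_grad = A.T @ (A @ w_snap - b) / n    # full gradient at the snapshot
        for _ in range(m):
            i = rng.choice(n, p=p)
            # Unbiased SVRG estimator: the 1/(n*p_i) weight corrects for
            # non-uniform sampling, so the estimator's mean is the true gradient at w.
            corr = 1.0 / (n * p[i])
            g = corr * (grad_i(w, A, b, i) - grad_i(w_snap, A, b, i)) + full_grad
            w -= eta * g
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((200, 10))
    b = rng.standard_normal(200)
    w = run_svrg_is(A, b)
    print("final objective:", 0.5 * np.mean((A @ w - b) ** 2))
```

The same reweighting idea extends to mini-batches and to the arbitrary sampling framework described in the abstract, where any distribution over subsets of examples can be plugged in as long as the estimator is corrected to remain unbiased.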

Author Information

Samuel Horvath (KAUST)

I am currently an Assistant Professor of Machine Learning at the Mohamed bin Zayed University of Artificial Intelligence (MBZUAI). Prior to that, I completed my M.S. and Ph.D. in Statistics at King Abdullah University of Science and Technology (KAUST), advised by Professor Peter Richtárik. Before that, I was an undergraduate student in Financial Mathematics at Comenius University. In my research, I focus on providing a fundamental understanding of how distributed and federated training algorithms work and how they interact with different sources of heterogeneity, such as system-level variability in the computing infrastructure and statistical variability in the training data. Inspired by these theoretical insights, I aim to design efficient and practical distributed/federated training algorithms. I am broadly interested in federated learning, distributed optimization, and efficient deep learning.

Peter Richtarik (KAUST)

Peter Richtarik is an Associate Professor of Computer Science and Mathematics at KAUST and an Associate Professor of Mathematics at the University of Edinburgh. He is an EPSRC Fellow in Mathematical Sciences, Fellow of the Alan Turing Institute, and is affiliated with the Visual Computing Center and the Extreme Computing Research Center at KAUST. Dr. Richtarik received his PhD from Cornell University in 2007, and then worked as a Postdoctoral Fellow in Louvain, Belgium, before joining Edinburgh in 2009, and KAUST in 2017. Dr. Richtarik's research interests lie at the intersection of mathematics, computer science, machine learning, optimization, numerical linear algebra, high performance computing and applied probability. Through his recent work on randomized decomposition algorithms (such as randomized coordinate descent methods, stochastic gradient descent methods and their numerous extensions, improvements and variants), he has contributed to the foundations of the emerging field of big data optimization, randomized numerical linear algebra, and stochastic methods for empirical risk minimization. Several of his papers attracted international awards, including the SIAM SIGEST Best Paper Award, the IMA Leslie Fox Prize (2nd prize, twice), and the INFORMS Computing Society Best Student Paper Award (sole runner up). He is the founder and organizer of the Optimization and Big Data workshop series.
