Poster
High-Probability Bounds for Stochastic Optimization and Variational Inequalities: the Case of Unbounded Variance
Abdurakhmon Sadiev · Marina Danilova · Eduard Gorbunov · Samuel Horváth · Gauthier Gidel · Pavel Dvurechenskii · Alexander Gasnikov · Peter Richtarik

Thu Jul 27 04:30 PM -- 06:00 PM (PDT) @ Exhibit Hall 1 #638
In recent years, the interest of the optimization and machine learning communities in high-probability convergence of stochastic optimization methods has been growing. One of the main reasons for this is that high-probability complexity bounds are more accurate and less studied than in-expectation ones. However, state-of-the-art high-probability non-asymptotic convergence results are derived under strong assumptions such as boundedness of the gradient noise variance or of the objective's gradient itself. In this paper, we propose several algorithms with high-probability convergence results under less restrictive assumptions. In particular, we derive new high-probability convergence results under the assumption that the gradient/operator noise has bounded central $\alpha$-th moment for $\alpha \in (1,2]$ in the following setups: (i) smooth non-convex / Polyak-Łojasiewicz / convex / strongly convex / quasi-strongly convex minimization problems, (ii) Lipschitz / star-cocoercive and monotone / quasi-strongly monotone variational inequalities. These results justify the use of the considered methods for solving problems that do not fit standard functional classes studied in stochastic optimization.
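For concreteness, the bounded central $\alpha$-th moment condition referenced above is typically stated (in notation that is illustrative here rather than copied from the paper) as
$$\mathbb{E}_{\xi}\big[\|\nabla f(x,\xi) - \nabla f(x)\|^{\alpha}\big] \le \sigma^{\alpha} \quad \text{for all } x, \qquad \alpha \in (1,2],$$
where $\nabla f(x,\xi)$ denotes the stochastic gradient (or, for variational inequalities, the stochastic operator value) and $\sigma > 0$. Taking $\alpha = 2$ recovers the standard bounded-variance assumption, while $\alpha < 2$ allows heavy-tailed noise whose variance may be unbounded.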

Author Information

Abdurakhmon Sadiev (King Abdullah University of Science and Technology)
Marina Danilova (Moscow Institute of Physics and Technology)
Eduard Gorbunov (Mohamed bin Zayed University of Artificial Intelligence)
Samuel Horváth (Mohamed bin Zayed University of Artificial Intelligence)
Gauthier Gidel (Mila)
Pavel Dvurechenskii (Weierstrass Institute)

He graduated with honors from the Moscow Institute of Physics and Technology and received his PhD on differential games from the same university. He is currently a research associate in Berlin, working on optimization under inexact information. His research interests include algorithms for convex and non-convex large-scale optimization problems; optimization under deterministic and stochastic inexact information; randomized algorithms such as random coordinate descent and random (derivative-free) directional search; numerical aspects of optimal transport; and algorithms for saddle-point problems and variational inequalities.

Alexander Gasnikov (Moscow Institute of Physics and Technology)
Peter Richtarik (KAUST)

Peter Richtarik is an Associate Professor of Computer Science and Mathematics at KAUST and an Associate Professor of Mathematics at the University of Edinburgh. He is an EPSRC Fellow in Mathematical Sciences, a Fellow of the Alan Turing Institute, and is affiliated with the Visual Computing Center and the Extreme Computing Research Center at KAUST. Dr. Richtarik received his PhD from Cornell University in 2007, then worked as a Postdoctoral Fellow in Louvain, Belgium, before joining Edinburgh in 2009 and KAUST in 2017. Dr. Richtarik's research interests lie at the intersection of mathematics, computer science, machine learning, optimization, numerical linear algebra, high performance computing and applied probability. Through his recent work on randomized decomposition algorithms (such as randomized coordinate descent methods, stochastic gradient descent methods and their numerous extensions, improvements and variants), he has contributed to the foundations of the emerging field of big data optimization, randomized numerical linear algebra, and stochastic methods for empirical risk minimization. Several of his papers have attracted international awards, including the SIAM SIGEST Best Paper Award, the IMA Leslie Fox Prize (2nd prize, twice), and the INFORMS Computing Society Best Student Paper Award (sole runner-up). He is the founder and organizer of the Optimization and Big Data workshop series.
