Author Information
Abdurakhmon Sadiev (King Abdullah University of Science and Technology)
Marina Danilova (Moscow Institute of Physics and Technology)
Eduard Gorbunov (Mohamed bin Zayed University of Artificial Intelligence)
Samuel Horváth (Mohamed bin Zayed University of Artificial Intelligence)
Gauthier Gidel (Mila)
Pavel Dvurechenskii (Weierstrass Institute)
Graduated with honors from the Moscow Institute of Physics and Technology, where he also completed a PhD on differential games. Currently a research associate in Berlin working on optimization under inexact information. Research interests include: algorithms for convex and non-convex large-scale optimization problems; optimization under deterministic and stochastic inexact information; randomized algorithms such as random coordinate descent and random (derivative-free) directional search; numerical aspects of optimal transport; and algorithms for saddle-point problems and variational inequalities.
Alexander Gasnikov (Moscow Institute of Physics and Technology)
Peter Richtarik (KAUST)
Peter Richtarik is an Associate Professor of Computer Science and Mathematics at KAUST and an Associate Professor of Mathematics at the University of Edinburgh. He is an EPSRC Fellow in Mathematical Sciences, Fellow of the Alan Turing Institute, and is affiliated with the Visual Computing Center and the Extreme Computing Research Center at KAUST. Dr. Richtarik received his PhD from Cornell University in 2007, and then worked as a Postdoctoral Fellow in Louvain, Belgium, before joining Edinburgh in 2009, and KAUST in 2017. Dr. Richtarik's research interests lie at the intersection of mathematics, computer science, machine learning, optimization, numerical linear algebra, high performance computing and applied probability. Through his recent work on randomized decomposition algorithms (such as randomized coordinate descent methods, stochastic gradient descent methods and their numerous extensions, improvements and variants), he has contributed to the foundations of the emerging field of big data optimization, randomized numerical linear algebra, and stochastic methods for empirical risk minimization. Several of his papers attracted international awards, including the SIAM SIGEST Best Paper Award, the IMA Leslie Fox Prize (2nd prize, twice), and the INFORMS Computing Society Best Student Paper Award (sole runner up). He is the founder and organizer of the Optimization and Big Data workshop series.
More from the Same Authors
-
2021 : EF21: A New, Simpler, Theoretically Better, and Practically Faster Error Feedback »
Peter Richtarik · Ilyas Fatkhullin -
2023 : Kernel Mirror Prox and RKHS Gradient Flow for Mixed Functional Nash Equilibrium »
Pavel Dvurechenskii · Jia-Jie Zhu -
2023 : Improving Accelerated Federated Learning with Compression and Importance Sampling »
Michał Grudzień · Grigory Malinovsky · Peter Richtarik -
2023 : Federated Learning with Regularized Client Participation »
Grigory Malinovsky · Samuel Horváth · Konstantin Burlachenko · Peter Richtarik -
2023 : Federated Optimization Algorithms with Random Reshuffling and Gradient Compression »
Abdurakhmon Sadiev · Grigory Malinovsky · Eduard Gorbunov · Igor Sokolov · Ahmed Khaled · Konstantin Burlachenko · Peter Richtarik -
2023 : Momentum Provably Improves Error Feedback! »
Ilyas Fatkhullin · Alexander Tyurin · Peter Richtarik -
2023 : ELF: Federated Langevin Algorithms with Primal, Dual and Bidirectional Compression »
Avetik Karagulyan · Peter Richtarik -
2023 : Towards a Better Theoretical Understanding of Independent Subnetwork Training »
Egor Shulgin · Peter Richtarik -
2023 : Convergence of First-Order Algorithms for Meta-Learning with Moreau Envelopes »
Konstantin Mishchenko · Slavomír Hanzely · Peter Richtarik -
2023 Poster: Is Consensus Acceleration Possible in Decentralized Optimization over Slowly Time-Varying Networks? »
Dmitry Metelev · Alexander Rogozin · Dmitry Kovalev · Alexander Gasnikov -
2023 Poster: Nesterov Meets Optimism: Rate-Optimal Separable Minimax Optimization »
Chris Junchi Li · Huizhuo Yuan · Gauthier Gidel · Quanquan Gu · Michael Jordan -
2023 Poster: EF21-P and Friends: Improved Theoretical Communication Complexity for Distributed Optimization with Bidirectional Compression »
Kaja Gruntkowska · Alexander Tyurin · Peter Richtarik -
2023 Poster: Convergence of Proximal Point and Extragradient-Based Methods Beyond Monotonicity: the Case of Negative Comonotonicity »
Eduard Gorbunov · Adrien Taylor · Samuel Horváth · Gauthier Gidel -
2023 : Omega: Optimistic EMA Gradients »
Juan Ramirez · Rohan Sukumaran · Quentin Bertrand · Gauthier Gidel -
2022 : The ICML 2022 Expressive Vocalizations Workshop and Competition: Recognizing, Generating, and Personalizing Vocal Bursts »
Alice Baird · Panagiotis Tzirakis · Alan Cowen · Gauthier Gidel · Marco Jiralerspong · Eilif Muller · Kory Mathewson · Bjoern Schuller · Erik Cambria · Dacher Keltner -
2022 Poster: Proximal and Federated Random Reshuffling »
Konstantin Mishchenko · Ahmed Khaled · Peter Richtarik -
2022 Poster: A Convergence Theory for SVGD in the Population Limit under Talagrand's Inequality T1 »
Adil Salim · Lukang Sun · Peter Richtarik -
2022 Poster: 3PC: Three Point Compressors for Communication-Efficient Distributed Training and a Better Theory for Lazy Aggregation »
Peter Richtarik · Igor Sokolov · Elnur Gasanov · Ilyas Fatkhullin · Zhize Li · Eduard Gorbunov -
2022 Spotlight: A Convergence Theory for SVGD in the Population Limit under Talagrand's Inequality T1 »
Adil Salim · Lukang Sun · Peter Richtarik -
2022 Spotlight: Proximal and Federated Random Reshuffling »
Konstantin Mishchenko · Ahmed Khaled · Peter Richtarik -
2022 Spotlight: 3PC: Three Point Compressors for Communication-Efficient Distributed Training and a Better Theory for Lazy Aggregation »
Peter Richtarik · Igor Sokolov · Elnur Gasanov · Ilyas Fatkhullin · Zhize Li · Eduard Gorbunov -
2022 Poster: ProxSkip: Yes! Local Gradient Steps Provably Lead to Communication Acceleration! Finally! »
Konstantin Mishchenko · Grigory Malinovsky · Sebastian Stich · Peter Richtarik -
2022 Poster: The power of first-order smooth optimization for black-box non-smooth problems »
Alexander Gasnikov · Anton Novitskii · Vasilii Novitskii · Farshed Abdukhakimov · Dmitry Kamzolov · Aleksandr Beznosikov · Martin Takac · Pavel Dvurechenskii · Bin Gu -
2022 Spotlight: The power of first-order smooth optimization for black-box non-smooth problems »
Alexander Gasnikov · Anton Novitskii · Vasilii Novitskii · Farshed Abdukhakimov · Dmitry Kamzolov · Aleksandr Beznosikov · Martin Takac · Pavel Dvurechenskii · Bin Gu -
2022 Spotlight: ProxSkip: Yes! Local Gradient Steps Provably Lead to Communication Acceleration! Finally! »
Konstantin Mishchenko · Grigory Malinovsky · Sebastian Stich · Peter Richtarik -
2022 Poster: FedNL: Making Newton-Type Methods Applicable to Federated Learning »
Mher Safaryan · Rustem Islamov · Xun Qian · Peter Richtarik -
2022 Poster: Only tails matter: Average-Case Universality and Robustness in the Convex Regime »
Leonardo Cunha · Gauthier Gidel · Fabian Pedregosa · Damien Scieur · Courtney Paquette -
2022 Spotlight: FedNL: Making Newton-Type Methods Applicable to Federated Learning »
Mher Safaryan · Rustem Islamov · Xun Qian · Peter Richtarik -
2022 Spotlight: Only tails matter: Average-Case Universality and Robustness in the Convex Regime »
Leonardo Cunha · Gauthier Gidel · Fabian Pedregosa · Damien Scieur · Courtney Paquette -
2021 : Closing Remarks »
Shiqiang Wang · Nathalie Baracaldo · Olivia Choudhury · Gauri Joshi · Peter Richtarik · Praneeth Vepakomma · Han Yu -
2021 : Opening Remarks »
Shiqiang Wang · Nathalie Baracaldo · Olivia Choudhury · Gauri Joshi · Peter Richtarik · Praneeth Vepakomma · Han Yu -
2021 Poster: ADOM: Accelerated Decentralized Optimization Method for Time-Varying Networks »
Dmitry Kovalev · Egor Shulgin · Peter Richtarik · Alexander Rogozin · Alexander Gasnikov -
2021 Spotlight: ADOM: Accelerated Decentralized Optimization Method for Time-Varying Networks »
Dmitry Kovalev · Egor Shulgin · Peter Richtarik · Alexander Rogozin · Alexander Gasnikov -
2021 Poster: MARINA: Faster Non-Convex Distributed Learning with Compression »
Eduard Gorbunov · Konstantin Burlachenko · Zhize Li · Peter Richtarik -
2021 Spotlight: MARINA: Faster Non-Convex Distributed Learning with Compression »
Eduard Gorbunov · Konstantin Burlachenko · Zhize Li · Peter Richtarik -
2021 Poster: On a Combination of Alternating Minimization and Nesterov's Momentum »
Sergey Guminov · Pavel Dvurechenskii · Nazarii Tupitsa · Alexander Gasnikov -
2021 Spotlight: On a Combination of Alternating Minimization and Nesterov's Momentum »
Sergey Guminov · Pavel Dvurechenskii · Nazarii Tupitsa · Alexander Gasnikov -
2021 Poster: PAGE: A Simple and Optimal Probabilistic Gradient Estimator for Nonconvex Optimization »
Zhize Li · Hongyan Bao · Xiangliang Zhang · Peter Richtarik -
2021 Poster: Stochastic Sign Descent Methods: New Algorithms and Better Theory »
Mher Safaryan · Peter Richtarik -
2021 Poster: Distributed Second Order Methods with Fast Rates and Compressed Communication »
Rustem Islamov · Xun Qian · Peter Richtarik -
2021 Poster: Newton Method over Networks is Fast up to the Statistical Precision »
Amir Daneshmand · Gesualdo Scutari · Pavel Dvurechenskii · Alexander Gasnikov -
2021 Spotlight: Distributed Second Order Methods with Fast Rates and Compressed Communication »
Rustem Islamov · Xun Qian · Peter Richtarik -
2021 Oral: PAGE: A Simple and Optimal Probabilistic Gradient Estimator for Nonconvex Optimization »
Zhize Li · Hongyan Bao · Xiangliang Zhang · Peter Richtarik -
2021 Spotlight: Newton Method over Networks is Fast up to the Statistical Precision »
Amir Daneshmand · Gesualdo Scutari · Pavel Dvurechenskii · Alexander Gasnikov -
2021 Spotlight: Stochastic Sign Descent Methods: New Algorithms and Better Theory »
Mher Safaryan · Peter Richtarik -
2020 Poster: Stochastic Subspace Cubic Newton Method »
Filip Hanzely · Nikita Doikov · Yurii Nesterov · Peter Richtarik -
2020 Poster: Variance Reduced Coordinate Descent with Acceleration: New Method With a Surprising Application to Finite-Sum Problems »
Filip Hanzely · Dmitry Kovalev · Peter Richtarik -
2020 Poster: Acceleration for Compressed Gradient Descent in Distributed and Federated Optimization »
Zhize Li · Dmitry Kovalev · Xun Qian · Peter Richtarik -
2020 Poster: From Local SGD to Local Fixed-Point Methods for Federated Learning »
Grigory Malinovsky · Dmitry Kovalev · Elnur Gasanov · Laurent CONDAT · Peter Richtarik -
2020 Poster: Self-Concordant Analysis of Frank-Wolfe Algorithms »
Pavel Dvurechenskii · Petr Ostroukhov · Kamil Safin · Shimrit Shtern · Mathias Staudigl -
2019 Poster: On the Complexity of Approximating Wasserstein Barycenters »
Alexey Kroshnin · Nazarii Tupitsa · Darina Dvinskikh · Pavel Dvurechenskii · Alexander Gasnikov · Cesar Uribe -
2019 Oral: On the Complexity of Approximating Wasserstein Barycenters »
Alexey Kroshnin · Nazarii Tupitsa · Darina Dvinskikh · Pavel Dvurechenskii · Alexander Gasnikov · Cesar Uribe -
2019 Poster: Nonconvex Variance Reduced Optimization with Arbitrary Sampling »
Samuel Horvath · Peter Richtarik -
2019 Poster: SAGA with Arbitrary Sampling »
Xun Qian · Zheng Qu · Peter Richtarik -
2019 Poster: SGD: General Analysis and Improved Rates »
Robert Gower · Nicolas Loizou · Xun Qian · Alibek Sailanbayev · Egor Shulgin · Peter Richtarik -
2019 Oral: SAGA with Arbitrary Sampling »
Xun Qian · Zheng Qu · Peter Richtarik -
2019 Oral: SGD: General Analysis and Improved Rates »
Robert Gower · Nicolas Loizou · Xun Qian · Alibek Sailanbayev · Egor Shulgin · Peter Richtarik -
2019 Oral: Nonconvex Variance Reduced Optimization with Arbitrary Sampling »
Samuel Horvath · Peter Richtarik -
2018 Poster: SGD and Hogwild! Convergence Without the Bounded Gradients Assumption »
Lam Nguyen · Phuong Ha Nguyen · Marten van Dijk · Peter Richtarik · Katya Scheinberg · Martin Takac -
2018 Poster: Computational Optimal Transport: Complexity by Accelerated Gradient Descent Is Better Than by Sinkhorn's Algorithm »
Pavel Dvurechenskii · Alexander Gasnikov · Alexey Kroshnin -
2018 Oral: SGD and Hogwild! Convergence Without the Bounded Gradients Assumption »
Lam Nguyen · Phuong Ha Nguyen · Marten van Dijk · Peter Richtarik · Katya Scheinberg · Martin Takac -
2018 Oral: Computational Optimal Transport: Complexity by Accelerated Gradient Descent Is Better Than by Sinkhorn's Algorithm »
Pavel Dvurechenskii · Alexander Gasnikov · Alexey Kroshnin