Author Information
- Konstantin Mishchenko (CNRS)
- Grigory Malinovsky (KAUST)
- Sebastian Stich (CISPA Helmholtz Center for Information Security gGmbH)
- Peter Richtarik (KAUST)
Peter Richtarik is an Associate Professor of Computer Science and Mathematics at KAUST and an Associate Professor of Mathematics at the University of Edinburgh. He is an EPSRC Fellow in Mathematical Sciences, a Fellow of the Alan Turing Institute, and is affiliated with the Visual Computing Center and the Extreme Computing Research Center at KAUST. Dr. Richtarik received his PhD from Cornell University in 2007 and then worked as a Postdoctoral Fellow in Louvain, Belgium, before joining Edinburgh in 2009 and KAUST in 2017. His research interests lie at the intersection of mathematics, computer science, machine learning, optimization, numerical linear algebra, high-performance computing, and applied probability. Through his recent work on randomized decomposition algorithms (such as randomized coordinate descent methods, stochastic gradient descent methods, and their numerous extensions, improvements, and variants), he has contributed to the foundations of the emerging field of big data optimization, randomized numerical linear algebra, and stochastic methods for empirical risk minimization. Several of his papers have attracted international awards, including the SIAM SIGEST Best Paper Award, the IMA Leslie Fox Prize (2nd prize, twice), and the INFORMS Computing Society Best Student Paper Award (sole runner-up). He is the founder and organizer of the Optimization and Big Data workshop series.
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Poster: ProxSkip: Yes! Local Gradient Steps Provably Lead to Communication Acceleration! Finally!
  Wed, Jul 20 – Thu, Jul 21 · Hall E #710
More from the Same Authors
- 2021: EF21: A New, Simpler, Theoretically Better, and Practically Faster Error Feedback
  Peter Richtarik · Ilyas Fatkhullin
- 2022: The Gap Between Continuous and Discrete Gradient Descent
  Amirkeivan Mohtashami · Martin Jaggi · Sebastian Stich
- 2023: Improving Accelerated Federated Learning with Compression and Importance Sampling
  Michał Grudzień · Grigory Malinovsky · Peter Richtarik
- 2023: Federated Learning with Regularized Client Participation
  Grigory Malinovsky · Samuel Horváth · Konstantin Burlachenko · Peter Richtarik
- 2023: Federated Optimization Algorithms with Random Reshuffling and Gradient Compression
  Abdurakhmon Sadiev · Grigory Malinovsky · Eduard Gorbunov · Igor Sokolov · Ahmed Khaled · Konstantin Burlachenko · Peter Richtarik
- 2023: Momentum Provably Improves Error Feedback!
  Ilyas Fatkhullin · Alexander Tyurin · Peter Richtarik
- 2023: ELF: Federated Langevin Algorithms with Primal, Dual and Bidirectional Compression
  Avetik Karagulyan · Peter Richtarik
- 2023: Towards a Better Theoretical Understanding of Independent Subnetwork Training
  Egor Shulgin · Peter Richtarik
- 2023: Convergence of First-Order Algorithms for Meta-Learning with Moreau Envelopes
  Konstantin Mishchenko · Slavomír Hanzely · Peter Richtarik
- 2023 Oral: Learning-Rate-Free Learning by D-Adaptation
  Aaron Defazio · Konstantin Mishchenko
- 2023 Poster: High-Probability Bounds for Stochastic Optimization and Variational Inequalities: the Case of Unbounded Variance
  Abdurakhmon Sadiev · Marina Danilova · Eduard Gorbunov · Samuel Horváth · Gauthier Gidel · Pavel Dvurechenskii · Alexander Gasnikov · Peter Richtarik
- 2023 Poster: Special Properties of Gradient Descent with Large Learning Rates
  Amirkeivan Mohtashami · Martin Jaggi · Sebastian Stich
- 2023 Poster: Two Losses Are Better Than One: Faster Optimization Using a Cheaper Proxy
  Blake Woodworth · Konstantin Mishchenko · Francis Bach
- 2023 Poster: EF21-P and Friends: Improved Theoretical Communication Complexity for Distributed Optimization with Bidirectional Compression
  Kaja Gruntkowska · Alexander Tyurin · Peter Richtarik
- 2023 Poster: Revisiting Gradient Clipping: Stochastic bias and tight convergence guarantees
  Anastasiia Koloskova · Hadrien Hendrikx · Sebastian Stich
- 2023 Poster: Learning-Rate-Free Learning by D-Adaptation
  Aaron Defazio · Konstantin Mishchenko
- 2022 Poster: ProgFed: Effective, Communication, and Computation Efficient Federated Learning by Progressive Training
  Hui-Po Wang · Sebastian Stich · Yang He · Mario Fritz
- 2022 Poster: Proximal and Federated Random Reshuffling
  Konstantin Mishchenko · Ahmed Khaled · Peter Richtarik
- 2022 Poster: A Convergence Theory for SVGD in the Population Limit under Talagrand's Inequality T1
  Adil Salim · Lukang Sun · Peter Richtarik
- 2022 Poster: 3PC: Three Point Compressors for Communication-Efficient Distributed Training and a Better Theory for Lazy Aggregation
  Peter Richtarik · Igor Sokolov · Elnur Gasanov · Ilyas Fatkhullin · Zhize Li · Eduard Gorbunov
- 2022 Spotlight: A Convergence Theory for SVGD in the Population Limit under Talagrand's Inequality T1
  Adil Salim · Lukang Sun · Peter Richtarik
- 2022 Spotlight: Proximal and Federated Random Reshuffling
  Konstantin Mishchenko · Ahmed Khaled · Peter Richtarik
- 2022 Spotlight: ProgFed: Effective, Communication, and Computation Efficient Federated Learning by Progressive Training
  Hui-Po Wang · Sebastian Stich · Yang He · Mario Fritz
- 2022 Spotlight: 3PC: Three Point Compressors for Communication-Efficient Distributed Training and a Better Theory for Lazy Aggregation
  Peter Richtarik · Igor Sokolov · Elnur Gasanov · Ilyas Fatkhullin · Zhize Li · Eduard Gorbunov
- 2022 Poster: FedNL: Making Newton-Type Methods Applicable to Federated Learning
  Mher Safaryan · Rustem Islamov · Xun Qian · Peter Richtarik
- 2022 Spotlight: FedNL: Making Newton-Type Methods Applicable to Federated Learning
  Mher Safaryan · Rustem Islamov · Xun Qian · Peter Richtarik
- 2021: Regularized Newton Method with Global O(1/k^2) Convergence
  Konstantin Mishchenko
- 2021: Closing Remarks
  Shiqiang Wang · Nathalie Baracaldo · Olivia Choudhury · Gauri Joshi · Peter Richtarik · Praneeth Vepakomma · Han Yu
- 2021: Algorithms for Efficient Federated and Decentralized Learning (Q&A)
  Sebastian Stich
- 2021: Algorithms for Efficient Federated and Decentralized Learning
  Sebastian Stich
- 2021: Opening Remarks
  Shiqiang Wang · Nathalie Baracaldo · Olivia Choudhury · Gauri Joshi · Peter Richtarik · Praneeth Vepakomma · Han Yu
- 2021 Poster: ADOM: Accelerated Decentralized Optimization Method for Time-Varying Networks
  Dmitry Kovalev · Egor Shulgin · Peter Richtarik · Alexander Rogozin · Alexander Gasnikov
- 2021 Spotlight: ADOM: Accelerated Decentralized Optimization Method for Time-Varying Networks
  Dmitry Kovalev · Egor Shulgin · Peter Richtarik · Alexander Rogozin · Alexander Gasnikov
- 2021 Poster: Consensus Control for Decentralized Deep Learning
  Lingjing Kong · Tao Lin · Anastasiia Koloskova · Martin Jaggi · Sebastian Stich
- 2021 Poster: Quasi-global Momentum: Accelerating Decentralized Deep Learning on Heterogeneous Data
  Tao Lin · Sai Praneeth Reddy Karimireddy · Sebastian Stich · Martin Jaggi
- 2021 Spotlight: Quasi-global Momentum: Accelerating Decentralized Deep Learning on Heterogeneous Data
  Tao Lin · Sai Praneeth Reddy Karimireddy · Sebastian Stich · Martin Jaggi
- 2021 Spotlight: Consensus Control for Decentralized Deep Learning
  Lingjing Kong · Tao Lin · Anastasiia Koloskova · Martin Jaggi · Sebastian Stich
- 2021 Poster: MARINA: Faster Non-Convex Distributed Learning with Compression
  Eduard Gorbunov · Konstantin Burlachenko · Zhize Li · Peter Richtarik
- 2021 Spotlight: MARINA: Faster Non-Convex Distributed Learning with Compression
  Eduard Gorbunov · Konstantin Burlachenko · Zhize Li · Peter Richtarik
- 2021 Poster: PAGE: A Simple and Optimal Probabilistic Gradient Estimator for Nonconvex Optimization
  Zhize Li · Hongyan Bao · Xiangliang Zhang · Peter Richtarik
- 2021 Poster: Stochastic Sign Descent Methods: New Algorithms and Better Theory
  Mher Safaryan · Peter Richtarik
- 2021 Poster: Distributed Second Order Methods with Fast Rates and Compressed Communication
  Rustem Islamov · Xun Qian · Peter Richtarik
- 2021 Spotlight: Distributed Second Order Methods with Fast Rates and Compressed Communication
  Rustem Islamov · Xun Qian · Peter Richtarik
- 2021 Oral: PAGE: A Simple and Optimal Probabilistic Gradient Estimator for Nonconvex Optimization
  Zhize Li · Hongyan Bao · Xiangliang Zhang · Peter Richtarik
- 2021 Spotlight: Stochastic Sign Descent Methods: New Algorithms and Better Theory
  Mher Safaryan · Peter Richtarik
- 2020 Poster: Extrapolation for Large-batch Training in Deep Learning
  Tao Lin · Lingjing Kong · Sebastian Stich · Martin Jaggi
- 2020 Poster: Stochastic Subspace Cubic Newton Method
  Filip Hanzely · Nikita Doikov · Yurii Nesterov · Peter Richtarik
- 2020 Poster: Variance Reduced Coordinate Descent with Acceleration: New Method With a Surprising Application to Finite-Sum Problems
  Filip Hanzely · Dmitry Kovalev · Peter Richtarik
- 2020 Poster: A Unified Theory of Decentralized SGD with Changing Topology and Local Updates
  Anastasiia Koloskova · Nicolas Loizou · Sadra Boreiri · Martin Jaggi · Sebastian Stich
- 2020 Poster: Acceleration for Compressed Gradient Descent in Distributed and Federated Optimization
  Zhize Li · Dmitry Kovalev · Xun Qian · Peter Richtarik
- 2020 Poster: Adaptive Gradient Descent without Descent
  Yura Malitsky · Konstantin Mishchenko
- 2020 Poster: From Local SGD to Local Fixed-Point Methods for Federated Learning
  Grigory Malinovsky · Dmitry Kovalev · Elnur Gasanov · Laurent Condat · Peter Richtarik
- 2020 Poster: SCAFFOLD: Stochastic Controlled Averaging for Federated Learning
  Sai Praneeth Reddy Karimireddy · Satyen Kale · Mehryar Mohri · Sashank Jakkam Reddi · Sebastian Stich · Ananda Theertha Suresh
- 2020 Poster: Is Local SGD Better than Minibatch SGD?
  Blake Woodworth · Kumar Kshitij Patel · Sebastian Stich · Zhen Dai · Brian Bullins · Brendan McMahan · Ohad Shamir · Nati Srebro
- 2019 Poster: Nonconvex Variance Reduced Optimization with Arbitrary Sampling
  Samuel Horvath · Peter Richtarik
- 2019 Poster: SAGA with Arbitrary Sampling
  Xun Qian · Zheng Qu · Peter Richtarik
- 2019 Poster: Decentralized Stochastic Optimization and Gossip Algorithms with Compressed Communication
  Anastasiia Koloskova · Sebastian Stich · Martin Jaggi
- 2019 Poster: SGD: General Analysis and Improved Rates
  Robert Gower · Nicolas Loizou · Xun Qian · Alibek Sailanbayev · Egor Shulgin · Peter Richtarik
- 2019 Poster: Error Feedback Fixes SignSGD and other Gradient Compression Schemes
  Sai Praneeth Reddy Karimireddy · Quentin Rebjock · Sebastian Stich · Martin Jaggi
- 2019 Oral: SAGA with Arbitrary Sampling
  Xun Qian · Zheng Qu · Peter Richtarik
- 2019 Oral: Decentralized Stochastic Optimization and Gossip Algorithms with Compressed Communication
  Anastasiia Koloskova · Sebastian Stich · Martin Jaggi
- 2019 Oral: SGD: General Analysis and Improved Rates
  Robert Gower · Nicolas Loizou · Xun Qian · Alibek Sailanbayev · Egor Shulgin · Peter Richtarik
- 2019 Oral: Error Feedback Fixes SignSGD and other Gradient Compression Schemes
  Sai Praneeth Reddy Karimireddy · Quentin Rebjock · Sebastian Stich · Martin Jaggi
- 2019 Oral: Nonconvex Variance Reduced Optimization with Arbitrary Sampling
  Samuel Horvath · Peter Richtarik
- 2018 Poster: On Matching Pursuit and Coordinate Descent
  Francesco Locatello · Anant Raj · Sai Praneeth Reddy Karimireddy · Gunnar Ratsch · Bernhard Schölkopf · Sebastian Stich · Martin Jaggi
- 2018 Oral: On Matching Pursuit and Coordinate Descent
  Francesco Locatello · Anant Raj · Sai Praneeth Reddy Karimireddy · Gunnar Ratsch · Bernhard Schölkopf · Sebastian Stich · Martin Jaggi
- 2018 Poster: A Delay-tolerant Proximal-Gradient Algorithm for Distributed Learning
  Konstantin Mishchenko · Franck Iutzeler · Jérôme Malick · Massih-Reza Amini
- 2018 Oral: A Delay-tolerant Proximal-Gradient Algorithm for Distributed Learning
  Konstantin Mishchenko · Franck Iutzeler · Jérôme Malick · Massih-Reza Amini
- 2018 Poster: SGD and Hogwild! Convergence Without the Bounded Gradients Assumption
  Lam Nguyen · Phuong Ha Nguyen · Marten van Dijk · Peter Richtarik · Katya Scheinberg · Martin Takac
- 2018 Oral: SGD and Hogwild! Convergence Without the Bounded Gradients Assumption
  Lam Nguyen · Phuong Ha Nguyen · Marten van Dijk · Peter Richtarik · Katya Scheinberg · Martin Takac
- 2017 Poster: Approximate Steepest Coordinate Descent
  Sebastian Stich · Anant Raj · Martin Jaggi
- 2017 Talk: Approximate Steepest Coordinate Descent
  Sebastian Stich · Anant Raj · Martin Jaggi