Author Information
Shiqiang Wang (IBM Research)
Nathalie Baracaldo (IBM Research)
Olivia Choudhury (Amazon)
Gauri Joshi (Carnegie Mellon University)
Peter Richtarik (KAUST)
Peter Richtarik is an Associate Professor of Computer Science and Mathematics at KAUST and an Associate Professor of Mathematics at the University of Edinburgh. He is an EPSRC Fellow in Mathematical Sciences and a Fellow of the Alan Turing Institute, and is affiliated with the Visual Computing Center and the Extreme Computing Research Center at KAUST. Dr. Richtarik received his PhD from Cornell University in 2007, worked as a Postdoctoral Fellow in Louvain, Belgium, and then joined Edinburgh in 2009 and KAUST in 2017. His research interests lie at the intersection of mathematics, computer science, machine learning, optimization, numerical linear algebra, high-performance computing, and applied probability. Through his recent work on randomized decomposition algorithms (such as randomized coordinate descent methods, stochastic gradient descent methods, and their numerous extensions, improvements, and variants; a minimal sketch of one such method appears after the author list below), he has contributed to the foundations of the emerging fields of big data optimization, randomized numerical linear algebra, and stochastic methods for empirical risk minimization. Several of his papers have attracted international awards, including the SIAM SIGEST Best Paper Award, the IMA Leslie Fox Prize (2nd prize, twice), and the INFORMS Computing Society Best Student Paper Award (sole runner-up). He is the founder and organizer of the Optimization and Big Data workshop series.
Praneeth Vepakomma (MIT)
Han Yu (Nanyang Technological University)
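As a purely illustrative aside to the bio above, here is a minimal sketch of randomized coordinate descent on a convex quadratic, one of the algorithm families mentioned there. The function name, the test problem, and all parameters are hypothetical choices made for this sketch; it is not code from any of the works listed below.

```python
import numpy as np

def randomized_coordinate_descent(A, b, num_iters=10_000, seed=0):
    """Toy sketch: minimize f(x) = 0.5*x^T A x - b^T x (A symmetric positive
    definite) by updating one uniformly random coordinate per iteration."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    L = np.diag(A)                 # coordinate-wise Lipschitz constants of grad f
    x = np.zeros(n)
    for _ in range(num_iters):
        i = rng.integers(n)        # sample a coordinate uniformly at random
        grad_i = A[i] @ x - b[i]   # i-th partial derivative of f at x
        x[i] -= grad_i / L[i]      # exact minimization along coordinate i
    return x

# Hypothetical usage on a small random SPD system.
n = 5
M = np.random.default_rng(1).standard_normal((n, n))
A = M @ M.T + n * np.eye(n)        # symmetric positive definite by construction
b = np.ones(n)
x = randomized_coordinate_descent(A, b)
print(np.linalg.norm(A @ x - b))   # residual should be close to 0
```

For a quadratic objective, the step size 1/L[i] performs exact minimization along coordinate i, which is why no step-size tuning is needed in this toy example.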
More from the Same Authors
- 2021: Parallel Quasi-concave set optimization: A new frontier that scales without needing submodularity
  Praneeth Vepakomma · Ramesh Raskar
- 2021: BiG-Fed: Bilevel Optimization Enhanced Graph-Aided Federated Learning
  Pengwei Xing · Han Yu
- 2021: EF21: A New, Simpler, Theoretically Better, and Practically Faster Error Feedback
  Peter Richtarik · Ilyas Fatkhullin
- 2021: Industrial Booth (IBM)
  Shiqiang Wang · Nathalie Baracaldo
- 2022: Formal Privacy Guarantees for Neural Network queries by estimating local Lipschitz constant
  Abhishek Singh · Praneeth Vepakomma · Vivek Sharma · Ramesh Raskar
- 2023 Poster: High-Probability Bounds for Stochastic Optimization and Variational Inequalities: the Case of Unbounded Variance
  Abdurakhmon Sadiev · Marina Danilova · Eduard Gorbunov · Samuel Horváth · Gauthier Gidel · Pavel Dvurechenskii · Alexander Gasnikov · Peter Richtarik
- 2023 Poster: On the Convergence of Federated Averaging with Cyclic Client Participation
  Yae Jee Cho · Pranay Sharma · Gauri Joshi · Zheng Xu · Satyen Kale · Tong Zhang
- 2023 Poster: EF21-P and Friends: Improved Theoretical Communication Complexity for Distributed Optimization with Bidirectional Compression
  Kaja Gruntkowska · Alexander Tyurin · Peter Richtarik
- 2023 Poster: LESS-VFL: Communication-Efficient Feature Selection for Vertical Federated Learning
  Timothy Castiglia · Yi Zhou · Shiqiang Wang · Swanand Kadhe · Nathalie Baracaldo · Stacy Patterson
- 2023 Poster: The Blessing of Heterogeneity in Federated Q-learning: Linear Speedup and Beyond
  Jiin Woo · Gauri Joshi · Yuejie Chi
- 2023 Workshop: Federated Learning and Analytics in Practice: Algorithms, Systems, Applications, and Opportunities
  Zheng Xu · Peter Kairouz · Bo Li · Tian Li · John Nguyen · Jianyu Wang · Shiqiang Wang · Ayfer Ozgur
- 2022 Poster: Compressed-VFL: Communication-Efficient Learning with Vertically Partitioned Data
  Timothy Castiglia · Anirban Das · Shiqiang Wang · Stacy Patterson
- 2022 Poster: Proximal and Federated Random Reshuffling
  Konstantin Mishchenko · Ahmed Khaled · Peter Richtarik
- 2022 Poster: Federated Reinforcement Learning: Linear Speedup Under Markovian Sampling
  Sajad Khodadadian · Pranay Sharma · Gauri Joshi · Siva Maguluri
- 2022 Poster: A Convergence Theory for SVGD in the Population Limit under Talagrand's Inequality T1
  Adil Salim · Lukang Sun · Peter Richtarik
- 2022 Poster: 3PC: Three Point Compressors for Communication-Efficient Distributed Training and a Better Theory for Lazy Aggregation
  Peter Richtarik · Igor Sokolov · Elnur Gasanov · Ilyas Fatkhullin · Zhize Li · Eduard Gorbunov
- 2022 Spotlight: Compressed-VFL: Communication-Efficient Learning with Vertically Partitioned Data
  Timothy Castiglia · Anirban Das · Shiqiang Wang · Stacy Patterson
- 2022 Spotlight: A Convergence Theory for SVGD in the Population Limit under Talagrand's Inequality T1
  Adil Salim · Lukang Sun · Peter Richtarik
- 2022 Spotlight: Proximal and Federated Random Reshuffling
  Konstantin Mishchenko · Ahmed Khaled · Peter Richtarik
- 2022 Spotlight: 3PC: Three Point Compressors for Communication-Efficient Distributed Training and a Better Theory for Lazy Aggregation
  Peter Richtarik · Igor Sokolov · Elnur Gasanov · Ilyas Fatkhullin · Zhize Li · Eduard Gorbunov
- 2022 Oral: Federated Reinforcement Learning: Linear Speedup Under Markovian Sampling
  Sajad Khodadadian · Pranay Sharma · Gauri Joshi · Siva Maguluri
- 2022 Poster: ProxSkip: Yes! Local Gradient Steps Provably Lead to Communication Acceleration! Finally!
  Konstantin Mishchenko · Grigory Malinovsky · Sebastian Stich · Peter Richtarik
- 2022 Spotlight: ProxSkip: Yes! Local Gradient Steps Provably Lead to Communication Acceleration! Finally!
  Konstantin Mishchenko · Grigory Malinovsky · Sebastian Stich · Peter Richtarik
- 2022 Poster: FedNL: Making Newton-Type Methods Applicable to Federated Learning
  Mher Safaryan · Rustem Islamov · Xun Qian · Peter Richtarik
- 2022 Poster: Federated Minimax Optimization: Improved Convergence Analyses and Algorithms
  Pranay Sharma · Rohan Panda · Gauri Joshi · Pramod K Varshney
- 2022 Spotlight: Federated Minimax Optimization: Improved Convergence Analyses and Algorithms
  Pranay Sharma · Rohan Panda · Gauri Joshi · Pramod K Varshney
- 2022 Spotlight: FedNL: Making Newton-Type Methods Applicable to Federated Learning
  Mher Safaryan · Rustem Islamov · Xun Qian · Peter Richtarik
- 2021: Industrial Panel
  Nathalie Baracaldo · Shiqiang Wang · Peter Kairouz · Zheng Xu · Kshitiz Malik · Tao Zhang
- 2021 Workshop: International Workshop on Federated Learning for User Privacy and Data Confidentiality in Conjunction with ICML 2021 (FL-ICML'21)
  Nathalie Baracaldo · Olivia Choudhury · Gauri Joshi · Peter Richtarik · Praneeth Vepakomma · Shiqiang Wang · Han Yu
- 2021: Opening Remarks
  Shiqiang Wang · Nathalie Baracaldo · Olivia Choudhury · Gauri Joshi · Peter Richtarik · Praneeth Vepakomma · Han Yu
- 2021 Poster: ADOM: Accelerated Decentralized Optimization Method for Time-Varying Networks
  Dmitry Kovalev · Egor Shulgin · Peter Richtarik · Alexander Rogozin · Alexander Gasnikov
- 2021 Spotlight: ADOM: Accelerated Decentralized Optimization Method for Time-Varying Networks
  Dmitry Kovalev · Egor Shulgin · Peter Richtarik · Alexander Rogozin · Alexander Gasnikov
- 2021 Poster: MARINA: Faster Non-Convex Distributed Learning with Compression
  Eduard Gorbunov · Konstantin Burlachenko · Zhize Li · Peter Richtarik
- 2021 Affinity Workshop: Women in Machine Learning (WiML) Un-Workshop
  Wenshuo Guo · Beliz Gokkaya · Arushi G K Majha · Vaidheeswaran Archana · Berivan Isik · Olivia Choudhury · Liyue Shen · Hadia Samil · Tatjana Chavdarova
- 2021 Spotlight: MARINA: Faster Non-Convex Distributed Learning with Compression
  Eduard Gorbunov · Konstantin Burlachenko · Zhize Li · Peter Richtarik
- 2021 Poster: PAGE: A Simple and Optimal Probabilistic Gradient Estimator for Nonconvex Optimization
  Zhize Li · Hongyan Bao · Xiangliang Zhang · Peter Richtarik
- 2021 Poster: Stochastic Sign Descent Methods: New Algorithms and Better Theory
  Mher Safaryan · Peter Richtarik
- 2021 Poster: Distributed Second Order Methods with Fast Rates and Compressed Communication
  Rustem Islamov · Xun Qian · Peter Richtarik
- 2021 Spotlight: Distributed Second Order Methods with Fast Rates and Compressed Communication
  Rustem Islamov · Xun Qian · Peter Richtarik
- 2021 Oral: PAGE: A Simple and Optimal Probabilistic Gradient Estimator for Nonconvex Optimization
  Zhize Li · Hongyan Bao · Xiangliang Zhang · Peter Richtarik
- 2021 Spotlight: Stochastic Sign Descent Methods: New Algorithms and Better Theory
  Mher Safaryan · Peter Richtarik
- 2021: Governance in FL: Providing AI Fairness and Accountability
  Nathalie Baracaldo · Ali Anwar · Annie Abay
- 2021 Expo Talk Panel: Enterprise-Strength Federated Learning: New Algorithms, New Paradigms, and a Participant-Interactive Demonstration Session
  Laura Wynter · Nathalie Baracaldo · Chaitanya Kumar · Parijat Dube · Mikhail Yurochkin · Theodoros Salonidis · Shiqiang Wang
- 2021: Adaptive Federated Learning for Communication and Computation Efficiency (2021 IEEE Leonard Prize-winning work)
  Shiqiang Wang
- 2020: Closing remarks
  Nathalie Baracaldo · Olivia Choudhury · Gauri Joshi · Ramesh Raskar · Shiqiang Wang · Han Yu
- 2020: Opening remarks
  Nathalie Baracaldo · Olivia Choudhury · Gauri Joshi · Ramesh Raskar · Shiqiang Wang · Han Yu
- 2020 Workshop: Federated Learning for User Privacy and Data Confidentiality
  Nathalie Baracaldo · Olivia Choudhury · Gauri Joshi · Ramesh Raskar · Shiqiang Wang · Han Yu
- 2020 Poster: Stochastic Subspace Cubic Newton Method
  Filip Hanzely · Nikita Doikov · Yurii Nesterov · Peter Richtarik
- 2020 Poster: Variance Reduced Coordinate Descent with Acceleration: New Method With a Surprising Application to Finite-Sum Problems
  Filip Hanzely · Dmitry Kovalev · Peter Richtarik
- 2020 Poster: Acceleration for Compressed Gradient Descent in Distributed and Federated Optimization
  Zhize Li · Dmitry Kovalev · Xun Qian · Peter Richtarik
- 2020 Poster: From Local SGD to Local Fixed-Point Methods for Federated Learning
  Grigory Malinovsky · Dmitry Kovalev · Elnur Gasanov · Laurent Condat · Peter Richtarik
- 2019 Workshop: Coding Theory For Large-scale Machine Learning
  Viveck Cadambe · Pulkit Grover · Dimitris Papailiopoulos · Gauri Joshi
- 2019 Poster: Nonconvex Variance Reduced Optimization with Arbitrary Sampling
  Samuel Horvath · Peter Richtarik
- 2019 Poster: SAGA with Arbitrary Sampling
  Xun Qian · Zheng Qu · Peter Richtarik
- 2019 Poster: SGD: General Analysis and Improved Rates
  Robert Gower · Nicolas Loizou · Xun Qian · Alibek Sailanbayev · Egor Shulgin · Peter Richtarik
- 2019 Oral: SAGA with Arbitrary Sampling
  Xun Qian · Zheng Qu · Peter Richtarik
- 2019 Oral: SGD: General Analysis and Improved Rates
  Robert Gower · Nicolas Loizou · Xun Qian · Alibek Sailanbayev · Egor Shulgin · Peter Richtarik
- 2019 Oral: Nonconvex Variance Reduced Optimization with Arbitrary Sampling
  Samuel Horvath · Peter Richtarik
- 2018 Poster: SGD and Hogwild! Convergence Without the Bounded Gradients Assumption
  Lam Nguyen · Phuong Ha Nguyen · Marten van Dijk · Peter Richtarik · Katya Scheinberg · Martin Takac
- 2018 Oral: SGD and Hogwild! Convergence Without the Bounded Gradients Assumption
  Lam Nguyen · Phuong Ha Nguyen · Marten van Dijk · Peter Richtarik · Katya Scheinberg · Martin Takac