Training machine learning models in a centralized fashion often faces significant challenges in real-world use cases: training data are distributed across many sites, creating and maintaining a central data repository is costly, and regulatory guidelines (GDPR, HIPAA) restrict the sharing of sensitive data. Federated learning (FL) is a new paradigm in machine learning that mitigates these challenges by training a global model over distributed data, without the need for data sharing. The growing application of machine learning to analyze and draw insight from real-world, distributed, and sensitive data makes it important for the scientific community to become familiar with and adopt this relevant and timely topic.
Despite the advantages of FL, and its successful application in certain industry use cases, this field is still in its infancy due to new challenges imposed by limited visibility into the training data, potential lack of trust among participants training a single model, potential privacy inference attacks, and, in some cases, limited or unreliable connectivity.
The goal of this workshop is to bring together researchers and practitioners interested in FL. This day-long event will facilitate interaction among students, scholars, and industry professionals from around the world to understand the topic, identify technical challenges, and discuss potential solutions, advancing FL and its impact on the community. FL has become an increasingly popular topic at ICML in recent years.
Sat 5:00 a.m. - 5:15 a.m.
|
Opening Remarks
|
Shiqiang Wang · Nathalie Baracaldo · Olivia Choudhury · Gauri Joshi · Peter Richtarik · Praneeth Vepakomma · Han Yu 🔗 |
Sat 5:15 a.m. - 5:40 a.m.
|
Algorithms for Efficient Federated and Decentralized Learning
(
Invited Talk
)
SlidesLive Video » |
Sebastian Stich 🔗 |
Sat 5:40 a.m. - 5:45 a.m.
|
Algorithms for Efficient Federated and Decentralized Learning (Q&A)
(
Q&A
)
|
Sebastian Stich 🔗 |
Sat 5:45 a.m. - 6:45 a.m.
|
Contributed Oral Presentation Session 1
(
Live Presentations
)
SlidesLive Video » |
🔗 |
Sat 6:45 a.m. - 7:10 a.m.
|
The ML data center is dead: What comes next?
(
Invited Talk
)
|
Nic Lane 🔗 |
Sat 7:10 a.m. - 7:15 a.m.
|
The ML data center is dead: What comes next? (Q&A)
(
Q&A
)
|
Nic Lane 🔗 |
Sat 7:15 a.m. - 7:30 a.m.
|
Break
|
🔗 |
Sat 7:30 a.m. - 8:00 a.m.
|
Contributed Oral Presentation Session 2
(
Live Presentations
)
SlidesLive Video » |
🔗 |
Sat 8:00 a.m. - 8:25 a.m.
|
Pandemic Response with Crowdsourced Data: The Participatory Privacy Preserving Approach
(
Invited Talk
)
SlidesLive Video » |
Ramesh Raskar 🔗 |
Sat 8:25 a.m. - 8:30 a.m.
|
Pandemic Response with Crowdsourced Data: The Participatory Privacy Preserving Approach (Q&A)
(
Q&A
)
|
Ramesh Raskar 🔗 |
Sat 8:30 a.m. - 9:30 a.m.
|
Industrial Panel
(
Panel
)
SlidesLive Video » |
Nathalie Baracaldo · Shiqiang Wang · Peter Kairouz · Zheng Xu · Kshitiz Malik · Tao Zhang 🔗 |
Sat 9:30 a.m. - 10:30 a.m.
|
Poster Session 1 & Industrial Booths ( GatherTown Session ) link » | 🔗 |
Sat 10:30 a.m. - 11:00 a.m.
|
Break
|
🔗 |
Sat 11:00 a.m. - 11:25 a.m.
|
Federated Hyperparameter Tuning: Challenges, Baselines, and Connections to Weight-Sharing
(
Invited Talk
)
SlidesLive Video » |
Ameet Talwalkar 🔗 |
Sat 11:25 a.m. - 11:30 a.m.
|
Federated Hyperparameter Tuning: Challenges, Baselines, and Connections to Weight-Sharing (Q&A)
(
Q&A
)
|
Ameet Talwalkar 🔗 |
Sat 11:30 a.m. - 12:30 p.m.
|
Contributed Oral Presentation Session 3
(
Live Presentations
)
SlidesLive Video » |
🔗 |
Sat 12:30 p.m. - 12:55 p.m.
|
Optimization Aspects of Personalized Federated Learning
(
Invited Talk
)
SlidesLive Video » |
Filip Hanzely 🔗 |
Sat 12:55 p.m. - 1:00 p.m.
|
Optimization Aspects of Personalized Federated Learning (Q&A)
(
Q&A
)
|
Filip Hanzely 🔗 |
Sat 1:00 p.m. - 2:00 p.m.
|
Poster Session 2 & Industrial Booths ( GatherTown Session ) link » | 🔗 |
Sat 2:00 p.m. - 2:15 p.m.
|
Break
|
🔗 |
Sat 2:15 p.m. - 2:40 p.m.
|
Dreaming of Federated Robustness: Inherent Barriers and Unavoidable Tradeoffs
(
Invited Talk
)
SlidesLive Video » |
Dimitris Papailiopoulos 🔗 |
Sat 2:40 p.m. - 2:45 p.m.
|
Dreaming of Federated Robustness: Inherent Barriers and Unavoidable Tradeoffs (Q&A)
(
Q&A
)
|
Dimitris Papailiopoulos 🔗 |
Sat 2:45 p.m. - 3:10 p.m.
|
Securing Secure Aggregation: Mitigating Multi-Round Privacy Leakage in Federated Learning
(
Invited Talk
)
SlidesLive Video » Secure aggregation is a critical component in federated learning, which enables the server to learn the aggregate model of the users without observing their local models. Conventionally, secure aggregation algorithms focus only on ensuring the privacy of individual users in a single training round. We contend that such designs can lead to significant privacy leakages over multiple training rounds, due to partial user selection/participation at each round of federated learning. In fact, we empirically show that the conventional random user selection strategies for federated learning lead to leaking users' individual models within a number of rounds linear in the number of users. To address this challenge, we introduce a secure aggregation framework with multi-round privacy guarantees. In particular, we introduce a new metric to quantify the privacy guarantees of federated learning over multiple training rounds, and develop a structured user selection strategy that guarantees the long-term privacy of each user (over any number of training rounds). Our framework also carefully accounts for the fairness and the average number of participating users at each round. We perform several experiments on various datasets in the IID and the non-IID settings to demonstrate the performance improvement over the baseline algorithms, both in terms of privacy protection and test accuracy. We conclude the talk by discussing several open problems in this domain. (This talk is based on the following paper: https://arxiv.org/abs/2106.03328) |
Salman Avestimehr 🔗 |
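The core idea of the structured user selection described in this abstract can be illustrated with a minimal sketch: if users are partitioned into fixed, disjoint batches and only whole batches ever participate together, the server only observes aggregates over entire batches, so differences of aggregates across rounds cannot isolate an individual user's model. The function names and the round-robin batch schedule below are illustrative assumptions, not the paper's actual algorithm.

```python
import random


def make_batches(num_users, batch_size, seed=0):
    """Partition users into fixed, disjoint batches (illustrative sketch).

    Because the same users always participate together, every observed
    aggregate covers a whole batch, limiting what the server can infer
    about any single user across rounds.
    """
    rng = random.Random(seed)
    users = list(range(num_users))
    rng.shuffle(users)
    return [users[i:i + batch_size] for i in range(0, num_users, batch_size)]


def select_round(batches, round_idx):
    """Choose the participating batch for a round (round-robin, so each
    user participates equally often, a simple stand-in for fairness)."""
    return batches[round_idx % len(batches)]


batches = make_batches(num_users=12, batch_size=4)
for t in range(3):
    print(sorted(select_round(batches, t)))
```

In contrast, independent random selection each round produces many overlapping user subsets, and subtracting aggregates of such subsets is exactly the multi-round leakage the talk warns about.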
Sat 3:10 p.m. - 3:15 p.m.
|
Securing Secure Aggregation: Mitigating Multi-Round Privacy Leakage in Federated Learning (Q&A)
(
Q&A
)
|
Salman Avestimehr 🔗 |
Sat 3:15 p.m. - 3:30 p.m.
|
Closing Remarks
SlidesLive Video » |
Shiqiang Wang · Nathalie Baracaldo · Olivia Choudhury · Gauri Joshi · Peter Richtarik · Praneeth Vepakomma · Han Yu 🔗 |
-
|
Implicit Gradient Alignment in Distributed and Federated Learning
(
Workshop Poster
)
|
🔗 |
-
|
Decentralized federated learning of deep neural networks on non-iid data
(
Workshop Poster
)
|
🔗 |
-
|
GRP-FED: Addressing Client Imbalance in Federated Learning via Global-Regularized Personalization
(
Workshop Poster
)
|
🔗 |
-
|
Accelerating Federated Learning with Split Learning on Locally Generated Losses
(
Workshop Poster
)
|
🔗 |
-
|
Handling Both Stragglers and Adversaries for Robust Federated Learning
(
Workshop Poster
)
|
🔗 |
-
|
Towards Federated Learning With Byzantine-Robust Client Weighting
(
Workshop Poster
)
|
🔗 |
-
|
SpreadGNN: Serverless Multi-task Federated Learning for Graph Neural Networks
(
Workshop Poster
)
|
🔗 |
-
|
Defending against Reconstruction Attack in Vertical Federated Learning
(
Workshop Poster
)
|
🔗 |
-
|
Fed-EINI: An Efficient and Interpretable Inference Framework for Decision Tree Ensembles in Federated Learning
(
Workshop Poster
)
|
🔗 |
-
|
FlyNN: Fruit-fly Inspired Federated Nearest Neighbor Classification
(
Workshop Poster
)
|
🔗 |
-
|
FjORD: Fair and Accurate Federated Learning under heterogeneous targets with Ordered Dropout
(
Workshop Poster
)
|
🔗 |
-
|
MURANA: A Generic Framework for Stochastic Variance-Reduced Optimization
(
Workshop Poster
)
|
🔗 |
-
|
Smoothness-Aware Quantization Techniques
(
Workshop Poster
)
|
🔗 |
-
|
FedNL: Making Newton-Type Methods Applicable to Federated Learning
(
Workshop Poster
)
|
🔗 |
-
|
OmniLytics: A Blockchain-based Secure Data Market for Decentralized Machine Learning
(
Workshop Poster
)
|
🔗 |
-
|
Straggler-Resilient Federated Learning: Leveraging the Interplay Between Statistical Accuracy and System Heterogeneity
(
Workshop Poster
)
|
🔗 |
-
|
Federated Random Reshuffling with Compression and Variance Reduction
(
Workshop Poster
)
|
🔗 |
-
|
Federated Learning with Metric Loss
(
Workshop Poster
)
|
🔗 |
-
|
FedMix: A Simple and Communication-Efficient Alternative to Local Methods in Federated Learning
(
Workshop Poster
)
|
🔗 |
-
|
Achieving Optimal Sample and Communication Complexities for Non-IID Federated Learning
(
Workshop Poster
)
|
🔗 |
-
|
A New Analysis Framework for Federated Learning on Time-Evolving Heterogeneous Data
(
Workshop Poster
)
|
🔗 |
-
|
Subgraph Federated Learning with Missing Neighbor Generation
(
Workshop Poster
)
|
🔗 |
-
|
Byzantine Fault-Tolerance of Local Gradient-Descent in Federated Model under 2f-Redundancy
(
Workshop Poster
)
|
🔗 |
-
|
Understanding Clipping for Federated Learning: Convergence and Client-Level Differential Privacy
(
Workshop Poster
)
|
🔗 |
-
|
Federated Graph Classification over Non-IID Graphs
(
Workshop Poster
)
|
🔗 |
-
|
Local Adaptivity in Federated Learning: Convergence and Consistency
(
Workshop Poster
)
|
🔗 |
-
|
New Metrics to Evaluate the Performance and Fairness of Personalized Federated Learning
(
Workshop Poster
)
|
🔗 |
-
|
Diverse Client Selection for Federated Learning: Submodularity and Convergence Analysis
(
Workshop Poster
)
|
🔗 |
-
|
BYGARS: Byzantine SGD with Arbitrary Number of Attackers Using Reputation Scores
(
Workshop Poster
)
|
🔗 |
-
|
Bi-directional Adaptive Communication for Heterogenous Distributed Learning
(
Workshop Poster
)
|
🔗 |
-
|
Communication and Energy Efficient Slimmable Federated Learning via Superposition Coding and Successive Decoding
(
Workshop Poster
)
|
🔗 |
-
|
BiG-Fed: Bilevel Optimization Enhanced Graph-Aided Federated Learning
(
Workshop Poster
)
|
🔗 |
-
|
Gradient Inversion with Generative Image Prior
(
Workshop Poster
)
|
🔗 |
-
|
On Large-Cohort Training for Federated Learning
(
Workshop Poster
)
|
🔗 |
-
|
FedGNN: Federated Graph Neural Network for Privacy-Preserving Recommendation
(
Workshop Poster
)
|
🔗 |
-
|
Robust and Differentially Private Mean Estimation
(
Workshop Poster
)
|
🔗 |
-
|
A Reputation Mechanism Is All You Need: Collaborative Fairness and Adversarial Robustness in Federated Learning
(
Workshop Poster
)
|
🔗 |
-
|
Multistage stepsize schedule in Federated Learning: Bridging Theory and Practice
(
Workshop Poster
)
|
🔗 |
-
|
Lower Bounds and Optimal Algorithms for Smooth and Strongly Convex Decentralized Optimization over Time-Varying Networks
(
Workshop Poster
)
|
🔗 |
-
|
Federated Multi-Task Learning under a Mixture of Distributions
(
Workshop Poster
)
|
🔗 |
-
|
EF21: A New, Simpler, Theoretically Better, and Practically Faster Error Feedback
(
Workshop Poster
)
|
🔗 |
-
|
Optimal Model Averaging: Towards Personalized Collaborative Learning
(
Workshop Poster
)
|
🔗 |
-
|
Federated Learning with Buffered Asynchronous Aggregation
(
Workshop Poster
)
|
🔗 |
-
|
Industrial Booth (IBM)
(
Workshop Poster
)
|
🔗 |
-
|
Industrial Booth (Facebook)
(
Workshop Poster
)
|
🔗 |
-
|
Industrial Booth (Google)
(
Workshop Poster
)
|
🔗 |
Author Information
Nathalie Baracaldo (IBM Research)
Olivia Choudhury (Amazon)
Gauri Joshi (Carnegie Mellon University)
Peter Richtarik (King Abdullah University of Science and Technology (KAUST) - University of Edinburgh, Scotland)
Praneeth Vepakomma (MIT)
Shiqiang Wang (IBM Research)
Han Yu (Nanyang Technological University)
More from the Same Authors
-
2021 : Parallel Quasi-concave set optimization: A new frontier that scales without needing submodularity »
Praneeth Vepakomma · Ramesh Raskar -
2021 : BiG-Fed: Bilevel Optimization Enhanced Graph-Aided Federated Learning »
Pengwei Xing · Han Yu -
2021 : EF21: A New, Simpler, Theoretically Better, and Practically Faster Error Feedback »
Peter Richtarik · Ilyas Fatkhullin -
2021 : Industrial Booth (IBM) »
Shiqiang Wang · Nathalie Baracaldo -
2022 : Formal Privacy Guarantees for Neural Network queries by estimating local Lipschitz constant »
Abhishek Singh · Praneeth Vepakomma · Vivek Sharma · Ramesh Raskar -
2023 : Towards a Theoretical and Practical Understanding of One-Shot Federated Learning with Fisher Information »
Divyansh Jhunjhunwala · Shiqiang Wang · Gauri Joshi -
2023 : A New Theoretical Perspective on Data Heterogeneity in Federated Optimization »
Jiayi Wang · Shiqiang Wang · Rong-Rong Chen · Mingyue Ji -
2023 Workshop: Federated Learning and Analytics in Practice: Algorithms, Systems, Applications, and Opportunities »
Zheng Xu · Peter Kairouz · Bo Li · Tian Li · John Nguyen · Jianyu Wang · Shiqiang Wang · Ayfer Ozgur -
2023 Poster: The Blessing of Heterogeneity in Federated Q-Learning: Linear Speedup and Beyond »
Jiin Woo · Gauri Joshi · Yuejie Chi -
2023 Poster: On the Convergence of Federated Averaging with Cyclic Client Participation »
Yae Jee Cho · PRANAY SHARMA · Gauri Joshi · Zheng Xu · Satyen Kale · Tong Zhang -
2023 Poster: LESS-VFL: Communication-Efficient Feature Selection for Vertical Federated Learning »
Timothy Castiglia · Yi Zhou · Shiqiang Wang · Swanand Kadhe · Nathalie Baracaldo · Stacy Patterson -
2022 Poster: Compressed-VFL: Communication-Efficient Learning with Vertically Partitioned Data »
Timothy Castiglia · Anirban Das · Shiqiang Wang · Stacy Patterson -
2022 Poster: Federated Reinforcement Learning: Linear Speedup Under Markovian Sampling »
sajad khodadadian · PRANAY SHARMA · Gauri Joshi · Siva Maguluri -
2022 Spotlight: Compressed-VFL: Communication-Efficient Learning with Vertically Partitioned Data »
Timothy Castiglia · Anirban Das · Shiqiang Wang · Stacy Patterson -
2022 Oral: Federated Reinforcement Learning: Linear Speedup Under Markovian Sampling »
sajad khodadadian · PRANAY SHARMA · Gauri Joshi · Siva Maguluri -
2022 Poster: Federated Minimax Optimization: Improved Convergence Analyses and Algorithms »
PRANAY SHARMA · Rohan Panda · Gauri Joshi · Pramod K Varshney -
2022 Spotlight: Federated Minimax Optimization: Improved Convergence Analyses and Algorithms »
PRANAY SHARMA · Rohan Panda · Gauri Joshi · Pramod K Varshney -
2021 : Parallel Quasi-concave set optimization: A new frontier that scales without needing submodularity »
Ramesh Raskar · Praneeth Vepakomma -
2021 : Closing Remarks »
Shiqiang Wang · Nathalie Baracaldo · Olivia Choudhury · Gauri Joshi · Peter Richtarik · Praneeth Vepakomma · Han Yu -
2021 : Industrial Panel »
Nathalie Baracaldo · Shiqiang Wang · Peter Kairouz · Zheng Xu · Kshitiz Malik · Tao Zhang -
2021 : Opening Remarks »
Shiqiang Wang · Nathalie Baracaldo · Olivia Choudhury · Gauri Joshi · Peter Richtarik · Praneeth Vepakomma · Han Yu -
2021 : Governance in FL: Providing AI Fairness and Accountability »
Nathalie Baracaldo · Ali Anwar · Annie Abay -
2021 Expo Talk Panel: Enterprise-Strength Federated Learning: New Algorithms, New Paradigms, and a Participant-Interactive Demonstration Session »
Laura Wynter · Nathalie Baracaldo · Chaitanya Kumar · Parijat Dube · Mikhail Yurochkin · Theodoros Salonidis · Shiqiang Wang -
2021 : Adaptive Federated Learning for Communication and Computation Efficiency (2021 IEEE Leonard Prize-winning work). »
Shiqiang Wang -
2020 : Closing remarks »
Nathalie Baracaldo · Olivia Choudhury · Gauri Joshi · Ramesh Raskar · Shiqiang Wang · Han Yu -
2020 : Opening remarks »
Nathalie Baracaldo · Olivia Choudhury · Gauri Joshi · Ramesh Raskar · Shiqiang Wang · Han Yu -
2020 Workshop: Federated Learning for User Privacy and Data Confidentiality »
Nathalie Baracaldo · Olivia Choudhury · Gauri Joshi · Ramesh Raskar · Shiqiang Wang · Han Yu -
2020 : Q&A with Peter Richtarik »
Peter Richtarik -
2020 : Talk by Peter Richtarik - Fast linear convergence of randomized BFGS »
Peter Richtarik -
2019 Workshop: Coding Theory For Large-scale Machine Learning »
Viveck Cadambe · Pulkit Grover · Dimitris Papailiopoulos · Gauri Joshi -
2018 Poster: Randomized Block Cubic Newton Method »
Nikita Doikov · Peter Richtarik -
2018 Oral: Randomized Block Cubic Newton Method »
Nikita Doikov · Peter Richtarik