Personalization Improves Privacy-Accuracy Tradeoffs in Federated Learning
Alberto Bietti · Chen-Yu Wei · Miroslav Dudik · John Langford · Steven Wu

Wed Jul 20 08:00 AM -- 08:05 AM (PDT) @ Room 327 - 329

Large-scale machine learning systems often involve data distributed across a collection of users. Federated learning algorithms leverage this structure by communicating model updates to a central server, rather than entire datasets. In this paper, we study stochastic optimization algorithms for a personalized federated learning setting involving local and global models subject to user-level (joint) differential privacy. While learning a private global model induces a cost of privacy, local learning is perfectly private. We provide generalization guarantees showing that coordinating local learning with private centralized learning yields a generically useful and improved tradeoff between accuracy and privacy. We illustrate our theoretical results with experiments on synthetic and real-world datasets.
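The abstract's central idea can be sketched in code. The snippet below is a minimal, hypothetical illustration (not the paper's exact algorithm): each user fits a local model on their own data, which is perfectly private; a global model is formed as a noisy average of clipped user models, a standard Gaussian-mechanism sketch of user-level privacy; personalization then interpolates between the two. All names, the noise scale, and the interpolation weight `alpha` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each user's true linear model is a small
# perturbation of a shared global model.
n_users, n_samples, dim = 20, 50, 5
global_truth = rng.normal(size=dim)
user_truth = global_truth + 0.3 * rng.normal(size=(n_users, dim))

def local_fit(X, y):
    # Per-user least squares: trained only on this user's data,
    # so it incurs no privacy cost with respect to other users.
    return np.linalg.lstsq(X, y, rcond=None)[0]

local_models = []
for u in range(n_users):
    X = rng.normal(size=(n_samples, dim))
    y = X @ user_truth[u] + 0.1 * rng.normal(size=n_samples)
    local_models.append(local_fit(X, y))
local_models = np.stack(local_models)

# Private global model: average clipped user models and add Gaussian
# noise scaled to the user-level sensitivity (clip / n_users).
# The noise scale would be calibrated to (epsilon, delta) in practice;
# the value here is purely illustrative.
clip = 5.0
norms = np.linalg.norm(local_models, axis=1, keepdims=True)
clipped = local_models * np.minimum(1.0, clip / norms)
noise_scale = 0.5
global_model = clipped.mean(axis=0) + rng.normal(
    scale=noise_scale * clip / n_users, size=dim)

# Personalization: interpolate each user's perfectly-private local
# model with the differentially private global model.
alpha = 0.5
personalized = alpha * local_models + (1 - alpha) * global_model
```

Tuning `alpha` trades off the noise in the private global model against the small-sample error of purely local learning, which is the privacy-accuracy tradeoff the paper analyzes.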

Author Information

Alberto Bietti (NYU)
Chen-Yu Wei (University of Southern California)
Miroslav Dudik (Microsoft Research)
Miroslav Dudík is a Senior Principal Researcher in machine learning at Microsoft Research, NYC. His research focuses on combining theoretical and applied aspects of machine learning, statistics, convex optimization, and algorithms. Most recently he has worked on contextual bandits, reinforcement learning, and algorithmic fairness. He received his PhD from Princeton in 2007. He is a co-creator of the Fairlearn toolkit for assessing and improving the fairness of machine learning models and of the Maxent package for modeling species distributions, which is used by biologists around the world to design national parks, model the impacts of climate change, and discover new species.

John Langford (Microsoft Research)
Steven Wu (Carnegie Mellon University)
