Poster

Personalization Improves Privacy-Accuracy Tradeoffs in Federated Learning

Alberto Bietti · Chen-Yu Wei · Miroslav Dudik · John Langford · Steven Wu

Hall E #700

Keywords: [ OPT: Large Scale, Parallel and Distributed ] [ SA: Privacy-preserving Statistics and Machine Learning ] [ T: Learning Theory ] [ OPT: Stochastic ]

Poster session: Wed 20 Jul 3:30 p.m. PDT — 5:30 p.m. PDT

Spotlight presentation: OPT: First Order
Wed 20 Jul 7:30 a.m. PDT — 9 a.m. PDT

Abstract:

Large-scale machine learning systems often involve data distributed across a collection of users. Federated learning algorithms leverage this structure by communicating model updates, rather than entire datasets, to a central server. In this paper, we study stochastic optimization algorithms for a personalized federated learning setting with both local and global models, subject to user-level (joint) differential privacy. While learning a private global model incurs a privacy cost, purely local learning is perfectly private. We provide generalization guarantees showing that coordinating local learning with private centralized learning yields a generically useful and improved tradeoff between accuracy and privacy. We illustrate our theoretical results with experiments on synthetic and real-world datasets.
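
The abstract's central point, that the global model pays a privacy cost while local learning is free under joint DP, can be made concrete with a small sketch. The following is a minimal numpy illustration, not the authors' algorithm: a global linear model is trained with per-user gradient clipping and Gaussian noise (a standard user-level DP-SGD-style noisy aggregation), and each user then fits a regularized local correction on their own data. All names and hyperparameters (clip_norm, noise_mult, lam, rounds) are illustrative assumptions.

    # Minimal sketch of the local + global split (not the paper's algorithm).
    # Hypothetical hyperparameters: clip_norm, noise_mult, lam, rounds.
    import numpy as np

    rng = np.random.default_rng(0)
    n_users, n_samples, dim = 50, 20, 5

    # Synthetic per-user data: a shared signal plus a user-specific shift.
    w_star = rng.normal(size=dim)
    users = []
    for _ in range(n_users):
        X = rng.normal(size=(n_samples, dim))
        shift = 0.5 * rng.normal(size=dim)  # heterogeneity across users
        y = X @ (w_star + shift) + 0.1 * rng.normal(size=n_samples)
        users.append((X, y))

    def user_grad(w, X, y):
        # Gradient of the mean squared error on one user's data.
        return 2 * X.T @ (X @ w - y) / len(y)

    # Global model with user-level DP: clip each user's update, then add
    # Gaussian noise to the average before applying it.
    w_global = np.zeros(dim)
    clip_norm, noise_mult, lr, rounds = 1.0, 0.8, 0.1, 200
    for _ in range(rounds):
        updates = []
        for X, y in users:
            g = user_grad(w_global, X, y)
            g *= min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
            updates.append(g)
        noise = noise_mult * clip_norm / n_users * rng.normal(size=dim)
        w_global -= lr * (np.mean(updates, axis=0) + noise)

    # Local personalization: each user fits a correction to the global model
    # on their own data. Nothing is communicated, so under user-level (joint)
    # DP this step adds no privacy cost.
    def personalize(X, y, w_g, lam=1.0, steps=100, lr=0.05):
        v = np.zeros(dim)
        for _ in range(steps):
            v -= lr * (user_grad(w_g + v, X, y) + 2 * lam * v)  # stay near w_g
        return w_g + v

    X0, y0 = users[0]
    w_personal = personalize(X0, y0, w_global)
    print("global loss:  ", np.mean((X0 @ w_global - y0) ** 2))
    print("personal loss:", np.mean((X0 @ w_personal - y0) ** 2))

Because the correction v is computed and used only on-device, the personalization step consumes no additional privacy budget; only the noisy global aggregation is ever released.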
