Poster
The Fundamental Price of Secure Aggregation in Differentially Private Federated Learning
Wei-Ning Chen · Christopher Choquette Choo · Peter Kairouz · Ananda Suresh

Wed Jul 20 03:30 PM -- 05:30 PM (PDT) @ Hall E #1017
We consider the problem of training a differentially private federated learning (FL) model with secure aggregation (SecAgg), where a centralized server aggregates local model updates from $n$ clients via a cryptographic multi-party protocol, such that only the sum of these updates is revealed. On the theoretical side, we analyze the distributed mean estimation task (the core of distributed stochastic gradient descent (SGD)) and characterize the fundamental communication cost required to achieve centralized accuracy under the same privacy requirement. Our results show that $\Theta(n^2\varepsilon^2)$ bits are both sufficient and necessary, and that this limit can be achieved by a linear compression scheme based on sparse random projection. On the empirical side, we evaluate our linear compression scheme on real-world FL tasks and observe compression rates of up to $50\times$ (compared to previous private FL schemes with SecAgg) with no significant decrease in model test accuracy. Our work hence theoretically and empirically specifies the fundamental price of using SecAgg in differentially private federated learning.
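The key property that makes random projection compatible with SecAgg is linearity: summing the clients' compressed updates yields exactly the projection of the summed update, so the server never needs individual vectors. The sketch below illustrates this with a sparse Rademacher projection; the dimensions, sparsity level, and pseudo-inverse decoder are illustrative assumptions, not the paper's exact construction (which also adds noise for differential privacy).

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 10, 1000, 200  # clients, model dimension, compressed dimension (k << d)

# Shared sparse random projection: each entry is nonzero with probability p,
# with a random sign, scaled so the map is approximately norm-preserving.
# (p and the scaling are illustrative choices.)
p = 0.1
S = rng.binomial(1, p, size=(k, d)) * rng.choice([-1.0, 1.0], size=(k, d))
S /= np.sqrt(k * p)

updates = rng.normal(size=(n, d))  # stand-in local model updates

# Each client compresses locally; SecAgg then reveals only the sum
# of the compressed vectors, never any individual one.
compressed = updates @ S.T            # shape (n, k)
secagg_sum = compressed.sum(axis=0)   # what the server observes

# Linearity: sum of projections == projection of the sum.
assert np.allclose(secagg_sum, S @ updates.sum(axis=0))

# The server decodes an estimate of the mean update from k numbers
# instead of d (here via the Moore-Penrose pseudo-inverse).
mean_estimate = np.linalg.pinv(S) @ (secagg_sum / n)
```

Because compression and aggregation commute, each client transmits only $k$ coordinates, which is where the $\Theta(n^2\varepsilon^2)$-bit budget enters.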

#### Author Information

##### Wei-Ning Chen (Stanford University)

Wei-Ning Chen is currently a Ph.D. student at Stanford EE, supported by the Stanford Graduate Fellowship (SGF). His research interests broadly lie in information-theoretic and algorithmic aspects of data science. He adopts tools mainly from information theory, theoretical machine learning, and statistical inference, with a current focus on distributed inference, federated learning, and differential privacy.