Poster in Workshop: Federated Learning and Analytics in Practice: Algorithms, Systems, Applications, and Opportunities

Federated Experiment Design under Distributed Differential Privacy

Wei-Ning Chen · Graham Cormode · Akash Bharadwaj · Peter Romov · Ayfer Ozgur


Abstract:

Experiment design has a rich history dating back to the early 1920s and has since found numerous critical applications across various fields. However, the data collected and used in experiments often contain sensitive personal information, so additional measures are required to protect individual privacy during data collection, storage, and usage. In this work, we focus on rigorously protecting users' privacy (under the notion of differential privacy (DP)) while minimizing the trust placed in service providers. Specifically, we consider the estimation of the average treatment effect (ATE) within Neyman's potential outcome framework, under DP and secure aggregation, a distributed protocol that enables a service provider to aggregate information without accessing individual data. To achieve DP, we design local privatization mechanisms that are compatible with secure aggregation. We show that when introducing DP noise, it is imperative to 1) carefully split the privacy budget to estimate both the mean and variance of the outcomes and 2) calibrate the confidence intervals to account for the DP noise. Finally, we present comprehensive experimental evaluations of our proposed schemes and show the privacy-utility trade-offs in experiment design.
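To make the recipe in the abstract concrete, the sketch below simulates DP-noised ATE estimation in the spirit described: each client adds integer-valued local noise (here a two-sided geometric mechanism, a common choice compatible with integer secure aggregation, not necessarily the paper's actual mechanism), the privacy budget is split between first and second moments, and the Neyman-style confidence interval is widened by the known DP noise variance. All function names, the even budget split, and the 95% normal-approximation interval are illustrative assumptions, not the authors' scheme.

```python
import math
import random

def two_sided_geometric(eps):
    """Discrete Laplace-style noise: difference of two geometric variables.
    Integer-valued, so sums of privatized reports remain integers, which is
    what makes this style of mechanism compatible with secure aggregation
    (an illustrative stand-in for the paper's mechanism)."""
    p = 1.0 - math.exp(-eps)
    g1 = int(math.log(1.0 - random.random()) / math.log(1.0 - p))
    g2 = int(math.log(1.0 - random.random()) / math.log(1.0 - p))
    return g1 - g2

def dp_ate(treat, control, eps):
    """Estimate the ATE from privatized reports, with a DP-aware 95% CI.
    Splits the budget evenly between means and second moments
    (the even split is an assumption for illustration)."""
    eps_mean, eps_var = eps / 2.0, eps / 2.0
    n_t, n_c = len(treat), len(control)
    # Each client reports a privatized value; the server only learns sums
    # (a real secure-aggregation protocol would sum modularly).
    s_t = sum(y + two_sided_geometric(eps_mean) for y in treat)
    s_c = sum(y + two_sided_geometric(eps_mean) for y in control)
    q_t = sum(y * y + two_sided_geometric(eps_var) for y in treat)
    q_c = sum(y * y + two_sided_geometric(eps_var) for y in control)
    m_t, m_c = s_t / n_t, s_c / n_c
    ate = m_t - m_c
    # Known variance of the noise added to each mean report.
    p = 1.0 - math.exp(-eps_mean)
    dp_var = 2.0 * (1.0 - p) / p ** 2
    # Plug-in outcome variances, clipped at 0 since noise can push them negative.
    v_t = max(q_t / n_t - m_t ** 2, 0.0)
    v_c = max(q_c / n_c - m_c ** 2, 0.0)
    # Neyman-style CI, widened by the per-report DP noise variance.
    se = math.sqrt((v_t + dp_var) / n_t + (v_c + dp_var) / n_c)
    return ate, (ate - 1.96 * se, ate + 1.96 * se)
```

With binary outcomes and a moderate budget, the point estimate stays close to the difference in group means while the interval visibly widens as eps shrinks, illustrating the privacy-utility trade-off the abstract refers to.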
