

Invited Talk
Workshop: International Workshop on Federated Learning for User Privacy and Data Confidentiality in Conjunction with ICML 2021 (FL-ICML'21)

Securing Secure Aggregation: Mitigating Multi-Round Privacy Leakage in Federated Learning

Salman Avestimehr


Abstract:

Secure aggregation is a critical component in federated learning, which enables the server to learn the aggregate model of the users without observing their local models. Conventionally, secure aggregation algorithms focus only on ensuring the privacy of individual users in a single training round. We contend that such designs can lead to significant privacy leakage over multiple training rounds, due to partial user selection/participation at each round of federated learning. In fact, we empirically show that conventional random user selection strategies for federated learning leak users' individual models within a number of rounds that is linear in the number of users. To address this challenge, we introduce a secure aggregation framework with multi-round privacy guarantees. In particular, we introduce a new metric to quantify the privacy guarantees of federated learning over multiple training rounds, and develop a structured user selection strategy that guarantees the long-term privacy of each user (over any number of training rounds). Our framework also carefully accounts for fairness and the average number of participating users at each round. We perform several experiments on various datasets in the IID and non-IID settings to demonstrate the performance improvement over the baseline algorithms, both in terms of privacy protection and test accuracy. We conclude the talk by discussing several open problems in this domain. (This talk is based on the following paper: https://arxiv.org/abs/2106.03328)
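
To illustrate the idea of structured user selection, here is a minimal sketch (not the authors' implementation; the batch sizes, function names, and parameters are illustrative assumptions) of selecting users in fixed batches rather than uniformly at random. Because a user's update only ever appears summed together with the same batch partners, differences of aggregates across rounds cannot isolate an individual model, which is the multi-round leakage that random per-user selection permits.

```python
# Illustrative sketch of batch-style user selection for multi-round privacy.
# Assumption: users are partitioned once into disjoint batches of size T,
# and each round selects whole batches. Any aggregate the server observes
# then mixes each participating user's model with at least T-1 others.
import random

def make_batches(num_users: int, batch_size: int) -> list[list[int]]:
    """Partition user ids 0..num_users-1 into disjoint batches of size batch_size."""
    users = list(range(num_users))
    random.shuffle(users)  # one-time random partition, fixed for all rounds
    return [users[i:i + batch_size] for i in range(0, num_users, batch_size)]

def select_users(batches: list[list[int]], batches_per_round: int) -> list[int]:
    """Select whole batches for a round, so batch members always participate jointly."""
    chosen = random.sample(batches, batches_per_round)
    return [user for batch in chosen for user in batch]

# Example: 24 users, batch size T = 4, two batches (8 users) per round.
batches = make_batches(num_users=24, batch_size=4)
for round_idx in range(3):
    participants = select_users(batches, batches_per_round=2)
    print(f"round {round_idx}: {sorted(participants)}")
```

The paper's actual selection strategy additionally balances how often each batch is chosen, to address the fairness and average-participation considerations mentioned in the abstract; this sketch only conveys the core batching idea.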