

Poster

SRATTA: Sample Re-ATTribution Attack of Secure Aggregation in Federated Learning.

Tanguy Marchand · Regis Loeb · Ulysse Marteau-Ferey · Jean Ogier du Terrail · Arthur Pignet

Exhibit Hall 1 #721

Abstract:

We consider a federated learning (FL) setting in which a machine learning model with a fully connected first layer is trained across different clients and a central server using FedAvg, and where the aggregation step can be performed with secure aggregation (SA). We present SRATTA, an attack relying only on aggregated models which, under realistic assumptions, (i) recovers data samples from the different clients, and (ii) groups together data samples coming from the same client. While sample recovery has already been explored in an FL setting, the ability to group samples per client, despite the use of SA, is novel. This poses a significant, previously unforeseen security threat to FL and effectively breaks SA. We show that SRATTA is both theoretically grounded and usable in practice on realistic models and datasets. We also propose counter-measures, and argue that clients should play an active role in guaranteeing their privacy during training.
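For intuition, the sketch below (a simplified illustration, not the authors' implementation) shows the well-known single-sample recovery principle for fully connected first layers that attacks of this family build on: the weight gradient of each first-layer neuron is the input scaled by that neuron's bias gradient, so the input can be read off a single-sample update. Under FedAvg with SA the server only observes updates averaged over many samples and clients, which is exactly the harder setting where SRATTA's recovery and per-client re-attribution apply; all variable names here are illustrative.

```python
# Minimal sketch (not the authors' code) of the observation that
# gradient-inversion attacks on fully connected first layers exploit:
# dL/dW_i = (dL/dz_i) * x and dL/db_i = dL/dz_i, so any neuron i with a
# non-zero bias gradient reveals the input sample as dL/dW_i / dL/db_i.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hidden = 8, 16

x = rng.normal(size=d_in)                 # hypothetical client sample
W = rng.normal(size=(d_hidden, d_in))     # first-layer weights
b = rng.normal(size=d_hidden)             # first-layer biases

# Forward pass: fully connected layer, ReLU, and a toy scalar loss.
z = W @ x + b
h = np.maximum(z, 0.0)
loss = h.sum()                            # stand-in for the rest of the network

# Backward pass by hand: dL/dh = 1, gated by the ReLU mask.
grad_z = (z > 0).astype(float)            # dL/dz
grad_W = np.outer(grad_z, x)              # dL/dW = dL/dz . x^T
grad_b = grad_z                           # dL/db = dL/dz

# Recovery: pick any active neuron and read the sample off its update row.
i = np.flatnonzero(grad_b)[0]
x_recovered = grad_W[i] / grad_b[i]
assert np.allclose(x_recovered, x)
```

With batched or securely aggregated updates, each weight row instead mixes contributions from several samples; disentangling those mixtures and attributing recovered samples to individual clients is the part the paper addresses.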
