Optimal Defenses Against Data Reconstruction Attacks
Abstract
Federated Learning (FL) is designed to prevent data leakage through collaborative model training without centralized data storage. However, it is vulnerable to reconstruction attacks that recover original training data from shared gradients. To optimize the trade-off between data leakage and utility loss, we first derive a theoretical lower bound on reconstruction error (over all attackers) for the two standard defense methods: adding noise and gradient pruning. We then customize these two defenses to be parameter- and model-specific, achieving the optimal trade-off between our obtained reconstruction lower bound and model utility. Experimental results validate that our methods outperform Gradient Noise and Gradient Pruning, protecting the training data better while also achieving better utility.
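
For context, the sketch below illustrates the two baseline defenses the abstract names, Gradient Noise and Gradient Pruning, as they are typically applied to a client's gradient before sharing. This is a minimal illustration assuming a PyTorch setting; the function names and default parameters (noise_std, prune_ratio) are ours for exposition and are not taken from the paper's method.

import torch

def gradient_noise(grad: torch.Tensor, noise_std: float = 0.01) -> torch.Tensor:
    """Gradient Noise baseline: add i.i.d. Gaussian noise to every entry."""
    return grad + noise_std * torch.randn_like(grad)

def gradient_pruning(grad: torch.Tensor, prune_ratio: float = 0.9) -> torch.Tensor:
    """Gradient Pruning baseline: zero out the smallest-magnitude entries,
    keeping only the top (1 - prune_ratio) fraction of coordinates."""
    k = max(1, int(grad.numel() * (1.0 - prune_ratio)))
    threshold = grad.abs().flatten().topk(k).values.min()
    return torch.where(grad.abs() >= threshold, grad, torch.zeros_like(grad))

# Example: defend a gradient before it is sent to the server.
g = torch.randn(1000)
g_noisy = gradient_noise(g)    # perturbed gradient
g_pruned = gradient_pruning(g) # sparsified gradient

Both baselines apply a single global setting (one noise scale, one pruning ratio) uniformly across all parameters; the paper's contribution is to make these choices parameter- and model-specific so as to optimize the trade-off between the derived reconstruction lower bound and model utility.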