Mechanisms for Privacy Preserving and Adversarial Training in Federated Learning
Parijat Dube · Jayaram Kallapalayam Radhakrishnan

We present two works targeting (i) preserving the privacy of training data and (ii) improving the adversarial robustness of models in federated learning (FL).

Privacy of training data is key to FL. Model updates were long assumed to be private, but recent reconstruction attacks (DLG, iDLG, IG) have demonstrated otherwise. Existing privacy-preservation techniques (encryption, secure multiparty computation, and addition of statistical noise to model updates) are useful but have drawbacks in either performance or model accuracy. We present recent work on the effective use of model shuffling combined with trusted execution environments (TEEs) for aggregation.
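The statistical-noise baseline mentioned above can be sketched as clipping a client's model update and adding Gaussian noise before it leaves the client. This is an illustrative sketch of that baseline only, not the shuffling/TEE mechanism proposed in the work; the function name and parameters are assumptions.

```python
import numpy as np

def noisy_update(update, clip_norm=1.0, noise_std=0.1, rng=None):
    """Clip a client's model update to clip_norm, then add Gaussian noise.

    Illustrative sketch of the statistical-noise baseline; the actual
    privacy mechanism in this work is model shuffling with TEE-based
    aggregation, not this function.
    """
    rng = rng or np.random.default_rng(0)
    norm = np.linalg.norm(update)
    # Scale the update down so its L2 norm is at most clip_norm.
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    # Gaussian noise masks the individual contribution at some accuracy cost.
    return clipped + rng.normal(0.0, noise_std, size=update.shape)
```

Stronger noise gives stronger privacy but degrades model accuracy, which is the performance/accuracy trade-off noted above.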

Adversarial training (AT) in the federated learning setting is challenging given the limited communication budget and non-IID data distribution among clients. We propose FedDynAT, a novel algorithm for efficient AT in federated learning. FedDynAT builds on techniques proposed for preventing catastrophic forgetting in federated learning with non-IID data, augmenting them with a dynamic local AT schedule. FedDynAT improves convergence time by up to a factor of 14x under a limited communication budget and achieves high accuracy at convergence compared to other state-of-the-art schemes.
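One way to picture a dynamic local AT schedule is to vary the number of local adversarial-training epochs per communication round. The geometric-decay rule below is a hypothetical sketch of that idea; the exact schedule used by FedDynAT may differ, and all names and constants here are assumptions.

```python
def local_at_epochs(round_idx, e0=5, decay=0.9, e_min=1):
    """Hypothetical dynamic local AT schedule.

    Start with e0 local adversarial-training epochs and decay the count
    geometrically with each communication round, never dropping below
    e_min. Early rounds do more local AT; later rounds do less, limiting
    client drift under non-IID data.
    """
    return max(e_min, int(round(e0 * decay ** round_idx)))
```

A server would call this once per round and broadcast the epoch count to clients along with the global model.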

Author Information

Parijat Dube
Jayaram Kallapalayam Radhakrishnan (IBM Research)
