

Poster
in
Workshop: Federated Learning and Analytics in Practice: Algorithms, Systems, Applications, and Opportunities

Beyond Secure Aggregation: Scalable Multi-Round Secure Collaborative Learning

Umit Basaran · Xingyu Lu · Basak Guler


Abstract:

Privacy-preserving machine learning (PPML) has achieved exciting breakthroughs in the secure collaborative training of machine learning models under formal information-theoretic privacy guarantees. Despite these recent advances, the communication bottleneck remains a major obstacle to scaling such training to large neural networks. To address this challenge, we introduce the first end-to-end multi-round multi-party neural network training framework with linear communication complexity under formal information-theoretic privacy guarantees. Our key contribution is a scalable secure computing mechanism for iterative polynomial operations that incurs only linear communication overhead, a significant improvement over the quadratic overhead of the state-of-the-art, while providing formal end-to-end multi-round information-theoretic privacy guarantees. In doing so, our framework matches the adversary tolerance, resilience to user dropouts, and model accuracy of the state-of-the-art, while addressing a key challenge in scalable training.
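For readers unfamiliar with the underlying primitive, the sketch below is a generic illustration of Shamir secret sharing over a prime field, the standard building block for information-theoretic secure computation on distributed data. It is not the paper's linear-communication mechanism; the field size, number of users N, and threshold T are illustrative assumptions. It shows why linear operations on shares are communication-free, while iterated polynomial (multiplicative) operations raise the share degree, which is the source of the communication cost the paper targets.

```python
# Minimal sketch of Shamir secret sharing over a prime field (illustrative only,
# not the paper's mechanism). Any T or fewer shares reveal nothing about the
# secret; additions are done locally on shares, and one share-wise multiplication
# doubles the polynomial degree, so reconstruction then needs 2T + 1 shares.

import random

P = 2**61 - 1          # prime field modulus (assumption for illustration)
N, T = 5, 2            # N users, privacy against up to T colluding users

def share(secret, n=N, t=T):
    """Split `secret` into n Shamir shares with privacy threshold t."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    # Share for user i is the degree-t polynomial evaluated at x = i.
    return [(i, sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 to recover the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

# Linear operations need no communication: users add their shares locally.
a_shares, b_shares = share(7), share(11)
sum_shares = [(x, (ya + yb) % P) for (x, ya), (_, yb) in zip(a_shares, b_shares)]
assert reconstruct(sum_shares[:T + 1]) == 18

# A share-wise multiplication doubles the polynomial degree, so reconstruction
# needs 2T + 1 shares; controlling the communication cost of iterating such
# polynomial operations is the challenge addressed in this work.
prod_shares = [(x, (ya * yb) % P) for (x, ya), (_, yb) in zip(a_shares, b_shares)]
assert reconstruct(prod_shares[:2 * T + 1]) == 77
```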
