

Talk in Workshop: Federated Learning for User Privacy and Data Confidentiality

Keynote Session 1: Balancing Efficiency and Security in Federated Learning, by Qiang Yang (WeBank)

Qiang Yang


Abstract:

Federated learning systems need to balance the efficiency and security of machine learning algorithms while maintaining model accuracy. In this talk we discuss this trade-off in two settings. The first is when two collaborating organisations wish to transfer knowledge from one to the other via a federated learning framework; we present a federated transfer learning algorithm that improves both security and performance while preserving privacy. The second is when differential privacy is used in a federated learning framework to ensure efficiency, which may degrade security. To address this, we employ a dual-headed network architecture that guarantees training data privacy by applying secret perturbations to the original gradients, while maintaining the high performance of the global shared model. We find that this combination of secret and public networks provides a preferable alternative to DP-based mechanisms in federated learning applications.
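The dual-headed idea above can be pictured roughly as follows. This is a minimal, illustrative sketch, not the speaker's implementation: the `DualHeadClient` class, the toy linear model, the noise scale, and the FedAvg-style aggregation are all assumptions made for the example. The point it demonstrates is that only perturbed gradients leave each device, while a locally kept secret head compensates for the perturbation.

```python
# Minimal sketch (illustrative only): each client shares perturbed gradients for
# a public head, while a secret head and the perturbation source stay on-device.
import numpy as np

class DualHeadClient:
    def __init__(self, dim, seed):
        self.public_head = np.zeros(dim)        # parameters synced with the server
        self.secret_head = np.zeros(dim)        # parameters kept local
        self.rng = np.random.default_rng(seed)  # private source of perturbations

    def local_gradient(self, x, y):
        # Toy squared-error gradient for a linear model; stands in for backprop.
        pred = x @ (self.public_head + self.secret_head)
        return (pred - y) * x

    def shared_update(self, x, y, noise_scale=0.1, lr=0.01):
        g = self.local_gradient(x, y)
        perturbation = self.rng.normal(0.0, noise_scale, size=g.shape)
        # The secret head locally absorbs the offset introduced by the perturbation.
        self.secret_head -= lr * (g - perturbation)
        # Only the perturbed gradient ever leaves the device.
        return g + perturbation

def server_aggregate(updates):
    # FedAvg-style mean over the clients' shared (perturbed) gradients.
    return np.mean(updates, axis=0)

clients = [DualHeadClient(dim=4, seed=s) for s in range(3)]
x, y = np.ones(4), 1.0
global_grad = server_aggregate([c.shared_update(x, y) for c in clients])
for c in clients:
    c.public_head -= 0.01 * global_grad  # all clients apply the aggregated step
```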

Biography: Qiang Yang is Chief Artificial Intelligence Officer of WeBank and Chair Professor in the Department of Computer Science and Engineering at the Hong Kong University of Science and Technology. He is the Conference Chair of AAAI-21, President of the Hong Kong Society of Artificial Intelligence and Robotics (HKSAIR), and a former President of IJCAI (2017-2019). He is a Fellow of AAAI, ACM, IEEE, and AAAS. His research interests include transfer learning and federated learning. He is the founding Editor-in-Chief of two journals: IEEE Transactions on Big Data and ACM Transactions on Intelligent Systems and Technology.
