GuardHFL: Privacy Guardian for Heterogeneous Federated Learning

Hanxiao Chen · Meng Hao · Hongwei Li · Kangjie Chen · Guowen Xu · Tianwei Zhang · Xilin Zhang

Exhibit Hall 1 #524


Heterogeneous federated learning (HFL) enables clients with different computation and communication capabilities to collaboratively train their own customized models via a query-response paradigm on auxiliary datasets. However, this paradigm raises serious privacy concerns, since the highly sensitive query samples and response predictions can be leaked. We put forth GuardHFL, a first-of-its-kind efficient and privacy-preserving HFL framework. GuardHFL is equipped with a novel HFL-friendly secure querying scheme built on lightweight secret sharing and symmetric-key techniques. At its core are two customized protocols for secure multiplication and comparison, which substantially boost execution efficiency. Extensive evaluations demonstrate that GuardHFL significantly outperforms alternative instantiations based on existing state-of-the-art techniques in both runtime and communication cost.
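The paper's customized protocols are not detailed in this abstract. As background, the flavor of additive secret sharing that such schemes build on can be illustrated with a standard Beaver-triple multiplication over a ring; the sketch below is a generic textbook construction (function names and the ring size are illustrative assumptions, not the authors' actual protocol):

```python
import random

MOD = 2**32  # ring Z_{2^32}; power-of-two rings are a common choice in MPC

def share(x):
    """Split x into two additive shares: x = x0 + x1 (mod MOD)."""
    x0 = random.randrange(MOD)
    return x0, (x - x0) % MOD

def reconstruct(x0, x1):
    """Recombine two additive shares into the secret value."""
    return (x0 + x1) % MOD

def gen_triple():
    """Dealer generates a Beaver triple: shares of random a, b, and c = a*b."""
    a, b = random.randrange(MOD), random.randrange(MOD)
    return share(a), share(b), share((a * b) % MOD)

def beaver_mul(x_sh, y_sh, triple):
    """Compute shares of x*y from shares of x and y using a Beaver triple."""
    (a0, a1), (b0, b1), (c0, c1) = triple
    # Both parties open e = x - a and f = y - b (these reveal nothing about x, y).
    e = reconstruct((x_sh[0] - a0) % MOD, (x_sh[1] - a1) % MOD)
    f = reconstruct((y_sh[0] - b0) % MOD, (y_sh[1] - b1) % MOD)
    # z_i = c_i + e*b_i + f*a_i, with one party adding e*f, yields shares of x*y:
    # c + e*b + f*a + e*f = ab + (x-a)b + (y-b)a + (x-a)(y-b) = xy.
    z0 = (c0 + e * b0 + f * a0 + e * f) % MOD
    z1 = (c1 + e * b1 + f * a1) % MOD
    return z0, z1

x_sh, y_sh = share(7), share(6)
z_sh = beaver_mul(x_sh, y_sh, gen_triple())
print(reconstruct(*z_sh))  # → 42
```

Each share alone is uniformly random and reveals nothing about the secret; only the combination of both shares reconstructs it, which is what lets query samples and predictions stay hidden from any single party.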
