
Spotlight

FedScale: Benchmarking Model and System Performance of Federated Learning at Scale

Fan Lai · Yinwei Dai · Sanjay Singapuram · Jiachen Liu · Xiangfeng Zhu · Harsha Madhyastha · Mosharaf Chowdhury

Room 310

Abstract:

We present FedScale, a federated learning (FL) benchmarking suite with realistic datasets and a scalable runtime to enable reproducible FL research. FedScale datasets encompass a wide range of critical FL tasks, ranging from image classification and object detection to language modeling and speech recognition. Each dataset comes with a unified evaluation protocol using real-world data splits and evaluation metrics. To reproduce realistic FL behavior, FedScale contains a scalable and extensible runtime. It provides high-level APIs to implement FL algorithms, deploy them at scale across diverse hardware and software backends, and evaluate them at scale, all with minimal developer effort. We combine the two to perform systematic benchmarking experiments and highlight potential opportunities for heterogeneity-aware co-optimizations in FL. FedScale is open-source and actively maintained by contributors from different institutions at http://fedscale.ai. We welcome feedback and contributions from the community.
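As background for the kind of FL algorithms such a runtime benchmarks, the core server-side step of federated averaging (FedAvg) can be sketched generically. This is an illustrative sketch only, not FedScale's actual API; all function names, the synthetic data, and the least-squares local objective are hypothetical stand-ins for real client training:

```python
import numpy as np

def local_update(weights, data, lr=0.1):
    # Hypothetical client step: one gradient-descent step on a
    # least-squares objective (a stand-in for real local training).
    X, y = data
    grad = X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def fedavg_round(global_weights, client_datasets):
    # Each client trains locally on its own data; the server then
    # averages the returned models, weighted by local dataset size.
    updates = [local_update(global_weights.copy(), d) for d in client_datasets]
    sizes = np.array([len(d[1]) for d in client_datasets], dtype=float)
    return np.average(updates, axis=0, weights=sizes)

# Tiny synthetic example: two clients whose data follows y = 2x.
rng = np.random.default_rng(0)
clients = []
for n in (20, 40):
    X = rng.normal(size=(n, 1))
    clients.append((X, 2.0 * X[:, 0]))

w = np.zeros(1)
for _ in range(100):
    w = fedavg_round(w, clients)
```

In a realistic benchmark, the interesting behavior comes from the heterogeneity the abstract mentions: clients differ in data distribution, hardware speed, and availability, which is exactly what a runtime like FedScale's is designed to reproduce at scale.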
