Poster
in
Workshop: 2nd Workshop on Advancing Neural Network Training: Computational Efficiency, Scalability, and Resource Optimization (WANT@ICML 2024)
Adaptive Model Pruning in Federated Learning through Loss Exploration
Christian Internò · Elena Raponi · Niki van Stein · Thomas Bäck · Markus Olhofer · Yaochu Jin · Barbara Hammer
The rapid proliferation of smart devices, coupled with the advent of 6G networks, has profoundly reshaped the domain of collaborative machine learning. Alongside growing privacy and security concerns in sensitive fields, these developments have positioned federated learning (FL) as a pivotal technology for decentralized model training. Despite its vast potential, FL encounters challenges such as elevated communication costs, computational constraints, and the complexities of non-IID data distributions. We introduce AutoFLIP, an innovative approach that utilizes a federated loss exploration phase to drive adaptive model pruning. This mechanism automatically identifies and prunes unimportant model parameters by distilling knowledge of model gradient behavior across different non-IID client losses, thereby optimizing computational efficiency and enhancing model performance in resource-constrained scenarios. Extensive experiments across various datasets and FL tasks reveal that AutoFLIP not only efficiently accelerates global convergence but also achieves superior accuracy and robustness compared to traditional methods. On average, AutoFLIP reduces computational overhead by 48.8% and communication costs by 35.5%, while maintaining high accuracy. By significantly reducing these overheads, AutoFLIP paves the way for efficient FL deployment in real-world applications, from healthcare to smart cities.
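To make the abstract's mechanism concrete, here is a minimal, hypothetical sketch (not the authors' code) of how a loss-exploration phase could score parameter importance: the shared model's loss is evaluated on several synthetic non-IID clients, gradient magnitudes are accumulated across those client losses, and the lowest-scoring fraction of parameters is pruned. The model, data, scoring rule, and pruning ratio are all illustrative assumptions.

```python
# Hypothetical sketch of loss-exploration-guided pruning (illustrative only).
# Assumptions: a small MLP, synthetic non-IID clients, importance scored by the
# average gradient magnitude of each parameter across client losses.
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
loss_fn = nn.CrossEntropyLoss()

# Synthetic non-IID clients: each client's feature distribution is shifted differently.
clients = []
for shift in (-1.0, 0.0, 1.0):
    x = torch.randn(128, 20) + shift
    y = (x.sum(dim=1) > shift * 20).long()
    clients.append((x, y))

# Loss exploration: accumulate gradient magnitudes of the shared model
# evaluated on every client's local loss.
scores = {name: torch.zeros_like(p) for name, p in model.named_parameters()}
for x, y in clients:
    model.zero_grad()
    loss_fn(model(x), y).backward()
    for name, p in model.named_parameters():
        scores[name] += p.grad.abs()

# Prune the parameters whose accumulated gradient signal is smallest.
prune_fraction = 0.5  # illustrative pruning ratio
all_scores = torch.cat([s.flatten() for s in scores.values()])
threshold = torch.quantile(all_scores, prune_fraction)

masks = {name: (s > threshold).float() for name, s in scores.items()}
with torch.no_grad():
    for name, p in model.named_parameters():
        p.mul_(masks[name])  # zero out low-importance parameters

kept = sum(int(m.sum()) for m in masks.values())
total = sum(m.numel() for m in masks.values())
print(f"Kept {kept}/{total} parameters after exploration-guided pruning")
```

In an FL round, such a mask would be shared with clients so that only the retained parameters are trained and communicated, which is how pruning of this kind can cut both computation and communication costs.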