Towards Understanding Generalization of Federated Adversarial Learning: Perspective of Algorithmic Stability
Abstract
Federated Adversarial Learning (FAL) enhances model robustness by integrating adversarial training into the federated learning framework. Despite recent advances proposing efficient FAL algorithms, existing work has focused mainly on convergence properties, leaving their generalization capabilities poorly understood. To address this gap, we present the first unified theoretical analysis of FAL generalization through the lens of algorithmic stability. We first analyze general FAL algorithms based on stochastic gradient descent and derive perturbation-dependent generalization bounds, which reveal that stronger adversarial attacks can degrade generalization. To mitigate the impact of adversarial perturbations, we further leverage Moreau envelope optimization and derive a perturbation-independent bound, showing that this smoothing improves both the robustness and generalization of the federated model. Finally, we extend our analysis to the practical black-box setting, demonstrating that zeroth-order optimization techniques can maintain both robustness and generalization even without access to local gradients.
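For context, the stability framework underlying this analysis can be summarized as follows; the notation here ($A$, $S$, $z$, $\ell$, $\beta$, $\lambda$) is chosen for exposition and is not the paper's formal setup. An algorithm $A$ is $\beta$-uniformly stable if replacing any single training example changes its loss on any point $z$ by at most $\beta$:
\[
\sup_{S \simeq S'} \; \sup_{z} \; \bigl| \ell(A(S); z) - \ell(A(S'); z) \bigr| \le \beta,
\]
where $S$ and $S'$ are neighboring datasets differing in one example; uniform stability in turn bounds the expected generalization gap, $\mathbb{E}\bigl[R(A(S)) - \widehat{R}_S(A(S))\bigr] \le \beta$. The Moreau envelope of a loss $f$ with smoothing parameter $\lambda > 0$ is
\[
f_{\lambda}(w) \;=\; \min_{v} \Bigl\{ f(v) + \tfrac{1}{2\lambda}\,\|v - w\|^{2} \Bigr\},
\]
a smoothed surrogate whose minimization underlies the perturbation-independent bound referenced above.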