Coupled Trigger Optimization and Vulnerable Parameter Alignment for Persistent Backdoor Attacks on Federated Learning
Abstract
Federated learning (FL) is vulnerable to backdoor attacks, yet sustaining backdoor effectiveness under repeated aggregation rounds remains challenging. Existing methods often rely on heuristic trigger designs or indiscriminate parameter manipulation, leading to rapid decay of the backdoor or to detectable anomalies. In this work, we view FL backdoor persistence through the lens of optimization dynamics and argue that long-lasting attacks require alignment between trigger-induced representations and aggregation-stable parameter directions. Based on this insight, we propose Coupled Trigger Optimization and Vulnerable Parameter Alignment (CTO-VPA), an FL backdoor attack method. By constraining malicious updates to this coupled subspace, CTO-VPA embeds backdoor behaviors into optimization-stable directions while preserving benign performance. Experiments across multiple datasets and defense settings show that CTO-VPA achieves substantially improved persistence and robustness compared with prior attacks, highlighting the importance of trigger-parameter coupling in FL settings.