Weaving in the Clouds: Achieving Synergistic Collaboration among LLM Agents via Federated Learning
Abstract
Multi-Agent Systems (MAS) powered by Large Language Models (LLMs) have recently emerged as a powerful paradigm for solving complex workflow-structured tasks through expert collaboration. However, the data that make such collaboration effective are typically distributed across organizations and cannot be centrally pooled due to privacy, intellectual property, and compliance constraints. Federated Learning preserves data locality, yet most federated paradigms treat clients as independent and fail to capture the workflow dependencies that are essential for coherent multi-stage collaboration. Data locality and workflow dependency are orthogonal requirements, and the key challenge arises when both must be satisfied simultaneously: federated, workflow-aware collaboration. We introduce FedWave, a framework that enables LLM-based experts to solve sequential workflows under strict privacy constraints. FedWave integrates three components: (i) a Value Chain Layer that encodes inter-stage dependencies with communication-efficient federated LoRA adaptation; (ii) a server-side Mixture-of-Experts (MoE) router that performs input-conditioned expert fusion at inference time while retaining standard federated aggregation during training; and (iii) a Direct Preference Optimization (DPO) stage that aligns collaborative outputs using router-induced preferences. Experiments show that FedWave consistently outperforms strong federated baselines and remains competitive with centralized multi-agent systems without compromising data privacy. Code is available at https://anonymous.4open.science/r/FedWave-111A.
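To make the input-conditioned expert fusion concrete, the following is a minimal sketch (not the paper's implementation) of how a server-side router could blend per-client LoRA adapters at inference time. All names here are hypothetical: each client contributes a low-rank update (B, A) to a shared frozen weight W, and a gating function produces input-dependent mixture weights over the experts.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, n_experts = 8, 2, 3  # hypothetical model dim, LoRA rank, number of clients

# Hypothetical per-client LoRA adapters: each expert's update is B @ A (rank r).
adapters = [(rng.normal(size=(d, r)) * 0.1, rng.normal(size=(r, d)) * 0.1)
            for _ in range(n_experts)]
W = rng.normal(size=(d, d)) * 0.1  # shared frozen base weight

def softmax(z):
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Hypothetical router: a linear gate mapping the input to expert logits.
G = rng.normal(size=(n_experts, d))

def moe_forward(x):
    gate = softmax(G @ x)  # input-conditioned expert weights, sum to 1
    # Fuse experts: gate-weighted sum of each adapter's low-rank contribution.
    delta = sum(g * (B @ (A @ x)) for g, (B, A) in zip(gate, adapters))
    return W @ x + delta, gate

x = rng.normal(size=d)
y, gate = moe_forward(x)
```

During training one could instead average the adapters uniformly (standard federated aggregation), reserving the input-conditioned gate for inference, as the abstract describes.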