Federated Manifold Learning (FML): Tackling Domain Heterogeneity with Structural Knowledge Transfer
Abstract
Federated Learning (FL) faces significant challenges due to domain heterogeneity, where data from different clients exhibit substantial statistical shifts that hinder the generalization of the global model. Although existing methods attempt to mitigate this by exchanging class prototypes, they fall short by representing an entire class's complex distribution with a single point. This oversimplification disregards the rich structural information within the data, especially across diverse domains. To address this limitation, we propose a paradigm shift from point-based representation to structure-based knowledge transfer. We introduce Federated Manifold Learning (FML), a novel framework that leverages perceptual manifolds—the intrinsic geometric structures of classes in the feature space—as rich knowledge carriers. In FML, clients transmit compressed manifolds, which are adaptively fused on the server using an attention-based Manifold Mutual Learning (MML) mechanism. This process enables domain-specific structures to learn from each other, creating a unified yet flexible global convergence target. Manifold-guided local training, enforced by a manifold approximation loss and a separation loss, further aligns local models with this global structure. Extensive experiments on the Digits and Office31 benchmarks demonstrate that FML substantially outperforms state-of-the-art methods, achieving accuracy improvements of up to 6.48%.
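To make the attention-based fusion step concrete, below is a minimal sketch of how per-client compressed manifolds could be mutually attended and fused on the server. All names (`attention_fuse`) and the manifold encoding (each class manifold compressed to its top-k principal directions, stored as a `(d, k)` matrix) are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def attention_fuse(manifolds):
    """Fuse per-client class manifolds into one global target.

    manifolds: list of (d, k) arrays, one per client -- a hypothetical
    compressed encoding of a class's perceptual manifold (e.g. its
    top-k principal feature directions).
    """
    # Flatten each client's manifold so similarity is a simple dot product.
    flat = np.stack([m.reshape(-1) for m in manifolds])        # (C, d*k)
    # Scaled pairwise similarity plays the role of attention scores.
    scores = flat @ flat.T / np.sqrt(flat.shape[1])            # (C, C)
    # Row-wise softmax: how much each client's structure attends to others.
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    # Each manifold is replaced by an attention-weighted mixture, so
    # domain-specific structures inform one another before averaging.
    fused = weights @ flat                                     # (C, d*k)
    # The mean of the fused manifolds serves as the global convergence target.
    return fused.mean(axis=0).reshape(manifolds[0].shape)
```

In this sketch, clients whose manifolds are geometrically similar receive higher mutual attention, so the fused target preserves shared structure while down-weighting outlier domains; the paper's MML mechanism presumably learns these attention weights rather than deriving them from raw similarity.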