Controlled Collaboration Geometry for Personalized Federated Learning
Abstract
In personalized federated learning (PFL), a collaboration graph specifies how client models are aggregated. However, without constraints on the collaboration geometry, training can drift into two degenerate regimes: global consensus or spontaneous clustering. This paper provides a unified dynamical analysis showing that, under the same budget of representative models, collaborative PFL is more expressive and achieves higher-order approximation accuracy than clustered PFL. An upper bound on client disagreement further reveals two degeneration mechanisms: overly strong collaboration drives consensus (reducing to standard federated learning), while similarity-driven weight updates make the graph nearly reducible and induce self-clustering (collapsing to clustered PFL). Motivated by these findings, we propose pFedCCG, which preserves the expressivity advantage via controlled collaboration geometry (CCG): it builds a static similarity-based collaboration template decoupled from training, optimizes a Markovian collaboration matrix with a prescribed stationary distribution via reversible parameterization and Euclidean projection, and schedules collaboration strength to avoid self-clustering. Experiments across diverse heterogeneity settings show consistent personalization gains and markedly reduced collapse and self-clustering. Code will be available at https://anonymous.4open.science/r/pFedCCG-CB88.
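As a minimal illustration of the "prescribed stationary distribution via reversible parameterization" idea mentioned above, the sketch below constructs a row-stochastic, reversible Markov matrix with a chosen stationary distribution using a Metropolis-Hastings construction. This is a hypothetical sketch, not the paper's exact parameterization; the names `pi` and `reversible_collab_matrix` are illustrative.

```python
import numpy as np

def reversible_collab_matrix(pi: np.ndarray) -> np.ndarray:
    """Return a reversible Markov matrix W with stationary distribution pi.

    A symmetric uniform proposal Q combined with Metropolis acceptance
    ratios guarantees detailed balance: pi[i] * W[i, j] == pi[j] * W[j, i],
    which in turn makes pi the stationary distribution of W.
    """
    n = len(pi)
    Q = np.full((n, n), 1.0 / n)  # symmetric proposal matrix
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                W[i, j] = Q[i, j] * min(1.0, pi[j] / pi[i])
        W[i, i] = 1.0 - W[i].sum()  # self-weight keeps each row stochastic
    return W

# Example: three clients with unequal prescribed collaboration mass.
pi = np.array([0.5, 0.3, 0.2])
W = reversible_collab_matrix(pi)
# By construction: rows of W sum to 1, and pi @ W == pi (up to float error).
```

Reversibility is a convenient sufficient condition here: detailed balance fixes the stationary distribution without requiring an explicit eigenvector computation, which is presumably why a reversible parameterization is attractive for optimizing the collaboration matrix.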