Personalized Additive Modeling for Multi-level Federated Learning
Abstract
Contemporary AI faces the challenge of balancing generality with user-specific personalization. In federated learning (FL), this challenge is amplified by highly heterogeneous client data with complex non-IID patterns beyond standard modeling assumptions. Many existing FL methods are designed for relatively restricted heterogeneity settings (e.g., a fixed number of clusters or a fixed form of personalization), limiting their robustness under complex structures. In this work, we study FL from a \emph{multi-level non-IID} perspective, in which client similarity is approximated by multiple granularities of shared knowledge: global, subgroup, and client-specific components. This view captures coarse-to-fine relationships among clients while requiring little prior knowledge of task boundaries. Building on this insight, we propose \emph{Federated Multi-level Additive Modeling} (FeMAM), which learns multiple levels of shareable models and constructs each client's personalized predictor by additively composing the models across levels. Rather than committing to a fixed structure, FeMAM allows models to be grown and pruned dynamically during training, adapting to diverse federated scenarios. Despite maintaining multiple models, FeMAM remains computationally efficient by activating only a small subset (one level) of models for training at a time. Extensive experiments show that FeMAM effectively approximates complex non-IID structures and consistently outperforms representative clustered and personalized FL baselines.
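The additive composition described above can be illustrated with a minimal sketch. All names here (make_linear, personalized_predict, the specific levels and weights) are hypothetical for illustration, not the paper's implementation: a client's prediction is the sum of outputs from models at each level of sharing it participates in.

```python
# Minimal sketch of multi-level additive prediction (hypothetical names,
# not the paper's actual implementation). A client's personalized
# predictor is the sum of per-level model outputs.

def make_linear(w, b):
    """A toy scalar linear model f(x) = w * x + b."""
    return lambda x: w * x + b

# Hypothetical levels of shared knowledge for one client:
global_model = make_linear(1.0, 0.0)    # shared by all clients
subgroup_model = make_linear(0.5, 1.0)  # shared within this client's subgroup
client_model = make_linear(0.0, -0.2)   # private to this client

def personalized_predict(x, levels):
    """Additive composition across levels: sum the per-level outputs."""
    return sum(f(x) for f in levels)

levels = [global_model, subgroup_model, client_model]
print(personalized_predict(2.0, levels))  # 2.0 + 2.0 + (-0.2) = 3.8
```

Growing a level corresponds to appending a new model to a client's list, and pruning to removing one, so the structure can adapt during training without changing the prediction rule.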