Fair-FedMOE: Group-Fair One-Shot Federated Learning via Prototype-Guided Experts for Medical Imaging Analysis
Abstract
Group fairness requires equitable performance across demographic subgroups in medical image analysis. However, current fine-tuned foundation models (FMs) exhibit significant subgroup disparities. One-shot federated learning (OFL) can potentially mitigate this by leveraging cross-institutional data diversity within a single communication round. However, heterogeneous data distributions across medical institutions may cause OFL local models to diverge severely, producing parameter conflicts that amplify disparity upon aggregation. To address these challenges, we propose Fair-FedMOE, a group-fair OFL framework for medical FMs. During local training, Fairness-aware Expert Routing directs samples to group-specific experts via learnable prototypes, enabling subgroup-specialized learning that captures group-specific features without inter-group interference. During model aggregation, Prototype-guided Differential Aggregation computes personalized weights from prototype similarity and applies differentiated aggregation strategies to filter conflicting updates. We also propose RES-AUC, a Rawlsian justice-inspired metric based on worst-group performance that remains stable as the number of groups increases. Comprehensive experiments on diverse retinal datasets spanning multiple modalities and diseases, using both retinal-specific and general-purpose FMs, show consistent fairness gains without sacrificing accuracy. Code is available at https://anonymous.4open.science/r/Fair-FedMOE-2624.
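The Rawlsian worst-group idea behind RES-AUC can be sketched as follows. The paper's exact RES-AUC formula is not reproduced here; this is only an illustrative minimum-over-subgroups AUC, and all function names are our own.

```python
# Illustrative sketch (NOT the paper's exact RES-AUC definition): a
# Rawlsian worst-group metric scores a model by its most disadvantaged
# subgroup, so it stays stable as the number of groups grows.

def auc(labels, scores):
    """Binary AUC via pairwise comparison of positive/negative scores."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def worst_group_auc(labels, scores, groups):
    """Minimum AUC over demographic subgroups (Rawlsian criterion)."""
    per_group = {}
    for g in set(groups):
        idx = [i for i, gg in enumerate(groups) if gg == g]
        ys = [labels[i] for i in idx]
        # Skip groups with only one class, where AUC is undefined.
        if 0 < sum(ys) < len(ys):
            per_group[g] = auc(ys, [scores[i] for i in idx])
    return min(per_group.values()), per_group

# Toy example: group A is well served, group B is not, so the
# worst-group score reflects B's performance rather than the average.
labels = [1, 0, 1, 0, 1, 0]
scores = [0.9, 0.1, 0.8, 0.4, 0.3, 0.7]
groups = ["A", "A", "A", "B", "B", "B"]
worst, per_group = worst_group_auc(labels, scores, groups)
print(worst, per_group)  # worst-group AUC is 0.0 (group B)
```

Averaging per-group AUCs would mask group B's failure; taking the minimum makes the disparity explicit, which is the property the abstract attributes to RES-AUC.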