The Hidden Risk: Membership Inference Attacks on Multimodal Federated Learning via Modality Imbalance
Abstract
Federated learning (FL) faces significant challenges from modality heterogeneity, which motivates multimodal federated learning (MFL) to leverage complementary modalities from decentralized clients for improved performance. However, modality imbalance introduces a new attack surface that makes MFL more vulnerable to membership inference attacks (MIAs), an issue that remains largely unexplored. In this work, we present the first systematic study of MIAs against MFL and propose a modality-aware attack framework. We show that multimodal models are inherently more susceptible to MIAs because modalities contribute heterogeneously to the joint model, and that existing attacks are suboptimal because they treat the multimodal parameters as a monolithic whole. By mounting MIAs on individual modalities, we find that (i) attacking the dominant modality alone achieves accuracy comparable to attacking the full model at lower overhead, and (ii) different modalities expose distinct membership patterns. To identify members exhibiting these distinct patterns, we propose a modality-aware framework that exploits cross-modal performance gaps to adaptively select the attack modality and calibrate the inference results. Experiments on three datasets show that our approach outperforms existing baselines across multiple metrics.