

Poster

FedMBridge: Bridgeable Multimodal Federated Learning

Jiayi Chen · Aidong Zhang


Abstract:

Multimodal Federated Learning (MFL) addresses the setting in which multiple clients, each focusing on different modality types (e.g., image, video, text, audio), work together to improve their local personalized models in a privacy-preserving manner. However, traditional MFL works rely on a restrictive design of compositional neural architectures to ensure that information sharing can be achieved via blockwise model aggregation, which limits their applicability to real-world Architecture-personalized MFL (AMFL) scenarios, where clients adopt diverse multimodal fusion strategies and there is no restriction on local architecture design. The challenge in AMFL is how to automatically and efficiently handle the two heterogeneity patterns (i.e., statistical and architecture heterogeneity) while maximizing the beneficial information sharing among clients. To address this challenge, we propose FedMBridge, which leverages a topology-aware hypernetwork to act as a bridge that automatically balances and digests the two heterogeneity patterns in a communication-efficient manner. Our experiments on four AMFL simulations demonstrate the efficiency and effectiveness of our proposed approach.
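To make the hypernetwork-as-bridge idea concrete, the following is a minimal toy sketch, not the paper's actual architecture: it assumes each client is summarized by a small topology embedding (`EMB_DIM` is an invented constant), and a shared per-shape basis plays the role of the hypernetwork. Clients with different local architectures still share information, because any layers of a common shape draw on the same shared basis.

```python
import random

random.seed(0)

EMB_DIM = 4  # dimensionality of each client's topology embedding (assumed)

# Shared hypernetwork state held at the server: one set of basis tensors per
# parameter shape, created lazily so architecturally heterogeneous clients
# can still overlap on shapes they have in common.
shared_basis = {}

def basis_for(shape):
    """Return (and cache) EMB_DIM shared basis tensors for a parameter shape."""
    if shape not in shared_basis:
        n = shape[0] * shape[1]
        shared_basis[shape] = [[random.gauss(0, 0.1) for _ in range(n)]
                               for _ in range(EMB_DIM)]
    return shared_basis[shape]

def generate_weights(embedding, layer_shapes):
    """Hypernetwork sketch: map a client's topology embedding to one flat
    weight vector per layer of its client-specific architecture."""
    weights = []
    for shape in layer_shapes:
        basis = basis_for(shape)
        flat = [sum(e * b[i] for e, b in zip(embedding, basis))
                for i in range(shape[0] * shape[1])]
        weights.append(flat)
    return weights

# Two clients with different fusion architectures (architecture
# heterogeneity); only the second layer's shape differs.
client_a = generate_weights([0.5, -0.2, 0.1, 0.9], [(8, 4), (4, 2)])
client_b = generate_weights([0.3, 0.7, -0.4, 0.2], [(8, 4), (6, 3)])
```

In this sketch, communication efficiency comes from exchanging only the low-dimensional embeddings and the shared basis rather than full per-client models; the real FedMBridge design conditions the hypernetwork on architecture topology in a learned, rather than shape-keyed, way.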
