Towards Rule-Based Knowledge Sharing in Federated Learning
Abstract
Federated learning often faces both data and model heterogeneity, with the latter typically more challenging. Architectural differences yield incompatible representations, making the knowledge-sharing carrier central to heterogeneous collaboration. Using proxy models enables distillation-based collaboration but incurs high communication and computation costs. Prototype-based carriers are lighter yet cause semantic confusion when incompatible features are mixed. Therefore, we propose rule-based federated learning (RFL), which shares interpretable, class-discriminative rules to enable heterogeneous collaboration, avoid feature confusion, and keep communication lightweight. RFL uses a rule network to unify clients’ decision features and collaborates at the rule level, avoiding the forcible averaging of incompatible representations. RFL selects sparse, high-coverage, beneficial rules for broadcasting, compressing shared knowledge into an interpretable class-rule set and reducing communication and computation costs. Each client selectively activates only the rules relevant to its local classes, mitigating negative transfer while preserving personalization. Across heterogeneous settings, RFL achieves a better accuracy–communication trade-off.
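To make the abstract's pipeline concrete, the following is a minimal sketch of the two server/client steps it names: selecting sparse, high-coverage rules per class for broadcasting, and having each client activate only the rules matching its local classes. The `Rule` representation (a class label plus threshold conditions), `coverage`, `select_rules`, and `activate` are illustrative assumptions, not the paper's actual interfaces.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    """A class-discriminative rule: conditions on input features. (Hypothetical format.)"""
    cls: str
    conditions: tuple  # ((feature_index, threshold, ">" or "<="), ...)

def matches(rule, x):
    """True if sample x satisfies every condition of the rule."""
    for i, t, op in rule.conditions:
        if op == ">" and not x[i] > t:
            return False
        if op == "<=" and not x[i] <= t:
            return False
    return True

def coverage(rule, X, y):
    """Fraction of local samples of the rule's class that the rule covers."""
    hits = [matches(rule, x) for x, label in zip(X, y) if label == rule.cls]
    return sum(hits) / len(hits) if hits else 0.0

def select_rules(rules, X, y, k=1, max_conditions=2):
    """Keep sparse rules (few conditions) and broadcast the top-k by coverage per class."""
    by_class = {}
    for r in rules:
        if len(r.conditions) <= max_conditions:  # sparsity filter
            by_class.setdefault(r.cls, []).append(r)
    selected = []
    for rs in by_class.values():
        rs.sort(key=lambda r: coverage(r, X, y), reverse=True)
        selected.extend(rs[:k])
    return selected

def activate(broadcast_rules, local_classes):
    """Client-side: keep only rules for classes the client actually observes,
    mitigating negative transfer from irrelevant classes."""
    local = set(local_classes)
    return [r for r in broadcast_rules if r.cls in local]
```

Under this sketch, the shared carrier is a small set of `Rule` objects rather than model weights or averaged feature prototypes, which is why communication stays lightweight and incompatible representations are never mixed.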