

Spotlight in Workshop: Dynamic Neural Networks

Supernet Training for Federated Image Classification

Taehyeon Kim · Se-Young Yun


Abstract:

Efficient deployment of deep neural networks across many devices and resource constraints, especially on edge devices, is one of the most challenging problems in the presence of data-privacy preservation issues. Conventional approaches have evolved either to improve a single global model while keeping each client's training data decentralized (i.e., data heterogeneity) or to train a once-for-all (OFA) network that supports diverse architectural settings to address heterogeneous clients equipped with different computational capabilities (i.e., model heterogeneity). However, little research has considered both directions simultaneously. In this work, we propose a novel framework that addresses both scenarios, namely Federation of Supernet Training (FedSup), where clients send and receive a supernet that contains all possible architectures sampled from itself. It is inspired by the observation that averaging parameters in the model aggregation step of Federated Learning closely resembles weight sharing in supernet training. Specifically, in the FedSup framework, the weight-sharing approach widely used for training single-shot models is combined with the model averaging of Federated Learning (FedAvg). Under our framework, we present a communication-efficient algorithm (CE-FedSup) that sends only a sub-model to each client in the broadcast stage. We demonstrate several strategies to enhance supernet training in the FL environment and conduct extensive empirical evaluations. The resulting framework is shown to be robust to both data and model heterogeneity on several standard benchmarks and a medical dataset.
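The abstract describes combining FedAvg-style parameter averaging with supernet weight sharing. The sketch below is a minimal, illustrative rendering of one such communication round under simplifying assumptions, not the authors' implementation; the function names (sample_submodel_mask, local_supernet_update, fedavg_aggregate), the dictionary-of-tensors model representation, and the placeholder "training" step are all hypothetical and introduced only to make the idea concrete.

```python
# Minimal sketch of a FedSup-style round, assuming a model represented as a
# dict of tensors. Each client trains a randomly sampled sub-model (weight
# sharing with the supernet); the server averages client supernets (FedAvg).
import copy
import torch

def sample_submodel_mask(supernet_state, keep_ratio):
    """Sample a sub-model by zeroing a fraction of output channels of each
    weight tensor (a stand-in for sampling an architecture from the supernet)."""
    masks = {}
    for name, w in supernet_state.items():
        mask = torch.ones_like(w)
        if w.dim() > 1:  # only shrink weight matrices / conv kernels
            keep = max(1, int(w.shape[0] * keep_ratio))
            mask[keep:] = 0.0
        masks[name] = mask
    return masks

def local_supernet_update(supernet_state, keep_ratio=0.75, lr=0.01):
    """One client's local step: update only the sampled sub-model's weights;
    those updates are shared with the full supernet (weight sharing)."""
    state = copy.deepcopy(supernet_state)
    masks = sample_submodel_mask(state, keep_ratio)
    for name, w in state.items():
        # Placeholder "gradient": a perturbation restricted to the sub-model.
        grad = torch.randn_like(w) * masks[name]
        state[name] = w - lr * grad
    return state

def fedavg_aggregate(client_states, client_sizes):
    """FedAvg-style aggregation: data-size-weighted average of client supernets."""
    total = sum(client_sizes)
    agg = {name: torch.zeros_like(w) for name, w in client_states[0].items()}
    for state, n in zip(client_states, client_sizes):
        for name, w in state.items():
            agg[name] += (n / total) * w
    return agg

# One communication round with three clients of different data sizes.
global_supernet = {"conv.weight": torch.randn(8, 3, 3, 3), "fc.weight": torch.randn(10, 8)}
client_sizes = [100, 200, 50]
client_states = [local_supernet_update(global_supernet) for _ in client_sizes]
global_supernet = fedavg_aggregate(client_states, client_sizes)
```

A CE-FedSup-style variant would, per the abstract, broadcast only the sampled sub-model's parameters to each client rather than the full supernet, reducing downlink communication; the aggregation step would then average the returned sub-model weights back into the supernet.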
