Supernet Training for Federated Image Classification
Taehyeon Kim · Se-Young Yun

Fri Jul 22 01:45 PM -- 02:00 PM (PDT)

Efficient deployment of deep neural networks across many devices under varying resource constraints, especially on edge devices, is one of the most challenging problems when data privacy must be preserved. Conventional approaches have evolved either to improve a single global model while keeping each client's training data decentralized (i.e., data heterogeneity) or to train a once-for-all (OFA) network that supports diverse architectural settings for heterogeneous clients with different computational capabilities (i.e., model heterogeneity). However, little research has considered both directions simultaneously. In this work, we propose a novel framework, Federation of Supernet Training (FedSup), that addresses both scenarios: clients send and receive a supernet from which all possible sub-architectures can be sampled. The framework is inspired by the observation that averaging parameters during the model aggregation step of Federated Learning closely resembles weight sharing in supernet training. Specifically, FedSup combines the weight-sharing approach widely used for training single-shot models with the averaging of Federated Learning (FedAvg). Under this framework, we also present a communication-efficient algorithm (CE-FedSup) that sends only a sub-model to each client in the broadcast stage. We demonstrate several strategies to enhance supernet training in the FL environment and conduct extensive empirical evaluations. The resulting framework is shown to be robust to both data and model heterogeneity on several standard benchmarks and a medical dataset.
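To make the aggregation idea concrete, the following is a minimal PyTorch sketch of one FedSup-style communication round. It is an illustrative toy, not the authors' implementation: the names (SuperNet, client_update, fedavg), the width list, and the synthetic data are all hypothetical, weight sharing is reduced to slicing the hidden width of a two-layer MLP, and the averaging is uniform rather than weighted by client data size as in full FedAvg. Each client trains a sub-model of its own width on the shared supernet weights, and the server averages the returned supernets.

import torch
import torch.nn as nn

class SuperNet(nn.Module):
    # Toy supernet (hypothetical): a 2-layer MLP whose hidden width can be
    # sliced at run time, so every sub-model shares the supernet's weights.
    def __init__(self, in_dim=32, max_hidden=64, n_classes=10):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, max_hidden)
        self.fc2 = nn.Linear(max_hidden, n_classes)

    def forward(self, x, width):
        # Use only the first `width` hidden units (weight sharing by slicing).
        h = torch.relu(x @ self.fc1.weight[:width].t() + self.fc1.bias[:width])
        return h @ self.fc2.weight[:, :width].t() + self.fc2.bias

def client_update(server_state, data, width, lr=0.1, steps=5):
    # Local training of one client's sub-model at the given width.
    model = SuperNet()
    model.load_state_dict(server_state)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    x, y = data
    for _ in range(steps):
        opt.zero_grad()
        nn.functional.cross_entropy(model(x, width), y).backward()
        opt.step()
    return model.state_dict()

def fedavg(states):
    # FedAvg-style aggregation: average every shared supernet parameter.
    return {k: torch.stack([s[k] for s in states]).mean(0) for k in states[0]}

# One communication round with three clients of different capacities.
torch.manual_seed(0)
server = SuperNet()
widths = [16, 32, 64]  # per-client sub-model widths (model heterogeneity)
data = [(torch.randn(20, 32), torch.randint(0, 10, (20,))) for _ in widths]
local_states = [client_update(server.state_dict(), d, w)
                for d, w in zip(data, widths)]
server.load_state_dict(fedavg(local_states))

A communication-efficient variant in the spirit of CE-FedSup would broadcast only the sliced sub-tensors matching each client's width instead of the full state dict; that step is omitted here for brevity.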

Author Information

Taehyeon Kim (KAIST)

I’m a Ph.D. candidate in the Graduate School of AI at the Korea Advanced Institute of Science and Technology (KAIST), advised by Prof. Se-Young Yun, and a member of OSI Lab. During my studies, I interned at Qualcomm AI ADAS (Seoul, South Korea, 2021). I received a B.S. in Mathematics from KAIST in 2018. My research investigates trustworthy and real-world AI/ML challenges. Specifically, my interests include optimization for training deep neural networks, automated neural architecture search, automated hyperparameter search, learning with noisy labels, model compression, federated learning, and precipitation nowcasting. My research has been presented at several conferences and organizations.

Se-Young Yun (KAIST)
