Spotlight
K-shot NAS: Learnable Weight-Sharing for NAS with K-shot Supernets
Xiu Su · Shan You · Mingkai Zheng · Fei Wang · Chen Qian · Changshui Zhang · Chang Xu
In one-shot weight sharing for NAS, the weights of each operation (at each layer) are supposed to be identical for all architectures (paths) in the supernet. However, this rules out the possibility of adjusting operation weights to cater to different paths, which limits the reliability of the evaluation results. In this paper, instead of relying on a single supernet, we introduce $K$-shot supernets and treat their weights for each operation as a dictionary. The operation weight for each path is then represented as a convex combination of the dictionary items, weighted by a simplex code.
This enables a matrix approximation of the stand-alone weight matrix with a higher rank ($K>1$). A \textit{simplex-net} is introduced to produce an architecture-customized code for each path. As a result, all paths can adaptively learn how to share weights across the $K$-shot supernets and acquire corresponding weights for better evaluation.
The $K$-shot supernets and the simplex-net are trained iteratively, and we further extend the search to the channel dimension. Extensive experiments on benchmark datasets validate that $K$-shot NAS significantly improves the evaluation accuracy of paths and thus yields impressive performance improvements.
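The core idea above can be sketched as follows: keep $K$ copies of each operation's weight as a dictionary, map a path-specific logit vector onto the probability simplex (here via a softmax, as one natural choice), and take the resulting convex combination as that path's operation weight. This is a minimal illustrative sketch, not the paper's implementation; the function names and shapes are assumptions for illustration.

```python
import numpy as np

def simplex_code(logits):
    # Softmax maps arbitrary logits onto the probability simplex
    # (non-negative entries summing to 1), giving a valid convex code.
    e = np.exp(logits - logits.max())
    return e / e.sum()

def combine_weights(weight_dict, code):
    # weight_dict: (K, ...) stack of K supernet weights for one operation.
    # code: (K,) simplex code produced for a given path.
    # The path's operation weight is the convex combination of the K items.
    return np.tensordot(code, weight_dict, axes=1)

K = 4
rng = np.random.default_rng(0)
W = rng.standard_normal((K, 3, 3))            # K copies of a 3x3 operation weight
lam = simplex_code(rng.standard_normal(K))    # architecture-customized code (hypothetical input)
w_path = combine_weights(W, lam)              # per-path weight for evaluation

assert np.isclose(lam.sum(), 1.0) and np.all(lam >= 0)
assert w_path.shape == (3, 3)
```

With $K=1$ this reduces to ordinary one-shot weight sharing (every path gets the same weight); with $K>1$ the code lets different paths pick different mixtures of the dictionary items.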
Author Information
Xiu Su (University of Sydney)
Shan You (SenseTime Research)
Mingkai Zheng (SenseTime)
Fei Wang (SenseTime)
Chen Qian (SenseTime)
Changshui Zhang (Tsinghua University)
Chang Xu (University of Sydney)
Related Events (a corresponding poster, oral, or spotlight)
- 2021 Poster: K-shot NAS: Learnable Weight-Sharing for NAS with K-shot Supernets »
  Wed. Jul 21st 04:00 -- 06:00 AM Room
More from the Same Authors
- 2023 Poster: Dual Focal Loss for Calibration »
  Linwei Tao · Minjing Dong · Chang Xu
- 2023 Poster: PixelAsParam: A Gradient View on Diffusion Sampling with Guidance »
  Anh-Dung Dinh · Daochang Liu · Chang Xu
- 2022 Poster: Spatial-Channel Token Distillation for Vision MLPs »
  Yanxi Li · Xinghao Chen · Minjing Dong · Yehui Tang · Yunhe Wang · Chang Xu
- 2022 Spotlight: Spatial-Channel Token Distillation for Vision MLPs »
  Yanxi Li · Xinghao Chen · Minjing Dong · Yehui Tang · Yunhe Wang · Chang Xu
- 2021 Poster: Commutative Lie Group VAE for Disentanglement Learning »
  Xinqi Zhu · Chang Xu · Dacheng Tao
- 2021 Oral: Commutative Lie Group VAE for Disentanglement Learning »
  Xinqi Zhu · Chang Xu · Dacheng Tao
- 2021 Poster: Learning to Weight Imperfect Demonstrations »
  Yunke Wang · Chang Xu · Bo Du · Honglak Lee
- 2021 Spotlight: Learning to Weight Imperfect Demonstrations »
  Yunke Wang · Chang Xu · Bo Du · Honglak Lee
- 2020 Poster: Neural Architecture Search in A Proxy Validation Loss Landscape »
  Yanxi Li · Minjing Dong · Yunhe Wang · Chang Xu
- 2020 Poster: Training Binary Neural Networks through Learning with Noisy Supervision »
  Kai Han · Yunhe Wang · Yixing Xu · Chunjing Xu · Enhua Wu · Chang Xu
- 2020 Poster: Semismooth Newton Algorithm for Efficient Projections onto $\ell_{1, \infty}$-norm Ball »
  Dejun Chu · Changshui Zhang · Shiliang Sun · Qing Tao
- 2019 Poster: LegoNet: Efficient Convolutional Neural Networks with Lego Filters »
  Zhaohui Yang · Yunhe Wang · Chuanjian Liu · Hanting Chen · Chunjing Xu · Boxin Shi · Chao Xu · Chang Xu
- 2019 Oral: LegoNet: Efficient Convolutional Neural Networks with Lego Filters »
  Zhaohui Yang · Yunhe Wang · Chuanjian Liu · Hanting Chen · Chunjing Xu · Boxin Shi · Chao Xu · Chang Xu