Many existing neural architecture search (NAS) solutions rely on downstream training for architecture evaluation, which requires enormous computation. Considering that this computation carries a large carbon footprint, this paper aims to explore a green (i.e., environmentally friendly) NAS solution that evaluates architectures without training. Intuitively, gradients, induced by the architecture itself, directly determine convergence and generalization. This motivates us to propose the gradient kernel hypothesis: gradients can be used as a coarse-grained proxy for downstream training to evaluate randomly initialized networks. To support the hypothesis, we conduct a theoretical analysis and find a practical gradient kernel that correlates well with training loss and validation performance. Based on this hypothesis, we propose a new kernel-based architecture search approach, KNAS. Experiments show that KNAS achieves competitive results while being orders of magnitude faster than "train-then-test" paradigms on image classification tasks. Furthermore, the extremely low search cost enables wide applications. The searched network also outperforms the strong baseline RoBERTa-large on two text classification tasks.
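To give a concrete feel for a training-free, gradient-based proxy of this kind, below is a minimal PyTorch-style sketch that scores a randomly initialized network by the mean of the Gram matrix of per-sample gradients. The function name `gradient_kernel_score`, the per-sample loop, and the use of the Gram-matrix mean are illustrative assumptions for exposition, not the paper's exact procedure.

```python
import torch
import torch.nn.functional as F

def gradient_kernel_score(model, inputs, targets):
    """Hedged sketch: score a randomly initialized classifier by the mean of
    the Gram matrix of per-sample gradients (one possible 'gradient kernel').
    Illustrative approximation only, not the authors' exact implementation."""
    grads = []
    for x, y in zip(inputs, targets):
        model.zero_grad()
        loss = F.cross_entropy(model(x.unsqueeze(0)), y.unsqueeze(0))
        loss.backward()
        # Flatten this sample's parameter gradients into a single vector.
        g = torch.cat([p.grad.flatten() for p in model.parameters()
                       if p.grad is not None])
        grads.append(g)
    G = torch.stack(grads)        # (num_samples, num_params)
    kernel = G @ G.t()            # gradient Gram matrix
    return kernel.mean().item()   # larger mean -> presumed better candidate
```

A search loop would then rank candidate architectures by this score on a small batch of data and train only the top-ranked ones, which is what makes such a proxy far cheaper than full train-then-test evaluation.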
Author Information
Jingjing Xu (ByteDance AI Lab)
Liang Zhao (Peking University)
Junyang Lin (Alibaba Group)
Rundong Gao (Tsinghua University)
Xu SUN (Peking University)
Hongxia Yang (Alibaba Group)
Related Events (a corresponding poster, oral, or spotlight)
- 2021 Spotlight: KNAS: Green Neural Architecture Search
  Tue. Jul 20th 01:30 -- 01:35 PM, Room None
More from the Same Authors
- 2022 Poster: Unifying Modalities, Tasks, and Architectures Through a Simple Sequence-to-Sequence Learning Framework
  Peng Wang · An Yang · Rui Men · Junyang Lin · Shuai Bai · Zhikang Li · Jianxin Ma · Chang Zhou · Jingren Zhou · Hongxia Yang
- 2022 Spotlight: Unifying Modalities, Tasks, and Architectures Through a Simple Sequence-to-Sequence Learning Framework
  Peng Wang · An Yang · Rui Men · Junyang Lin · Shuai Bai · Zhikang Li · Jianxin Ma · Chang Zhou · Jingren Zhou · Hongxia Yang
- 2022 Poster: Modality Competition: What Makes Joint Training of Multi-modal Network Fail in Deep Learning? (Provably)
  Yu Huang · Junyang Lin · Chang Zhou · Hongxia Yang · Longbo Huang
- 2022 Spotlight: Modality Competition: What Makes Joint Training of Multi-modal Network Fail in Deep Learning? (Provably)
  Yu Huang · Junyang Lin · Chang Zhou · Hongxia Yang · Longbo Huang
- 2021 Poster: Learning to Rehearse in Long Sequence Memorization
  Zhu Zhang · Chang Zhou · Jianxin Ma · Zhijie Lin · Jingren Zhou · Hongxia Yang · Zhou Zhao
- 2021 Spotlight: Learning to Rehearse in Long Sequence Memorization
  Zhu Zhang · Chang Zhou · Jianxin Ma · Zhijie Lin · Jingren Zhou · Hongxia Yang · Zhou Zhao
- 2017 Poster: meProp: Sparsified Back Propagation for Accelerated Deep Learning with Reduced Overfitting
  Xu SUN · Xuancheng REN · Shuming Ma · Houfeng Wang
- 2017 Talk: meProp: Sparsified Back Propagation for Accelerated Deep Learning with Reduced Overfitting
  Xu SUN · Xuancheng REN · Shuming Ma · Houfeng Wang