One-Shot Neural Architecture Search (NAS) is a promising method to significantly reduce search time without separately training each candidate architecture. It can be treated as a network compression problem on the architecture parameters of an over-parameterized network. However, most one-shot NAS methods suffer from two issues. First, dependencies between a node and its predecessors and successors are often disregarded, which results in improper treatment of zero operations. Second, pruning architecture parameters based on their magnitude is questionable. In this paper, we employ the classic Bayesian learning approach to alleviate these two issues by modeling architecture parameters using hierarchical automatic relevance determination (HARD) priors. Unlike other NAS methods, we train the over-parameterized network for only one epoch before updating the architecture. Impressively, this enables us to find the architecture for both proxy and proxyless tasks on CIFAR-10 within only 0.2 GPU days using a single GPU. As a byproduct, our approach can be transferred directly to compress convolutional neural networks by enforcing structural sparsity, achieving extremely sparse networks without accuracy deterioration.
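The automatic relevance determination (ARD) idea behind the abstract can be sketched in a few lines. The following is a toy illustration, not the paper's hierarchical prior or its update scheme: each parameter gets its own precision, the classic sparse-Bayesian fixed-point update drives the precisions of irrelevant parameters toward infinity, and those parameters are pruned. The shrinkage step, learning rate, and threshold here are all illustrative assumptions.

```python
import numpy as np

def ard_prune(weights, n_iters=50, prune_threshold=1e4):
    """Toy ARD-style pruning sketch (illustrative only).

    Each weight w_i is assigned a precision alpha_i. The simplified
    fixed-point update alpha_i = 1 / w_i^2 makes small weights'
    precisions explode, so they are shrunk further and finally pruned,
    while large (relevant) weights are barely affected.
    """
    w = np.asarray(weights, dtype=float)
    alpha = np.ones_like(w)                    # initial prior precision per weight
    for _ in range(n_iters):
        # shrink each weight toward zero in proportion to its precision
        w = w / (1.0 + 1e-3 * alpha)
        # simplified relevance (fixed-point) update
        alpha = 1.0 / np.maximum(w ** 2, 1e-12)
    keep = alpha < prune_threshold             # huge precision => irrelevant => prune
    return w * keep, keep

# A clearly relevant weight survives; a near-zero one is pruned.
pruned, keep = ard_prune([1.0, 0.01])
```

In this toy run the weight 1.0 keeps a small precision and survives, while 0.01 enters a feedback loop of shrinkage and growing precision and is zeroed out.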
Author Information
Hongpeng Zhou (Delft University of Technology)
Minghao Yang (TUDelft)
Jun Wang (UCL)
Wei Pan (TUDelft)
Related Events (a corresponding poster, oral, or spotlight)
- 2019 Oral: BayesNAS: A Bayesian Approach for Neural Architecture Search »
  Tue. Jun 11th 11:20 -- 11:25 PM, Room Hall A
More from the Same Authors
- 2022 Poster: Understanding Policy Gradient Algorithms: A Sensitivity-Based Approach »
  Shuang Wu · Ling Shi · Jun Wang · Guangjian Tian
- 2022 Poster: Plan Your Target and Learn Your Skills: Transferable State-Only Imitation Learning via Decoupled Policy Optimization »
  Minghuan Liu · Zhengbang Zhu · Yuzheng Zhuang · Weinan Zhang · Jianye Hao · Yong Yu · Jun Wang
- 2022 Spotlight: Understanding Policy Gradient Algorithms: A Sensitivity-Based Approach »
  Shuang Wu · Ling Shi · Jun Wang · Guangjian Tian
- 2022 Spotlight: Plan Your Target and Learn Your Skills: Transferable State-Only Imitation Learning via Decoupled Policy Optimization »
  Minghuan Liu · Zhengbang Zhu · Yuzheng Zhuang · Weinan Zhang · Jianye Hao · Yong Yu · Jun Wang
- 2021 Poster: Learning in Nonzero-Sum Stochastic Games with Potentials »
  David Mguni · Yutong Wu · Yali Du · Yaodong Yang · Ziyi Wang · Minne Li · Ying Wen · Joel Jennings · Jun Wang
- 2021 Poster: Modelling Behavioural Diversity for Learning in Open-Ended Games »
  Nicolas Perez-Nieves · Yaodong Yang · Oliver Slumbers · David Mguni · Ying Wen · Jun Wang
- 2021 Poster: Estimating $\alpha$-Rank from A Few Entries with Low Rank Matrix Completion »
  Yali Du · Xue Yan · Xu Chen · Jun Wang · Haifeng Zhang
- 2021 Spotlight: Learning in Nonzero-Sum Stochastic Games with Potentials »
  David Mguni · Yutong Wu · Yali Du · Yaodong Yang · Ziyi Wang · Minne Li · Ying Wen · Joel Jennings · Jun Wang
- 2021 Oral: Modelling Behavioural Diversity for Learning in Open-Ended Games »
  Nicolas Perez-Nieves · Yaodong Yang · Oliver Slumbers · David Mguni · Ying Wen · Jun Wang
- 2021 Spotlight: Estimating $\alpha$-Rank from A Few Entries with Low Rank Matrix Completion »
  Yali Du · Xue Yan · Xu Chen · Jun Wang · Haifeng Zhang
- 2020 Poster: Multi-Agent Determinantal Q-Learning »
  Yaodong Yang · Ying Wen · Jun Wang · Liheng Chen · Kun Shao · David Mguni · Weinan Zhang
- 2018 Poster: Mean Field Multi-Agent Reinforcement Learning »
  Yaodong Yang · Rui Luo · Minne Li · Ming Zhou · Weinan Zhang · Jun Wang
- 2018 Oral: Mean Field Multi-Agent Reinforcement Learning »
  Yaodong Yang · Rui Luo · Minne Li · Ming Zhou · Weinan Zhang · Jun Wang