We introduce a new function-preserving transformation for efficient neural architecture search. This network transformation allows reusing previously trained networks and existing successful architectures, which improves sample efficiency. We aim to address the limitation of current network transformation operations, which can only perform layer-level architecture modifications, such as adding (pruning) filters or inserting (removing) a layer, and therefore cannot change the topology of connection paths. Our proposed path-level transformation operations enable the meta-controller to modify the path topology of a given network while retaining the benefits of weight reuse, and thus allow efficient design of effective structures with complex path topologies, such as Inception models. We further propose a bidirectional tree-structured reinforcement learning meta-controller to explore a simple yet highly expressive tree-structured architecture space, which can be viewed as a generalization of multi-branch architectures. In experiments on image classification datasets with limited computational resources (about 200 GPU-hours), we observe improved parameter efficiency and better test results (97.70% test accuracy on CIFAR-10 with 14.3M parameters and 74.6% top-1 accuracy on ImageNet in the mobile setting), demonstrating the effectiveness and transferability of the designed architectures.
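To make the function-preserving idea concrete, here is a minimal numpy sketch of one path-level expansion: the input is replicated to each branch and the branch outputs are averaged, so initializing every branch with a copy of the original layer's weights leaves the network's function unchanged. The helper names (`layer`, `replication_merge`) are ours for illustration, not the paper's API.

```python
import numpy as np

def layer(x, W):
    # A toy fully connected layer standing in for any network layer.
    return x @ W

def replication_merge(x, branch_weights):
    # Replicate the input to every branch, then average the branch outputs.
    return np.mean([layer(x, W) for W in branch_weights], axis=0)

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))      # a batch of 4 inputs with 8 features
W = rng.normal(size=(8, 16))     # weights of the original single-path layer

before = layer(x, W)                                 # original path
after = replication_merge(x, [W.copy(), W.copy()])   # expanded to 2 branches

# Averaging identical branches reproduces the original output exactly,
# so the expanded multi-branch network starts from the same function;
# each branch can then be modified independently by the meta-controller.
assert np.allclose(before, after)
```

Once expanded this way, search can alter individual branches (e.g., change a kernel size or insert a layer in one branch) while the network as a whole starts from the trained parent's accuracy rather than from scratch.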
Author Information
Han Cai (Shanghai Jiao Tong University)
Jiacheng Yang (Shanghai Jiao Tong University)
Weinan Zhang (Shanghai Jiao Tong University)
Song Han (MIT)
Yong Yu (Shanghai Jiao Tong University)
Related Events (a corresponding poster, oral, or spotlight)
- 2018 Oral: Path-Level Network Transformation for Efficient Architecture Search
  Fri. Jul 13th, 09:40 -- 09:50 AM, Room Victoria
More from the Same Authors
- 2023: Panel Discussion
  Peter Kairouz · Song Han · Kamalika Chaudhuri · Florian Tramer
- 2023 Poster: GEAR: A GPU-Centric Experience Replay System for Large Reinforcement Learning Models
  Hanjing Wang · Man-Kit Sit · Congjie He · Ying Wen · Weinan Zhang · Jun Wang · Yaodong Yang · Luo Mai
- 2023 Poster: SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models
  Guangxuan Xiao · Ji Lin · Mickael Seznec · Hao Wu · Julien Demouth · Song Han
- 2022 Poster: Plan Your Target and Learn Your Skills: Transferable State-Only Imitation Learning via Decoupled Policy Optimization
  Minghuan Liu · Zhengbang Zhu · Yuzheng Zhuang · Weinan Zhang · Jianye Hao · Yong Yu · Jun Wang
- 2022 Spotlight: Plan Your Target and Learn Your Skills: Transferable State-Only Imitation Learning via Decoupled Policy Optimization
  Minghuan Liu · Zhengbang Zhu · Yuzheng Zhuang · Weinan Zhang · Jianye Hao · Yong Yu · Jun Wang
- 2020 Poster: Multi-Agent Determinantal Q-Learning
  Yaodong Yang · Ying Wen · Jun Wang · Liheng Chen · Kun Shao · David Mguni · Weinan Zhang
- 2020 Poster: Bidirectional Model-based Policy Optimization
  Hang Lai · Jian Shen · Weinan Zhang · Yong Yu
- 2019: Hardware Efficiency Aware Neural Architecture Search and Compression
  Song Han
- 2019 Poster: Lipschitz Generative Adversarial Nets
  Zhiming Zhou · Jiadong Liang · Yuxuan Song · Lantao Yu · Hongwei Wang · Weinan Zhang · Yong Yu · Zhihua Zhang
- 2019 Oral: Lipschitz Generative Adversarial Nets
  Zhiming Zhou · Jiadong Liang · Yuxuan Song · Lantao Yu · Hongwei Wang · Weinan Zhang · Yong Yu · Zhihua Zhang
- 2018 Poster: Mean Field Multi-Agent Reinforcement Learning
  Yaodong Yang · Rui Luo · Minne Li · Ming Zhou · Weinan Zhang · Jun Wang
- 2018 Oral: Mean Field Multi-Agent Reinforcement Learning
  Yaodong Yang · Rui Luo · Minne Li · Ming Zhou · Weinan Zhang · Jun Wang