This paper aims to build efficient convolutional neural networks from a shared set of Lego filters. Many successful building blocks, e.g., inception and residual modules, have been designed to refresh state-of-the-art records of CNNs on visual recognition tasks. Beyond these high-level modules, we suggest that an ordinary filter in a neural network can likewise be upgraded to a sophisticated module. Filter modules are established by assembling a shared set of Lego filters, which are often of much lower dimension. The weights of the Lego filters and the binary masks that stack them into filter modules can be optimized simultaneously in the standard end-to-end manner. Inspired by network engineering, we develop a split-transform-merge strategy for efficient convolution that exploits intermediate Lego feature maps. The compression and acceleration achieved by Lego Networks built on the proposed Lego filters are analyzed theoretically. Experimental results on benchmark datasets and deep models demonstrate the advantages of the proposed Lego filters and their potential for real-world applications on mobile devices.
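The core idea can be made concrete with a small sketch. The module below is a minimal illustration under stated assumptions, not the authors' released implementation: the class name `LegoConv2d`, the hyperparameters `n_lego` and `c_lego`, and the straight-through Gumbel-softmax hardening of the binary masks are all assumptions made for illustration. It assembles each full-size convolution filter by stacking low-dimensional Lego filters, selected from a small shared set via a binary mask, along the input-channel axis:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LegoConv2d(nn.Module):
    """Hypothetical sketch of a convolution built from shared Lego filters.

    Each full-size filter of shape (c_in, k, k) is assembled from
    c_in // c_lego fragments, where each fragment is one of n_lego shared
    Lego filters of shape (c_lego, k, k), selected by a binary mask.
    """

    def __init__(self, c_in, c_out, k, n_lego=4, c_lego=None):
        super().__init__()
        c_lego = c_lego or max(c_in // 4, 1)   # Lego filters span fewer input channels
        assert c_in % c_lego == 0
        self.splits = c_in // c_lego           # fragments stacked per full filter
        # Shared, low-dimensional Lego filters (the compressed parameter set).
        self.lego = nn.Parameter(0.01 * torch.randn(n_lego, c_lego, k, k))
        # Selection logits, hardened to a one-hot binary mask in forward().
        self.logits = nn.Parameter(torch.zeros(c_out, self.splits, n_lego))

    def forward(self, x):
        # Straight-through Gumbel-softmax: a binary (one-hot) mask in the
        # forward pass, soft gradients in the backward pass. This is an
        # assumed optimization trick standing in for the paper's end-to-end
        # training of binary masks.
        mask = F.gumbel_softmax(self.logits, tau=1.0, hard=True)
        # (c_out, splits, n_lego) @ (n_lego, c_lego*k*k) -> (c_out, splits, c_lego*k*k)
        frags = mask @ self.lego.flatten(1)
        n_lego, c_lego, k, _ = self.lego.shape
        # Stack the fragments along the input-channel axis into full filters.
        weight = frags.reshape(-1, self.splits * c_lego, k, k)
        return F.conv2d(x, weight, padding=k // 2)

# Example: y = LegoConv2d(64, 128, 3)(torch.randn(1, 64, 32, 32))
```

This sketch materializes full filters for clarity; the paper's split-transform-merge strategy instead reuses the intermediate feature maps produced by the shared Lego filters across output filters. Either way, the source of compression is the same: roughly n_lego * c_lego * k * k shared weights plus lightweight masks replace c_out * c_in * k * k free parameters.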
Author Information
Zhaohui Yang (Peking University)
Yunhe Wang (Peking University)
Chuanjian Liu (Huawei Noah's Ark Lab)
Hanting Chen (Peking University)
Chunjing Xu (Huawei Noah's Ark Lab)
Boxin Shi (Peking University)
Chao Xu (Peking University)
Chang Xu (University of Sydney)
Related Events (a corresponding poster, oral, or spotlight)
-
2019 Oral: LegoNet: Efficient Convolutional Neural Networks with Lego Filters »
Wed. Jun 12th 09:30 -- 09:35 PM, Room: Hall A
More from the Same Authors
-
2023 Poster: Dual Focal Loss for Calibration »
Linwei Tao · Minjing Dong · Chang Xu
-
2023 Poster: PixelAsParam: A Gradient View on Diffusion Sampling with Guidance »
Anh-Dung Dinh · Daochang Liu · Chang Xu
-
2022 Poster: Spatial-Channel Token Distillation for Vision MLPs »
Yanxi Li · Xinghao Chen · Minjing Dong · Yehui Tang · Yunhe Wang · Chang Xu
-
2022 Spotlight: Spatial-Channel Token Distillation for Vision MLPs »
Yanxi Li · Xinghao Chen · Minjing Dong · Yehui Tang · Yunhe Wang · Chang Xu
-
2022 Poster: Federated Learning with Positive and Unlabeled Data »
Xinyang Lin · Hanting Chen · Yixing Xu · Chao Xu · Xiaolin Gui · Yiping Deng · Yunhe Wang
-
2022 Spotlight: Federated Learning with Positive and Unlabeled Data »
Xinyang Lin · Hanting Chen · Yixing Xu · Chao Xu · Xiaolin Gui · Yiping Deng · Yunhe Wang
-
2021 Poster: Commutative Lie Group VAE for Disentanglement Learning »
Xinqi Zhu · Chang Xu · Dacheng Tao
-
2021 Oral: Commutative Lie Group VAE for Disentanglement Learning »
Xinqi Zhu · Chang Xu · Dacheng Tao
-
2021 Poster: Learning to Weight Imperfect Demonstrations »
Yunke Wang · Chang Xu · Bo Du · Honglak Lee
-
2021 Poster: K-shot NAS: Learnable Weight-Sharing for NAS with K-shot Supernets »
Xiu Su · Shan You · Mingkai Zheng · Fei Wang · Chen Qian · Changshui Zhang · Chang Xu
-
2021 Spotlight: K-shot NAS: Learnable Weight-Sharing for NAS with K-shot Supernets »
Xiu Su · Shan You · Mingkai Zheng · Fei Wang · Chen Qian · Changshui Zhang · Chang Xu
-
2021 Spotlight: Learning to Weight Imperfect Demonstrations »
Yunke Wang · Chang Xu · Bo Du · Honglak Lee
-
2021 Poster: Winograd Algorithm for AdderNet »
Wenshuo Li · Hanting Chen · Mingqiang Huang · Xinghao Chen · Chunjing Xu · Yunhe Wang
-
2021 Spotlight: Winograd Algorithm for AdderNet »
Wenshuo Li · Hanting Chen · Mingqiang Huang · Xinghao Chen · Chunjing Xu · Yunhe Wang
-
2020 Poster: Neural Architecture Search in A Proxy Validation Loss Landscape »
Yanxi Li · Minjing Dong · Yunhe Wang · Chang Xu
-
2020 Poster: Training Binary Neural Networks through Learning with Noisy Supervision »
Kai Han · Yunhe Wang · Yixing Xu · Chunjing Xu · Enhua Wu · Chang Xu
-
2017 Poster: Beyond Filters: Compact Feature Map for Portable Deep Model »
Yunhe Wang · Chang Xu · Chao Xu · Dacheng Tao
-
2017 Talk: Beyond Filters: Compact Feature Map for Portable Deep Model »
Yunhe Wang · Chang Xu · Chao Xu · Dacheng Tao