Convolutional neural networks (CNNs) have shown extraordinary performance in a number of applications, but they are usually heavily over-parameterized in order to reach high accuracy. Beyond compressing the filters in CNNs, this paper focuses on the redundancy in the feature maps produced by the large number of filters in each layer. We propose to extract an intrinsic representation of the feature maps while preserving the discriminability of the features. A circulant matrix is employed to formulate the feature map transformation, which requires only O(d log d) computation to embed a d-dimensional feature map. The filters are then re-configured to establish a mapping from the original input to the new compact feature maps, and the resulting network preserves the intrinsic information of the original network with significantly fewer parameters, which not only reduces the memory required to deploy the CNN but also accelerates computation. Experiments on benchmark image datasets demonstrate the superiority of the proposed algorithm over state-of-the-art methods.
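As a rough illustration of where the O(d log d) cost comes from (a generic sketch, not the authors' implementation): multiplying a d-dimensional vector by a circulant matrix is a circular convolution, which FFTs compute in O(d log d) instead of the O(d^2) of an explicit matrix-vector product. The function and variable names below are illustrative only.

```python
# Minimal sketch: circulant matrix-vector product via FFT, assuming the
# feature map has been flattened to a d-dimensional vector.
import numpy as np

def circulant_embed(x: np.ndarray, c: np.ndarray) -> np.ndarray:
    """Multiply x by the circulant matrix whose first column is c.

    Equivalent to scipy.linalg.circulant(c) @ x, but runs in
    O(d log d) time and O(d) memory via circular convolution.
    """
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

# Hypothetical usage: embed a flattened 256-dimensional feature map.
d = 256
rng = np.random.default_rng(0)
x = rng.standard_normal(d)   # flattened feature map
c = rng.standard_normal(d)   # first column defining the circulant transform
y_fast = circulant_embed(x, c)

# Sanity check against the explicit O(d^2) circulant matrix C[j, k] = c[(j - k) mod d].
C = np.array([[c[(j - k) % d] for k in range(d)] for j in range(d)])
assert np.allclose(y_fast, C @ x)
```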
Author Information
Yunhe Wang (Peking University)
Chang Xu (The University of Sydney)
Chao Xu (Peking University)
Dacheng Tao
Related Events (a corresponding poster, oral, or spotlight)
- 2017 Poster: Beyond Filters: Compact Feature Map for Portable Deep Model
  Wed. Aug 9th 08:30 AM -- 12:00 PM, Room Gallery #44
More from the Same Authors
- 2023 : Learning Better with Less: Effective Augmentation for Sample-Efficient Visual Reinforcement Learning
  Guozheng Ma · · Haoyu Wang · Lu Li · Zilin Wang · Zhen Wang · Li Shen · Xueqian Wang · Dacheng Tao
- 2023 Oral: Dynamic Regularized Sharpness Aware Minimization in Federated Learning: Approaching Global Consistency and Smooth Landscape
  Yan Sun · Li Shen · Shixiang Chen · Liang Ding · Dacheng Tao
- 2023 Oral: Tilted Sparse Additive Models
  Yingjie Wang · Hong Chen · Weifeng Liu · Fengxiang He · Tieliang Gong · YouCheng Fu · Dacheng Tao
- 2023 Poster: Structured Cooperative Learning with Graphical Model Priors
  Shuangtong Li · Tianyi Zhou · Xinmei Tian · Dacheng Tao
- 2023 Poster: Tilted Sparse Additive Models
  Yingjie Wang · Hong Chen · Weifeng Liu · Fengxiang He · Tieliang Gong · YouCheng Fu · Dacheng Tao
- 2023 Poster: Decentralized SGD and Average-direction SAM are Asymptotically Equivalent
  Tongtian Zhu · Fengxiang He · Kaixuan Chen · Mingli Song · Dacheng Tao
- 2023 Poster: Improving the Model Consistency of Decentralized Federated Learning
  Yifan Shi · Li Shen · Kang Wei · Yan Sun · Bo Yuan · Xueqian Wang · Dacheng Tao
- 2023 Poster: Dynamic Regularized Sharpness Aware Minimization in Federated Learning: Approaching Global Consistency and Smooth Landscape
  Yan Sun · Li Shen · Shixiang Chen · Liang Ding · Dacheng Tao
- 2023 Poster: Learning to Learn from APIs: Black-Box Data-Free Meta-Learning
  Zixuan Hu · Li Shen · Zhenyi Wang · Baoyuan Wu · Chun Yuan · Dacheng Tao
- 2022 Poster: Identity-Disentangled Adversarial Augmentation for Self-supervised Learning
  Kaiwen Yang · Tianyi Zhou · Xinmei Tian · Dacheng Tao
- 2022 Poster: Spatial-Channel Token Distillation for Vision MLPs
  Yanxi Li · Xinghao Chen · Minjing Dong · Yehui Tang · Yunhe Wang · Chang Xu
- 2022 Spotlight: Identity-Disentangled Adversarial Augmentation for Self-supervised Learning
  Kaiwen Yang · Tianyi Zhou · Xinmei Tian · Dacheng Tao
- 2022 Spotlight: Spatial-Channel Token Distillation for Vision MLPs
  Yanxi Li · Xinghao Chen · Minjing Dong · Yehui Tang · Yunhe Wang · Chang Xu
- 2022 Poster: Federated Learning with Positive and Unlabeled Data
  Xinyang Lin · Hanting Chen · Yixing Xu · Chao Xu · Xiaolin Gui · Yiping Deng · Yunhe Wang
- 2022 Poster: DisPFL: Towards Communication-Efficient Personalized Federated Learning via Decentralized Sparse Training
  Rong Dai · Li Shen · Fengxiang He · Xinmei Tian · Dacheng Tao
- 2022 Spotlight: Federated Learning with Positive and Unlabeled Data
  Xinyang Lin · Hanting Chen · Yixing Xu · Chao Xu · Xiaolin Gui · Yiping Deng · Yunhe Wang
- 2022 Spotlight: DisPFL: Towards Communication-Efficient Personalized Federated Learning via Decentralized Sparse Training
  Rong Dai · Li Shen · Fengxiang He · Xinmei Tian · Dacheng Tao
- 2022 Poster: Topology-aware Generalization of Decentralized SGD
  Tongtian Zhu · Fengxiang He · Lan Zhang · Zhengyang Niu · Mingli Song · Dacheng Tao
- 2022 Spotlight: Topology-aware Generalization of Decentralized SGD
  Tongtian Zhu · Fengxiang He · Lan Zhang · Zhengyang Niu · Mingli Song · Dacheng Tao
- 2021 Poster: Winograd Algorithm for AdderNet
  Wenshuo Li · Hanting Chen · Mingqiang Huang · Xinghao Chen · Chunjing Xu · Yunhe Wang
- 2021 Spotlight: Winograd Algorithm for AdderNet
  Wenshuo Li · Hanting Chen · Mingqiang Huang · Xinghao Chen · Chunjing Xu · Yunhe Wang
- 2020 Poster: Neural Architecture Search in A Proxy Validation Loss Landscape
  Yanxi Li · Minjing Dong · Yunhe Wang · Chang Xu
- 2020 Poster: Training Binary Neural Networks through Learning with Noisy Supervision
  Kai Han · Yunhe Wang · Yixing Xu · Chunjing Xu · Enhua Wu · Chang Xu
- 2019 Poster: LegoNet: Efficient Convolutional Neural Networks with Lego Filters
  Zhaohui Yang · Yunhe Wang · Chuanjian Liu · Hanting Chen · Chunjing Xu · Boxin Shi · Chao Xu · Chang Xu
- 2019 Oral: LegoNet: Efficient Convolutional Neural Networks with Lego Filters
  Zhaohui Yang · Yunhe Wang · Chuanjian Liu · Hanting Chen · Chunjing Xu · Boxin Shi · Chao Xu · Chang Xu
- 2017 Poster: Algorithmic Stability and Hypothesis Complexity
  Tongliang Liu · Gábor Lugosi · Gergely Neu · Dacheng Tao
- 2017 Talk: Algorithmic Stability and Hypothesis Complexity
  Tongliang Liu · Gábor Lugosi · Gergely Neu · Dacheng Tao