Designing and running Convolutional Neural Networks (CNNs) is never easy because: 1) the optimal number of filters at each layer of a given architecture is unknown; and 2) the computational intensity of CNNs impedes deployment on computationally limited devices. The need for an automatic method to optimize the number of filters, i.e., the width of convolutional layers, brings us to Oracle Pruning, the most accurate filter pruning method, which nevertheless suffers from intolerable time complexity. To address this problem, we propose Approximated Oracle Filter Pruning (AOFP), a training-time filter pruning framework that is practical on very deep CNNs. With AOFP, we can prune an existing deep CNN with acceptable time cost, negligible accuracy drop and no heuristic knowledge, or re-design a model that achieves higher accuracy and faster inference.
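To make the time-complexity problem concrete, the sketch below illustrates plain oracle filter pruning (not AOFP itself): each filter of a convolutional layer is ablated in turn and the model is re-evaluated, so the cost scales with the total number of filters times the cost of a full evaluation. This is a minimal sketch assuming PyTorch; `model`, `val_loader`, and the `evaluate` helper are hypothetical names introduced for illustration, not from the paper.

```python
# Minimal sketch of the oracle filter-pruning idea the abstract refers to
# (not the AOFP method): mask each filter of one conv layer in turn,
# measure the validation accuracy drop, and rank filters by importance.
import copy
import torch
import torch.nn as nn


@torch.no_grad()
def evaluate(model: nn.Module, val_loader) -> float:
    """Top-1 accuracy on the validation set (hypothetical helper)."""
    model.eval()
    correct, total = 0, 0
    for images, labels in val_loader:
        preds = model(images).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return correct / total


@torch.no_grad()
def oracle_filter_scores(model: nn.Module, layer_name: str, val_loader):
    """Accuracy drop caused by ablating each filter of `layer_name`.

    This brute-force loop is why oracle pruning is accurate but
    prohibitively slow on very deep CNNs: one full evaluation per filter.
    """
    baseline = evaluate(model, val_loader)
    conv = dict(model.named_modules())[layer_name]
    assert isinstance(conv, nn.Conv2d)
    drops = []
    for i in range(conv.out_channels):
        pruned = copy.deepcopy(model)
        pruned_conv = dict(pruned.named_modules())[layer_name]
        pruned_conv.weight[i].zero_()          # mask the i-th filter
        if pruned_conv.bias is not None:
            pruned_conv.bias[i].zero_()
        drops.append(baseline - evaluate(pruned, val_loader))
    return drops                               # small drop => less important filter
```

Filters with the smallest accuracy drop are the candidates for removal; AOFP's contribution, per the abstract, is approximating this procedure at training time so it remains practical on very deep CNNs.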
Author Information
XIAOHAN DING (Tsinghua University)
Guiguang Ding (Tsinghua University)
Yuchen Guo (Tsinghua University)
Jungong Han (Lancaster University)
Chenggang Yan (Hangzhou Dianzi University)
Related Events (a corresponding poster, oral, or spotlight)
-
2019 Poster: Approximated Oracle Filter Pruning for Destructive CNN Width Optimization
Thu Jun 13th 01:30 -- 04:00 AM, Room: Pacific Ballroom