Poster
Efficient Latency-Aware CNN Depth Compression via Two-Stage Dynamic Programming
Jinuk Kim · Yeonwoo Jeong · Deokjae Lee · Hyun Oh Song

Tue Jul 25 05:00 PM -- 06:30 PM (PDT) @ Exhibit Hall 1 #310
Recent works on neural network pruning advocate that reducing the depth of a network is more effective at reducing run-time memory usage and inference latency than reducing its width through channel pruning. In this regard, some recent works propose depth compression algorithms that merge convolution layers. However, the existing algorithms have a constricted search space and rely on human-engineered heuristics. In this paper, we propose a novel depth compression algorithm that targets general convolution operations. We propose a subset selection problem that replaces inefficient activation layers with identity functions and optimally merges consecutive convolution operations into shallow equivalent convolution operations to reduce end-to-end inference latency. Since the proposed subset selection problem is NP-hard, we formulate a surrogate optimization problem that can be solved exactly via two-stage dynamic programming within a few seconds. We evaluate our methods and baselines with TensorRT for a fair inference latency comparison. Our method outperforms the baseline method with higher accuracy and faster inference speed on MobileNetV2 on the ImageNet dataset. Specifically, we achieve a $1.41\times$ speed-up with a $0.11$%p accuracy gain for MobileNetV2-1.0 on ImageNet.
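Below is a minimal sketch (not the authors' implementation) of the basic building block the abstract describes: once the activation between two convolutions is replaced by the identity, the two convolutions compose into a single equivalent convolution whose kernel size is $k_1 + k_2 - 1$. The helper name `merge_convs`, the tensor shapes, and the stride-1/no-padding setting (where the equivalence is exact) are illustrative assumptions.

```python
# Sketch: merge two consecutive Conv2d layers (no activation in between) into one
# equivalent Conv2d. Assumes stride 1 and no padding; helper name is hypothetical.
import torch
import torch.nn.functional as F

def merge_convs(conv1: torch.nn.Conv2d, conv2: torch.nn.Conv2d) -> torch.nn.Conv2d:
    """Merge conv2(conv1(x)) into a single Conv2d (stride 1, no padding assumed)."""
    w1, b1 = conv1.weight, conv1.bias          # (C_mid, C_in, k1, k1)
    w2, b2 = conv2.weight, conv2.bias          # (C_out, C_mid, k2, k2)
    k2 = w2.shape[-1]
    # PyTorch's conv2d is cross-correlation, so flip w2 to get the true convolution
    # of the two kernels: W_eff[o, i] = sum_m  full_conv( W1[m, i], W2[o, m] ).
    merged_w = F.conv2d(
        w1.permute(1, 0, 2, 3),                # treat C_in as the batch dimension
        w2.flip(-2, -1),
        padding=k2 - 1,
    ).permute(1, 0, 2, 3)                      # (C_out, C_in, k1+k2-1, k1+k2-1)
    # The first conv's bias passes through the second conv's per-channel kernel sums.
    merged_b = b2.clone() if b2 is not None else torch.zeros(w2.shape[0])
    if b1 is not None:
        merged_b += (w2.sum(dim=(-2, -1)) * b1).sum(dim=1)
    merged = torch.nn.Conv2d(
        in_channels=w1.shape[1],
        out_channels=w2.shape[0],
        kernel_size=merged_w.shape[-1],
        bias=True,
    )
    merged.weight.data.copy_(merged_w)
    merged.bias.data.copy_(merged_b)
    return merged

# Quick check that the merged layer reproduces the two-layer composition.
conv1 = torch.nn.Conv2d(8, 16, kernel_size=3)
conv2 = torch.nn.Conv2d(16, 32, kernel_size=3)
merged = merge_convs(conv1, conv2)
x = torch.randn(1, 8, 32, 32)
assert torch.allclose(conv2(conv1(x)), merged(x), atol=1e-4)
```

The paper's contribution is then which activations to replace with identities so that the resulting merged network is fastest under an accuracy constraint; the abstract states this selection is cast as a surrogate optimization solved exactly with two-stage dynamic programming.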

Author Information

Jinuk Kim (Seoul National University)
Yeonwoo Jeong (Seoul National University)
Deokjae Lee (Seoul National University)
Hyun Oh Song (Seoul National University)
