Poster
Fri Jul 13 09:15 AM -- 12:00 PM (PDT) @ Hall B #104
Deep k-Means: Re-Training and Parameter Sharing with Harder Cluster Assignments for Compressing Deep Convolutions
Junru Wu · Yue Wang · Zhenyu Wu · Zhangyang Wang · Ashok Veeraraghavan · Yingyan Lin

Many existing compression approaches have focused on, and been evaluated with, convolutional neural networks (CNNs) in which the fully-connected layers contain most of the parameters (e.g., LeNet and AlexNet). However, the current trend of pushing CNNs deeper with convolutions has created a pressing demand for higher compression gains on CNNs in which convolutional layers dominate the parameter count (e.g., GoogLeNet, ResNet and Wide ResNet). Furthermore, convolutional layers account for most of the energy consumption at run time. To this end, this paper investigates the relatively less-explored direction of compressing the convolutional layers of deep CNNs. We introduce a novel spectrally relaxed k-means regularization that encourages approximately hard assignments of convolutional layer weights to K learned cluster centers during re-training. Compression is then achieved through weight sharing, by recording only the K cluster centers and the weight assignment indices. Our proposed pipeline, termed Deep k-Means, aligns the goals of the re-training and compression stages. We further propose an improved set of metrics to estimate the energy consumption of CNN hardware implementations, whose estimates are verified to be consistent with a previously proposed energy estimation tool extrapolated from actual hardware measurements. We evaluate Deep k-Means in compressing several CNN models in terms of both compression ratio and energy consumption reduction, observing promising results without incurring accuracy loss.
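To make the weight-sharing idea concrete, the sketch below clusters a toy convolutional weight tensor with plain k-means and rebuilds it from K shared values plus per-weight assignment indices. This is an illustrative stand-in under stated assumptions (scikit-learn's KMeans on scalar weights, a randomly initialized toy layer); it does not reproduce the paper's spectrally relaxed regularization or the re-training stage.

```python
# Minimal sketch of weight-sharing compression for a conv layer:
# store only K cluster centers plus per-weight assignment indices.
# Plain k-means is used here as a stand-in; the paper's spectrally
# relaxed k-means regularization during re-training is not reproduced.
import numpy as np
from sklearn.cluster import KMeans

def compress_conv_weights(weights: np.ndarray, K: int = 16):
    """Cluster a conv weight tensor (e.g., [out_c, in_c, kh, kw])
    into K shared values; return (centers, assignment indices)."""
    flat = weights.reshape(-1, 1)                 # each scalar weight is a point
    km = KMeans(n_clusters=K, n_init=10, random_state=0).fit(flat)
    centers = km.cluster_centers_.ravel()         # K shared weight values
    indices = km.labels_.astype(np.uint8)         # per-weight index into centers
    return centers, indices

def decompress(centers, indices, shape):
    """Rebuild the (approximate) weight tensor from shared values."""
    return centers[indices].reshape(shape)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=(64, 64, 3, 3)).astype(np.float32)  # toy conv layer
    centers, indices = compress_conv_weights(w, K=16)
    w_hat = decompress(centers, indices, w.shape)

    # Storage: 32-bit floats before vs. K centers + log2(K)-bit indices after.
    bits_before = w.size * 32
    bits_after = centers.size * 32 + w.size * int(np.ceil(np.log2(len(centers))))
    print(f"compression ratio ~{bits_before / bits_after:.1f}x, "
          f"reconstruction MSE {np.mean((w - w_hat) ** 2):.4f}")
```

In the actual pipeline, the regularization applied during re-training is what makes the weights cluster-friendly, so that the hard assignment step above incurs little accuracy loss.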