
Rate Distortion For Model Compression: From Theory To Practice
Weihao Gao · Yu-Han Liu · Chong Wang · Sewoong Oh

Tue Jun 11 03:15 PM -- 03:20 PM (PDT) @ Room 102

The enormous size of modern deep neural networks makes it challenging to deploy these models in memory- and communication-limited scenarios. Thus, compressing a trained model without a significant loss in performance has become an increasingly important task. Tremendous advances have been made recently, with the main technical building blocks being parameter pruning, parameter sharing (quantization), and low-rank factorization. In this paper, we propose principled approaches to improve upon the common heuristics used in two of those building blocks, namely pruning and quantization.
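The two building blocks the paper targets can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's method: magnitude-based pruning zeroes the smallest-magnitude weights, and a simple uniform quantizer stands in for k-means-style parameter sharing; the function names and the uniform-level choice are assumptions for illustration.

```python
import numpy as np

def prune(weights, sparsity):
    """Magnitude pruning: zero out the given fraction of smallest-magnitude weights."""
    flat = np.abs(weights).flatten()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # threshold = k-th smallest absolute value
    threshold = np.partition(flat, k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

def quantize(weights, num_levels):
    """Parameter sharing: map every weight to the nearest of a few shared values.
    Uniformly spaced levels are used here as a stand-in for learned (k-means) centers."""
    lo, hi = weights.min(), weights.max()
    levels = np.linspace(lo, hi, num_levels)
    idx = np.abs(weights[..., None] - levels).argmin(axis=-1)
    return levels[idx]
```

After quantization the model stores only `num_levels` distinct values plus per-weight indices, which is where the compression comes from.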

We first study the fundamental limit of model compression via rate distortion theory. We bring the rate distortion function from data compression to model compression to quantify this limit. We prove a lower bound on the rate distortion function and show that it is achievable for linear models. Although this achievable compression scheme is intractable in practice, the analysis motivates a novel model compression framework. This framework provides a new objective function for model compression, which can be applied together with other classes of model compressors such as pruning or quantization. Theoretically, we prove that the proposed scheme is optimal for compressing one-hidden-layer ReLU neural networks. Empirically, we show that the proposed scheme improves upon the baseline in the compression-accuracy tradeoff.
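For reference, the classical rate distortion function carried over from data compression has the following form, where $W$ denotes the (random) model parameters, $\hat{W}$ the compressed parameters, and $d(\cdot,\cdot)$ a distortion measure; the specific choice of $d$ used in the paper (e.g., a discrepancy in model output) is not restated here.

```latex
R(D) \;=\; \min_{P_{\hat{W} \mid W}\,:\; \mathbb{E}\,[\,d(W, \hat{W})\,] \,\le\, D} \; I(W; \hat{W})
```

That is, among all stochastic compressors whose expected distortion is at most $D$, the minimum achievable rate is the mutual information between the original and compressed parameters.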

Author Information

Weihao Gao (University of Illinois at Urbana-Champaign)
Yu-Han Liu (Google)
Chong Wang (ByteDance Inc.)
Sewoong Oh (University of Washington)
