Variational Bayesian Quantization

Yibo Yang · Robert Bamler · Stephan Mandt

Keywords: Approximate Inference, Bayesian Deep Learning, Bayesian Methods, Deep Learning - General



We propose a novel algorithm for quantizing continuous latent representations in trained models. Our approach applies to deep probabilistic models, such as variational autoencoders (VAEs), and enables both data and model compression. Unlike current end-to-end neural compression methods that tailor the model to a fixed quantization scheme, our algorithm separates model design and training from quantization. Consequently, it enables "plug-and-play" compression at variable rate-distortion trade-offs using a single trained model. The algorithm can be seen as a novel extension of arithmetic coding to the continuous domain, and adapts its quantization accuracy to estimates of posterior uncertainty. Our experimental results demonstrate the importance of taking posterior uncertainties into account, and show that image compression with the proposed algorithm outperforms JPEG over a wide range of bit rates using only a single standard VAE. Further experiments on Bayesian neural word embeddings demonstrate the versatility of the proposed method.
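
To make the uncertainty-adaptive quantization concrete, below is a minimal Python sketch of the core idea, not the paper's exact algorithm. It assumes a standard normal prior, takes candidate quantization points to be midpoints of dyadic intervals of the prior CDF, and picks each coordinate's bit depth by a rate-distortion objective in which distortion is weighted by the posterior precision, so coordinates with high posterior uncertainty are quantized more coarsely. The function name `vbq_quantize`, the trade-off parameter `lam`, and the `max_depth` cap are all hypothetical choices for illustration.

```python
import numpy as np
from scipy.stats import norm

def vbq_quantize(mu, sigma, lam, max_depth=16):
    """Quantize one latent coordinate with uncertainty-adaptive precision.

    Illustrative sketch (assumptions: standard normal prior; candidate
    quantization points are midpoints of dyadic intervals of the prior CDF).

    mu, sigma : posterior mean and standard deviation of the coordinate
    lam       : rate-distortion trade-off parameter (larger -> finer grid)
    """
    best = (np.inf, mu, 0)  # (objective, reconstruction, bit depth)
    u = norm.cdf(mu)        # map the posterior mean into prior-CDF space
    for depth in range(1, max_depth + 1):
        # Midpoint of the dyadic interval of width 2**-depth containing u,
        # mapped back to latent space through the inverse prior CDF.
        k = np.floor(u * 2**depth)
        x_hat = norm.ppf((k + 0.5) / 2**depth)
        # Rate is ~depth bits; distortion is weighted by posterior precision,
        # so confident coordinates demand (and receive) finer quantization.
        objective = depth + lam * (x_hat - mu)**2 / sigma**2
        if objective < best[0]:
            best = (objective, x_hat, depth)
    return best[1], best[2]  # quantized value and its bit depth

# A confident coordinate is allotted more bits than an uncertain one:
print(vbq_quantize(mu=0.7, sigma=0.05, lam=2.0))
print(vbq_quantize(mu=0.7, sigma=1.0, lam=2.0))
```

Applied coordinate-wise to a latent vector, sweeping `lam` would trace out a rate-distortion curve from a single trained model, which is the "plug-and-play" property the abstract refers to.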
