Poster

Variational Bayesian Quantization

Yibo Yang · Robert Bamler · Stephan Mandt

Keywords: [ Approximate Inference ] [ Bayesian Deep Learning ] [ Bayesian Methods ] [ Deep Learning - General ]


Abstract:

We propose a novel algorithm for quantizing continuous latent representations in trained models. Our approach applies to deep probabilistic models, such as variational autoencoders (VAEs), and enables both data and model compression. Unlike current end-to-end neural compression methods that tailor the model to a fixed quantization scheme, our algorithm separates model design and training from quantization. Consequently, our algorithm enables "plug-and-play" compression across variable rate-distortion trade-offs, using a single trained model. Our algorithm can be seen as a novel extension of arithmetic coding to the continuous domain, and uses adaptive quantization accuracy based on estimates of posterior uncertainty. Our experimental results demonstrate the importance of taking posterior uncertainties into account, and show that image compression with the proposed algorithm outperforms JPEG over a wide range of bit rates using only a single standard VAE. Further experiments on Bayesian neural word embeddings demonstrate the versatility of the proposed method.
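The abstract's key idea is that quantization accuracy should adapt to posterior uncertainty: latent dimensions the model is unsure about can be stored coarsely, while confident ones deserve finer precision. The sketch below is not the authors' actual algorithm (which extends arithmetic coding to the continuous domain); it is a minimal illustration of uncertainty-adaptive quantization, assuming Gaussian VAE posteriors with means `mu` and standard deviations `sigma`, and a hypothetical trade-off knob `lam` standing in for the rate-distortion setting.

```python
# Minimal sketch (not the paper's exact method): quantization step sizes are
# scaled by posterior standard deviation, so uncertain latents are stored
# coarsely and confident latents more precisely.
import numpy as np

def uncertainty_adaptive_quantize(mu, sigma, lam=1.0):
    """Quantize posterior means with step sizes proportional to posterior std.

    mu, sigma: arrays of per-dimension posterior means / standard deviations
               from a trained VAE encoder (assumed Gaussian posteriors).
    lam:       hypothetical rate-distortion knob; larger values mean coarser
               quantization and hence lower bit rate.
    """
    step = lam * sigma                           # coarser grid where the posterior is wide
    indices = np.round(mu / step).astype(int)    # integer symbols to be entropy-coded
    reconstruction = indices * step              # dequantized latents fed to the decoder
    return indices, reconstruction

# Example: three latent dimensions with increasing posterior uncertainty.
mu = np.array([0.73, -2.41, 0.05])
sigma = np.array([0.05, 0.50, 1.20])
idx, z_hat = uncertainty_adaptive_quantize(mu, sigma, lam=0.5)
print(idx, z_hat)
```

In this toy version, changing `lam` alone moves along the rate-distortion curve with the same trained model, mirroring the "plug-and-play" property described in the abstract; the paper instead entropy-codes under the posterior itself rather than using a fixed grid.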
