We propose a novel algorithm for quantizing continuous latent representations in trained models. Our approach applies to deep probabilistic models, such as variational autoencoders (VAEs), and enables both data and model compression. Unlike current end-to-end neural compression methods that tailor the model to a fixed quantization scheme, our algorithm separates model design and training from quantization. It therefore enables "plug-and-play" compression at a variable rate-distortion trade-off, using a single trained model. The algorithm can be seen as a novel extension of arithmetic coding to the continuous domain, and adapts its quantization accuracy to estimates of posterior uncertainty. Our experimental results demonstrate the importance of accounting for posterior uncertainties, and show that image compression with the proposed algorithm outperforms JPEG over a wide range of bit rates using only a single standard VAE. Further experiments on Bayesian neural word embeddings demonstrate the versatility of the proposed method.
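The abstract's central idea, allocating quantization precision according to posterior uncertainty, can be illustrated with a minimal sketch. This is not the paper's actual algorithm (which extends arithmetic coding to continuous latent spaces); the function name quantize_latents, the per-dimension grid rule, and the base_step parameter below are illustrative assumptions only.

```python
import numpy as np

def quantize_latents(mu, sigma, base_step=0.1):
    """Round each posterior mean to a grid whose spacing scales with the
    posterior standard deviation: uncertain dimensions get coarse bins
    (cheap to encode), confident dimensions get fine bins (more bits)."""
    step = base_step * sigma            # per-dimension grid spacing (assumed rule)
    return np.round(mu / step) * step   # snap each mean to its nearest grid point

# Toy usage: a 4-dimensional latent with varying posterior uncertainty.
mu = np.array([0.83, -1.27, 0.05, 2.41])
sigma = np.array([0.02, 0.50, 0.01, 1.00])
print(quantize_latents(mu, sigma))
```

The design intuition is that coarser grids in uncertain dimensions cost fewer bits while introducing error mostly where the posterior already tolerates it; the paper achieves a related effect within an arithmetic-coding framework rather than with a fixed grid.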
Author Information
Yibo Yang (University of California, Irvine)
Robert Bamler (University of California, Irvine)
Stephan Mandt (University of California, Irvine)
Stephan Mandt is an Assistant Professor of Computer Science at the University of California, Irvine. From 2016 to 2018, he was a Senior Researcher and head of the statistical machine learning group at Disney Research, first in Pittsburgh and later in Los Angeles. He previously held postdoctoral positions at Columbia University and Princeton University. Stephan holds a PhD in Theoretical Physics from the University of Cologne. He is a Fellow of the German National Merit Foundation, a Kavli Fellow of the U.S. National Academy of Sciences, and was a visiting researcher at Google Brain. Stephan serves regularly as an Area Chair for NeurIPS, ICML, AAAI, and ICLR, and is a member of the Editorial Board of JMLR. His research is currently supported by NSF, DARPA, IBM, and Qualcomm.
More from the Same Authors
- 2023: Lossy Image Compression with Conditional Diffusion Model
  Ruihan Yang · Stephan Mandt
- 2023: Estimating the Rate-Distortion Function by Wasserstein Gradient Descent
  Yibo Yang · Stephan Eckstein · Marcel Nutz · Stephan Mandt
- 2023: Autoencoding Implicit Neural Representations for Image Compression
  Tuan Pham · Yibo Yang · Stephan Mandt
- 2023 Workshop: Neural Compression: From Information Theory to Applications
  Berivan Isik · Yibo Yang · Daniel Severo · Karen Ullrich · Robert Bamler · Stephan Mandt
- 2023 Poster: Deep Anomaly Detection under Labeling Budget Constraints
  Aodong Li · Chen Qiu · Marius Kloft · Padhraic Smyth · Stephan Mandt · Maja Rudolph
- 2023 Poster: Fully Bayesian Autoencoders with Latent Sparse Gaussian Processes
  Ba-Hien Tran · Babak Shahbaba · Stephan Mandt · Maurizio Filippone
- 2022 Poster: Structured Stochastic Gradient MCMC
  Antonios Alexos · Alex Boyd · Stephan Mandt
- 2022 Spotlight: Structured Stochastic Gradient MCMC
  Antonios Alexos · Alex Boyd · Stephan Mandt
- 2022 Poster: Latent Outlier Exposure for Anomaly Detection with Contaminated Data
  Chen Qiu · Aodong Li · Marius Kloft · Maja Rudolph · Stephan Mandt
- 2022 Spotlight: Latent Outlier Exposure for Anomaly Detection with Contaminated Data
  Chen Qiu · Aodong Li · Marius Kloft · Maja Rudolph · Stephan Mandt
- 2021 Poster: Neural Transformation Learning for Deep Anomaly Detection Beyond Images
  Chen Qiu · Timo Pfrommer · Marius Kloft · Stephan Mandt · Maja Rudolph
- 2021 Spotlight: Neural Transformation Learning for Deep Anomaly Detection Beyond Images
  Chen Qiu · Timo Pfrommer · Marius Kloft · Stephan Mandt · Maja Rudolph
- 2020 Poster: The k-tied Normal Distribution: A Compact Parameterization of Gaussian Mean Field Posteriors in Bayesian Neural Networks
  Jakub Swiatkowski · Kevin Roth · Bastiaan Veeling · Linh Tran · Joshua V Dillon · Jasper Snoek · Stephan Mandt · Tim Salimans · Rodolphe Jenatton · Sebastian Nowozin
- 2020 Poster: How Good is the Bayes Posterior in Deep Neural Networks Really?
  Florian Wenzel · Kevin Roth · Bastiaan Veeling · Jakub Swiatkowski · Linh Tran · Stephan Mandt · Jasper Snoek · Tim Salimans · Rodolphe Jenatton · Sebastian Nowozin
- 2018 Poster: Iterative Amortized Inference
  Joe Marino · Yisong Yue · Stephan Mandt
- 2018 Poster: Disentangled Sequential Autoencoder
  Yingzhen Li · Stephan Mandt
- 2018 Oral: Disentangled Sequential Autoencoder
  Yingzhen Li · Stephan Mandt
- 2018 Oral: Iterative Amortized Inference
  Joe Marino · Yisong Yue · Stephan Mandt
- 2018 Poster: Quasi-Monte Carlo Variational Inference
  Alexander Buchholz · Florian Wenzel · Stephan Mandt
- 2018 Poster: Improving Optimization in Models With Continuous Symmetry Breaking
  Robert Bamler · Stephan Mandt
- 2018 Oral: Quasi-Monte Carlo Variational Inference
  Alexander Buchholz · Florian Wenzel · Stephan Mandt
- 2018 Oral: Improving Optimization in Models With Continuous Symmetry Breaking
  Robert Bamler · Stephan Mandt
- 2017 Poster: Dynamic Word Embeddings
  Robert Bamler · Stephan Mandt
- 2017 Talk: Dynamic Word Embeddings
  Robert Bamler · Stephan Mandt