Poster in Workshop: Neural Compression: From Information Theory to Applications
Minimal Random Code Learning with Mean-KL Parameterization
Jihao Andreas Lin · Gergely Flamich · Jose Miguel Hernandez-Lobato
Abstract:
This paper studies the qualitative behavior and robustness of two variants of Minimal Random Code Learning (MIRACLE) used to compress variational Bayesian neural networks. MIRACLE implements a powerful, conditionally Gaussian variational approximation for the weight posterior $Q_{\mathbf{w}}$ and uses relative entropy coding to compress a weight sample from the posterior using a Gaussian coding distribution $P_{\mathbf{w}}$. To achieve the desired compression rate, $D_{\mathrm{KL}}[Q_{\mathbf{w}} \| P_{\mathbf{w}}]$ must be constrained, which requires a computationally expensive annealing procedure under the conventional mean-variance (Mean-Var) parameterization for $Q_{\mathbf{w}}$. Instead, we parameterize $Q_{\mathbf{w}}$ by its mean and KL divergence from $P_{\mathbf{w}}$ to constrain the compression cost to the desired value by construction. We demonstrate that variational training with Mean-KL parameterization converges twice as fast and maintains predictive performance after compression. Furthermore, we show that Mean-KL leads to more meaningful variational distributions with heavier tails and compressed weight samples that are more robust to pruning.
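To make the Mean-KL idea concrete, the sketch below shows one way a Gaussian factor $Q_{\mathbf{w}} = \mathcal{N}(\mu, \sigma^2)$ can be parameterized by its mean and a prescribed KL divergence to a Gaussian coding distribution $P_{\mathbf{w}} = \mathcal{N}(\mu_p, \sigma_p^2)$: given $\mu$ and the KL budget, the variance is recovered in closed form via the Lambert W function. This is an illustrative sketch, not the authors' implementation; the function name, default arguments, and branch choice are assumptions.

```python
# Minimal sketch (not the authors' code) of a Mean-KL parameterization for a
# univariate Gaussian variational factor Q_w = N(mu, sigma^2) with a fixed
# Gaussian coding distribution P_w = N(mu_p, sigma_p^2).
import numpy as np
from scipy.special import lambertw


def sigma_from_mean_kl(mu, kl_target, mu_p=0.0, sigma_p=1.0, branch=-1):
    """Solve KL(N(mu, sigma^2) || N(mu_p, sigma_p^2)) = kl_target for sigma.

    With r = sigma^2 / sigma_p^2 and d = (mu - mu_p)^2 / sigma_p^2, the KL is
        kl = 0.5 * (r + d - 1 - log r),
    so r satisfies r - log r = 2*kl_target + 1 - d, which has the closed-form
    solution r = -W(-exp(-(2*kl_target + 1 - d))).  branch=0 selects the root
    with r <= 1, branch=-1 the root with r >= 1 (which branch MIRACLE's
    Mean-KL variant uses is an assumption here).
    """
    d = (mu - mu_p) ** 2 / sigma_p ** 2
    c = 2.0 * kl_target + 1.0 - d
    if np.any(c < 1.0):
        # No real solution: the mean alone already exceeds the KL budget.
        raise ValueError("kl_target too small for this mean offset")
    r = -lambertw(-np.exp(-c), k=branch).real
    return sigma_p * np.sqrt(r)


# Example: mean 0.3 with a one-nat KL budget against a standard normal P_w.
print(sigma_from_mean_kl(mu=0.3, kl_target=1.0))
```

By construction, the resulting $Q_{\mathbf{w}}$ attains exactly the specified KL to $P_{\mathbf{w}}$, so the compression cost of relative entropy coding is fixed up front rather than driven to a target by annealing a KL penalty during training.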