

Poster in Workshop: Beyond Bayes: Paths Towards Universal Reasoning Systems

P32: Collapsed Inference for Bayesian Deep Learning


Authors: Zhe Zeng, Guy Van den Broeck

Abstract: Bayesian deep learning excels at providing accurate predictions with calibrated uncertainty. Current research has focused on scalability by imposing simplistic assumptions on posteriors and predictive distributions, which harms prediction performance. While accurate posterior estimation is critical to performance, it is computationally expensive and prohibitive in practice, since it requires running a long Monte Carlo chain. In this paper, we explore a trade-off between reliable inference and algorithmic scalability. The main idea is to use collapsed samples: while performing full Bayesian inference, we sample some of the stochastic weights and maintain tractable conditional distributions over the others, which are amenable to exact inference. This is made possible by encoding Bayesian ReLU neural networks as probabilistic Satisfiability Modulo Theories models and leveraging a recently proposed tool that performs exact inference over such models. We illustrate our proposed collapsed Bayesian deep learning algorithm on regression tasks. Empirical results show significant improvements over existing Bayesian deep learning approaches.
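To make the collapse idea concrete, the following is a minimal Python sketch, not the paper's probabilistic SMT encoding: it illustrates collapsed sampling in the simplest analytic case, where the hidden-layer weights are sampled by Monte Carlo while a Gaussian last-layer weight vector is integrated out exactly, since the output is linear in it. All function names and the Gaussian last-layer assumption are illustrative, not taken from the paper.

# Minimal sketch of collapsed sampling for a Bayesian ReLU network.
# Hidden-layer weights W1 are sampled; the Gaussian last-layer weights
# w2 ~ N(mu2, Sigma2) are collapsed, so each sample contributes an
# exact conditional predictive mean instead of a noisy point estimate.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def collapsed_predictive_mean(x, w1_samples, mu2):
    """E[y | x] ~= (1/S) * sum_s E[y | x, W1^(s)], where the inner
    expectation is closed-form because y = w2 @ relu(W1 @ x) is
    linear in w2 and w2 is Gaussian with mean mu2."""
    total = 0.0
    for W1 in w1_samples:
        h = relu(W1 @ x)   # hidden features under the sampled W1
        total += mu2 @ h   # exact conditional mean: E[w2] @ h
    return total / len(w1_samples)

# Toy usage: 2 inputs, 4 hidden units, scalar output.
d_in, d_hidden, n_samples = 2, 4, 100
x = np.array([0.5, -1.0])
# Monte Carlo samples from an (assumed Gaussian) hidden-layer posterior.
w1_samples = rng.normal(0.0, 1.0, size=(n_samples, d_hidden, d_in))
mu2 = rng.normal(0.0, 1.0, size=d_hidden)  # posterior mean of last layer
print(collapsed_predictive_mean(x, w1_samples, mu2))

In the paper the collapsed weights need not be Gaussian or last-layer only: encoding the ReLU network as a probabilistic SMT model is what makes exact inference over the conditional distributions possible in general.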
