Poster in Workshop: Structured Probabilistic Inference and Generative Modeling
Regularized KL-Divergence for Well-Defined Function-Space Variational Inference in Bayesian Neural Networks
Tristan Cinquin · Robert Bamler
Keywords: [ Bayesian Neural Network ] [ Variational Inference ] [ Generalized Variational Inference ] [ Gaussian Process Prior ] [ Bayesian Deep Learning ]
Bayesian neural networks (BNNs) promise to combine the predictive performance of neural networks with principled uncertainty modeling, which is important for safety-critical systems and decision making. However, posterior uncertainty estimates depend on the choice of prior, and finding informative priors in weight space has proven difficult. This has motivated variational inference (VI) methods that place priors directly on the function generated by the BNN rather than on its weights, making it possible to express structured prior beliefs in the form of Gaussian process (GP) priors. In this paper, we address a fundamental issue with such function-space VI approaches pointed out by Burt et al. (2020), who showed that the objective function (ELBO) is negative infinity for most priors of interest. Our solution builds on generalized VI (Knoblauch et al., 2019) with the regularized KL divergence (Quang, 2019) and is, to the best of our knowledge, the first well-defined variational objective for function-space inference in BNNs with GP priors. Experiments show that our method incorporates the properties specified by the GP prior on synthetic and small real-world data sets, and provides competitive uncertainty estimates for regression, classification and out-of-distribution detection compared to BNN baselines with both function- and weight-space priors.
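For context, the displays below sketch the objectives the abstract refers to. This is an illustrative assumption based on the cited works rather than an excerpt from the paper: $Q$ denotes the variational measure over functions $f$ induced by the BNN, $P$ the GP prior, $\mathcal{D}$ the data, and $\gamma > 0$ the regularization parameter of the regularized KL divergence. The standard function-space ELBO reads

\[
\mathcal{L}(Q) \;=\; \mathbb{E}_{Q(f)}\!\big[\log p(\mathcal{D}\mid f)\big] \;-\; \mathrm{KL}\big(Q \,\|\, P\big),
\]

and, per Burt et al. (2020), this quantity is negative infinity for most GP priors of interest because the KL term diverges (assuming the likelihood term stays finite). In the spirit of generalized VI, which allows swapping the KL term for another divergence, a well-defined objective can then be written as

\[
\mathcal{L}_\gamma(Q) \;=\; \mathbb{E}_{Q(f)}\!\big[\log p(\mathcal{D}\mid f)\big] \;-\; D^{\gamma}_{\mathrm{KL}}\big(Q \,\|\, P\big), \qquad \gamma > 0,
\]

where $D^{\gamma}_{\mathrm{KL}}$ is the regularized KL divergence of Quang (2019), which remains finite for every $\gamma > 0$. The exact construction used in the paper may differ from this sketch.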