Poster
Variational Implicit Processes
Chao Ma · Yingzhen Li · Jose Miguel Hernandez-Lobato
Pacific Ballroom #225
Keywords: [ Approximate Inference ] [ Bayesian Deep Learning ] [ Bayesian Nonparametrics ] [ Gaussian Processes ]
We introduce implicit processes (IPs), stochastic processes that place implicitly defined multivariate distributions over any finite collection of random variables. IPs are therefore highly flexible implicit priors over \emph{functions}, with examples including data simulators, Bayesian neural networks and non-linear transformations of stochastic processes. A novel and efficient approximate inference algorithm for IPs, namely the variational implicit process (VIP), is derived using generalised wake-sleep updates. This method yields simple update equations and allows scalable hyper-parameter learning with stochastic optimization. Experiments show that VIPs return better uncertainty estimates and lower errors than existing inference methods for challenging models such as Bayesian neural networks and Gaussian processes.
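To make the notion of an implicit process concrete, here is a minimal sketch (not the paper's algorithm) of one of the examples listed above: a single-hidden-layer Bayesian neural network whose random weights induce an implicit joint distribution over function values at any finite set of inputs. The architecture, prior, and sizes are illustrative assumptions.

```python
import numpy as np

def sample_ip_function(rng, hidden=20):
    """Sample one function from an implicit process defined by a
    one-hidden-layer Bayesian neural network prior (illustrative only)."""
    # Each draw of the weights from a standard normal prior defines a function.
    w1 = rng.standard_normal((1, hidden))
    b1 = rng.standard_normal(hidden)
    w2 = rng.standard_normal((hidden, 1))
    b2 = rng.standard_normal(1)

    def f(x):
        # x: shape (n,); returns function values of shape (n,)
        h = np.tanh(x[:, None] @ w1 + b1)
        return (h @ w2 + b2).ravel()

    return f

rng = np.random.default_rng(0)
x = np.linspace(-2.0, 2.0, 5)
# The IP induces a joint distribution over (f(x_1), ..., f(x_n)):
# easy to sample from, but with no closed-form density -- hence "implicit".
samples = np.stack([sample_ip_function(rng)(x) for _ in range(100)])
print(samples.shape)  # (100, 5)
```

Any generator of random functions with this sample-but-no-density property (e.g. a data simulator) fits the same template.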