

Oral

Variational Implicit Processes

Chao Ma · Yingzhen Li · José Miguel Hernández-Lobato

Abstract:

We introduce the implicit process (IP), a stochastic process that places an implicitly defined multivariate distribution over any finite collection of random variables. IPs are therefore highly flexible implicit priors over \emph{functions}, with examples including data simulators, Bayesian neural networks, and non-linear transformations of stochastic processes. A novel and efficient function-space approximate Bayesian inference algorithm for IPs, namely the variational implicit process (VIP), is derived using generalised wake-sleep updates. This method yields simple update equations and allows scalable hyper-parameter learning with stochastic optimization. Experiments demonstrate that VIPs return better uncertainty estimates and superior performance compared to existing inference methods on challenging models such as Bayesian LSTMs, Bayesian neural networks, and Gaussian processes.
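For intuition, here is a minimal sketch (not from the paper) of how an IP defines a distribution over functions, using the Bayesian-neural-network example from the abstract: drawing random network weights and evaluating the resulting function at a finite set of inputs produces one sample from the implicitly defined joint distribution over those points. The architecture, activation, and prior scale below are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def sample_ip_function_values(x, hidden=50, prior_std=1.0, rng=None):
    """Draw one sample of f(x) from a one-hidden-layer BNN prior.

    x: (n, d) array of input locations. Returns an (n,) array of
    function values, i.e. a draw from the IP's finite-dimensional
    distribution at those n points.
    """
    rng = np.random.default_rng() if rng is None else rng
    d = x.shape[1]
    # The latent randomness of the IP: one draw of all network weights.
    w1 = rng.normal(0.0, prior_std, size=(d, hidden))
    b1 = rng.normal(0.0, prior_std, size=hidden)
    w2 = rng.normal(0.0, prior_std / np.sqrt(hidden), size=(hidden, 1))
    b2 = rng.normal(0.0, prior_std)
    # Deterministic transform of the latent draw gives the function values.
    return (np.tanh(x @ w1 + b1) @ w2).ravel() + b2

# Usage: five function draws, each evaluated at the same 100 inputs.
x = np.linspace(-3, 3, 100)[:, None]
draws = np.stack([sample_ip_function_values(x) for _ in range(5)])
```

Note that the joint density over the sampled function values has no closed form; it is defined only through this sampling procedure, which is what makes the prior "implicit" and motivates the function-space inference scheme the paper derives.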
