We introduce implicit processes (IPs): stochastic processes that place implicitly defined multivariate distributions over any finite collection of random variables. IPs are therefore highly flexible implicit priors over functions, with examples including data simulators, Bayesian neural networks and non-linear transformations of stochastic processes. A novel and efficient approximate inference algorithm for IPs, the variational implicit process (VIP), is derived using generalised wake-sleep updates. This method yields simple update equations and allows scalable hyper-parameter learning with stochastic optimization. Experiments show that VIPs return better uncertainty estimates and lower errors than existing inference methods for challenging models such as Bayesian neural networks and Gaussian processes.
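To make the idea of an implicit prior over functions concrete, here is a minimal illustrative sketch (not the paper's code): a Bayesian neural network defines an implicit process, since sampling weights from a prior and evaluating the network at a finite set of inputs yields one draw from an implicitly defined joint distribution over function values. All names and the network architecture below are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_function(X, hidden=50):
    """Draw one function sample f(X) from a one-hidden-layer BNN prior.

    The distribution over f(X) is 'implicit': we can sample from it by
    sampling weights, but its density has no closed form in general.
    """
    W1 = rng.normal(0.0, 1.0, size=(X.shape[1], hidden))   # prior over weights
    b1 = rng.normal(0.0, 1.0, size=hidden)
    W2 = rng.normal(0.0, 1.0 / np.sqrt(hidden), size=(hidden, 1))
    return np.tanh(X @ W1 + b1) @ W2                        # f(X), shape (n, 1)

# Evaluate 10 prior function draws on a finite collection of 100 inputs:
X = np.linspace(-3.0, 3.0, 100).reshape(-1, 1)
draws = np.stack([sample_function(X) for _ in range(10)])
print(draws.shape)  # (10, 100, 1)
```

Any other sampler over functions (a data simulator, a warped stochastic process) could replace the BNN here; the defining property is only that finite-dimensional evaluations can be sampled jointly.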
Author Information
Chao Ma (University of Cambridge)
Yingzhen Li (Microsoft Research Cambridge)
Jose Miguel Hernandez-Lobato (University of Cambridge)
Related Events (a corresponding poster, oral, or spotlight)
- 2019 Oral: Variational Implicit Processes »
  Wed Jun 12th 09:40 -- 10:00 PM, Room 101
More from the Same Authors
- 2020 Poster: Reinforcement Learning for Molecular Design Guided by Quantum Mechanics »
  Gregor Simm · Robert Pinsler · Jose Miguel Hernandez-Lobato
- 2020 Poster: A Generative Model for Molecular Distance Geometry »
  Gregor Simm · Jose Miguel Hernandez-Lobato
- 2019 Poster: Dropout as a Structured Shrinkage Prior »
  Eric Nalisnick · Jose Miguel Hernandez-Lobato · Padhraic Smyth
- 2019 Oral: Dropout as a Structured Shrinkage Prior »
  Eric Nalisnick · Jose Miguel Hernandez-Lobato · Padhraic Smyth
- 2019 Poster: Are Generative Classifiers More Robust to Adversarial Attacks? »
  Yingzhen Li · John Bradshaw · Yash Sharma
- 2019 Poster: EDDI: Efficient Dynamic Discovery of High-Value Information with Partial VAE »
  Chao Ma · Sebastian Tschiatschek · Konstantina Palla · Jose Miguel Hernandez-Lobato · Sebastian Nowozin · Cheng Zhang
- 2019 Oral: EDDI: Efficient Dynamic Discovery of High-Value Information with Partial VAE »
  Chao Ma · Sebastian Tschiatschek · Konstantina Palla · Jose Miguel Hernandez-Lobato · Sebastian Nowozin · Cheng Zhang
- 2019 Oral: Are Generative Classifiers More Robust to Adversarial Attacks? »
  Yingzhen Li · John Bradshaw · Yash Sharma
- 2018 Poster: Decomposition of Uncertainty in Bayesian Deep Learning for Efficient and Risk-sensitive Learning »
  Stefan Depeweg · Jose Miguel Hernandez-Lobato · Finale Doshi-Velez · Steffen Udluft
- 2018 Oral: Decomposition of Uncertainty in Bayesian Deep Learning for Efficient and Risk-sensitive Learning »
  Stefan Depeweg · Jose Miguel Hernandez-Lobato · Finale Doshi-Velez · Steffen Udluft
- 2017 Poster: Parallel and Distributed Thompson Sampling for Large-scale Accelerated Exploration of Chemical Space »
  Jose Miguel Hernandez-Lobato · James Requeima · Edward Pyzer-Knapp · Alan Aspuru-Guzik
- 2017 Poster: Grammar Variational Autoencoder »
  Matt J. Kusner · Brooks Paige · Jose Miguel Hernandez-Lobato
- 2017 Talk: Grammar Variational Autoencoder »
  Matt J. Kusner · Brooks Paige · Jose Miguel Hernandez-Lobato
- 2017 Talk: Parallel and Distributed Thompson Sampling for Large-scale Accelerated Exploration of Chemical Space »
  Jose Miguel Hernandez-Lobato · James Requeima · Edward Pyzer-Knapp · Alan Aspuru-Guzik
- 2017 Poster: Sequence Tutor: Conservative fine-tuning of sequence generation models with KL-control »
  Natasha Jaques · Shixiang Gu · Dzmitry Bahdanau · Jose Miguel Hernandez-Lobato · Richard E Turner · Douglas Eck
- 2017 Talk: Sequence Tutor: Conservative fine-tuning of sequence generation models with KL-control »
  Natasha Jaques · Shixiang Gu · Dzmitry Bahdanau · Jose Miguel Hernandez-Lobato · Richard E Turner · Douglas Eck