
Poster in Workshop: Beyond Bayes: Paths Towards Universal Reasoning Systems

P17: Correcting Model Bias with Sparse Implicit Processes


Authors: Simon Rodriguez Santana, Luis A. Ortega, Daniel Hernández-Lobato, Bryan Zaldivar

Abstract: Model selection is a crucial part of the Bayesian learning procedure in machine learning (ML). The choice of model may impose strong biases on the resulting predictions, which can hinder the performance of methods such as Bayesian neural networks and neural samplers. Recently proposed approaches for Bayesian ML instead exploit approximate inference in function space with implicit stochastic processes (a generalization of Gaussian processes). Sparse Implicit Processes (SIP) is particularly successful in this regard, since it is fully trainable and achieves flexible predictions. Here, we expand on the original experiments to show that SIP can correct model bias when the data-generating mechanism differs strongly from the one implied by the model. Using synthetic datasets, we show that SIP provides predictive distributions that reflect the data better than the exact predictions of the initial, misspecified model.
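The kind of model bias the abstract describes can be illustrated with a minimal synthetic sketch (this is an illustration of model misspecification only, not an implementation of SIP; the sine-wave generator and linear model are hypothetical choices, not the paper's actual experimental setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: the true data-generating mechanism is nonlinear
# (a sine wave plus noise), but the assumed model is linear.
x = np.linspace(-3.0, 3.0, 200)
y = np.sin(x) + 0.1 * rng.standard_normal(x.shape)

# Fit the (misspecified) linear model by least squares.
A = np.stack([x, np.ones_like(x)], axis=1)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
y_lin = A @ coef

# Model bias shows up as structured residuals: the linear fit cannot
# capture the curvature of sin(x), so the residuals remain strongly
# correlated with the nonlinear component the model leaves out.
resid = y - y_lin
nonlinear_part = np.sin(x) - np.polyval(np.polyfit(x, np.sin(x), 1), x)
corr = np.corrcoef(resid, nonlinear_part)[0, 1]
print(f"residual/nonlinearity correlation: {corr:.2f}")
```

A bias-correcting method in the spirit of SIP would replace the rigid linear predictive distribution with a more flexible, trainable one whose residuals no longer carry this systematic structure.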
