Models that employ latent variables to capture structure in observed data lie at the heart of many current unsupervised learning algorithms, but exact maximum-likelihood learning for powerful and flexible latent-variable models is almost always intractable. Thus, state-of-the-art approaches either abandon the maximum-likelihood framework entirely, or else rely on a variety of variational approximations to the posterior distribution over the latents. Here, we propose an alternative approach that we call amortised learning. Rather than computing an approximation to the posterior over latents, we use a wake-sleep Monte-Carlo strategy to learn a function that directly estimates the maximum-likelihood parameter updates. Amortised learning is possible whenever samples of latents and observations can be simulated from the generative model, treating the model as a "black box". We demonstrate its effectiveness on a wide range of complex models, including those with latents that are discrete or supported on non-Euclidean spaces.
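The idea in the abstract can be illustrated with a minimal sketch. By Fisher's identity, the gradient of the log *marginal* likelihood equals the posterior expectation of the gradient of the log *joint*; a regression function fitted on simulated (latent, observation) pairs in a "sleep" phase can therefore approximate this expectation, and applying it to observed data in a "wake" phase yields a maximum-likelihood parameter update. The toy linear-Gaussian model, polynomial features, and all variable names below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

theta, sigma = 1.5, 0.5  # current model parameter and fixed noise scale

# --- Sleep phase: simulate (z, x) pairs from the generative model ------------
# Toy model (an assumption for this sketch): z ~ N(0, 1), x | z ~ N(theta*z, sigma^2)
n_sleep = 200_000
z = rng.standard_normal(n_sleep)
x = theta * z + sigma * rng.standard_normal(n_sleep)

# Per-sample gradient of the log joint w.r.t. theta (the regression target)
grad_joint = (x - theta * z) * z / sigma**2

# Regress the joint gradient on x; the fitted function approximates
# E[grad_joint | x], which by Fisher's identity is the marginal-likelihood
# gradient. Quadratic features are exact for this conjugate toy model.
F = np.stack([np.ones_like(x), x, x**2], axis=1)
w, *_ = np.linalg.lstsq(F, grad_joint, rcond=None)

def grad_estimator(x_obs):
    """Amortised estimate of d/d(theta) log p(x_obs) at the current theta."""
    Fo = np.stack([np.ones_like(x_obs), x_obs, x_obs**2], axis=1)
    return Fo @ w

# --- Wake phase: average the estimator over observed data --------------------
theta_true = 2.0  # data generated with a larger theta than the current model
x_obs = theta_true * rng.standard_normal(5_000) + sigma * rng.standard_normal(5_000)
update = grad_estimator(x_obs).mean()  # estimated ML gradient at current theta
```

Since the data were generated with a larger `theta` than the model currently holds, the estimated gradient comes out positive, pushing `theta` in the right direction; the generative model was only ever *sampled*, never inverted, which is what lets the approach treat it as a black box.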
Author Information
Li Kevin Wenliang (Gatsby Unit, University College London)
Theodore Moskovitz (Gatsby Computational Neuroscience Unit)
Heishiro Kanagawa (Gatsby Unit, UCL)
Maneesh Sahani (Gatsby Unit, UCL)
More from the Same Authors
- 2023: Prediction under Latent Subgroup Shifts with High-dimensional Observations
  William Walker · Arthur Gretton · Maneesh Sahani
- 2023 Poster: Memory-Based Meta-Learning on Non-Stationary Distributions
  Tim Genewein · Gregoire Deletang · Anian Ruoss · Li Kevin Wenliang · Elliot Catt · Vincent Dutordoir · Jordi Grau-Moya · Laurent Orseau · Marcus Hutter · Joel Veness
- 2023 Poster: A Kernel Stein Test of Goodness of Fit for Sequential Models
  Jerome Baum · Heishiro Kanagawa · Arthur Gretton
- 2019 Poster: Learning interpretable continuous-time models of latent stochastic dynamical systems
  Lea Duncker · Gergo Bohner · Julien Boussard · Maneesh Sahani
- 2019 Oral: Learning interpretable continuous-time models of latent stochastic dynamical systems
  Lea Duncker · Gergo Bohner · Julien Boussard · Maneesh Sahani
- 2019 Poster: Learning deep kernels for exponential family densities
  Li Kevin Wenliang · D.J. Sutherland · Heiko Strathmann · Arthur Gretton
- 2019 Oral: Learning deep kernels for exponential family densities
  Li Kevin Wenliang · D.J. Sutherland · Heiko Strathmann · Arthur Gretton