Implicit models, which allow samples to be generated but not point-wise evaluation of probabilities, are omnipresent in real-world problems tackled by machine learning and a hot topic of current research. Examples include data simulators widely used in engineering and scientific research, generative adversarial networks (GANs) for image synthesis, and recent approximate inference techniques relying on implicit distributions. Gradient-based optimization and sampling methods are often applied to train these models; however, without tractable densities, the objective functions usually need to be approximated. In this talk I will motivate gradient estimation as another approximation approach for training implicit models and for performing Monte Carlo-based approximate inference. Based on this view, I will then present the Stein gradient estimator, which estimates the score function of an implicit model density. I will discuss connections of this approach to score matching, kernel methods, and denoising auto-encoders, and show applications including entropy regularization for GANs and meta-learning for stochastic gradient MCMC algorithms.
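To make the idea concrete, the following is a minimal NumPy sketch of a kernelized score estimator in the spirit of the Stein gradient estimator described above: Stein's identity relates the unknown score ∇x log q(x) to kernel gradients evaluated at samples, yielding a ridge-regularized linear solve. The RBF kernel, the bandwidth `sigma`, the regularizer `eta`, and the function name are all illustrative choices, not the talk's exact formulation.

```python
import numpy as np

def stein_gradient_estimator(X, sigma=1.0, eta=0.1):
    """Estimate the score g(x_i) = grad_x log q(x_i) from samples X ~ q.

    Uses an RBF kernel k(x, x') = exp(-||x - x'||^2 / (2 sigma^2)) and a
    ridge regularizer eta; X has shape (N, d), the result shape (N, d).
    """
    N = X.shape[0]
    sq = np.sum(X ** 2, axis=1)
    # Pairwise squared distances and the Gram matrix K_ij = k(x_i, x_j).
    D2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    K = np.exp(-D2 / (2.0 * sigma ** 2))
    # Row j of grad_K is sum_i grad_{x_i} k(x_i, x_j); for the RBF kernel
    # grad_{x_i} k(x_i, x_j) = -(x_i - x_j) / sigma^2 * K_ij.
    grad_K = (K.sum(axis=0)[:, None] * X - K @ X) / sigma ** 2
    # Stein's identity in matrix form: K G ~= -grad_K, solved with a ridge.
    return -np.linalg.solve(K + eta * np.eye(N), grad_K)
```

As a sanity check, samples from a standard normal have score -x, so the estimated gradients should track -x closely near the bulk of the data.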
Author Information
Yingzhen Li (Microsoft Research Cambridge)
More from the Same Authors
- 2019 Poster: Are Generative Classifiers More Robust to Adversarial Attacks? (Yingzhen Li · John Bradshaw · Yash Sharma)
- 2019 Poster: Variational Implicit Processes (Chao Ma · Yingzhen Li · Jose Miguel Hernandez-Lobato)
- 2019 Oral: Variational Implicit Processes (Chao Ma · Yingzhen Li · Jose Miguel Hernandez-Lobato)
- 2019 Oral: Are Generative Classifiers More Robust to Adversarial Attacks? (Yingzhen Li · John Bradshaw · Yash Sharma)