Iterative Amortized Inference
Joe Marino · Yisong Yue · Stephan Mandt

Fri Jul 13th 11:40 -- 11:50 AM @ A7

Inference models are a key component in scaling variational inference to deep latent variable models, most notably as encoder networks in variational auto-encoders (VAEs). By replacing conventional optimization-based inference with a learned model, inference is amortized over data examples and is therefore more computationally efficient. However, standard inference models are restricted to direct mappings from data to approximate posterior estimates. The failure of these models to reach fully optimized approximate posterior estimates results in an amortization gap. We aim to close this gap by proposing iterative inference models, which learn to perform inference optimization by repeatedly encoding gradients. Our approach generalizes standard inference models in VAEs and provides insight into several empirical findings, including top-down inference techniques. We demonstrate the inference optimization capabilities of iterative inference models and show that they outperform standard inference models on several benchmark data sets of images and text.
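The core idea can be illustrated with a minimal sketch. In a toy linear-Gaussian model, the ELBO gradient with respect to the approximate posterior mean is available in closed form, so we can mimic iterative inference by repeatedly encoding that gradient into an update. The model (dimensions, weight matrix `W`, noise variance) is hypothetical, and a plain gradient step stands in for the learned update network used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear-Gaussian model: p(z) = N(0, I), p(x|z) = N(W z, sigma2 * I).
d_x, d_z, sigma2 = 8, 3, 1.0
W = 0.5 * rng.normal(size=(d_x, d_z))
x = rng.normal(size=d_x)

def elbo_grad_mu(mu):
    # Closed-form gradient of the ELBO w.r.t. the approximate posterior
    # mean mu for this conjugate model:
    #   grad = W^T (x - W mu) / sigma2  -  mu
    return W.T @ (x - W @ mu) / sigma2 - mu

def iterative_inference(x, steps=200, step_size=0.1):
    """Refine mu by repeatedly encoding the ELBO gradient.

    In the paper, a learned network maps the current estimate and its
    gradient to an updated estimate; here a fixed gradient-ascent step
    stands in for that learned update.
    """
    mu = np.zeros(d_z)  # a direct inference model would output this in one shot
    for _ in range(steps):
        mu = mu + step_size * elbo_grad_mu(mu)
    return mu

mu_hat = iterative_inference(x)

# Exact posterior mean, available in this toy model, for comparison:
mu_star = np.linalg.solve(W.T @ W / sigma2 + np.eye(d_z), W.T @ x / sigma2)
```

With enough refinement steps the iteratively updated mean matches the exact posterior mean, closing the gap that a single direct mapping would leave.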

Author Information

Joe Marino (Caltech)
Yisong Yue (Caltech)

Yisong Yue is an assistant professor in the Computing and Mathematical Sciences Department at the California Institute of Technology. He was previously a research scientist at Disney Research. Before that, he was a postdoctoral researcher in the Machine Learning Department and the iLab at Carnegie Mellon University. He received a Ph.D. from Cornell University and a B.S. from the University of Illinois at Urbana-Champaign. Yisong's research interests lie primarily in the theory and application of statistical machine learning. He is particularly interested in developing novel methods for interactive machine learning and structured prediction. In the past, his research has been applied to information retrieval, recommender systems, text classification, learning from rich user interfaces, analyzing implicit human feedback, data-driven animation, behavior analysis, sports analytics, policy learning in robotics, and adaptive planning & allocation problems.

Stephan Mandt (UC Irvine)

I am a research scientist at Disney Research Pittsburgh, where I lead the statistical machine learning group. From 2014 to 2016 I was a postdoctoral researcher with David Blei at Columbia University, and a PCCM Postdoctoral Fellow at Princeton University from 2012 to 2014. I did my Ph.D. with Achim Rosch at the Institute for Theoretical Physics at the University of Cologne, where I was supported by the German National Merit Scholarship.

