Talk
Improving Gibbs Sampler Scan Quality with DoGS
Ioannis Mitliagkas · Lester Mackey

Wed Aug 9th 11:42 AM -- 12:00 PM @ C4.9 & C4.10

The pairwise influence matrix of Dobrushin has long been used as an analytical tool to bound the rate of convergence of Gibbs sampling. In this work, we use Dobrushin influence as the basis of a practical tool to certify and efficiently improve the quality of a Gibbs sampler. Our Dobrushin-optimized Gibbs samplers (DoGS) offer customized variable selection orders for a given sampling budget and variable subset of interest, explicit bounds on total variation distance to stationarity, and certifiable improvements over the standard systematic and uniform random scan Gibbs samplers. In our experiments with image segmentation, Markov chain Monte Carlo maximum likelihood estimation, and Ising model inference, DoGS consistently deliver higher-quality inferences with significantly smaller sampling budgets than standard Gibbs samplers.
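To make the ingredients concrete, here is a minimal illustrative sketch (not the paper's exact DoGS algorithm) for an Ising model: it uses the standard tanh bound on Dobrushin's pairwise influence entries, propagates a worst-case discrepancy vector through a given scan to get a Dobrushin-style bound on how much the variables of interest can still depend on the initial state, and runs a Gibbs sampler with that scan order. All function names and the toy model are assumptions for illustration only.

```python
import numpy as np

def dobrushin_influence_ising(theta):
    """Standard upper bound on the Dobrushin influence matrix of an Ising
    model with symmetric pairwise couplings theta: C[i, j] <= tanh(|theta[i, j]|)."""
    C = np.tanh(np.abs(theta))
    np.fill_diagonal(C, 0.0)
    return C

def scan_influence_bound(C, scan, weights=None):
    """Illustrative Dobrushin-style bound: after each update of variable i,
    its discrepancy from stationarity is at most the influence-weighted
    discrepancy of the other variables; `weights` selects the variables of interest."""
    n = C.shape[0]
    v = np.ones(n)              # worst-case initial discrepancy per variable
    for i in scan:              # apply updates in the given scan order
        v[i] = C[i] @ v         # variable i now only feels its neighbors
    if weights is None:
        weights = np.ones(n) / n
    return float(weights @ v)

def gibbs_ising(theta, scan, rng):
    """Gibbs sampler for an Ising model with states in {-1, +1} that updates
    variables in the user-supplied `scan` order."""
    n = theta.shape[0]
    x = rng.choice([-1, 1], size=n)
    for i in scan:
        p_plus = 1.0 / (1.0 + np.exp(-2.0 * theta[i] @ x))
        x[i] = 1 if rng.random() < p_plus else -1
    return x

# Usage: a small random Ising model, a systematic scan, and its influence bound.
rng = np.random.default_rng(0)
n, sweeps = 10, 5
theta = 0.1 * rng.standard_normal((n, n))
theta = (theta + theta.T) / 2
np.fill_diagonal(theta, 0.0)
scan = list(range(n)) * sweeps
C = dobrushin_influence_ising(theta)
print("influence bound after scan:", scan_influence_bound(C, scan))
print("final sample:", gibbs_ising(theta, scan, rng))
```

DoGS, as described in the abstract, goes further by optimizing the scan itself against such a bound for a fixed sampling budget and target variable subset; the sketch above only evaluates a fixed systematic scan.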

Author Information

Ioannis Mitliagkas (Stanford University)

Ioannis Mitliagkas is a Postdoctoral Scholar with the departments of Statistics and Computer Science at Stanford University. He obtained his Ph.D. from the department of Electrical and Computer Engineering at The University of Texas at Austin. His research focuses on understanding and optimizing the scan order for Gibbs sampling, as well as on the interaction between optimization and the dynamics of large-scale learning systems. In the past, he has worked on high-dimensional streaming problems and on fast algorithms and computation for large graph problems.

Lester Mackey (Microsoft Research)
