Poster

Divide, Conquer, and Combine: a New Inference Strategy for Probabilistic Programs with Stochastic Support

Yuan Zhou · Hongseok Yang · Yee-Whye Teh · Tom Rainforth

Keywords: [ Probabilistic Inference - Models and Probabilistic Programming ] [ Probabilistic Programming ] [ Monte Carlo Methods ] [ Bayesian Methods ] [ Approximate Inference ]


Abstract:

Universal probabilistic programming systems (PPSs) provide a powerful framework for specifying rich probabilistic models. They further attempt to automate the process of drawing inferences from these models, but doing this successfully is severely hampered by the wide range of non-standard models they can express. As a result, although one can specify complex models in a universal PPS, the provided inference engines often fall far short of what is required. In particular, we show that they produce surprisingly unsatisfactory performance for models where the support varies between executions, often doing no better than importance sampling from the prior. To address this, we introduce a new inference framework: Divide, Conquer, and Combine, which remains efficient for such models, and show how it can be implemented as an automated and generic PPS inference engine. We empirically demonstrate substantial performance improvements over existing approaches on three examples.
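To make the setting concrete, the sketch below (not taken from the paper; all model choices, names, and parameters are hypothetical) shows the kind of model the abstract refers to: a generative program whose number of latent variables is itself sampled, so the support changes between executions, paired with the naive baseline the abstract mentions, importance sampling that proposes directly from the prior.

```python
import numpy as np

rng = np.random.default_rng(0)


def sample_prior():
    """Generative program with stochastic support: the number of latent
    variables K is itself random, so different executions live in
    different-dimensional spaces (illustrative model, not from the paper)."""
    K = rng.geometric(0.5)               # random model dimension
    z = rng.normal(0.0, 1.0, size=K)     # K latent variables
    return K, z


def log_likelihood(z, y):
    """Assumed Normal(sum(z), 1) observation model for the single datum y."""
    return -0.5 * (y - z.sum()) ** 2 - 0.5 * np.log(2.0 * np.pi)


def prior_importance_sampling(y, n_samples=10_000):
    """Baseline the abstract alludes to: importance sampling with the prior
    as proposal.  The weights are just the likelihoods, and the estimate
    degrades quickly as the space of possible supports grows."""
    Ks, log_ws = [], []
    for _ in range(n_samples):
        K, z = sample_prior()
        Ks.append(K)
        log_ws.append(log_likelihood(z, y))
    log_ws = np.array(log_ws)
    ws = np.exp(log_ws - log_ws.max())   # stabilise before normalising
    ws /= ws.sum()
    return float(np.dot(ws, Ks))         # estimate of E[K | y]


if __name__ == "__main__":
    print("E[K | y=3] ≈", prior_importance_sampling(y=3.0))
```

Because each execution may instantiate a different set of latent variables, a single generic proposal or kernel struggles to cover all supports at once; this is the failure mode that motivates handling each support separately and then combining the results, as the Divide, Conquer, and Combine framework proposes.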