

Faster Attend-Infer-Repeat with Tractable Probabilistic Models

Karl Stelzner · Robert Peharz · Kristian Kersting

Pacific Ballroom #89

Keywords: [ Unsupervised Learning ] [ Generative Models ] [ Deep Generative Models ] [ Bayesian Methods ] [ Bayesian Deep Learning ]


The recent Attend-Infer-Repeat (AIR) framework marks a milestone in structured probabilistic modeling, as it tackles the challenging problem of unsupervised scene understanding via Bayesian inference. AIR expresses the composition of visual scenes from individual objects, and uses variational autoencoders to model the appearance of those objects. However, inference in the overall model is highly intractable, which hampers its learning speed and makes it prone to suboptimal solutions. In this paper, we show that the speed and robustness of learning in AIR can be considerably improved by replacing the intractable object representations with tractable probabilistic models. In particular, we opt for sum-product networks (SPNs), expressive deep probabilistic models with a rich set of tractable inference routines. The resulting model, called SuPAIR, learns an order of magnitude faster than AIR, treats object occlusions in a consistent manner, and allows for the inclusion of a background noise model, improving the robustness of Bayesian scene understanding.
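The tractability the abstract attributes to SPNs comes from their structure: leaves are simple densities, product nodes multiply children over disjoint variable scopes, and sum nodes take convex combinations, so both the joint likelihood and any marginal are computed exactly in a single bottom-up pass. The following is a minimal illustrative sketch (not code from the paper) with a hypothetical two-variable SPN and made-up weights:

```python
def bernoulli_leaf(p, x):
    """Leaf density p(X=x); passing x=None marginalizes the leaf out,
    which for a normalized leaf simply evaluates to 1.0."""
    if x is None:
        return 1.0
    return p if x == 1 else 1.0 - p

def spn(x1, x2):
    """Evaluate a tiny SPN: a sum node over two product nodes.
    All parameters (0.8, 0.3, 0.2, 0.6 and weights 0.7, 0.3) are
    hypothetical, chosen only for illustration."""
    prod_a = bernoulli_leaf(0.8, x1) * bernoulli_leaf(0.3, x2)  # product over disjoint scopes
    prod_b = bernoulli_leaf(0.2, x1) * bernoulli_leaf(0.6, x2)
    return 0.7 * prod_a + 0.3 * prod_b  # sum node: convex combination

# Exact joint likelihood in one bottom-up pass:
p_joint = spn(1, 0)
# Exact marginal p(X1=1): marginalize X2 by setting its leaves to 1
# (pass None) -- no sampling or variational approximation needed,
# in contrast to inference through a VAE decoder.
p_marginal = spn(1, None)
```

This exact-marginalization routine is what lets a model like SuPAIR score partially observed or occluded image regions consistently, rather than approximating those quantities as AIR must with its VAE-based appearance model.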
