Oral
Multi-Object Representation Learning with Iterative Variational Inference
Klaus Greff · Raphael Lopez Kaufman · Rishabh Kabra · Nicholas Watters · Christopher Burgess · Daniel Zoran · Loic Matthey · Matthew Botvinick · Alexander Lerchner

Thu Jun 13 04:30 PM -- 04:35 PM (PDT) @ Hall A

Human perception is structured around objects which form the basis for our higher-level cognition and impressive systematic generalization abilities. Yet most work on representation learning focuses on feature learning without even considering multiple objects, or treats segmentation as an (often supervised) preprocessing step. Instead, we argue for the importance of learning to segment and represent objects jointly. Starting from the simple assumption that a scene is composed of entities with common features, we demonstrate that it is possible to learn to segment images into interpretable objects with disentangled representations. Our method learns - without supervision - to inpaint occluded parts, and extrapolates to objects with novel feature combinations. We also show that, because our method is based on iterative variational inference, our system is able to learn multi-modal posteriors for ambiguous inputs and extends naturally to sequential data.
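The abstract's claim about multi-modal posteriors rests on iterative (rather than single-pass amortized) variational inference: posterior parameters are refined over several update steps instead of being predicted in one shot. The following is a minimal, hypothetical sketch of that idea on a toy conjugate Gaussian model, not the authors' actual IODINE architecture: it refines the mean and log-variance of a Gaussian posterior q(z) by gradient ascent on the ELBO, and the iterates converge to the known closed-form posterior.

```python
import math

# Toy sketch of iterative variational inference (illustration only; NOT the
# paper's model). Generative model assumed here: prior z ~ N(0, 1), likelihood
# x | z ~ N(z, s2). We refine a Gaussian posterior q(z) = N(mu, v) by gradient
# ascent on the ELBO, iterating the update instead of predicting mu, v once.
def iterative_inference(x, s2=0.5, lr=0.1, steps=500):
    mu, logvar = 0.0, 0.0  # initial posterior guess
    for _ in range(steps):
        v = math.exp(logvar)
        # d/dmu [ E_q log p(x|z) - KL(q || N(0,1)) ]
        grad_mu = (x - mu) / s2 - mu
        # same objective differentiated w.r.t. logvar (chain rule via v = e^logvar)
        grad_logvar = -v / (2 * s2) - 0.5 * (v - 1)
        mu += lr * grad_mu
        logvar += lr * grad_logvar
    return mu, math.exp(logvar)

mu, v = iterative_inference(2.0)
# Closed-form posterior for this model: mean x/(1+s2), variance s2/(1+s2),
# so with x = 2.0, s2 = 0.5 the iterates approach mu = 4/3 and v = 1/3.
```

In the paper's setting the refinement network additionally conditions on gradients and auxiliary inputs, and the latents are per-object slots; this toy scalar case only shows the core refine-in-a-loop mechanism.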

Author Information

Klaus Greff (IDSIA)
Raphael Lopez Kaufman (DeepMind)
Rishabh Kabra (DeepMind)
Nicholas Watters (DeepMind)
Christopher Burgess (DeepMind)
Daniel Zoran (DeepMind)
Loic Matthey (DeepMind)
Matthew Botvinick (DeepMind)
Alexander Lerchner (DeepMind)
