

Unsupervised Image Representation Learning with Deep Latent Particles

Tal Daniel · Aviv Tamar

Hall E #304

Keywords: [ DL: Other Representation Learning ] [ DL: Everything Else ] [ DL: Generative Models and Autoencoders ]


We propose a new representation of visual data that disentangles object position from appearance. Our method, termed Deep Latent Particles (DLP), decomposes the visual input into low-dimensional latent "particles", where each particle is described by its spatial location and the features of its surrounding region. To drive learning of such representations, we follow a VAE-based approach and introduce a prior for particle positions based on a spatial-Softmax architecture, along with a modification of the evidence lower bound loss inspired by the Chamfer distance between particles. We demonstrate that our DLP representations are useful for downstream tasks such as unsupervised keypoint (KP) detection, image manipulation, and video prediction for scenes composed of multiple dynamic objects. In addition, we show that our probabilistic interpretation of the problem naturally provides uncertainty estimates for particle locations, which can be used for model selection, among other tasks.
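To make the two named ingredients concrete, here is a minimal NumPy sketch of (a) a spatial softmax that maps a 2D feature map to an expected keypoint coordinate, and (b) a symmetric Chamfer distance between two particle sets. These are generic textbook formulations for illustration only, not the authors' DLP implementation; the function names and shapes are assumptions.

```python
import numpy as np

def spatial_softmax(fmap):
    """Expected (x, y) coordinate under a softmax over a 2D feature map.

    fmap: (H, W) array of activations. A softmax over all spatial
    locations yields a probability map; the keypoint is its mean
    coordinate (a differentiable "soft argmax").
    """
    h, w = fmap.shape
    p = np.exp(fmap - fmap.max())   # stabilized softmax over locations
    p /= p.sum()
    ys, xs = np.mgrid[0:h, 0:w]     # pixel-coordinate grids
    return np.array([(p * xs).sum(), (p * ys).sum()])

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between two particle sets.

    a: (N, 2) array of particle positions
    b: (M, 2) array of particle positions
    Each particle is matched to its nearest neighbor in the other set.
    """
    # Pairwise squared Euclidean distances, shape (N, M)
    d = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

# Identical particle sets have zero Chamfer distance
pts = np.array([[0.0, 0.0], [1.0, 1.0]])
print(chamfer_distance(pts, pts))  # → 0.0
```

Because the Chamfer distance matches particles by nearest neighbor rather than by index, it is invariant to particle ordering, which is why a permutation-invariant set loss of this kind is a natural fit for an unordered particle representation.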
