Poster
Object Permanence Emerges in a Random Walk along Memory
Pavel Tokmakov · Allan Jabri · Jie Li · Adrien Gaidon
Hall E #212
Keywords: [ MISC: Unsupervised and Semi-supervised Learning ] [ DL: Self-Supervised Learning ] [ DL: Sequential Models, Time series ] [ DL: Recurrent Networks ] [ APP: Computer Vision ]
This paper proposes a self-supervised objective for learning representations that localize objects under occlusion, a property known as object permanence. A central question is the choice of learning signal in cases of total occlusion. Rather than directly supervising the locations of invisible objects, we propose a self-supervised objective that requires neither human annotation nor assumptions about object dynamics. We show that object permanence can emerge by optimizing for temporal coherence of memory: we fit a Markov walk along a space-time graph of memories, where the states at each time step are non-Markovian features from a sequence encoder. This leads to a memory representation that stores occluded objects and predicts their motion, to better localize them. The resulting model outperforms existing approaches on several datasets of increasing complexity and realism, despite requiring minimal supervision, and is hence broadly applicable.
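To make the memory-walk objective concrete, below is a minimal PyTorch sketch of a cycle-consistent random walk along memory features, in the spirit of the abstract. All names, shapes, and hyperparameters (e.g. `memory_feats` of shape `(B, T, C, N)`, the temperature value) are illustrative assumptions, not the authors' implementation: the idea is that row-stochastic transitions between consecutive memory states are chained forward and then backward in time, and each node is trained to return to itself.

```python
import torch
import torch.nn.functional as F


def memory_walk_loss(memory_feats, temperature=0.07):
    """Cycle-consistency loss for a Markov walk along memory states.

    Illustrative sketch only. `memory_feats` is assumed to be (B, T, C, N):
    per-frame memory features from a sequence encoder, flattened to N nodes
    per time step.
    """
    B, T, C, N = memory_feats.shape
    feats = F.normalize(memory_feats, dim=2)  # compare nodes by cosine similarity

    # Pairwise affinities between nodes of consecutive time steps.
    affinities = [
        torch.einsum('bcn,bcm->bnm', feats[:, t], feats[:, t + 1])
        for t in range(T - 1)
    ]
    # Row-stochastic transition matrices, forward and backward in time.
    fwd = [F.softmax(a / temperature, dim=-1) for a in affinities]
    bwd = [F.softmax(a.transpose(1, 2) / temperature, dim=-1) for a in affinities]

    # Walk to the last frame and back again ("palindrome" walk): if the memory
    # is temporally coherent, each node should end up where it started.
    walk = torch.eye(N, device=feats.device).expand(B, N, N).contiguous()
    for A in fwd + list(reversed(bwd)):
        walk = torch.bmm(walk, A)

    # Cross-entropy against the identity: the correct "return" node is itself.
    targets = torch.arange(N, device=feats.device).expand(B, N).reshape(-1)
    return F.nll_loss(torch.log(walk.reshape(-1, N) + 1e-8), targets)
```

Because the states are memory features rather than raw per-frame appearance, minimizing this loss encourages the memory to keep track of objects even while they are fully occluded, which is the emergence effect the abstract describes.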