Poster
Stochastic Latent Residual Video Prediction
Jean-Yves Franceschi · Edouard Delasalles · Mickael Chen · Sylvain Lamprier · Patrick Gallinari
Virtual
Keywords: [ Computer Vision ] [ Deep Generative Models ] [ Deep Sequence Models ] [ Representation Learning ] [ Sequential, Network, and Time-Series Modeling ]
Designing video prediction models that account for the inherent uncertainty of the future is challenging. Most works in the literature are based on stochastic image-autoregressive recurrent networks, which raises several performance and applicability issues. An alternative is to use fully latent temporal models which untie frame synthesis from temporal dynamics. However, no such model for stochastic video prediction has yet been proposed in the literature, due to design and training difficulties. In this paper, we overcome these difficulties by introducing a novel stochastic temporal model whose dynamics are governed in a latent space by a residual update rule. This first-order scheme is motivated by discretization schemes of differential equations. It naturally models video dynamics, allowing our simpler and more interpretable latent model to outperform prior state-of-the-art methods on challenging datasets.
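To make the residual update rule concrete, below is a minimal sketch (not the authors' released code) of what a first-order, Euler-like update in latent space might look like. It assumes a PyTorch-style setup in which a latent state y_t is advanced by a learned residual function driven by a stochastic latent variable z_{t+1}; all names and sizes (LatentResidualDynamics, f_theta, y_dim, z_dim) are illustrative assumptions, and the learned prior over z from the paper is replaced here by a standard normal for brevity.

```python
import torch
import torch.nn as nn


class LatentResidualDynamics(nn.Module):
    """Sketch of a first-order residual update in latent space:
    y_{t+1} = y_t + f_theta(y_t, z_{t+1}), with z_{t+1} a stochastic latent
    variable. Names and dimensions are illustrative, not the paper's code."""

    def __init__(self, y_dim: int = 64, z_dim: int = 16, hidden: int = 128):
        super().__init__()
        self.z_dim = z_dim
        # f_theta: residual function of the current state and a noise sample.
        self.f_theta = nn.Sequential(
            nn.Linear(y_dim + z_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, y_dim),
        )

    def step(self, y_t: torch.Tensor, z_next: torch.Tensor) -> torch.Tensor:
        # First-order residual update, analogous to one Euler discretization step.
        return y_t + self.f_theta(torch.cat([y_t, z_next], dim=-1))

    def rollout(self, y_0: torch.Tensor, steps: int) -> torch.Tensor:
        # Unroll the latent dynamics; z is sampled from a standard normal here
        # (a learned prior, as in the paper, is omitted in this sketch).
        ys = [y_0]
        for _ in range(steps):
            z_next = torch.randn(y_0.shape[0], self.z_dim, device=y_0.device)
            ys.append(self.step(ys[-1], z_next))
        return torch.stack(ys, dim=1)  # (batch, steps + 1, y_dim)
```

In such a fully latent design, frame synthesis is untied from the temporal dynamics: the rollout above only produces latent states, and a separate decoder network (not shown) would map each y_t to a video frame.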