Video Prediction via Example Guidance

Jingwei Xu, Harry (Huazhe) Xu, Bingbing Ni, Xiaokang Yang, Trevor Darrell


Thu Jul 16 9 a.m. PDT
Thu Jul 16 8 p.m. PDT

Abstract:

In video prediction tasks, a major challenge is to capture the multi-modal nature of future contents and dynamics. In this work, we propose a simple yet effective framework that can efficiently predict plausible future states. The key insight is that the potential distribution of a sequence can be approximated by analogous sequences retrieved from the training pool, namely, expert examples. By further incorporating a novel optimization scheme into the training procedure, plausible predictions can be sampled efficiently from the distribution constructed from the retrieved examples. Moreover, our method can be seamlessly integrated with existing stochastic predictive models; comprehensive experiments show significant enhancement in both quantitative and qualitative aspects. We also demonstrate the ability to generalize to the motion of unseen classes, i.e., without access to the corresponding data during the training phase.
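The retrieve-then-sample idea in the abstract can be illustrated with a minimal sketch. The snippet below assumes sequences are represented as fixed-length feature vectors, uses plain Euclidean nearest-neighbour retrieval, and fits a diagonal Gaussian over the retrieved examples; the paper's actual retrieval metric, feature space, and optimization scheme are not specified here, and all function names are hypothetical.

```python
import numpy as np

def retrieve_examples(query, pool, k=3):
    """Retrieve the k pool sequences closest to the query in feature space.
    (Euclidean distance is an assumption, standing in for the paper's
    retrieval criterion.)"""
    dists = np.linalg.norm(pool - query, axis=1)
    idx = np.argsort(dists)[:k]
    return pool[idx]

def sample_future(query, pool, k=3, n_samples=5, rng=None):
    """Approximate the future-state distribution with a diagonal Gaussian
    fitted to the retrieved expert examples, then draw plausible samples."""
    rng = np.random.default_rng(rng)
    examples = retrieve_examples(query, pool, k)
    mu = examples.mean(axis=0)
    sigma = examples.std(axis=0) + 1e-6  # avoid a degenerate zero variance
    return rng.normal(mu, sigma, size=(n_samples, query.shape[0]))
```

In a full model, the Gaussian would be replaced by the distribution a stochastic predictive network constructs from the retrieved examples, which is what allows the scheme to plug into existing stochastic predictors.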
