
Learning to Generate Long-term Future via Hierarchical Prediction
Ruben Villegas · Jimei Yang · Yuliang Zou · Sungryull Sohn · Xunyu Lin · Honglak Lee

Tue Aug 08 01:30 AM -- 05:00 AM (PDT) @ Gallery #106

We propose a hierarchical approach for making long-term predictions of future video frames. To avoid the compounding errors inherent in recursive pixel-level prediction, we first estimate the high-level structure in the input frames, then predict how that structure evolves into the future, and finally construct the future frames from a single observed frame and the predicted high-level structure, without ever observing any of the pixel-level predictions. Long-term video prediction is difficult when performed by recurrently observing the predicted frames, because small errors in pixel space amplify exponentially as predictions extend deeper into the future. Our approach prevents this pixel-level error propagation by removing the need to observe the predicted frames. The model combines an LSTM with analogy-based encoder-decoder convolutional neural networks, which independently predict the video structure and generate the future frames, respectively. In experiments, our model is evaluated on the Human3.6M and Penn Action datasets on the task of long-term pixel-level video prediction of humans performing actions, and it demonstrates significantly better results than the state of the art.
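The pipeline the abstract describes can be sketched as follows. This is a minimal, hypothetical stand-in, not the paper's implementation: the real model uses an LSTM for structure (pose) prediction and an analogy-based convolutional encoder-decoder for frame generation, which are replaced here by toy functions (constant-velocity dynamics and joint markers) so the control flow runs end to end. All names, shapes, and constants below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

N_JOINTS = 13           # assumed pose dimensionality (2D joint coordinates)
FRAME_SHAPE = (32, 32)  # toy frame resolution

def estimate_pose(frame):
    """Stand-in for a pose estimator: maps a frame to 2D joint coordinates."""
    return frame.reshape(-1)[: N_JOINTS * 2].reshape(N_JOINTS, 2)

def predict_next_pose(pose, velocity):
    """Stand-in for the LSTM structure predictor: constant-velocity dynamics."""
    return pose + velocity

def generate_frame(reference_frame, target_pose):
    """Stand-in for the analogy-based generator: renders the predicted pose
    onto a copy of the single observed reference frame."""
    frame = reference_frame.copy()
    # The real generator transfers appearance from the reference frame to the
    # predicted pose; here we simply mark the predicted joint locations.
    for x, y in np.clip(target_pose, 0, FRAME_SHAPE[0] - 1).astype(int):
        frame[y, x] = 1.0
    return frame

# Observed input frames (toy data).
frames = [rng.random(FRAME_SHAPE) for _ in range(2)]

# 1) Estimate high-level structure (pose) in the observed frames.
poses = [estimate_pose(f) for f in frames]
velocity = poses[-1] - poses[-2]

# 2) Predict how the structure evolves, recurrently, in pose space only;
#    pixel-level predictions are never fed back, so pixel errors cannot compound.
future_poses = []
pose = poses[-1]
for _ in range(10):
    pose = predict_next_pose(pose, velocity)
    future_poses.append(pose)

# 3) Generate each future frame from the last observed frame and the
#    predicted structure, independently per time step.
future_frames = [generate_frame(frames[-1], p) for p in future_poses]

print(len(future_frames), future_frames[0].shape)
```

Note that step 3 conditions only on an observed frame and predicted poses, never on previously generated frames, which is the mechanism that blocks pixel-level error accumulation.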

Author Information

Ruben Villegas (University of Michigan)
Jimei Yang (Adobe Research)
Yuliang Zou (University of Michigan)
Sungryull Sohn (University of Michigan)
Xunyu Lin
Honglak Lee (Google / U. Michigan)
