Poster
Disentangled Sequential Autoencoder
Yingzhen Li · Stephan Mandt

Fri Jul 13 09:15 AM -- 12:00 PM (PDT) @ Hall B #55

We present a VAE architecture for encoding and generating high-dimensional sequential data, such as video or audio. Our deep generative model learns a latent representation of the data which is split into a static and a dynamic part, allowing us to approximately disentangle latent time-dependent features (dynamics) from features that are preserved over time (content). This architecture gives us partial control over generating content and dynamics by conditioning on either one of these sets of features. In our experiments on artificially generated cartoon video clips and voice recordings, we show that such content swapping lets us transfer the content of a given sequence onto another. For audio, this allows us to convert a male speaker into a female speaker and vice versa, while for video we can separately manipulate shapes and dynamics. Furthermore, we give empirical evidence for the hypothesis that stochastic RNNs as latent state models are more efficient at compressing and generating long sequences than deterministic ones, which may be relevant for applications in video compression.
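The core idea in the abstract, splitting the latent code into a sequence-level content variable f and per-frame dynamics variables z_{1:T}, can be sketched in a few lines of PyTorch. The snippet below is a minimal illustration under assumed design choices (LSTM encoders, mean-field Gaussian posteriors, an MLP decoder); all layer sizes and names such as DisentangledSeqVAE and content_swap are hypothetical and do not reproduce the authors' exact architecture or the learned dynamics prior over z_{1:T}.

```python
# Minimal sketch (assumptions labeled above): one static latent f per sequence,
# one dynamic latent z_t per frame, decoder conditions on (f, z_t).
import torch
import torch.nn as nn

class DisentangledSeqVAE(nn.Module):
    def __init__(self, x_dim=64, f_dim=16, z_dim=8, h_dim=128):
        super().__init__()
        # Static ("content") encoder: summarizes the whole sequence into f.
        self.f_rnn = nn.LSTM(x_dim, h_dim, batch_first=True)
        self.f_mu = nn.Linear(h_dim, f_dim)
        self.f_logvar = nn.Linear(h_dim, f_dim)
        # Dynamic encoder: one z_t per frame from a recurrent state.
        self.z_rnn = nn.LSTM(x_dim, h_dim, batch_first=True)
        self.z_mu = nn.Linear(h_dim, z_dim)
        self.z_logvar = nn.Linear(h_dim, z_dim)
        # Decoder reconstructs each frame from the pair (f, z_t).
        self.dec = nn.Sequential(nn.Linear(f_dim + z_dim, h_dim),
                                 nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    @staticmethod
    def reparam(mu, logvar):
        # Reparameterization trick: z = mu + sigma * eps.
        return mu + torch.randn_like(mu) * (0.5 * logvar).exp()

    def encode(self, x):
        # x: (batch, T, x_dim)
        h_f, _ = self.f_rnn(x)
        f = self.reparam(self.f_mu(h_f[:, -1]), self.f_logvar(h_f[:, -1]))
        h_z, _ = self.z_rnn(x)
        z = self.reparam(self.z_mu(h_z), self.z_logvar(h_z))
        return f, z  # f: (batch, f_dim), z: (batch, T, z_dim)

    def decode(self, f, z):
        # Broadcast the static code across time, decode frame by frame.
        f_rep = f.unsqueeze(1).expand(-1, z.size(1), -1)
        return self.dec(torch.cat([f_rep, z], dim=-1))

def content_swap(model, x_a, x_b):
    """Render x_b's dynamics (z_{1:T}) with x_a's content (f)."""
    f_a, _ = model.encode(x_a)
    _, z_b = model.encode(x_b)
    return model.decode(f_a, z_b)
```

Training such a model would maximize the usual ELBO with separate KL terms on f and z_{1:T}; the swap function then recombines content and dynamics across sequences, which is the mechanism behind the speaker-conversion and shape-manipulation experiments described above.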

Author Information

Yingzhen Li (Microsoft Research Cambridge)
Stephan Mandt (UC Irvine)

I am a research scientist at Disney Research Pittsburgh, where I lead the statistical machine learning group. From 2014 to 2016, I was a postdoctoral researcher with David Blei at Columbia University, and from 2012 to 2014, a PCCM Postdoctoral Fellow at Princeton University. I did my Ph.D. with Achim Rosch at the Institute for Theoretical Physics at the University of Cologne, supported by the German National Merit Scholarship.
