We present a VAE architecture for encoding and generating high-dimensional sequential data, such as video or audio. Our deep generative model learns a latent representation of the data which is split into a static and a dynamic part, allowing us to approximately disentangle latent time-dependent features (dynamics) from features which are preserved over time (content). This architecture gives us partial control over generating content and dynamics by conditioning on either one of these sets of features. In our experiments on artificially generated cartoon video clips and voice recordings, we show that we can transfer the content of a given sequence onto another one by such content swapping. For audio, this allows us to convert a male speaker into a female speaker and vice versa, while for video we can separately manipulate shapes and dynamics. Furthermore, we give empirical evidence for the hypothesis that stochastic RNNs as latent state models are more efficient at compressing and generating long sequences than deterministic ones, which may be relevant for applications in video compression.
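The core idea in the abstract, a latent code split into a static content part f and per-step dynamic parts z_t, with content swapping done by recombining codes from different sequences, can be sketched with toy linear maps. This is a minimal illustration of the factorization, not the paper's actual model (which uses neural encoders, an LSTM prior, and variational training); all dimensions, weights, and function names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the paper): T frames of dimension D,
# a static content code f and a dynamic code z_t per time step.
D, T, F_DIM, Z_DIM = 8, 5, 3, 2

# Toy linear "encoders" and "decoder" with random weights, standing in
# for the learned networks in the real architecture.
W_f = rng.normal(size=(F_DIM, D * T))        # whole sequence -> static f
W_z = rng.normal(size=(Z_DIM, D))            # single frame x_t -> dynamic z_t
W_dec = rng.normal(size=(D, F_DIM + Z_DIM))  # [f; z_t] -> reconstructed frame

def encode(x):
    """Split a (T, D) sequence into a static code f and dynamic codes z."""
    f = x.reshape(-1) @ W_f.T    # one code for the whole sequence (content)
    z = x @ W_z.T                # one code per time step (dynamics)
    return f, z

def decode(f, z):
    """Reconstruct a (T, D) sequence from static f and dynamic z."""
    steps = z.shape[0]
    fz = np.concatenate([np.tile(f, (steps, 1)), z], axis=1)
    return fz @ W_dec.T

# Content swap: keep the dynamics of x1 but impose the content of x2.
x1 = rng.normal(size=(T, D))
x2 = rng.normal(size=(T, D))
f1, z1 = encode(x1)
f2, z2 = encode(x2)
swapped = decode(f2, z1)   # "x2's content moving like x1"
print(swapped.shape)       # (5, 8)
```

In the paper's audio experiment, this recombination is what converts a male voice into a female one: f carries speaker identity while z_t carries what is being said.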
Author Information
Yingzhen Li (Microsoft Research Cambridge)
Stephan Mandt (UC Irvine)
I am a research scientist at Disney Research Pittsburgh, where I lead the statistical machine learning group. From 2014 to 2016 I was a postdoctoral researcher with David Blei at Columbia University, and a PCCM Postdoctoral Fellow at Princeton University from 2012 to 2014. I did my Ph.D. with Achim Rosch at the Institute for Theoretical Physics at the University of Cologne, where I was supported by the German National Merit Scholarship.
Related Events (a corresponding poster, oral, or spotlight)
- 2018 Oral: Disentangled Sequential Autoencoder
  Fri. Jul 13th 02:50 -- 03:00 PM, Room A7
More from the Same Authors
- 2018 Poster: Iterative Amortized Inference
  Joe Marino · Yisong Yue · Stephan Mandt
- 2018 Oral: Iterative Amortized Inference
  Joe Marino · Yisong Yue · Stephan Mandt
- 2018 Poster: Quasi-Monte Carlo Variational Inference
  Alexander Buchholz · Florian Wenzel · Stephan Mandt
- 2018 Poster: Improving Optimization in Models With Continuous Symmetry Breaking
  Robert Bamler · Stephan Mandt
- 2018 Oral: Quasi-Monte Carlo Variational Inference
  Alexander Buchholz · Florian Wenzel · Stephan Mandt
- 2018 Oral: Improving Optimization in Models With Continuous Symmetry Breaking
  Robert Bamler · Stephan Mandt
- 2017 Poster: Dynamic Word Embeddings
  Robert Bamler · Stephan Mandt
- 2017 Poster: Dropout Inference in Bayesian Neural Networks with Alpha-divergences
  Yingzhen Li · Yarin Gal
- 2017 Talk: Dropout Inference in Bayesian Neural Networks with Alpha-divergences
  Yingzhen Li · Yarin Gal
- 2017 Talk: Dynamic Word Embeddings
  Robert Bamler · Stephan Mandt