
 
Poster
Educating Text Autoencoders: Latent Representation Guidance via Denoising
Tianxiao Shen · Jonas Mueller · Regina Barzilay · Tommi Jaakkola

Thu Jul 16 07:00 AM -- 07:45 AM & Thu Jul 16 06:00 PM -- 06:45 PM (PDT) @ Virtual

Generative autoencoders offer a promising approach for controllable text generation by leveraging their learned sentence representations. However, current models struggle to maintain coherent latent spaces required to perform meaningful text manipulations via latent vector operations. Specifically, we demonstrate by example that neural encoders do not necessarily map similar sentences to nearby latent vectors. A theoretical explanation for this phenomenon establishes that high-capacity autoencoders can learn an arbitrary mapping between sequences and associated latent representations. To remedy this issue, we augment adversarial autoencoders with a denoising objective where original sentences are reconstructed from perturbed versions (referred to as DAAE). We prove that this simple modification guides the latent space geometry of the resulting model by encouraging the encoder to map similar texts to similar latent representations. In empirical comparisons with various types of autoencoders, our model provides the best trade-off between generation quality and reconstruction capacity. Moreover, the improved geometry of the DAAE latent space enables zero-shot text style transfer via simple latent vector arithmetic.
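
The abstract compresses two ideas that are easier to see in code: training reconstructs the original sentence from a perturbed copy of it (while an adversarial term keeps the latent codes close to the prior), and style transfer reduces to vector arithmetic on the learned codes. The PyTorch sketch below illustrates this under stated assumptions; the GRU architecture, layer sizes, word-drop perturbation, and the adversarial weight lam are illustrative choices made here for brevity, not the authors' exact implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

PAD, VOCAB, EMB, HID, ZDIM = 0, 10000, 256, 512, 128  # assumed sizes, for illustration only

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB, padding_idx=PAD)
        self.rnn = nn.GRU(EMB, HID, batch_first=True)
        self.to_z = nn.Linear(HID, ZDIM)

    def forward(self, x):                      # x: (batch, seq_len) token ids
        _, h = self.rnn(self.emb(x))
        return self.to_z(h[-1])                # latent code z: (batch, ZDIM)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB, padding_idx=PAD)
        self.rnn = nn.GRU(EMB + ZDIM, HID, batch_first=True)
        self.out = nn.Linear(HID, VOCAB)

    def forward(self, z, x_in):                # condition every step on z (teacher forcing)
        zrep = z.unsqueeze(1).expand(-1, x_in.size(1), -1)
        h, _ = self.rnn(torch.cat([self.emb(x_in), zrep], dim=-1))
        return self.out(h)                     # logits: (batch, seq_len, VOCAB)

disc = nn.Sequential(nn.Linear(ZDIM, HID), nn.ReLU(), nn.Linear(HID, 1))  # latent discriminator

def perturb(x, p_drop=0.1):
    # One simple choice of perturbation: randomly blank out words with PAD.
    drop = (torch.rand(x.shape) < p_drop) & (x != PAD)
    return x.masked_fill(drop, PAD)

def daae_loss(enc, dec, x, lam=10.0):
    # Denoising reconstruction: encode the *perturbed* sentence, reconstruct the *original*.
    z = enc(perturb(x))
    logits = dec(z, x)                         # BOS shifting of decoder inputs omitted for brevity
    rec = F.cross_entropy(logits.reshape(-1, VOCAB), x.reshape(-1), ignore_index=PAD)
    # Adversarial term (generator side) pushes encoded codes toward the prior p(z);
    # the discriminator is trained separately to tell prior samples from encodings.
    adv = F.binary_cross_entropy_with_logits(disc(z), torch.ones(z.size(0), 1))
    return rec + lam * adv

def style_vector(enc, x_pos, x_neg):
    # Zero-shot style transfer: the difference of mean encodings gives a style direction;
    # add a scaled copy of it to a sentence's code and decode (greedy decoding not shown).
    return enc(x_pos).mean(0) - enc(x_neg).mean(0)

A full training loop would alternate this generator-side loss with a standard discriminator update, as in any adversarial autoencoder; the only change relative to a plain AAE is that the encoder sees perturb(x) while the reconstruction target remains x.
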

Author Information

Tianxiao Shen (MIT)
Jonas Mueller (Amazon Web Services)
Regina Barzilay (MIT CSAIL)
Tommi Jaakkola (MIT)
