

Poster in Workshop: Structured Probabilistic Inference and Generative Modeling

Generative semi-supervised learning with a neural seq2seq noisy channel

Soroosh Mariooryad · Matt Shannon · Siyuan Ma · Tom Bagby · David Kao · Daisy Stanton · Eric Battenberg · RJ Skerry-Ryan

Keywords: [ semi-supervised ] [ seq2seq ] [ parallel data ] [ noisy channel ] [ generative modeling ] [ paired data ]


Abstract:

We use a neural noisy channel generative model to learn the relationship between two sequences, for example text and speech, from little paired data. We identify time locality as a key assumption which is restrictive enough to support semi-supervised learning but general enough to be widely applicable. Experimentally, we show that our approach recovers the relationship between written and spoken language (represented as graphemes and phonemes) from only 5 minutes of paired data. Our results pave the way for more widespread adoption of generative semi-supervised learning for seq2seq tasks.
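For concreteness, the standard noisy channel factorization underlying this kind of semi-supervised setup can be sketched as follows; the abstract does not give the paper's exact parameterization or its time-locality constraints, so this is only a generic illustration in which graphemes y play the role of the latent source and phonemes x the observation, with unpaired observations contributing through the marginal likelihood.

% Generic noisy channel factorization (standard formulation; the paper's exact
% model and time-locality assumptions are not specified in this abstract).
\begin{align*}
  p_\theta(x, y) &= p_\theta(y)\, p_\theta(x \mid y)
    && \text{source prior times channel model} \\
  \mathcal{L}_{\text{paired}}(\theta) &= \sum_{(x, y) \in \mathcal{D}_{\text{paired}}} \log p_\theta(x, y)
    && \text{e.g.\ the small amount of paired data} \\
  \mathcal{L}_{\text{unpaired}}(\theta) &= \sum_{x \in \mathcal{D}_{\text{unpaired}}} \log \sum_{y} p_\theta(y)\, p_\theta(x \mid y)
    && \text{latent source marginalized out}
\end{align*}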
