Oral
Style Tokens: Unsupervised Style Modeling, Control and Transfer in End-to-End Speech Synthesis
Yuxuan Wang · Daisy Stanton · Yu Zhang · RJ Skerry-Ryan · Eric Battenberg · Joel Shor · Ying Xiao · Ye Jia · Fei Ren · Rif Saurous

Thu Jul 12 07:20 AM -- 07:40 AM (PDT) @ A3

In this work, we propose "global style tokens" (GSTs), a bank of embeddings that are jointly trained within Tacotron, a state-of-the-art end-to-end speech synthesis system. The embeddings are trained with no explicit labels, yet learn to model a large range of acoustic expressiveness. GSTs lead to a rich set of significant results. The soft interpretable "labels" they generate can be used to control synthesis in novel ways, such as varying speed and speaking style, independently of the text content. They can also be used for style transfer, replicating the speaking style of a single audio clip across an entire long-form text corpus. When trained on noisy, unlabeled found data, GSTs learn to factorize noise and speaker identity, providing a path towards highly scalable but robust speech synthesis.
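As a rough illustration of the mechanism the abstract describes, the sketch below shows a learnable token bank that a reference embedding attends over, producing a style embedding plus the soft attention weights ("labels"). The module name, dimensions, and the single-head dot-product attention are illustrative assumptions, not the paper's exact implementation (which sits inside Tacotron with a reference encoder and multi-head attention).

```python
# Minimal GST-style sketch in PyTorch (assumed names/dimensions, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class GlobalStyleTokens(nn.Module):
    """A learnable bank of style embeddings ("tokens").

    A reference embedding (e.g. a summary of a reference audio clip) attends
    over the token bank; the attention-weighted sum conditions the synthesizer,
    and the attention weights act as soft, interpretable style labels.
    """

    def __init__(self, num_tokens: int = 10, token_dim: int = 256, ref_dim: int = 128):
        super().__init__()
        # Token bank is trained jointly with the rest of the model, with no style labels.
        self.tokens = nn.Parameter(torch.randn(num_tokens, token_dim) * 0.3)
        self.query_proj = nn.Linear(ref_dim, token_dim)

    def forward(self, ref_embedding: torch.Tensor):
        # ref_embedding: (batch, ref_dim) summary of a reference clip.
        query = self.query_proj(ref_embedding)                       # (batch, token_dim)
        scores = query @ torch.tanh(self.tokens).T                   # (batch, num_tokens)
        weights = F.softmax(scores / self.tokens.shape[1] ** 0.5, dim=-1)
        style_embedding = weights @ torch.tanh(self.tokens)          # (batch, token_dim)
        return style_embedding, weights


if __name__ == "__main__":
    gst = GlobalStyleTokens()
    ref = torch.randn(4, 128)            # stand-in for a reference-encoder output
    style, weights = gst(ref)
    print(style.shape, weights.shape)    # torch.Size([4, 256]) torch.Size([4, 10])
```

At inference time, the style embedding can be taken either from a reference clip (style transfer) or by directly selecting or mixing tokens (style control), which is how the weights serve as controllable labels.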

Author Information

Yuxuan Wang (Google)
Daisy Stanton
Yu Zhang (Google)
RJ Skerry-Ryan
Eric Battenberg
Joel Shor (Google)
Ying Xiao (Google Inc)
Ye Jia (Google)
Fei Ren (Google)
Rif Saurous
