

Poster

Guided-TTS: A Diffusion Model for Text-to-Speech via Classifier Guidance

Heeseung Kim · Sungwon Kim · Sungroh Yoon

Hall E #116

Keywords: [ DL: Generative Models and Autoencoders ] [ APP: Language, Speech and Dialog ]


Abstract:

We propose Guided-TTS, a high-quality text-to-speech (TTS) model that does not require any transcript of the target speaker, using classifier guidance. Guided-TTS combines an unconditional diffusion probabilistic model with a separately trained phoneme classifier for classifier guidance. Our unconditional diffusion model learns to generate speech without any context from untranscribed speech data. For TTS synthesis, we guide the generative process of the diffusion model with a phoneme classifier trained on a large-scale speech recognition dataset. We present a norm-based scaling method that reduces the pronunciation errors of classifier guidance in Guided-TTS. We show that Guided-TTS achieves performance comparable to that of the state-of-the-art TTS model, Grad-TTS, without any transcript for LJSpeech. We further demonstrate that Guided-TTS performs well on diverse datasets, including a long-form untranscribed dataset.
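
To make the guidance mechanism concrete, below is a minimal sketch of one sampling step that combines an unconditional score estimate with a norm-scaled phoneme-classifier gradient, in the spirit of the abstract. The `score_model`, `classifier`, tensor shapes, and `scale` value are hypothetical stand-ins for illustration, not the authors' released implementation.

```python
import torch

def guided_score(x_t, t, phonemes, score_model, classifier, scale=0.3):
    """Unconditional score plus a norm-scaled classifier-guidance term.

    score_model(x, t) -> approximates grad_x log p(x_t)   (hypothetical)
    classifier(x, t)  -> frame-wise phoneme logits        (hypothetical)
    phonemes          -> target phoneme index per frame
    """
    # Unconditional score from the diffusion model trained on
    # untranscribed speech.
    uncond = score_model(x_t, t)

    # Classifier gradient grad_x log p(phonemes | x_t), obtained by
    # backpropagating the phoneme log-likelihood through x_t.
    x_req = x_t.detach().requires_grad_(True)
    log_prob = classifier(x_req, t).log_softmax(dim=-1)
    selected = log_prob.gather(-1, phonemes.unsqueeze(-1)).sum()
    grad = torch.autograd.grad(selected, x_req)[0]

    # Norm-based scaling: rescale the classifier gradient so its
    # magnitude tracks the unconditional score's norm, the idea the
    # abstract credits with reducing pronunciation errors.
    norm_ratio = uncond.norm() / (grad.norm() + 1e-8)
    return uncond + scale * norm_ratio * grad

# Toy usage with stand-in callables (10 frames, 80 mel bins, 40 phonemes).
x_t = torch.randn(10, 80)
t = torch.tensor(0.5)
phonemes = torch.randint(0, 40, (10,))
score_model = lambda x, t: -x          # standard-Gaussian score, toy only
W = torch.randn(80, 40)
classifier = lambda x, t: x @ W        # linear phoneme logits, toy only
s = guided_score(x_t, t, phonemes, score_model, classifier)
```

In this sketch the `scale` hyperparameter plays the role of the usual classifier-guidance strength, while the norm ratio keeps the guidance term from overwhelming (or vanishing against) the unconditional score as the two gradients' magnitudes drift apart across diffusion time steps.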
