

Poster

Factorized Diffusion Models are Natural and Zero-shot Speech Synthesizers

Zeqian Ju · Yuancheng Wang · Kai Shen · Xu Tan · Detai Xin · Dongchao Yang · Eric Liu · Yichong Leng · Kaitao Song · Siliang Tang · Zhizheng Wu · Tao Qin · Xiangyang Li · Wei Ye · Shikun Zhang · Jiang Bian · Lei He · Jinyu Li · Sheng Zhao


Abstract:

While recent large-scale text-to-speech (TTS) models have achieved significant progress, they still fall short in speech quality, similarity, and prosody. Considering that speech intricately encompasses various attributes (e.g., content, prosody, timbre, and acoustic details) that pose significant challenges for generation, a natural idea is to factorize speech into individual subspaces representing different attributes and generate each of them individually. Motivated by this, we propose a TTS system with novel factorized diffusion models to generate natural speech in a zero-shot way. Specifically, 1) we design a neural codec with factorized vector quantization (FVQ) to disentangle the speech waveform into subspaces of content, prosody, timbre, and acoustic details; 2) we propose a factorized diffusion model that generates the attributes in each subspace following its corresponding prompt. With this factorization design, our method can effectively and efficiently model intricate speech with disentangled subspaces in a divide-and-conquer way. Experimental results show that our method outperforms state-of-the-art TTS systems in quality, similarity, prosody, and intelligibility.
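To make the factorization idea concrete, below is a minimal, illustrative PyTorch sketch (not the authors' implementation) of factorized vector quantization: a shared encoder latent is routed into per-attribute subspaces, and each subspace is quantized against its own codebook, yielding separate token streams for content, prosody, timbre, and acoustic details. The module names, dimensions, codebook sizes, and the linear routing are assumptions made for illustration.

```python
import torch
import torch.nn as nn


class FactorizedVQ(nn.Module):
    """Quantize each attribute subspace with its own codebook.
    Dimensions, codebook sizes, and the linear routing are illustrative
    assumptions, not the paper's exact architecture."""

    def __init__(self, dim=256, codebook_size=1024,
                 attributes=("content", "prosody", "timbre", "details")):
        super().__init__()
        self.attributes = attributes
        # One projection (subspace routing) and one codebook per attribute.
        self.project = nn.ModuleDict({a: nn.Linear(dim, dim) for a in attributes})
        self.codebooks = nn.ModuleDict(
            {a: nn.Embedding(codebook_size, dim) for a in attributes})

    def forward(self, z):
        # z: (batch, frames, dim) latent from a speech encoder.
        quantized, indices = {}, {}
        for a in self.attributes:
            h = self.project[a](z)                         # route into the attribute subspace
            codes = self.codebooks[a].weight               # (codebook_size, dim)
            flat = h.reshape(-1, h.size(-1))               # (batch*frames, dim)
            idx = torch.cdist(flat, codes).argmin(dim=-1)  # nearest code per frame
            idx = idx.view(h.size(0), h.size(1))
            q = self.codebooks[a](idx)
            # Straight-through estimator: copy gradients through quantization.
            quantized[a] = h + (q - h).detach()
            indices[a] = idx
        return quantized, indices


# Usage: quantize a dummy latent and inspect the per-attribute token streams.
fvq = FactorizedVQ()
z = torch.randn(2, 100, 256)
q, idx = fvq(z)
print({a: tuple(idx[a].shape) for a in idx})  # e.g. {'content': (2, 100), ...}
```

In the proposed system, each such attribute stream would then be generated by the factorized diffusion model conditioned on its corresponding prompt; the sketch above only illustrates the codec-side quantization.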
