Poster
Global Prosody Style Transfer Without Text Transcriptions
Kaizhi Qian · Yang Zhang · Shiyu Chang · Jinjun Xiong · Chuang Gan · David Cox · Mark Hasegawa-Johnson

Thu Jul 22 09:00 PM -- 11:00 PM (PDT)

Prosody plays an important role in characterizing the style of a speaker or an emotion, yet most non-parallel voice or emotion style transfer algorithms do not convert any prosody information. Two major components of prosody are pitch and rhythm. Disentangling prosody information, particularly the rhythm component, from speech is challenging because it involves breaking the synchrony between the input speech and the disentangled speech representation. As a result, most existing prosody style transfer algorithms must rely on some form of text transcription to identify the content information, which confines their application to high-resource languages. Recently, SpeechSplit has made sizeable progress towards unsupervised prosody style transfer, but it is unable to extract high-level global prosody style in an unsupervised manner. In this paper, we propose AutoPST, which can disentangle global prosody style from speech without relying on any text transcriptions. AutoPST is an Autoencoder-based Prosody Style Transfer framework with a thorough rhythm removal module guided by self-expressive representation learning. Experiments on different style transfer tasks show that AutoPST can effectively convert prosody that correctly reflects the styles of the target domains.
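The self-expressive idea mentioned above can be illustrated with a toy computation. In self-expressive representation learning (as used in subspace clustering), each frame embedding is reconstructed as a weighted combination of the frames in the sequence, which exposes which frames are redundant repetitions of the same content. The sketch below is not the paper's actual module; it only shows the underlying ridge-regularized self-expression objective min_C ||X − CX||² + λ||C||², whose closed-form solution is C = G(G + λI)⁻¹ with Gram matrix G = XXᵀ. The function name and dimensions are illustrative assumptions.

```python
import numpy as np

def self_expressive_coefficients(X, lam=0.1):
    """Solve min_C ||X - C X||_F^2 + lam * ||C||_F^2 in closed form.

    X: (T, d) array of frame embeddings. Returns C: (T, T) such that
    C @ X approximates X, i.e. each frame is expressed as a weighted
    combination of the frames in the sequence.
    """
    T = X.shape[0]
    G = X @ X.T                                   # (T, T) Gram matrix of frames
    C = G @ np.linalg.inv(G + lam * np.eye(T))    # ridge closed-form solution
    return C

# Toy example (not real speech features): 8 frames of 4-dim embeddings.
rng = np.random.default_rng(0)
X = rng.standard_normal((8, 4))
C = self_expressive_coefficients(X, lam=0.1)
rel_err = np.linalg.norm(X - C @ X) / np.linalg.norm(X)
print(C.shape, rel_err < 1.0)
```

Frames with similar content receive large mutual coefficients in C, which is the kind of signal a rhythm-removal module can use to collapse stretched or repeated segments without text supervision.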

Author Information

Kaizhi Qian (MIT-IBM Watson AI Lab)
Yang Zhang (MIT-IBM Watson AI Lab)
Shiyu Chang (UCSB)
Jinjun Xiong (IBM Thomas J. Watson Research Center)
Chuang Gan (MIT-IBM Watson AI Lab)
David Cox (MIT-IBM Watson AI Lab)
Mark Hasegawa-Johnson (University of Illinois)
