Poster
Text Generation with Diffusion Language Models: A Pre-training Approach with Continuous Paragraph Denoise
Zhenghao Lin · Yeyun Gong · Yelong Shen · Tong Wu · Zhihao Fan · Chen Lin · Nan Duan · Weizhu Chen

Thu Jul 27 04:30 PM -- 06:00 PM (PDT) @ Exhibit Hall 1 #101

In this paper, we introduce a novel diffusion language model pre-training framework for text generation, which we call GENIE. GENIE is a large-scale pre-trained diffusion language model that consists of an encoder and a diffusion-based decoder, and it generates text by gradually transforming a random noise sequence into a coherent text sequence. To pre-train GENIE on a large-scale language corpus, we design a new continuous paragraph denoise objective, which encourages the diffusion decoder to reconstruct a clean text paragraph from a corrupted version while preserving semantic and syntactic coherence. We evaluate GENIE on four downstream text generation benchmarks, namely XSum, CNN/DailyMail, Gigaword, and CommonGen. Our experimental results show that GENIE achieves performance comparable to state-of-the-art autoregressive models on these benchmarks and generates more diverse text samples. The code and models of GENIE are available at https://github.com/microsoft/ProphetNet/tree/master/GENIE.
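To make the training recipe described in the abstract concrete, below is a minimal sketch of an encoder plus diffusion-decoder model trained with a continuous denoising loss. Everything here is illustrative: the TinyDiffusionLM class, the linear noise schedule, and all dimensions are hypothetical stand-ins, not GENIE's actual architecture or its continuous paragraph denoise objective; see the repository linked above for the real implementation.

# Illustrative sketch only: a toy encoder + diffusion decoder trained with a
# continuous denoising loss. Names, schedule, and sizes are hypothetical and
# are not taken from the GENIE codebase.
import torch
import torch.nn as nn

class TinyDiffusionLM(nn.Module):
    def __init__(self, vocab_size=1000, d_model=64, num_steps=100):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)      # tokens -> continuous latents
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True), num_layers=2)
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True), num_layers=2)
        self.time_embed = nn.Embedding(num_steps, d_model)  # diffusion-step conditioning
        # Hypothetical linear schedule: alpha_bar[t] shrinks as the step t grows,
        # so later steps carry more noise.
        self.register_buffer("alpha_bar", torch.linspace(0.999, 0.01, num_steps))
        self.num_steps = num_steps

    def forward(self, src_ids, tgt_ids):
        src = self.encoder(self.embed(src_ids))             # encode the source text
        x0 = self.embed(tgt_ids)                            # clean paragraph embeddings
        t = torch.randint(0, self.num_steps, (x0.size(0),), device=x0.device)
        a = self.alpha_bar[t].view(-1, 1, 1)
        noise = torch.randn_like(x0)
        xt = a.sqrt() * x0 + (1 - a).sqrt() * noise         # forward noising of the paragraph
        h = xt + self.time_embed(t).unsqueeze(1)
        x0_hat = self.decoder(h, src)                       # decoder predicts the clean latents
        return ((x0_hat - x0) ** 2).mean()                  # continuous denoising (MSE) loss

model = TinyDiffusionLM()
src = torch.randint(0, 1000, (2, 16))  # toy source and target token ids
tgt = torch.randint(0, 1000, (2, 16))
loss = model(src, tgt)
loss.backward()
print(f"denoising loss: {loss.item():.4f}")

At inference time, generation would run the reverse process: start from pure Gaussian noise, repeatedly apply the decoder (conditioned on the encoded source) to produce progressively cleaner latents, and finally round the result to the nearest token embeddings.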

Author Information

Zhenghao Lin (Xiamen University)
Yeyun Gong (Microsoft Research Asia)
Yelong Shen (Microsoft)
Tong Wu
Zhihao Fan (Fudan University)
Chen Lin (Xiamen University)
Nan Duan (Microsoft Research)
Weizhu Chen (Microsoft)
