Oral
CoT: Cooperative Training for Generative Modeling of Discrete Data
Sidi Lu · Lantao Yu · Siyuan Feng · Yaoming Zhu · Weinan Zhang

Thu Jun 13 11:30 AM -- 11:35 AM (PDT) @ Hall B

To tackle the distribution shift problem inherent in Maximum Likelihood Estimation, a.k.a. exposure bias, researchers have mainly focused on introducing auxiliary adversarial training to penalize unrealistic generated samples. To exploit the supervision signal from the discriminator, most previous models, typically language GANs, leverage REINFORCE to handle the non-differentiability of discrete sequential data. In this paper, we propose a novel approach called Cooperative Training to improve the training of sequence generative models. Our algorithm transforms the minimax game of GANs into a joint maximization problem and explicitly estimates and optimizes the Jensen-Shannon divergence. In experiments, our model shows superior performance over existing state-of-the-art methods in sample quality, diversity, and training stability. Unlike previous methods, our approach works without pre-training via Maximum Likelihood Estimation, which is crucial to the success of those methods.
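
The abstract describes an alternating scheme: a mediator model is trained by MLE on a balanced mixture of real and generated sequences (so it approximates the mixture density underlying the Jensen-Shannon divergence), and the generator is then pulled toward the mediator with an exact, differentiable per-step expectation over the vocabulary rather than a REINFORCE estimate. Below is a minimal PyTorch-style sketch of that idea, assuming hypothetical autoregressive generator and mediator objects exposing sample, logits, and log_prob methods; all names are illustrative, and this is a simplified sketch rather than the authors' released implementation.

import torch
import torch.nn.functional as F

def mediator_step(mediator, med_opt, real_batch, fake_batch):
    # MLE on a balanced mixture of real and generated sequences, so the
    # mediator approximates the mixture density M ~ (P_data + P_G) / 2.
    mixture = torch.cat([real_batch, fake_batch], dim=0)
    loss = -mediator.log_prob(mixture).mean()   # maximize mixture likelihood
    med_opt.zero_grad()
    loss.backward()
    med_opt.step()

def generator_step(generator, gen_opt, mediator, batch_size):
    # For prefixes sampled from G, minimize the step-wise KL(G(.|prefix) || M(.|prefix)).
    # With M tracking the mixture, this descends an estimate of JSD(P_data || P_G).
    # The expectation over next tokens is taken exactly over the vocabulary, so no
    # REINFORCE is needed; for brevity this sketch treats the sampled prefixes as
    # fixed and omits the score-function term for the prefix distribution.
    prefixes = generator.sample(batch_size)                               # (B, T) token ids from G
    g_logp = F.log_softmax(generator.logits(prefixes), dim=-1)            # (B, T, V)
    m_logp = F.log_softmax(mediator.logits(prefixes), dim=-1).detach()    # mediator held fixed
    loss = (g_logp.exp() * (g_logp - m_logp)).sum(-1).mean()              # step-wise KL(G || M)
    gen_opt.zero_grad()
    loss.backward()
    gen_opt.step()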

Author Information

Sidi Lu (Shanghai Jiao Tong University)
Lantao Yu (Stanford University)
Siyuan Feng (Apex Data & Knowledge Management Lab, Shanghai Jiao Tong University)
Yaoming Zhu (Apex Data & Knowledge Management Lab, Shanghai Jiao Tong University)
Weinan Zhang (Apex Data & Knowledge Management Lab, Shanghai Jiao Tong University)
