Workshop
Learning to Generate Natural Language
Yishu Miao · Wang Ling · Tsung-Hsien Wen · Kris Cao · Daniela Gerz · Phil Blunsom · Chris Dyer

Wed Aug 09 03:30 PM -- 12:30 AM (PDT) @ C4.11
Event URL: https://sites.google.com/site/langgen17/home

Research on natural language generation is growing rapidly due to the increasing demand for human-machine communication in natural language. This workshop promotes the discussion, exchange, and dissemination of ideas on text generation, touching on several important aspects of the field: learning schemes and evaluation, model design and structures, advanced decoding strategies, and applications of natural language generation. It is intended as a venue both for exchanging ideas on data-driven machine learning approaches to text generation, including mainstream tasks such as dialogue generation, instruction generation, and summarization, and for establishing new directions and ideas with potential impact on machine learning, deep learning, and NLP.

Wed 3:30 p.m. - 4:15 p.m.

Invited Talk 1

Document labelling (including multimodal objects) is widely used in NLP and ML, in forms that include classic document categorisation, single-document summarisation, and image captioning. In this talk, I consider the question of what, intrinsically, makes a suitable "label" for a given document type, and then discuss some recent work on automatically generating multimodal labels for textual topics.

Wed 4:15 p.m. - 5:00 p.m.

Invited Talk 2

Wed 5:00 p.m. - 5:30 p.m.

Coffee Break & Poster session

Wed 5:30 p.m. - 6:15 p.m.

In the first part of the talk, I will propose sparsemax, a new activation function similar to the traditional softmax, but able to output sparse probabilities. After deriving its properties, I will show how its Jacobian can be efficiently computed, enabling its use in a network trained with backpropagation. Then, I will propose a new smooth and convex loss function which is the sparsemax analogue of the logistic loss, revealing an unexpected connection with the Huber classification loss. I will show promising empirical results in multi-label classification problems and in attention-based neural networks for natural language inference.
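The abstract describes sparsemax only in words. As an illustrative sketch (not code from the talk), the published closed form of sparsemax, the Euclidean projection of the score vector onto the probability simplex, can be computed in a few lines of NumPy; the function name and library choice here are assumptions:

    import numpy as np

    def sparsemax(z):
        # Sparsemax: Euclidean projection of z onto the probability
        # simplex, which zeroes out low-scoring coordinates exactly.
        z = np.asarray(z, dtype=float)
        z_sorted = np.sort(z)[::-1]          # scores in descending order
        k = np.arange(1, len(z) + 1)
        cumsum = np.cumsum(z_sorted)
        # Support size: the largest k with 1 + k * z_(k) > sum_{j<=k} z_(j).
        k_max = k[1 + k * z_sorted > cumsum][-1]
        # Threshold tau shifts the supported scores so they sum to one.
        tau = (cumsum[k_max - 1] - 1.0) / k_max
        return np.maximum(z - tau, 0.0)

For example, sparsemax([1.0, 0.5, -1.0]) gives [0.75, 0.25, 0.0]: unlike softmax, the lowest score receives exactly zero probability, which is what makes the output sparse.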

In the second part, I will introduce constrained softmax, another activation function that allows imposing upper-bound constraints on attention probabilities. Based on this activation, I will introduce a novel neural end-to-end differentiable easy-first decoder that learns to solve sequence tagging tasks in a flexible order. The decoder iteratively updates a "sketch" of the predictions over the sequence. The proposed models compare favourably to BiLSTM taggers on three sequence tagging tasks.

This is joint work with Ramon Astudillo and Julia Kreutzer.
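The abstract does not spell out how constrained softmax is computed. A plausible sketch, assuming the standard formulation (the distribution closest to softmax(z) subject to per-coordinate upper bounds u, with sum(u) >= 1 for feasibility), is to clip coordinates that exceed their bounds and renormalise the rest iteratively; the function name and the iterative scheme are assumptions, not taken from the talk:

    import numpy as np

    def constrained_softmax(z, u):
        # Distribute probability mass proportionally to exp(z), but cap
        # coordinate i at its upper bound u[i]. Requires sum(u) >= 1.
        z = np.asarray(z, dtype=float)
        u = np.asarray(u, dtype=float)
        assert u.sum() >= 1.0, "bounds must admit a valid distribution"
        p = np.zeros_like(z)
        free = np.ones(len(z), dtype=bool)   # coordinates not yet clipped
        mass = 1.0                           # probability left to distribute
        while True:
            e = np.exp(z[free] - z[free].max())   # stable exponentials
            cand = mass * e / e.sum()             # softmax of free coords
            over = cand > u[free]
            if not over.any():
                p[free] = cand
                return p
            # Clip the violating coordinates to their bounds; the remaining
            # coordinates' shares only grow, so clipped ones stay clipped.
            idx = np.where(free)[0][over]
            p[idx] = u[idx]
            mass -= u[idx].sum()
            free[idx] = False

For instance, constrained_softmax([0.0, 0.0], [0.4, 0.7]) returns [0.4, 0.6]: the first coordinate is capped at its bound and the excess mass flows to the second. In an attention layer, such bounds can limit how much total attention any single input position receives.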

Wed 6:15 p.m. - 7:00 p.m.

Workshop Paper Presentation

Wed 7:00 p.m. - 9:00 p.m.

Lunch Break & Poster session

Author Information

Yishu Miao (University of Oxford)
Wang Ling (DeepMind)
Tsung-Hsien Wen (University of Cambridge)
Kris Cao (University of Cambridge)
Daniela Gerz (University of Cambridge)
Phil Blunsom (DeepMind and Oxford University)
Chris Dyer (DeepMind)
