Research on natural language generation is growing rapidly due to the increasing demand for human-machine communication in natural language. This workshop aims to promote the discussion, exchange, and dissemination of ideas on the topic of text generation, touching on several important aspects of this modality: learning schemes and evaluation, model design and structures, advanced decoding strategies, and applications of natural language generation. It is intended as a venue for exchanging ideas on data-driven machine learning approaches to text generation, including mainstream tasks such as dialogue generation, instruction generation, and summarization, and for establishing new directions and ideas with potential for impact in machine learning, deep learning, and NLP.
Wed 3:30 p.m. - 4:15 p.m.
Tim Baldwin: Learning to Label Documents
(Invited Talk)
Document labelling (including of multimodal objects) is widely used in NLP and ML, in forms that include classic document categorisation, single-document summarisation, and image captioning. In this talk, I consider the question of what, intrinsically, is a suitable "label" for a given document type, and then discuss some recent work on automatically generating multimodal labels for textual topics.
Wed 4:15 p.m. - 5:00 p.m.
Dani Yogatama
(Invited Talk)
Invited Talk 2
Wed 5:00 p.m. - 5:30 p.m.
Coffee Break & Poster session 1
Wed 5:30 p.m. - 6:15 p.m.
Andre Martins: Beyond Softmax: Sparsemax, Constrained Softmax, Differentiable Easy-First
(Invited Talk)
In the first part of the talk, I will propose sparsemax, a new activation function similar to the traditional softmax, but able to output sparse probabilities. After deriving its properties, I will show how its Jacobian can be efficiently computed, enabling its use in a network trained with backpropagation. Then, I will propose a new smooth and convex loss function which is the sparsemax analogue of the logistic loss, revealing an unexpected connection with the Huber classification loss. I will show promising empirical results in multi-label classification problems and in attention-based neural networks for natural language inference. In the second part, I will introduce constrained softmax, another activation function that allows imposing upper bound constraints on attention probabilities. Based on this activation, I will introduce a novel neural end-to-end differentiable easy-first decoder that learns to solve sequence tagging tasks in a flexible order. The decoder iteratively updates a "sketch" of the predictions over the sequence. The proposed models compare favourably to BiLSTM taggers on three sequence tagging tasks. This is joint work with Ramon Astudillo and Julia Kreutzer.
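For attendees unfamiliar with sparsemax, the transformation has a simple closed form: sort the scores, derive a threshold from their cumulative sums, and clip everything below the threshold to zero. The sketch below is an illustrative NumPy implementation based on the published formulation, not the speakers' reference code; the function name and the example scores are chosen purely for illustration.

    import numpy as np

    def sparsemax(z):
        # Map a score vector to a probability vector that may contain exact zeros
        # (the Euclidean projection of z onto the probability simplex).
        z = np.asarray(z, dtype=float)
        z_sorted = np.sort(z)[::-1]              # scores in decreasing order
        cumsum = np.cumsum(z_sorted)
        k = np.arange(1, z.size + 1)
        support = 1.0 + k * z_sorted > cumsum    # coordinates kept in the support
        k_z = k[support][-1]                     # support size
        tau = (cumsum[k_z - 1] - 1.0) / k_z      # threshold subtracted from every score
        return np.maximum(z - tau, 0.0)

    # Softmax would give strictly positive weight to every entry here;
    # sparsemax zeroes out the lowest-scoring one.
    print(sparsemax([1.0, 0.8, 0.1]))            # approximately [0.6 0.4 0. ]

Note that the computation only needs the support set and the threshold, which is also what makes the Jacobian cheap to evaluate during backpropagation, as the abstract notes.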
Wed 6:15 p.m. - 7:00 p.m.
Spotlight Paper Presentation
(Presentation)
Workshop Paper Presentation
Wed 7:00 p.m. - 9:00 p.m.
Lunch Break & Poster session 2
Author Information
Yishu Miao (University of Oxford)
Wang Ling (DeepMind)
Tsung-Hsien Wen (University of Cambridge)
Kris Cao (University of Cambridge)
Daniela Gerz (University of Cambridge)
Phil Blunsom (DeepMind and Oxford University)
Chris Dyer (DeepMind)
More from the Same Authors
- 2018 Poster: Fast Parametric Learning with Activation Memorization
  Jack Rae · Chris Dyer · Peter Dayan · Timothy Lillicrap
- 2018 Oral: Fast Parametric Learning with Activation Memorization
  Jack Rae · Chris Dyer · Peter Dayan · Timothy Lillicrap
- 2017 Poster: Latent Intention Dialogue Models
  Tsung-Hsien Wen · Yishu Miao · Phil Blunsom · Stephen J Young
- 2017 Poster: Discovering Discrete Latent Topics with Neural Variational Inference
  Yishu Miao · Edward Grefenstette · Phil Blunsom
- 2017 Talk: Discovering Discrete Latent Topics with Neural Variational Inference
  Yishu Miao · Edward Grefenstette · Phil Blunsom
- 2017 Talk: Latent Intention Dialogue Models
  Tsung-Hsien Wen · Yishu Miao · Phil Blunsom · Stephen J Young