

Talk in Affinity Workshop: Women in Machine Learning (WiML) Un-Workshop

Invited Talk #4 - Errors are a crucial part of dialogue


Abstract:

Collaborative grounding is a fundamental aspect of human-human dialogue which allows people to negotiate meaning; in this talk, I argue that current deep learning approaches to dialogue systems don’t deal with it adequately. Making errors, and being able to recover from them collaboratively, is a key ingredient in grounding meaning, but current dialogue systems can’t do this. I will illustrate the pitfalls of being unable to ground collaboratively, discuss what can be learned from the language acquisition and dialogue systems literature, and reflect on how to move forward. I will urge the community to proceed by addressing a research gap: how clarification mechanisms can be learned from data. Novel research methodologies that highlight the role of clarification mechanisms are needed for this. I will present an annotation methodology, based on a theoretical analysis of clarification requests, which unifies a number of previous accounts. Dialogue clarification mechanisms are an understudied research problem and a key missing piece in the giant jigsaw puzzle of natural language understanding. I will conclude this talk with an open call for collaborators who share the vision presented.