

Poster in Workshop: Neural Conversational AI Workshop - What’s left to TEACH (Trustworthy, Enhanced, Adaptable, Capable and Human-centric) chatbots?

Robustness through Loss Consistency Regularization

Tianjian Huang · Shaunak A Halbe · Chinnadhurai Sankar · Pooyan Amini · Satwik Kottur · Alborz Geramifard · Meisam Razaviyayn · Ahmad Beirami


Abstract:

In the continually evolving landscape of Natural Language Processing (NLP), enhancing the robustness and resilience of deep learning models is critical. Traditional models rely on Empirical Risk Minimization (ERM), but ERM's susceptibility to distribution shifts and adversarial attacks undermines its efficacy. To address these limitations, many approaches apply Data Augmentation followed by ERM (DA-ERM) or consistency regularization. Unfortunately, these methods are not applicable to covariant data augmentation, where the label of the augmented example depends on the augmentation itself, and they therefore cannot be used with generative models. In this paper, we present a novel technique called Data Augmented Loss Invariant Regularization (DAIR), which operates directly at the loss level, circumventing the restrictions of conventional methods and extending applicability to covariant data augmentation. Importantly, DAIR's robustness is independent of network architecture, problem setup, and task, making it suitable for a broad range of NLP challenges. Finally, our experiments on Task-Oriented Dialog highlight DAIR's superiority over conventional methods, setting new benchmarks in NLP tasks with minimal extra computational cost.
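As a rough illustration of what regularizing "directly at the loss level" can look like, the sketch below computes per-example losses on a clean batch and its augmented counterpart and penalizes the discrepancy between the two losses. The exact penalty form (a squared distance between the square roots of the losses), the hyperparameter name `lam`, and the function name `dair_style_loss` are illustrative assumptions, not the authors' verified formulation; because the penalty compares losses rather than model outputs, it stays well-defined even when the augmented label differs from the original one, as in covariant augmentation.

```python
import torch
import torch.nn.functional as F

def dair_style_loss(model, x, y, x_aug, y_aug, lam=1.0):
    """Sketch of a loss-level consistency objective (assumed form).

    Averages the per-example losses on clean and augmented inputs,
    then penalizes the gap between the two losses. Comparing losses
    rather than logits keeps the objective valid for covariant
    augmentations, where y_aug may differ from y.
    """
    loss_clean = F.cross_entropy(model(x), y, reduction="none")
    loss_aug = F.cross_entropy(model(x_aug), y_aug, reduction="none")
    # Squared distance between square roots of the nonnegative losses:
    # one common choice of loss-consistency penalty (an assumption here).
    penalty = (loss_clean.sqrt() - loss_aug.sqrt()).pow(2)
    return (0.5 * (loss_clean + loss_aug) + lam * penalty).mean()
```

Setting `lam=0` recovers plain DA-ERM, so the consistency penalty can be read as a tunable tightening of the standard augmented objective.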
