

Oral

Towards a Deep and Unified Understanding of Deep Neural Models in NLP

Chaoyu Guan · Xiting Wang · Quanshi Zhang · Runjin Chen · Di He · Xing Xie

Abstract:

We define a unified information-based measure that provides quantitative explanations of how the intermediate layers of deep Natural Language Processing (NLP) models leverage the information in input words. Our method advances existing explanation methods by addressing the issues of coherency and generality that they exhibit. The explanations generated by our method are consistent and faithful across different timestamps, layers, and models. We show how our method can be used to understand four widely used NLP models and to explain their performance on three real-world benchmark datasets.
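The abstract does not spell out the measure itself, but the core idea, quantifying how much information about each input word an intermediate layer retains, can be illustrated with a minimal perturbation-based sketch. Everything below (the toy model, `word_information`, and its parameters) is hypothetical and is not the authors' actual information-theoretic method: it simply scores each word by how strongly the layer's output reacts when that word's embedding is perturbed with Gaussian noise.

```python
import numpy as np

rng = np.random.default_rng(0)

def hidden_states(embeddings, W):
    # Toy stand-in for an NLP model layer: tanh of a linear projection.
    return np.tanh(embeddings @ W)

def word_information(embeddings, W, n_samples=100, sigma=0.5):
    """Hypothetical perturbation-based proxy: estimate how much each
    input word contributes to the layer output by adding Gaussian noise
    to that word's embedding and averaging the resulting change in the
    hidden representation."""
    base = hidden_states(embeddings, W)
    scores = []
    for i in range(embeddings.shape[0]):
        deltas = []
        for _ in range(n_samples):
            noisy = embeddings.copy()
            noisy[i] += rng.normal(scale=sigma, size=embeddings.shape[1])
            deltas.append(np.linalg.norm(hidden_states(noisy, W) - base))
        scores.append(float(np.mean(deltas)))
    return scores

# 5 "words" with 8-dim embeddings, projected to a 4-dim hidden layer.
emb = rng.normal(size=(5, 8))
W = rng.normal(size=(8, 4))
scores = word_information(emb, W)
print(scores)  # higher score = the layer is more sensitive to that word
```

In this toy setup, a word whose perturbation barely moves the hidden states is one whose information the layer has discarded; the paper's measure makes this intuition precise in information-theoretic terms so it remains comparable across layers and models.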
