

Poster

Pretrained Generalized Autoregressive Model with Adaptive Probabilistic Label Clusters for Extreme Multi-label Text Classification

Hui Ye · Zhiyu Chen · Da-Han Wang · Brian D Davison

Virtual

Keywords: [ Architectures ] [ Large Scale Learning and Big Data ] [ Supervised Learning ] [ Natural Language Processing / Dialogue ] [ Deep Learning - General ]


Abstract:

Extreme multi-label text classification (XMTC) is the task of tagging a given text with the most relevant labels from an extremely large label set. We propose a novel deep learning method called APLC-XLNet. Our approach fine-tunes the recently released generalized autoregressive pretrained model (XLNet) to learn a dense representation of the input text. We propose Adaptive Probabilistic Label Clusters (APLC) to approximate the cross-entropy loss, exploiting the unbalanced label distribution to form clusters that explicitly reduce computation time. Our experiments, carried out on five benchmark datasets, show that our approach significantly outperforms existing state-of-the-art methods. Our source code is available publicly at https://github.com/huiyegit/APLC_XLNet.
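The core idea behind APLC resembles adaptive-softmax-style hierarchical factorization: frequent labels live in a small head, and rare labels are pushed into tail clusters reached through per-cluster gate logits, so the full softmax over the huge label set is never materialized. The sketch below is a hypothetical, simplified illustration of that two-stage scheme (the function names, the cluster layout, and the NumPy implementation are assumptions for exposition, not the authors' code, which is linked above):

```python
import numpy as np

def partition_labels(freqs, cluster_sizes):
    """Partition label ids by descending frequency into a head cluster
    followed by tail clusters (adaptive-softmax-style; a sketch, not
    the APLC-XLNet implementation)."""
    order = np.argsort(-np.asarray(freqs))  # most frequent labels first
    clusters, start = [], 0
    for size in cluster_sizes:
        clusters.append(order[start:start + size])
        start += size
    return clusters

def two_stage_log_prob(head_logits, tail_logits, clusters, label):
    """Log-probability of `label` under a two-stage softmax:
    the head covers the frequent labels plus one 'gate' slot per tail
    cluster; a tail label's probability is P(gate) * P(label | cluster)."""
    def log_softmax(x):
        x = x - x.max()                      # numerical stability
        return x - np.log(np.exp(x).sum())

    head_lp = log_softmax(head_logits)
    head_size = len(clusters[0])
    for ci, cl in enumerate(clusters):
        idx = np.nonzero(cl == label)[0]
        if idx.size:
            if ci == 0:                      # frequent label: head only
                return head_lp[idx[0]]
            gate = head_size + ci - 1        # gate logit of tail cluster ci
            tail_lp = log_softmax(tail_logits[ci - 1])
            return head_lp[gate] + tail_lp[idx[0]]
    raise ValueError("label not in any cluster")
```

Because only the head and one small tail cluster are evaluated per label, the cost per example drops from O(|labels|) to roughly the head size plus one cluster size, which is where the claimed reduction in computation time comes from.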
