

Poster in Workshop: Structured Probabilistic Inference and Generative Modeling

Pretrained deep models outperform GBDTs in Learning-To-Rank under label scarcity

Charlie Hou · Kiran Thekumparampil · Michael Shavlovsky · Giulia Fanti · Sujay Sanghavi

Keywords: [ learning-to-rank ] [ Structured Prediction ] [ label scarcity ] [ unsupervised pretraining ]


Abstract: We study the structured prediction problem of ordering a set of items (learning-to-rank), each represented by tabular features. On tabular data, a significant body of literature has shown that current deep learning (DL) models perform at best similarly to Gradient Boosted Decision Trees (GBDTs), while significantly underperforming them on outlier data (Gorishniy et al., 2021; Rubachev et al., 2022; McElfresh et al., 2023). We identify a natural tabular data setting where DL models can outperform GBDTs: tabular Learning-to-Rank (LTR) under label scarcity. Tabular LTR applications, including search and recommendation, often have an abundance of unlabeled data, and *scarce* labeled data. We show that DL rankers can utilize unsupervised pretraining to exploit this unlabeled data. In extensive experiments over both public and proprietary datasets, we show that pretrained DL rankers consistently outperform GBDT rankers on ranking metrics---sometimes by as much as $38\%$---both overall and on outliers.
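The abstract does not specify the exact pretraining or fine-tuning objectives, so the following is only a minimal illustrative sketch of the overall recipe it describes: pretrain a deep tabular encoder on abundant unlabeled items, then fine-tune it as a ranker on scarce labeled queries. The masked-feature reconstruction objective, the listwise softmax loss, the MLP encoder, and all hyperparameters here are assumptions for illustration, not the authors' exact method.

```python
# Hypothetical sketch: unsupervised pretraining of a tabular DL ranker on
# unlabeled data, followed by fine-tuning on scarce ranking labels.
# Objectives and architecture are illustrative assumptions only.
import torch
import torch.nn as nn

HIDDEN = 64  # encoder output dimension (assumed)

class TabularEncoder(nn.Module):
    """Small MLP encoder for tabular item features."""
    def __init__(self, num_features: int, hidden: int = HIDDEN):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

def pretrain(encoder, unlabeled_x, epochs=5, mask_prob=0.3, lr=1e-3):
    """Denoising-style pretraining: mask random features, reconstruct them."""
    num_features = unlabeled_x.shape[-1]
    decoder = nn.Linear(HIDDEN, num_features)
    opt = torch.optim.Adam(
        list(encoder.parameters()) + list(decoder.parameters()), lr=lr
    )
    for _ in range(epochs):
        mask = (torch.rand_like(unlabeled_x) < mask_prob).float()
        corrupted = unlabeled_x * (1 - mask)          # zero out masked entries
        recon = decoder(encoder(corrupted))
        # reconstruction loss only on the masked positions
        loss = ((recon - unlabeled_x) ** 2 * mask).sum() / mask.sum().clamp(min=1)
        opt.zero_grad(); loss.backward(); opt.step()

def finetune_ranker(encoder, labeled_queries, epochs=5, lr=1e-3):
    """Fine-tune with a listwise softmax loss over (items, relevance) per query."""
    scorer = nn.Linear(HIDDEN, 1)
    opt = torch.optim.Adam(
        list(encoder.parameters()) + list(scorer.parameters()), lr=lr
    )
    for _ in range(epochs):
        for items, relevance in labeled_queries:      # items: [n_items, d]
            scores = scorer(encoder(items)).squeeze(-1)
            target = torch.softmax(relevance, dim=0)  # soft listwise target
            loss = -(target * torch.log_softmax(scores, dim=0)).sum()
            opt.zero_grad(); loss.backward(); opt.step()
    return scorer

if __name__ == "__main__":
    d = 16
    enc = TabularEncoder(d)
    unlabeled = torch.randn(1024, d)                                   # abundant unlabeled items
    labeled = [(torch.randn(10, d), torch.rand(10)) for _ in range(8)] # scarce labeled queries
    pretrain(enc, unlabeled)
    finetune_ranker(enc, labeled)
```

The design choice this sketch reflects is the one highlighted in the abstract: unlike GBDT rankers, a deep encoder can be trained on the unlabeled pool first, so the scarce labeled queries are only needed for the final ranking head and light fine-tuning.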
