
Pretrained deep models outperform GBDTs in Learning-To-Rank under label scarcity
Charlie Hou · Kiran Thekumparampil · Michael Shavlovsky · Giulia Fanti · Yesh Dattatreya · Sujay Sanghavi

Fri Jul 28 02:50 PM -- 03:05 PM (PDT)
Event URL: https://openreview.net/forum?id=y13NK7QJ0m

While deep learning (DL) models are state-of-the-art in text and image domains, they have not yet consistently outperformed Gradient Boosted Decision Trees (GBDTs) on tabular Learning-To-Rank (LTR) problems. Most of the recent performance gains attained by DL models in text and image tasks have relied on unsupervised pretraining, which exploits orders of magnitude more unlabeled data than labeled data. To the best of our knowledge, unsupervised pretraining has not been applied to the LTR problem, even though LTR settings often produce vast amounts of unlabeled data. In this work, we study whether unsupervised pretraining can improve LTR performance over GBDTs and other non-pretrained models. Using simple design choices, including SimCLR-Rank, our ranking-specific modification of SimCLR (an unsupervised pretraining method for images), we produce pretrained deep learning models that soundly outperform GBDTs (and other non-pretrained models) when labeled data is vastly outnumbered by unlabeled data. We also show that pretrained models often achieve significantly better robustness than non-pretrained models (GBDTs or DL models) when ranking outlier data.
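The contrastive pretraining idea referenced above can be illustrated with a minimal NumPy sketch. This is an assumption-laden illustration, not the paper's SimCLR-Rank implementation: it uses random feature dropout as a hypothetical tabular augmentation and computes the standard SimCLR NT-Xent loss over two augmented views of a feature matrix.

```python
import numpy as np

def augment(x, rng, p_drop=0.2):
    # Hypothetical tabular augmentation: randomly zero out features.
    # The paper's actual SimCLR-Rank augmentations may differ.
    mask = rng.random(x.shape) >= p_drop
    return x * mask

def nt_xent(z1, z2, tau=0.5):
    # NT-Xent (normalized temperature-scaled cross-entropy) loss from SimCLR.
    # z1[i] and z2[i] are embeddings of two views of the same example
    # (a positive pair); all other rows serve as negatives.
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # cosine similarity
    sim = z @ z.T / tau
    n = len(z1)
    np.fill_diagonal(sim, -np.inf)                     # exclude self-pairs
    # The positive for row i is row i+n (first half) or i-n (second half).
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    return float(np.mean(logsumexp - sim[np.arange(2 * n), pos]))

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16))           # 8 items, 16 tabular features
z1, z2 = augment(x, rng), augment(x, rng)
loss = nt_xent(z1, z2)                 # lower when views of the same
print(loss)                            # item embed close together
```

In a full pipeline, an encoder network would be trained to minimize this loss on unlabeled query-item features before fine-tuning on the scarce labeled ranking data.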

Author Information

Charlie Hou (Carnegie Mellon University)
Kiran Thekumparampil (Amazon)
Michael Shavlovsky (University of California, Santa Cruz)
Giulia Fanti (CMU)
Yesh Dattatreya (Georgia Institute of Technology)
Sujay Sanghavi (UT Austin)
