Which Tricks are Important for Learning to Rank?

Ivan Lyzhin · Aleksei Ustimenko · Andrey Gulin · Liudmila Prokhorenkova

Exhibit Hall 1 #831
Tue 25 Jul 5 p.m. PDT — 6:30 p.m. PDT


Nowadays, state-of-the-art learning-to-rank methods are based on gradient-boosted decision trees (GBDT). The most well-known algorithm is LambdaMART, which was proposed more than a decade ago. Recently, several other GBDT-based ranking algorithms have been proposed. In this paper, we thoroughly analyze these methods in a unified setup. In particular, we address the following questions. Is direct optimization of a smoothed ranking loss preferable to optimizing a convex surrogate? How should surrogate ranking losses be constructed and smoothed? To address these questions, we compare LambdaMART with the YetiRank and StochasticRank methods and their modifications. We also propose a simple improvement of the YetiRank approach that allows for optimizing specific ranking loss functions. As a result, we gain insights into learning-to-rank techniques and obtain a new state-of-the-art algorithm.
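To make the comparison concrete, the core idea behind LambdaMART is to approximate the gradient of a non-smooth ranking metric (such as NDCG) with pairwise "lambda" gradients, which the GBDT then fits at each boosting step. The sketch below is an illustrative, simplified implementation of LambdaRank-style lambda gradients for a single query, not the paper's code; the `sigma` parameter and the exact sign convention are assumptions following the original LambdaRank formulation.

```python
import math

def dcg(rels):
    # Discounted cumulative gain: sum of (2^rel - 1) / log2(rank + 2),
    # where rank is the 0-based position in the ranking.
    return sum((2 ** r - 1) / math.log2(i + 2) for i, r in enumerate(rels))

def ndcg(scores, rels):
    # Rank documents by descending model score; normalize by the ideal DCG.
    order = sorted(range(len(rels)), key=lambda i: -scores[i])
    ideal = dcg(sorted(rels, reverse=True))
    return dcg([rels[i] for i in order]) / ideal if ideal > 0 else 0.0

def lambdarank_gradients(scores, rels, sigma=1.0):
    """Pairwise lambda gradients weighted by |delta NDCG| (LambdaRank-style).

    A negative lambda for a document means gradient descent on these
    values would push its score up.
    """
    n = len(scores)
    order = sorted(range(n), key=lambda i: -scores[i])
    rank = {doc: pos for pos, doc in enumerate(order)}
    ideal = dcg(sorted(rels, reverse=True)) or 1.0
    lambdas = [0.0] * n
    for i in range(n):
        for j in range(n):
            if rels[i] <= rels[j]:
                continue  # only consider pairs where i should rank above j
            ri, rj = rank[i], rank[j]
            # |NDCG change| from swapping the two documents' positions.
            delta = abs((2 ** rels[i] - 2 ** rels[j])
                        * (1 / math.log2(ri + 2) - 1 / math.log2(rj + 2))) / ideal
            lam = -sigma / (1 + math.exp(sigma * (scores[i] - scores[j]))) * delta
            lambdas[i] += lam  # pull the more relevant document up
            lambdas[j] -= lam  # push the less relevant document down
    return lambdas
```

In a full GBDT ranker, these per-document lambdas (summed over queries) would serve as the pseudo-targets for the next tree; methods like YetiRank and StochasticRank differ chiefly in how this surrogate gradient is constructed and smoothed.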