Session
Ranking and Preference Learning 1
The Limits of Maxing, Ranking, and Preference Learning
Moein Falahatgar · Ayush Jain · Alon Orlitsky · Venkatadheeraj Pichapati · Vaishakh Ravindrakumar
We present a comprehensive understanding of three important problems in PAC preference learning in the adaptive setting: maximum selection (maxing), ranking, and estimating \emph{all} pairwise preference probabilities. With just Weak Stochastic Transitivity, we show that maxing requires $\Omega(n^2)$ comparisons, and with the slightly more restrictive Medium Stochastic Transitivity, we present a linear-complexity maxing algorithm. With Strong Stochastic Transitivity and the Stochastic Triangle Inequality, we derive a ranking algorithm with optimal $\mathcal{O}(n\log n)$ complexity and an optimal algorithm that estimates all pairwise preference probabilities.
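To make the maxing setup concrete, the sketch below runs a generic sequential-elimination tournament with a noisy comparison oracle: it uses a linear number of duels, in the spirit of the linear-complexity regime the abstract describes, but it is our own illustrative heuristic, not the paper's algorithm. The names `noisy_max`, `compare`, and the repetition count `m` are assumptions for this sketch.

```python
import random

def noisy_max(items, compare, m=51):
    """Sequential elimination: keep a running champion and, for each
    challenger, declare the majority winner of m noisy duels the new
    champion. `compare(a, b)` returns True when a beats b in one
    noisy comparison. This is a generic heuristic sketch, not the
    algorithm analyzed in the paper."""
    champ = items[0]
    for challenger in items[1:]:
        wins = sum(compare(challenger, champ) for _ in range(m))
        if wins > m // 2:
            champ = challenger
    return champ
```

With comparison probabilities bounded away from 1/2, each majority vote identifies the better item with high probability, which is why a single linear pass can suffice under strong enough transitivity assumptions.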
Learning a Mixture of Two Multinomial Logits
Flavio Chierichetti · Ravi Kumar · Andrew Tomkins
The classical Multinomial Logit (MNL) is a behavioral model for user choice. In this model, a user is offered a slate of choices (a subset of a finite universe of $n$ items), and selects exactly one item from the slate, each with probability proportional to its (positive) weight. Given a set of observed slates and choices, the likelihood-maximizing item weights are easy to learn at scale, and easy to interpret. However, the model fails to represent common real-world behavior. As a result, researchers in user choice often turn to mixtures of MNLs, which are known to approximate a large class of models of rational user behavior. Unfortunately, the only known algorithms for this problem have been heuristic in nature. In this paper we give the first polynomial-time algorithms for exact learning of uniform mixtures of two MNLs. Interestingly, the parameters of the model can be learned for any $n$ by sampling the behavior of random users only on slates of sizes 2 and 3; in contrast, we show that slates of size 2 are insufficient by themselves.
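The MNL choice rule itself is simple to simulate: a user shown a slate picks each item with probability proportional to its positive weight. The following minimal sketch (the function name `mnl_choose` is ours, not from the paper) samples one such choice.

```python
import random

def mnl_choose(weights, slate, rng=random):
    """Sample one item from `slate`, picking item i with probability
    weights[i] / sum of weights over the slate (the MNL choice rule)."""
    total = sum(weights[i] for i in slate)
    r = rng.random() * total
    acc = 0.0
    for i in slate:
        acc += weights[i]
        if r < acc:
            return i
    return slate[-1]  # guard against floating-point rounding
```

A uniform mixture of two MNLs, as studied in the paper, would first draw one of two weight vectors with probability 1/2 and then apply the same choice rule.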
The Weighted Kendall and High-order Kernels for Permutations
Yunlong Jiao · Jean-Philippe Vert
We propose new positive definite kernels for permutations. First, we introduce a weighted version of the Kendall kernel, which allows the contributions of different item pairs in the permutations to be weighted unequally depending on their ranks. We show that, like the Kendall kernel, the weighted version is invariant to relabeling of items and can be computed efficiently in O(n ln(n)) operations, where n is the number of items in the permutation. Second, we propose a supervised approach to learn the weights by jointly optimizing them with the function estimated by a kernel machine. Third, while the Kendall kernel considers pairwise comparisons between items, we extend it to higher-order comparisons among tuples of items and show that the supervised approach to learning the weights generalizes systematically to these higher-order permutation kernels.
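The idea of rank-dependent pair weights can be illustrated with a naive O(n^2) computation; the particular weighting scheme below (a product of per-permutation pair weights `w`) is our assumption for illustration, not necessarily the paper's exact construction, and setting `w` to a constant recovers the ordinary unnormalized Kendall kernel.

```python
from itertools import combinations

def weighted_kendall(sigma, tau, w):
    """Naive O(n^2) weighted Kendall-style kernel between two
    rankings, where sigma[i] and tau[i] are the ranks of item i.
    Each pair contributes +/- its weight depending on whether the
    two rankings order the pair concordantly or discordantly.
    `w(r, s)` is a hypothetical rank-pair weight; w = lambda r, s: 1.0
    gives the usual (unnormalized) Kendall kernel."""
    k = 0.0
    for i, j in combinations(range(len(sigma)), 2):
        concordant = (sigma[i] - sigma[j]) * (tau[i] - tau[j]) > 0
        weight = w(sigma[i], sigma[j]) * w(tau[i], tau[j])
        k += weight if concordant else -weight
    return k
```

Choosing `w` to decay with rank, for instance, emphasizes agreement among top-ranked items, which is the kind of unequal pair contribution the weighted kernel enables; the paper's O(n ln(n)) algorithm computes such kernels far faster than this quadratic sketch.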
Parameterized Algorithms for the Matrix Completion Problem
Robert Ganian · Iyad Kanj · Sebastian Ordyniak · Stefan Szeider
We consider two matrix completion problems, in which we are given a matrix with missing entries and the task is to complete the matrix in a way that (1) minimizes the rank, or (2) minimizes the number of distinct rows. We study the parameterized complexity of the two aforementioned problems with respect to several parameters of interest, including the minimum number of matrix rows, columns, and rows plus columns needed to cover all missing entries. We obtain new algorithmic results showing that, for the bounded domain case, both problems are fixed-parameter tractable with respect to all aforementioned parameters. We complement these results with a lower-bound result for the unbounded domain case that rules out fixed-parameter tractability w.r.t. some of the parameters under consideration.
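One of the covering parameters mentioned above is easy to make concrete: the number of rows needed to cover all missing entries is just the number of rows containing at least one missing entry. The helper below (our hypothetical illustration, not code from the paper) extracts that set for a matrix with `None` marking missing entries.

```python
def rows_with_missing(matrix):
    """Row indices containing at least one missing entry (None).
    The size of this set is the 'row cover' parameter with respect to
    which the fixed-parameter algorithms are analyzed (illustrative
    helper, not from the paper)."""
    return {i for i, row in enumerate(matrix) if any(v is None for v in row)}
```

Fixed-parameter tractability in this parameter means the completion problems can be solved in time f(k) * poly(input size) when only k rows contain missing entries, for the bounded domain case.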