Pseudo-Mallows for Efficient Probabilistic Preference Learning
Sylvia Liu ⋅ Valeria Vitelli ⋅ Carlo Mannino ⋅ Arnoldo Frigessi ⋅ Ida Scheel
Abstract
We propose the Pseudo-Mallows distribution over the set of all permutations of $n$ items to approximate the posterior distribution of the Bayesian Mallows model. The Bayesian Mallows model has been used successfully in recommender systems to learn personal preferences from highly incomplete user data. However, current inference algorithms do not scale, preventing its use in real-time applications. The Pseudo-Mallows distribution is a product of univariate discrete Mallows-like distributions, and the quality of the approximation depends on the order of the $n$ items in the factorization sequence. In a variational setting, we optimize this variational order parameter by minimizing a marginalized KL divergence; we conjecture a certain form of the optimal variational order, which depends on the data, and propose an approximation algorithm for this discrete optimization problem. Empirical evidence and some theory support our conjecture. We demonstrate on click data that variational inference via the Pseudo-Mallows distribution enables much faster probabilistic preference learning than alternative MCMC-based approaches.
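For concreteness, here is a minimal sketch of the two distributions involved, with notation assumed for this illustration rather than taken from the paper: the Mallows likelihood over permutations $\rho$ with scale parameter $\alpha$ and consensus ranking $\rho_0$, and a product-form approximation $q_\pi$ indexed by a factorization order $\pi$,
\[
P(\rho \mid \alpha, \rho_0) \;=\; \frac{\exp\!\big\{-\tfrac{\alpha}{n}\, d(\rho, \rho_0)\big\}}{Z_n(\alpha)},
\qquad
q_\pi(\rho) \;=\; \prod_{i=1}^{n} q_i\big(\rho_{\pi_i} \,\big|\, \rho_{\pi_1}, \dots, \rho_{\pi_{i-1}}\big),
\]
where $d(\cdot,\cdot)$ is a right-invariant distance on permutations, $Z_n(\alpha)$ is the normalizing constant, and each factor $q_i$ is a univariate discrete Mallows-like distribution over the ranks still available once the items $\pi_1, \dots, \pi_{i-1}$ have been placed. The variational step then chooses the order $\pi$ so that $q_\pi$ is close to the posterior in marginalized KL divergence.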