Poster
in
Workshop: The Many Facets of Preference-Based Learning
Learning Populations of Preferences via Pairwise Comparison Queries
Gokcan Tatli · Yi Chen · Ramya Vinayak
Abstract:
Ideal point based preference learning using pairwise comparisons of the form "Do you prefer $a$ or $b$?" has emerged as a powerful tool for understanding how preferences are formed, a question that plays a key role in many areas. Existing preference learning approaches either assume homogeneity and learn an average preference over the population, or require a large number of queries per individual to localize individual preferences. In practical scenarios with heterogeneous preferences and a limited number of responses per individual, neither approach is feasible. We therefore introduce the problem of learning the distribution of preferences over a population via pairwise comparisons, using only one response per individual. In this setting, learning each individual's preference is impossible, so the question of interest becomes: what can we learn about the distribution of preferences over the population? Because comparison queries yield binary answers, we focus on learning the mass that the underlying distribution places on the regions (polytopes) created by the intersections of the hyperplanes bisecting the queried pairs of points. We investigate this fundamental question in both the 1-D and higher-dimensional settings with noiseless responses to comparison queries. We show that the problem is identifiable in the 1-D setting and provide recovery guarantees. In contrast, the problem is not identifiable in higher dimensions. For higher-dimensional settings, we propose a regularized recovery and provide guarantees on the total variation distance between the true mass in each region and the distribution learned via the regularized constrained optimization problem. We validate our findings through simulations and experiments on real datasets, and we introduce a new dataset for this task collected on a real crowdsourcing platform.
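To make the 1-D setting concrete, here is a minimal simulation sketch, not taken from the paper: the population parameters, the queried item pairs, and the single-response estimator below are all illustrative assumptions. In 1-D, the "bisecting hyperplane" between a queried pair is simply the midpoint of the two items, and each individual's one binary answer reveals which side of one midpoint their ideal point falls on. Pooling answers per midpoint estimates the CDF at each midpoint, and differencing recovers the mass in each region.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical heterogeneous population of 1-D ideal points:
# a mixture of two preference groups (illustrative, not from the paper).
population = np.concatenate([
    rng.normal(0.3, 0.05, 5000),
    rng.normal(0.7, 0.05, 5000),
])

# Queried item pairs; each pair's midpoint bisects it (the 1-D "hyperplane").
pairs = [(0.1, 0.5), (0.3, 0.7), (0.5, 0.9)]
midpoints = np.sort([(a + b) / 2 for a, b in pairs])  # -> 0.3, 0.5, 0.7

# One response per individual: each person answers one randomly assigned
# query, preferring the left item iff their ideal point is left of the midpoint.
assigned = rng.integers(0, len(midpoints), size=population.size)
prefers_left = population < midpoints[assigned]

# Estimate the CDF at each midpoint from that midpoint's respondents, then
# difference to recover the mass in the regions the midpoints cut out.
cdf_hat = np.array([prefers_left[assigned == j].mean()
                    for j in range(len(midpoints))])
mass_hat = np.diff(np.concatenate([[0.0], cdf_hat, [1.0]]))
print(mass_hat)  # estimated mass in the 4 intervals (true: ~0.25 each)
```

With three midpoints at 0.3, 0.5, and 0.7 and the two-group population above, each of the four intervals carries roughly a quarter of the mass, and the single-response estimates concentrate around those values as the population grows.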