

Poster in Workshop: Humans, Algorithmic Decision-Making and Society: Modeling Interactions and Impact

Measuring Fairness in Large-Scale Recommendation Systems with Missing Labels

Yulong Dong · Kun Jin · Xinghai Hu · Yang Liu


Abstract:

Despite the commercial success of large-scale recommendation systems, concerns have recently been raised about their social responsibility, with fairness being one of the most important aspects. Accurate measurement of fairness metrics is vital for trustworthy fairness monitoring and diagnosis. However, because most large recommendation systems have no ground truth for users' preferences on items that were never recommended to them, missing ground-truth labels are prevalent across user-item pairs, which poses significant challenges to accurate fairness measurement. We propose a natural and efficient approach that addresses the issues caused by such missing labels by leveraging random traffic as a probe into the dataset with missing labels. We show, both theoretically and numerically on real-world data, that our approach is efficient and necessary.
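As a rough illustration of the probe idea described above (a minimal sketch, not the authors' estimator), the snippet below compares a group-level fairness gap computed naively on all logged impressions against the same gap computed only on a small random-traffic slice, where items are assumed to be served uniformly at random and the observed labels are therefore an unbiased sample of user preferences. The column names, the synthetic data, and the positive-rate-gap metric are all illustrative assumptions.

```python
# Hypothetical sketch: fairness-gap estimation using a random-traffic probe.
# Assumptions (not from the paper): a 'random_traffic' flag marks impressions
# served uniformly at random, and the fairness metric is the gap in observed
# positive-label rate between two user groups.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Synthetic interaction log: each row is one impression.
n = 100_000
log = pd.DataFrame({
    "group": rng.choice(["A", "B"], size=n),    # user group
    "random_traffic": rng.random(n) < 0.02,     # small uniform-random probe
    "label": np.nan,                            # preference label, mostly missing
})

# Labels are observed only where the system actually collected feedback;
# inside the random slice they are observed for uniformly sampled items.
observed = log["random_traffic"] | (rng.random(n) < 0.3)
log.loc[observed, "label"] = (rng.random(observed.sum()) < 0.1).astype(float)

def group_positive_rate(df: pd.DataFrame) -> pd.Series:
    """Per-group mean label among impressions with an observed label."""
    return df.dropna(subset=["label"]).groupby("group")["label"].mean()

# Naive estimate over all logged impressions (inherits exposure bias).
naive = group_positive_rate(log)

# Probe-based estimate restricted to the random-traffic slice (unbiased
# under the uniform-random serving assumption).
probe = group_positive_rate(log[log["random_traffic"]])

print("naive fairness gap :", abs(naive["A"] - naive["B"]))
print("probe fairness gap :", abs(probe["A"] - probe["B"]))
```

On real logs, the naive and probe estimates would generally differ because the production policy exposes items non-uniformly; here the synthetic data only demonstrates the mechanics of restricting the metric to the random slice.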
