We consider the problem of selecting a strong pool of individuals from several populations with incomparable skills (e.g. soccer players, mathematicians, and singers) in a fair manner. The quality of an individual is defined to be their relative rank (by cumulative distribution value) within their own population, which permits cross-population comparisons. We study algorithms which attempt to select the highest quality subset despite the fact that true CDF values are not known, and can only be estimated from the finite pool of candidates. Specifically, we quantify the regret in quality imposed by "meritocratic" notions of fairness, which require that individuals are selected with probability that is monotonically increasing in their true quality. We give algorithms with provable fairness and regret guarantees, as well as lower bounds, and provide empirical results which suggest that our algorithms perform better than the theory suggests. We extend our results to a sequential batch setting, in which an algorithm must repeatedly select subsets of individuals from new pools of applicants, but has the benefit of being able to compare them to the accumulated data from previous rounds.
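The within-population quality notion in the abstract can be illustrated with a small sketch. This is a hypothetical illustration, not the paper's algorithm: it estimates each candidate's quality as their empirical CDF value inside their own population, pools candidates across populations on that common scale, and greedily selects the top k. The population names and score distributions are made up for the example.

```python
# Hypothetical sketch (not the paper's algorithm): rank each candidate by their
# empirical within-population CDF value, then select the top-k across populations.
import random

def empirical_cdf(scores):
    """Empirical CDF value of each score within its own population:
    the fraction of that population scoring at or below it."""
    n = len(scores)
    return [sum(t <= s for t in scores) / n for s in scores]

def select_top_k(populations, k):
    """Pool candidates from all populations on the shared quantile scale
    and return the k with the highest estimated quality."""
    pool = []
    for name, scores in populations.items():
        for s, q in zip(scores, empirical_cdf(scores)):
            pool.append((q, name, s))
    pool.sort(reverse=True)  # highest estimated quality first
    return pool[:k]

random.seed(0)
# Three populations with incomparable raw-score scales, as in the abstract.
populations = {
    "soccer":  [random.gauss(0, 1) for _ in range(50)],
    "math":    [random.gauss(10, 3) for _ in range(50)],
    "singing": [random.uniform(0, 100) for _ in range(50)],
}
chosen = select_top_k(populations, 6)
for q, name, s in chosen:
    print(f"{name:8s} quality={q:.2f} raw={s:.2f}")
```

Because each population contributes the same quantile sequence (1.00, 0.98, ...), selecting the top 6 here takes the top two candidates from each population, regardless of their incomparable raw scores. The regret studied in the paper arises because these empirical CDF values only approximate the true ones.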
Author Information
Michael Kearns (University of Pennsylvania)
Aaron Roth (University of Pennsylvania)
Steven Wu (Microsoft Research & U. of Pennsylvania)
Related Events (a corresponding poster, oral, or spotlight)
- 2017 Poster: Meritocratic Fairness for Cross-Population Selection »
  Wed. Aug 9th, 08:30 AM -- 12:00 PM, Room Gallery #3
More from the Same Authors
- 2021 : Adaptive Machine Unlearning »
  Varun Gupta · Christopher Jung · Seth Neel · Aaron Roth · Saeed Sharifi-Malvajerdi · Chris Waites
- 2022 : Individually Fair Learning with One-Sided Feedback »
  Yahav Bechavod · Aaron Roth
- 2023 Poster: Characterizing Multicalibration via Property Elicitation »
  Georgy Noarov · Aaron Roth
- 2023 Poster: Individually Fair Learning with One-Sided Feedback »
  Yahav Bechavod · Aaron Roth
- 2023 Poster: Multicalibration as Boosting for Regression »
  Ira Globus-Harris · Declan Harrison · Michael Kearns · Aaron Roth · Jessica Sorrell
- 2023 Oral: Multicalibration as Boosting for Regression »
  Ira Globus-Harris · Declan Harrison · Michael Kearns · Aaron Roth · Jessica Sorrell
- 2021 Poster: Differentially Private Query Release Through Adaptive Projection »
  Sergul Aydore · William Brown · Michael Kearns · Krishnaram Kenthapadi · Luca Melis · Aaron Roth · Ankit Siva
- 2021 Oral: Differentially Private Query Release Through Adaptive Projection »
  Sergul Aydore · William Brown · Michael Kearns · Krishnaram Kenthapadi · Luca Melis · Aaron Roth · Ankit Siva
- 2019 Poster: Differentially Private Fair Learning »
  Matthew Jagielski · Michael Kearns · Jieming Mao · Alina Oprea · Aaron Roth · Saeed Sharifi-Malvajerdi · Jonathan Ullman
- 2019 Oral: Differentially Private Fair Learning »
  Matthew Jagielski · Michael Kearns · Jieming Mao · Alina Oprea · Aaron Roth · Saeed Sharifi-Malvajerdi · Jonathan Ullman
- 2018 Poster: Preventing Fairness Gerrymandering: Auditing and Learning for Subgroup Fairness »
  Michael Kearns · Seth Neel · Aaron Roth · Steven Wu
- 2018 Oral: Preventing Fairness Gerrymandering: Auditing and Learning for Subgroup Fairness »
  Michael Kearns · Seth Neel · Aaron Roth · Steven Wu
- 2018 Poster: Mitigating Bias in Adaptive Data Gathering via Differential Privacy »
  Seth Neel · Aaron Roth
- 2018 Oral: Mitigating Bias in Adaptive Data Gathering via Differential Privacy »
  Seth Neel · Aaron Roth
- 2017 Poster: Fairness in Reinforcement Learning »
  Shahin Jabbari · Matthew Joseph · Michael Kearns · Jamie Morgenstern · Aaron Roth
- 2017 Talk: Fairness in Reinforcement Learning »
  Shahin Jabbari · Matthew Joseph · Michael Kearns · Jamie Morgenstern · Aaron Roth