We consider an online learning problem with one-sided feedback, in which the learner is able to observe the true label only for positively predicted instances. On each round, k instances arrive and receive classification outcomes according to a randomized policy deployed by the learner, whose goal is to maximize accuracy while deploying individually fair policies. We first extend the framework of Bechavod et al. (2020), which relies on the existence of a human fairness auditor for detecting fairness violations, to instead incorporate feedback from dynamically-selected panels of multiple, possibly inconsistent, auditors. We then construct an efficient reduction from our problem of online learning with one-sided feedback and a panel reporting fairness violations to the contextual combinatorial semi-bandit problem (Cesa-Bianchi & Lugosi, 2009; György et al., 2007). Finally, we show how to leverage the guarantees of two algorithms in the contextual combinatorial semi-bandit setting, Exp2 (Bubeck et al., 2012) and the oracle-efficient Context-Semi-Bandit-FTPL (Syrgkanis et al., 2016), to provide multi-criteria no-regret guarantees simultaneously for accuracy and fairness. Our results eliminate two potential sources of bias from prior work: the "hidden outcomes" that are not available to an algorithm operating in the full-information setting, and the human biases that might be present in any single human auditor but can be mitigated by selecting a well-chosen panel.
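To make the interaction protocol described above concrete, here is a minimal sketch (in Python) of one plausible reading of the setting: on each round k instances arrive, a randomized policy classifies them, true labels are revealed only for positively predicted instances, and a dynamically selected panel of auditors reports perceived individual-fairness violations. All names here (Auditor, randomized_policy, panel_size, the similarity rule, and the placeholder update) are illustrative assumptions, not the paper's actual algorithm or API.

```python
# Sketch of the one-sided-feedback interaction with an auditor panel.
# Everything here (Auditor, randomized_policy, the similarity rule, the
# placeholder update) is an illustrative assumption, not the paper's method.
import numpy as np

rng = np.random.default_rng(0)

d, k, T = 5, 10, 100        # feature dimension, instances per round, rounds
panel_size = 3              # auditors consulted on each round


class Auditor:
    """Stylized auditor: flags a pair (i, j) as an individual-fairness
    violation when the two instances look similar under the auditor's own
    metric but received very different probabilities of a positive outcome."""

    def __init__(self, metric_weights, tolerance=0.2):
        self.w = metric_weights
        self.tol = tolerance

    def report_violation(self, X, probs):
        for i in range(len(X)):
            for j in range(i + 1, len(X)):
                perceived_distance = abs(self.w @ (X[i] - X[j]))
                if abs(probs[i] - probs[j]) > perceived_distance + self.tol:
                    return (i, j)       # first perceived violation, if any
        return None


def randomized_policy(theta, X):
    """Logistic scores used as probabilities of predicting positive."""
    return 1.0 / (1.0 + np.exp(-X @ theta))


theta = np.zeros(d)                                  # current policy parameters
hidden_labeler = rng.normal(size=d)                  # unknown labeling process
auditor_pool = [Auditor(rng.normal(size=d)) for _ in range(7)]

for t in range(T):
    X = rng.normal(size=(k, d))                      # k instances arrive
    probs = randomized_policy(theta, X)
    preds = rng.random(k) < probs                    # randomized classification

    # One-sided feedback: true labels observed only for positive predictions.
    true_labels = (X @ hidden_labeler > 0).astype(int)
    observed = {i: true_labels[i] for i in range(k) if preds[i]}

    # A dynamically selected panel of (possibly inconsistent) auditors reviews
    # this round's instances and predicted probabilities.
    panel_idx = rng.choice(len(auditor_pool), size=panel_size, replace=False)
    reports = [auditor_pool[i].report_violation(X, probs) for i in panel_idx]

    # In the paper, (observed, reports) would feed a contextual combinatorial
    # semi-bandit learner (e.g., Exp2 or Context-Semi-Bandit-FTPL); here the
    # update is left as a placeholder.
    # theta = update(theta, X, preds, observed, reports)
```

The toy labeling rule and auditor metric above are stand-ins; the paper treats the auditors' judgments and the reduction to the contextual combinatorial semi-bandit setting formally.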
Author Information
Yahav Bechavod (Hebrew University)
Aaron Roth (University of Pennsylvania)
More from the Same Authors
- 2020: Contributed Talk: Causal Feature Discovery through Strategic Modification
  Yahav Bechavod · Steven Wu · Juba Ziani
- 2021: Adaptive Machine Unlearning
  Varun Gupta · Christopher Jung · Seth Neel · Aaron Roth · Saeed Sharifi-Malvajerdi · Chris Waites
- 2022: Individually Fair Learning with One-Sided Feedback
  Yahav Bechavod · Aaron Roth
- 2023 Oral: Multicalibration as Boosting for Regression
  Ira Globus-Harris · Declan Harrison · Michael Kearns · Aaron Roth · Jessica Sorrell
- 2023 Poster: Individually Fair Learning with One-Sided Feedback
  Yahav Bechavod · Aaron Roth
- 2023 Poster: The Statistical Scope of Multicalibration
  Georgy Noarov · Aaron Roth
- 2023 Poster: Multicalibration as Boosting for Regression
  Ira Globus-Harris · Declan Harrison · Michael Kearns · Aaron Roth · Jessica Sorrell
- 2022: Individually Fair Learning with One-Sided Feedback
  Yahav Bechavod
- 2022 Poster: Information Discrepancy in Strategic Learning
  Yahav Bechavod · Chara Podimata · Steven Wu · Juba Ziani
- 2022 Spotlight: Information Discrepancy in Strategic Learning
  Yahav Bechavod · Chara Podimata · Steven Wu · Juba Ziani
- 2021 Poster: Differentially Private Query Release Through Adaptive Projection
  Sergul Aydore · William Brown · Michael Kearns · Krishnaram Kenthapadi · Luca Melis · Aaron Roth · Ankit Siva
- 2021 Oral: Differentially Private Query Release Through Adaptive Projection
  Sergul Aydore · William Brown · Michael Kearns · Krishnaram Kenthapadi · Luca Melis · Aaron Roth · Ankit Siva
- 2019 Poster: Differentially Private Fair Learning
  Matthew Jagielski · Michael Kearns · Jieming Mao · Alina Oprea · Aaron Roth · Saeed Sharifi-Malvajerdi · Jonathan Ullman
- 2019 Oral: Differentially Private Fair Learning
  Matthew Jagielski · Michael Kearns · Jieming Mao · Alina Oprea · Aaron Roth · Saeed Sharifi-Malvajerdi · Jonathan Ullman
- 2018 Poster: Preventing Fairness Gerrymandering: Auditing and Learning for Subgroup Fairness
  Michael Kearns · Seth Neel · Aaron Roth · Steven Wu
- 2018 Oral: Preventing Fairness Gerrymandering: Auditing and Learning for Subgroup Fairness
  Michael Kearns · Seth Neel · Aaron Roth · Steven Wu
- 2018 Poster: Mitigating Bias in Adaptive Data Gathering via Differential Privacy
  Seth Neel · Aaron Roth
- 2018 Oral: Mitigating Bias in Adaptive Data Gathering via Differential Privacy
  Seth Neel · Aaron Roth
- 2017 Poster: Meritocratic Fairness for Cross-Population Selection
  Michael Kearns · Aaron Roth · Steven Wu
- 2017 Talk: Meritocratic Fairness for Cross-Population Selection
  Michael Kearns · Aaron Roth · Steven Wu
- 2017 Poster: Fairness in Reinforcement Learning
  Shahin Jabbari · Matthew Joseph · Michael Kearns · Jamie Morgenstern · Aaron Roth
- 2017 Talk: Fairness in Reinforcement Learning
  Shahin Jabbari · Matthew Joseph · Michael Kearns · Jamie Morgenstern · Aaron Roth