Robustness to distribution shift and fairness have independently emerged as two important desiderata for modern machine learning models. Here, we discuss connections between the two through a causal lens, focusing on anti-causal prediction tasks, where the input to a classifier (e.g., an image) is assumed to be generated as a function of the target label and the protected attribute. From this perspective, we draw explicit connections between a common fairness criterion---separation---and a common notion of robustness---risk invariance. These connections provide new motivation for applying the separation criterion in anti-causal settings, and show that fairness can be motivated entirely on the basis of achieving better performance. In addition, our findings suggest that robustness-motivated approaches can be used to enforce separation, and that they often work better in practice than methods designed to enforce separation directly. Using a medical dataset, we empirically validate our findings on the task of detecting pneumonia from X-rays, in a setting where differences in prevalence across sex groups motivate a fairness mitigation. Our findings highlight the importance of considering causal structure when choosing and enforcing fairness criteria.
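As a toy illustration (not code from the paper), the separation criterion requires the prediction to be independent of the protected attribute given the true label; for a binary classifier this reduces to equal true-positive and false-positive rates across groups. The sketch below, with entirely hypothetical data and function names, checks those group-wise rate gaps:

```python
import numpy as np

def separation_gaps(y_true, y_pred, group):
    """Return the TPR and FPR gaps between two groups.

    Separation (Y_hat independent of A given Y) holds, approximately,
    when both gaps are close to zero.
    """
    gaps = {}
    for label, name in [(1, "tpr"), (0, "fpr")]:
        rates = []
        for g in np.unique(group):
            # Positive-prediction rate within this group, conditioned on Y = label
            mask = (group == g) & (y_true == label)
            rates.append(y_pred[mask].mean())
        gaps[name] = abs(rates[0] - rates[1])
    return gaps

# Illustrative data: random labels and a binary protected attribute
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 1000)
a = rng.integers(0, 2, 1000)
yhat = y.copy()  # a perfect classifier trivially satisfies separation

print(separation_gaps(y, yhat, a))  # both gaps are 0.0
```

A robustness-motivated training objective of the kind the abstract alludes to would drive these gaps toward zero indirectly, by encouraging risk invariance across groups, rather than penalizing the gaps themselves.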
Author Information
Maggie Makar (University of Michigan)
Alexander D'Amour (Google Brain)
More from the Same Authors
- 2022 : Causally motivated multi-shortcut identification and removal (Jiayun Zheng · Maggie Makar)
- 2022 : Fairness and robustness in anti-causal prediction (Maggie Makar · Alexander D'Amour)
- 2022 : Leveraging Factored Action Spaces for Efficient Offline Reinforcement Learning in Healthcare (Shengpu Tang · Maggie Makar · Michael Sjoding · Finale Doshi-Velez · Jenna Wiens)
- 2022 : Adapting to Shifts in Latent Confounders via Observed Concepts and Proxies (Matt Kusner · Ibrahim Alabdulmohsin · Stephen Pfohl · Olawale Salaudeen · Arthur Gretton · Sanmi Koyejo · Jessica Schrouff · Alexander D'Amour)
- 2023 Workshop: The Second Workshop on Spurious Correlations, Invariance and Stability (Yoav Wald · Claudia Shi · Aahlad Puli · Amir Feder · Limor Gultchin · Mark Goldstein · Maggie Makar · Victor Veitch · Uri Shalit)
- 2022 Workshop: Spurious correlations, Invariance, and Stability (SCIS) (Aahlad Puli · Maggie Makar · Victor Veitch · Yoav Wald · Mark Goldstein · Limor Gultchin · Angela Zhou · Uri Shalit · Suchi Saria)
- 2021 Workshop: The Neglected Assumptions In Causal Inference (Niki Kilbertus · Lily Hu · Laura Balzer · Uri Shalit · Alexander D'Amour · Razieh Nabi)