We introduce Social Norm Bias (SNoB), a subtle but consequential type of discrimination that may be exhibited by machine learning classification algorithms, even when these systems achieve group fairness objectives. This work illuminates the gap between definitions of algorithmic group fairness and concerns of harm based on adherence to social norms. We study this issue through the lens of gender bias in occupation classification from online biographies. We quantify SNoB by measuring how an algorithm's predictions are associated with masculine and feminine gender norms. This framework reveals that for classification tasks related to male-dominated occupations, fairness-aware classifiers favor biographies whose language aligns with masculine gender norms. We compare SNoB across fairness intervention techniques, finding that post-processing interventions do not mitigate this bias at all.
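The abstract's core measurement idea, quantifying SNoB as the association between a classifier's predictions and how strongly each biography's language aligns with masculine gender norms, can be sketched as below. This is an illustrative toy, not the paper's actual code: the function names (`snob_association`) and the use of a plain Pearson correlation as the association measure are assumptions for demonstration.

```python
# Hedged sketch of the SNoB measurement idea: for a given occupation,
# correlate a classifier's predicted scores with a per-biography
# "masculine-norm alignment" score. Names and the choice of Pearson
# correlation are illustrative assumptions, not the paper's exact metric.

def pearson(xs, ys):
    """Plain Pearson correlation coefficient for two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def snob_association(classifier_scores, norm_scores):
    """Association between predictions and masculine-norm alignment.

    A positive value suggests the classifier favors biographies whose
    language aligns with masculine gender norms (illustrative proxy).
    """
    return pearson(classifier_scores, norm_scores)

# Toy example: predictions that rise monotonically with norm scores
# yield a perfect positive association.
scores = [0.2, 0.4, 0.6, 0.9]   # hypothetical classifier outputs
norms = [0.1, 0.3, 0.5, 0.8]    # hypothetical norm-alignment scores
print(round(snob_association(scores, norms), 3))  # → 1.0
```

In the paper's setting, one would compute such an association per occupation and compare it across fairness intervention techniques (e.g. pre-, in-, and post-processing) to see which, if any, reduce it.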
Author Information
Myra Cheng (California Institute of Technology)
Maria De-Arteaga (University of Texas at Austin)
Lester Mackey (Microsoft Research)
Adam Tauman Kalai (Microsoft Research)
More from the Same Authors
- 2021: Are You Man Enough? Even Fair Algorithms Conform to Societal Norms
  Myra Cheng · Maria De-Arteaga · Lester Mackey · Adam Tauman Kalai
- 2023: Mitigating Label Bias via Decoupled Confident Learning
  Yunyi Li · Maria De-Arteaga · Maytal Saar-Tsechansky
- 2023: Adaptive Bias Correction for Improved Subseasonal Forecasting
  Soukayna Mouatadid · Paulo Orenstein · Genevieve Flaspohler · Judah Cohen · Miruna Oprescu · Ernest Fraenkel · Lester Mackey
- 2023 Oral: Using Large Language Models to Simulate Multiple Humans and Replicate Human Subject Studies
  Gati Aher · Rosa I. Arriaga · Adam Tauman Kalai
- 2023 Poster: Using Large Language Models to Simulate Multiple Humans and Replicate Human Subject Studies
  Gati Aher · Rosa I. Arriaga · Adam Tauman Kalai
- 2022 Workshop: Workshop on Human-Machine Collaboration and Teaming
  Umang Bhatt · Katie Collins · Maria De-Arteaga · Bradley Love · Adrian Weller
- 2022 Poster: Scalable Spike-and-Slab
  Niloy Biswas · Lester Mackey · Xiao-Li Meng
- 2022 Spotlight: Scalable Spike-and-Slab
  Niloy Biswas · Lester Mackey · Xiao-Li Meng
- 2022: Three Frontiers of Responsible Machine Learning
  Maria De-Arteaga
- 2021: Lester Mackey: Online Learning with Optimism and Delay
  Lester Mackey
- 2021 Spotlight: SNoB: Social Norm Bias of “Fair” Algorithms
  Myra Cheng
- 2019 Workshop: Stein’s Method for Machine Learning and Statistics
  Francois-Xavier Briol · Lester Mackey · Chris Oates · Qiang Liu · Larry Goldstein
- 2019 Poster: Stein Point Markov Chain Monte Carlo
  Wilson Ye Chen · Alessandro Barp · Francois-Xavier Briol · Jackson Gorham · Mark Girolami · Lester Mackey · Chris Oates
- 2019 Oral: Stein Point Markov Chain Monte Carlo
  Wilson Ye Chen · Alessandro Barp · Francois-Xavier Briol · Jackson Gorham · Mark Girolami · Lester Mackey · Chris Oates
- 2018 Poster: Accurate Inference for Adaptive Linear Models
  Yash Deshpande · Lester Mackey · Vasilis Syrgkanis · Matt Taddy
- 2018 Poster: Stein Points
  Wilson Ye Chen · Lester Mackey · Jackson Gorham · Francois-Xavier Briol · Chris J Oates
- 2018 Poster: Orthogonal Machine Learning: Power and Limitations
  Ilias Zadik · Lester Mackey · Vasilis Syrgkanis
- 2018 Oral: Accurate Inference for Adaptive Linear Models
  Yash Deshpande · Lester Mackey · Vasilis Syrgkanis · Matt Taddy
- 2018 Oral: Stein Points
  Wilson Ye Chen · Lester Mackey · Jackson Gorham · Francois-Xavier Briol · Chris J Oates
- 2018 Oral: Orthogonal Machine Learning: Power and Limitations
  Ilias Zadik · Lester Mackey · Vasilis Syrgkanis