Bias in machine learning has manifested as injustice in several areas, such as medicine, hiring, and criminal justice. In response, computer scientists have developed myriad definitions of fairness to correct this bias in fielded algorithms. While some definitions are based on established legal and ethical norms, others are largely mathematical. It is unclear whether the general public agrees with these fairness definitions, and perhaps more importantly, whether they understand them. We take initial steps toward bridging this gap between ML researchers and the public by addressing the question: does a lay audience understand a basic definition of ML fairness? We develop a metric to measure comprehension of three such definitions: demographic parity, equal opportunity, and equalized odds. We evaluate this metric using an online survey, and investigate the relationship between comprehension and sentiment, demographics, and the definition itself.
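To make the three definitions named above concrete, the following is a minimal sketch of each as a group-wise rate comparison, assuming a binary classifier, binary labels, and two groups "A" and "B". The function and variable names (`fairness_gaps`, `y_true`, `y_pred`, `group`) are illustrative, not taken from the paper.

```python
# Sketch of the three fairness definitions as group-wise rate gaps.
# A gap of 0 means the corresponding definition is exactly satisfied.

def rate(records, select, outcome):
    """Fraction of selected records for which the outcome holds."""
    chosen = [r for r in records if select(r)]
    if not chosen:
        return 0.0
    return sum(1 for r in chosen if outcome(r)) / len(chosen)

def fairness_gaps(y_true, y_pred, group):
    """Absolute between-group gap for each fairness definition."""
    data = list(zip(y_true, y_pred, group))
    def pos_rate(g):  # P(Yhat=1 | A=g): demographic parity compares these
        return rate(data, lambda r: r[2] == g, lambda r: r[1] == 1)
    def tpr(g):       # P(Yhat=1 | Y=1, A=g): equal opportunity compares these
        return rate(data, lambda r: r[2] == g and r[0] == 1, lambda r: r[1] == 1)
    def fpr(g):       # P(Yhat=1 | Y=0, A=g): needed for equalized odds
        return rate(data, lambda r: r[2] == g and r[0] == 0, lambda r: r[1] == 1)
    return {
        "demographic_parity": abs(pos_rate("A") - pos_rate("B")),
        "equal_opportunity":  abs(tpr("A") - tpr("B")),
        # Equalized odds requires both TPR and FPR to match across groups.
        "equalized_odds":     max(abs(tpr("A") - tpr("B")),
                                  abs(fpr("A") - fpr("B"))),
    }
```

For example, a classifier whose predictions exactly match the labels, with identical label distributions in both groups, yields a gap of 0 under all three definitions.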
Author Information
Debjani Saha (University of Maryland)
Candice Schumann (University of Maryland)
Duncan McElfresh (University of Maryland)
John P Dickerson (University of Maryland)
Michelle Mazurek (University of Maryland)
Michael Tschantz (International Computer Science Institute)