Poster in Workshop: Next Generation of AI Safety
Fairness through partial awareness: Evaluation of the addition of demographic information for bias mitigation methods
Chung Peng Lee · Rachel Hong · Jamie Morgenstern
Keywords: [ Proxy Fairness ] [ Data Scarcity ] [ Machine Learning Fairness ] [ Bias Mitigation ] [ Robustness ]
Models that effectively mitigate demographic biases have been explored in two common settings: either requiring full access to demographic information during training or omitting demographic information for legal or privacy reasons. Yet in practice, data may be collected in stages or composed from different sources, so data access is often more flexible than these two extremes of complete access to, or complete absence of, demographic annotations. We investigate the fairness impact of disclosing more demographic information and find that demographic-unaware methods come at a clear cost on certain fairness metrics compared with demographic-aware methods. We then empirically show the benefits of a partially-demographic-aware setup: collecting only a small number of new samples (0.1% of the full set) with demographics for an over-parameterized model can substantially reduce this cost (40% gain in worst-group accuracy). Our findings illustrate that simple data collection efforts may effectively close fairness gaps for models trained on data without demographic information.