Poster in Workshop: Humans, Algorithmic Decision-Making and Society: Modeling Interactions and Impact
Uncertainty-Aware Fair Regularization Under Datasets With Incomplete Sensitive Information
Andreas Athanasopoulos · Christos Dimitrakakis
We consider the challenge of algorithmic fairness for datasets with partially annotated sensitive information. Many existing methods simply use imputation models to infer the missing sensitive attributes as a preprocessing step. We argue that the inherent uncertainty in imputation significantly influences the bias-mitigation process, particularly in scenarios with limited annotations. We adopt a Bayesian viewpoint and propose two methods based on common fairness metrics. The first minimises the expected deviation from fairness under the current belief. The second instead uses the epistemic value-at-risk in order to make the algorithm's fairness properties more robust. In practice, we implement this approach through an ensemble of neural networks. The results show that explicitly incorporating uncertainty about both the individual imputed labels and the imputation models leads to significantly improved fairness properties and overall performance.
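The abstract does not include an implementation, but the two regularizers it describes can be illustrated with a short sketch. The PyTorch snippet below is a minimal illustration under stated assumptions, not the authors' code: it assumes demographic parity as the fairness metric, a binary sensitive attribute, and a Bayesian belief represented by an ensemble of M imputation models, each producing per-example probabilities P(s=1 | x). All function names and the quantile level `alpha` are hypothetical.

```python
import torch


def dp_gaps(preds: torch.Tensor, sens_probs: torch.Tensor) -> torch.Tensor:
    """Soft demographic-parity gap per ensemble member.

    preds:      (B,) model predictions for a batch.
    sens_probs: (M, B) each row is one imputation model's P(s=1 | x).
    Returns:    (M,) absolute gap between the two soft group means.
    """
    eps = 1e-8
    w1 = sens_probs            # soft membership weights for group s=1
    w0 = 1.0 - sens_probs      # soft membership weights for group s=0
    mean1 = (w1 * preds).sum(dim=1) / (w1.sum(dim=1) + eps)
    mean0 = (w0 * preds).sum(dim=1) / (w0.sum(dim=1) + eps)
    return (mean1 - mean0).abs()


def expected_penalty(preds: torch.Tensor, sens_probs: torch.Tensor) -> torch.Tensor:
    """Method 1: expected fairness deviation under the current belief,
    approximated by averaging the gap over the imputation ensemble."""
    return dp_gaps(preds, sens_probs).mean()


def var_penalty(preds: torch.Tensor, sens_probs: torch.Tensor,
                alpha: float = 0.9) -> torch.Tensor:
    """Method 2: epistemic value-at-risk, the alpha-quantile of the
    per-model gaps, penalising the worst plausible imputations."""
    return torch.quantile(dp_gaps(preds, sens_probs), alpha)


# Usage sketch: add the chosen penalty to the task loss with weight lam.
# loss = task_loss(preds, targets) + lam * var_penalty(preds, sens_probs)
```

Averaging the gap over the ensemble corresponds to the expected-deviation objective, while taking an upper quantile penalises the worst plausible imputations, which is what makes the second criterion the more robust of the two.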