

Morning Poster in Workshop: Artificial Intelligence & Human Computer Interaction

Mitigating Label Bias via Decoupled Confident Learning

Yunyi Li · Maria De-Arteaga · Maytal Saar-Tsechansky


Abstract:

There has been growing attention to algorithmic fairness in contexts where ML algorithms inform consequential decisions for humans. This has led to a surge in methodologies to mitigate algorithmic bias. However, such methodologies largely assume that observed labels in training data are correct. Yet, bias in labels is pervasive across important domains, including healthcare, hiring, and social media content moderation. In particular, human-generated labels are prone to encoding societal biases. While the presence of labeling bias has been discussed conceptually, there is a lack of methodologies to address this problem. We propose a pruning method, Decoupled Confident Learning (DeCoLe), specifically designed to mitigate label bias. After illustrating its performance on a synthetic dataset, we apply DeCoLe in the context of hate speech detection, where label bias has been recognized as an important challenge, and show that it successfully identifies biased labels and outperforms competing approaches.
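As a rough illustration of the idea described above, the sketch below applies a confident-learning-style pruning rule separately within each group, so that per-class confidence thresholds are estimated per group rather than globally. It assumes out-of-sample predicted probabilities from some base classifier; the function names, the threshold rule, and the grouping interface are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def confident_learning_prune(labels, pred_probs):
    """Flag likely label errors via confident-learning-style thresholds.

    labels:     (n,) int array of observed class labels.
    pred_probs: (n, k) out-of-sample predicted class probabilities.
    Returns a boolean mask, True where the observed label is suspect.
    """
    n, k = pred_probs.shape
    # Per-class confidence threshold: mean self-confidence among
    # examples that carry that label (inf if the class is absent).
    thresholds = np.array([
        pred_probs[labels == j, j].mean() if np.any(labels == j) else np.inf
        for j in range(k)
    ])
    # Indicator of classes whose predicted probability clears their threshold.
    above = pred_probs >= thresholds  # shape (n, k)
    # Most confident class among those above threshold; if no class clears
    # its threshold, fall back to the observed label (no flag).
    confident_class = np.where(
        above.any(axis=1),
        np.argmax(np.where(above, pred_probs, -np.inf), axis=1),
        labels,
    )
    return confident_class != labels

def decoupled_prune(labels, pred_probs, groups):
    """Apply the pruning rule separately within each group, so that
    group-specific error patterns set group-specific thresholds."""
    suspect = np.zeros(len(labels), dtype=bool)
    for g in np.unique(groups):
        idx = groups == g
        suspect[idx] = confident_learning_prune(labels[idx], pred_probs[idx])
    return suspect

if __name__ == "__main__":
    # Tiny synthetic smoke test with random labels, groups, and probabilities.
    rng = np.random.default_rng(0)
    n = 8
    labels = rng.integers(0, 2, size=n)
    groups = rng.integers(0, 2, size=n)
    pred_probs = rng.dirichlet([1.0, 1.0], size=n)
    print(decoupled_prune(labels, pred_probs, groups))
```

Given labels, out-of-sample pred_probs, and a group indicator, decoupled_prune returns a mask of suspected mislabeled examples that could be pruned before retraining; the decoupling matters because a single global threshold would let the majority group's error rate dominate what counts as a confident label.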
