

Poster in Workshop: Machine Learning for Data: Automated Creation, Privacy, Bias

Adversarial Stacked Auto-Encoders for Fair Representation Learning

Patrik Joslin Kenfack · Adil Khan · Rasheed Hussain


Abstract:

Training machine learning models with the sole goal of maximizing accuracy can result in learning biases from the data, making the learned model discriminatory towards certain groups. One approach to mitigating this problem is fair representation learning, which seeks a representation that is more likely to yield fair outcomes. In this paper, we propose a new fair representation learning approach that leverages different levels of representation of the data to tighten the fairness bounds of the learned representation. Our results show that stacking different auto-encoders and enforcing fairness at different latent spaces improves fairness compared to other existing approaches.
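A minimal sketch of the general idea described in the abstract, not the authors' implementation: two stacked auto-encoders, each with an adversary that tries to predict the sensitive attribute from that level's latent code, while the encoders are trained to reconstruct their inputs and fool the adversaries. All layer sizes, loss weights, and the trade-off parameter below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    """One auto-encoder level: encoder to a latent code, decoder back to its input."""
    def __init__(self, in_dim, latent_dim):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, latent_dim), nn.ReLU())
        self.decoder = nn.Linear(latent_dim, in_dim)

    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z)

# Two stacked auto-encoders and one adversary per latent space (sizes are assumptions).
ae1, ae2 = AutoEncoder(20, 10), AutoEncoder(10, 5)
adv1 = nn.Linear(10, 1)   # predicts the sensitive attribute from the first latent code
adv2 = nn.Linear(5, 1)    # predicts the sensitive attribute from the second latent code

opt_ae = torch.optim.Adam(list(ae1.parameters()) + list(ae2.parameters()), lr=1e-3)
opt_adv = torch.optim.Adam(list(adv1.parameters()) + list(adv2.parameters()), lr=1e-3)
bce, mse = nn.BCEWithLogitsLoss(), nn.MSELoss()
lam = 1.0  # fairness/reconstruction trade-off weight (assumed)

x = torch.randn(64, 20)                    # toy feature batch
s = torch.randint(0, 2, (64, 1)).float()   # toy binary sensitive attribute

for step in range(100):
    # 1) Train the adversaries to predict s from the (detached) latent codes.
    z1, _ = ae1(x)
    z2, _ = ae2(z1)
    adv_loss = bce(adv1(z1.detach()), s) + bce(adv2(z2.detach()), s)
    opt_adv.zero_grad(); adv_loss.backward(); opt_adv.step()

    # 2) Train the stacked auto-encoders to reconstruct while fooling the adversaries,
    #    enforcing fairness at both latent spaces.
    z1, x_hat = ae1(x)
    z2, z1_hat = ae2(z1)
    rec_loss = mse(x_hat, x) + mse(z1_hat, z1)
    fair_loss = bce(adv1(z1), s) + bce(adv2(z2), s)
    ae_loss = rec_loss - lam * fair_loss   # encoders maximize the adversaries' error
    opt_ae.zero_grad(); ae_loss.backward(); opt_ae.step()
```

The sketch only illustrates the minimax structure at multiple levels of representation; the paper's actual architecture, objective, and fairness bounds may differ.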
