
Workshop: Workshop on Socially Responsible Machine Learning

Adversarial Stacked Auto-Encoders for Fair Representation Learning

Patrik Joslin Kenfack · Adil Khan · Rasheed Hussain


Training machine learning models with the sole goal of maximizing accuracy can result in learning biases from the data, making the learned model discriminatory towards certain groups. One approach to mitigating this problem is fair representation learning: finding a representation that is more likely to yield fair outcomes. In this paper, we propose a new fair representation learning approach that leverages different levels of representation of the data to tighten the fairness bounds of the learned representation. Our results show that stacking auto-encoders and enforcing fairness at different latent spaces improves fairness compared to existing approaches.
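The core idea can be illustrated with a minimal sketch: a stack of encoders, each producing a successively lower-dimensional latent space, with an adversary at every level that tries to predict the sensitive attribute from that latent. The sum of adversary losses serves as the fairness signal across all levels. This is an assumed toy setup (random encoder weights, a logistic adversary, NumPy only), not the authors' exact architecture or training procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_encoder(in_dim, out_dim):
    """Random linear layer with tanh activation (illustrative only)."""
    W = rng.normal(0, 0.1, (in_dim, out_dim))
    return lambda x: np.tanh(x @ W)

# Stacked auto-encoder latent dimensions: 8 -> 6 -> 4 -> 2.
dims = [8, 6, 4, 2]
encoders = [make_encoder(dims[i], dims[i + 1]) for i in range(len(dims) - 1)]

def adversary_loss(z, s):
    """Train a logistic adversary to predict sensitive attribute s from
    latent z; a few gradient steps stand in for full adversarial training.
    Low cross-entropy means the adversary recovers s, i.e. less fair."""
    w = np.zeros(z.shape[1])
    for _ in range(100):
        p = 1 / (1 + np.exp(-(z @ w)))
        w -= 0.1 * z.T @ (p - s) / len(s)
    p = 1 / (1 + np.exp(-(z @ w)))
    return -np.mean(s * np.log(p + 1e-9) + (1 - s) * np.log(1 - p + 1e-9))

x = rng.normal(size=(64, 8))         # toy inputs
s = rng.integers(0, 2, size=64)      # toy binary sensitive attribute

# Enforce fairness at every latent space: accumulate adversary losses
# level by level as the input passes through the encoder stack.
fairness_penalty = 0.0
z = x
for enc in encoders:
    z = enc(z)
    fairness_penalty += adversary_loss(z, s)

print(f"summed adversary loss across levels: {fairness_penalty:.3f}")
```

In a full adversarial training loop, the encoders would be updated to *maximize* these adversary losses (making the sensitive attribute unrecoverable at every level) while a decoder and task head preserve reconstruction and predictive accuracy; here only the multi-level fairness penalty is shown.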
