

Poster in Workshop: Machine Learning for Data: Automated Creation, Privacy, Bias

Measuring Fairness in Generative Models

Christopher Teo · Ngai-Man Cheung


Abstract:

Deep generative models have made substantial progress in training stability and in the quality of generated data. Recently, there has been growing interest in the fairness of deep generative models and the data they produce. Fairness is important in many applications, e.g., law enforcement. Central to fair data generation are fairness metrics for assessing and comparing different generative models. In this paper, we first review fairness metrics proposed in previous works and highlight their potential weaknesses. We then discuss a performance benchmark framework along with an assessment of alternative metrics.
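For concreteness, below is a minimal Python sketch of one representative metric of the kind reviewed here: a fairness discrepancy score measuring how far the sensitive-attribute distribution of generated samples deviates from a uniform reference. The use of a pretrained attribute classifier, the L2 distance, and the example numbers are illustrative assumptions for this sketch, not the paper's own implementation.

```python
import numpy as np

def fairness_discrepancy(attr_preds: np.ndarray, n_classes: int) -> float:
    """L2 distance between the attribute distribution of generated
    samples and a uniform reference distribution.

    attr_preds: hard sensitive-attribute labels for generated samples,
                as predicted by a (hypothetical) pretrained classifier.
    """
    # Empirical distribution over attribute classes in the generated data.
    counts = np.bincount(attr_preds, minlength=n_classes)
    p_hat = counts / counts.sum()
    # Uniform reference: a perfectly fair generator produces every
    # attribute class equally often.
    p_ref = np.full(n_classes, 1.0 / n_classes)
    return float(np.linalg.norm(p_hat - p_ref))

# Illustrative usage: a binary attribute over 10,000 generated samples
# from a generator biased 70/30 toward one class.
rng = np.random.default_rng(0)
preds = rng.choice(2, size=10_000, p=[0.7, 0.3])
print(fairness_discrepancy(preds, n_classes=2))  # roughly 0.28
```

A score of 0 indicates the generated attribute distribution matches the uniform reference exactly; one weakness such a metric inherits, and which motivates the comparison of alternatives, is its dependence on the accuracy of the attribute classifier used to label the generated samples.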
