

Poster

Two Simple Ways to Learn Individual Fairness Metrics from Data

Debarghya Mukherjee · Mikhail Yurochkin · Moulinath Banerjee · Yuekai Sun

Keywords: [ Metric Learning ] [ Fairness, Equity and Justice ] [ Fairness, Equity, Justice, and Safety ]


Abstract:

Individual fairness is an intuitive definition of algorithmic fairness that addresses some of the drawbacks of group fairness. Despite its benefits, it depends on a task-specific fair metric that encodes our intuition of what is fair and unfair for the ML task at hand, and the lack of a widely accepted fair metric for many ML tasks is the main barrier to broader adoption of individual fairness. In this paper, we present two simple ways to learn fair metrics from a variety of data types. We show empirically that fair training with the learned metrics leads to improved fairness on three machine learning tasks susceptible to gender and racial biases. We also provide theoretical guarantees on the statistical performance of both approaches.
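To make the idea of a learned fair metric concrete, below is a minimal illustrative sketch, not the paper's two algorithms: it assumes a Mahalanobis metric of the form d(x, x')^2 = (x - x')^T Sigma (x - x') and learns Sigma by estimating a linear "sensitive direction" from labeled data and discounting distances along it. The function names and the eps parameter are hypothetical choices for this sketch.

```python
import numpy as np

def learn_fair_metric(X, sensitive, eps=0.01):
    """Illustrative sketch (not the authors' methods): fit a Mahalanobis
    fair metric that discounts the direction most predictive of a
    sensitive attribute.

    X         : (n, d) array of feature embeddings
    sensitive : (n,) array of sensitive-attribute labels (e.g., 0/1)
    eps       : residual weight kept along the sensitive direction
    """
    # Estimate a "sensitive direction" by least-squares regression of the
    # (centered) sensitive attribute on the (centered) features.
    w, *_ = np.linalg.lstsq(X - X.mean(0), sensitive - sensitive.mean(), rcond=None)
    w = w / np.linalg.norm(w)
    # Sigma nearly projects out the sensitive direction, so moving along it
    # barely changes the distance between two individuals.
    Sigma = np.eye(X.shape[1]) - (1.0 - eps) * np.outer(w, w)
    return Sigma

def fair_distance(Sigma, x1, x2):
    """Mahalanobis distance under the learned fair metric."""
    diff = x1 - x2
    return float(np.sqrt(diff @ Sigma @ diff))
```

Under such a metric, two individuals who differ mainly along the sensitive direction are treated as close, which is what downstream individually fair training would then enforce.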
