We address the problem of algorithmic fairness: ensuring that the outcome of a classifier is not biased towards certain values of sensitive variables such as age, race, or gender. Since common fairness conditions can be expressed as (conditional) independence between variables, we propose to use the Rényi maximum correlation coefficient to generalize fairness measurement to continuous variables. We exploit Witsenhausen's characterization of the Rényi coefficient to derive a differentiable implementation linked to $f$-divergences. This allows us to extend fairness-aware learning to continuous variables through a penalty that upper bounds the coefficient. The penalty can be estimated on mini-batches, making it compatible with deep networks. Experiments show a favorable comparison to the state of the art on binary variables and demonstrate the ability to protect continuous attributes.
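The penalty described above can be illustrated with a small sketch. The key fact (via Witsenhausen's characterization) is that the squared Rényi maximum correlation of two variables is upper bounded by the χ² divergence between their joint distribution and the product of their marginals, which can be estimated on a mini-batch. Below is a minimal NumPy sketch of such a plug-in estimator using Gaussian kernel density estimates; the function name, bandwidth, and grid size are illustrative choices, not the paper's actual code.

```python
import numpy as np

def chi2_dependence_penalty(u, v, bw=0.2, n_grid=64):
    """Plug-in KDE estimate of chi^2(P_UV || P_U x P_V) on a mini-batch.

    This chi^2 divergence upper bounds the squared Renyi maximum
    correlation of (U, V), so driving the penalty to zero pushes the
    prediction U toward independence from the sensitive attribute V.
    Illustrative sketch only; bandwidth and grid size are ad hoc.
    """
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    n = len(u)
    gu = np.linspace(u.min() - 3 * bw, u.max() + 3 * bw, n_grid)
    gv = np.linspace(v.min() - 3 * bw, v.max() + 3 * bw, n_grid)
    norm = bw * np.sqrt(2 * np.pi)
    # Gaussian kernels evaluated on each grid, one column per sample
    ku = np.exp(-0.5 * ((gu[:, None] - u[None, :]) / bw) ** 2) / norm
    kv = np.exp(-0.5 * ((gv[:, None] - v[None, :]) / bw) ** 2) / norm
    pu = ku.mean(axis=1)          # marginal density of U on its grid
    pv = kv.mean(axis=1)          # marginal density of V on its grid
    puv = ku @ kv.T / n           # joint density on the 2-D grid
    ratio = puv ** 2 / np.clip(pu[:, None] * pv[None, :], 1e-12, None)
    du, dv = gu[1] - gu[0], gv[1] - gv[0]
    # chi^2 divergence: integral of p_uv^2 / (p_u p_v) minus 1;
    # zero iff the KDE joint factorizes into its marginals
    return ratio.sum() * du * dv - 1.0
```

In a training loop, one would compute this quantity on each mini-batch between the model's continuous output and the sensitive attribute, and add it (weighted) to the task loss; since the estimator is smooth in `u`, an autodiff framework can backpropagate through it.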
Jérémie Mary (Criteo AI Lab)
Clément Calauzènes (Criteo AI Lab)
Noureddine El Karoui (Criteo AI Lab and UC Berkeley)
Related Events (a corresponding poster, oral, or spotlight)
2019 Poster: Fairness-Aware Learning for Continuous Attributes and Treatments »
Thu Jun 13, 06:30 -- 09:00 PM, Pacific Ballroom