Mitigating Gender Bias in Face Recognition using the von Mises-Fisher Mixture Model
Jean-Rémy Conti · Nathan Noiry · Stephan Clémençon · Vincent Despiegel · Stéphane Gentric

Thu Jul 21 03:00 PM -- 05:00 PM (PDT) @ Hall E #1106

Despite the high performance and reliability of deep learning algorithms in a wide range of everyday applications, many investigations show that numerous models exhibit biases, discriminating against specific subgroups of the population (e.g., gender, ethnicity). This urges practitioners to develop fair systems with uniform or comparable performance across sensitive groups. In this work, we investigate the gender bias of deep Face Recognition networks. To measure this bias, we introduce two new metrics, BFAR and BFRR, that better reflect the inherent deployment needs of Face Recognition systems. Motivated by geometric considerations, we mitigate gender bias through a new post-processing methodology that transforms the deep embeddings of a pre-trained model to give more representation power to discriminated subgroups. It consists of training a shallow neural network by minimizing a Fair von Mises-Fisher loss whose hyperparameters account for the intra-class variance of each gender. Interestingly, we empirically observe that these hyperparameters are correlated with our fairness metrics. Indeed, extensive numerical experiments on a variety of datasets show that a careful selection of these hyperparameters significantly reduces gender bias.
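The abstract does not spell out the form of the Fair von Mises-Fisher loss, but its description (a classification objective over unit-normalized embeddings, with a per-gender concentration hyperparameter controlling intra-class variance) suggests a sketch along the following lines. Everything below is an assumption for illustration, not the authors' exact formulation: the class name FairVonMisesFisherLoss, the per-identity gender labels class_genders, and the two concentration values kappa_female and kappa_male are hypothetical, and the vMF normalizing constants are omitted.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class FairVonMisesFisherLoss(nn.Module):
        # Hypothetical sketch: cross-entropy over vMF-style logits
        # kappa_g * <mu_k, z>, where kappa_g depends on the gender of
        # each identity class (normalizing constants omitted).
        def __init__(self, num_classes, embed_dim, class_genders,
                     kappa_female, kappa_male):
            super().__init__()
            # one learnable prototype direction per identity class
            self.prototypes = nn.Parameter(torch.randn(num_classes, embed_dim))
            # 0/1 gender label for each identity class (assumed given)
            self.register_buffer("class_genders", class_genders)
            self.register_buffer(
                "kappas", torch.tensor([kappa_female, kappa_male]))

        def forward(self, embeddings, labels):
            z = F.normalize(embeddings, dim=1)        # unit-norm embeddings
            mu = F.normalize(self.prototypes, dim=1)  # unit-norm prototypes
            kappa = self.kappas[self.class_genders]   # per-class concentration
            logits = kappa * (z @ mu.t())             # scaled cosine similarities
            return F.cross_entropy(logits, labels)

Under this reading, the shallow network mentioned in the abstract would map frozen pre-trained embeddings to the transformed representation fed into this loss, and the pair (kappa_female, kappa_male) would be swept over a grid while monitoring BFAR and BFRR to select the fairest operating point.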

Author Information

Jean-Rémy Conti (Télécom Paris / Idemia)
Nathan Noiry (Télécom Paris)
Stephan Clémençon (Télécom ParisTech)
Vincent Despiegel (Idemia)
Stéphane Gentric (IDEMIA)
