Differential Privacy has Bounded Impact on Fairness in Classification

Paul Mangold · Michaël Perrot · Aurélien Bellet · Marc Tommasi

Exhibit Hall 1 #312


We theoretically study the impact of differential privacy on fairness in classification. We prove that, given a class of models, popular group fairness measures are pointwise Lipschitz-continuous with respect to the parameters of the model. This result is a consequence of a more general statement on accuracy conditioned on an arbitrary event (such as membership in a sensitive group), which may be of independent interest. We use this Lipschitz property to prove a non-asymptotic bound showing that, as the number of samples increases, the fairness level of private models gets closer to that of their non-private counterparts. This bound also highlights the importance of the confidence margin of a model on the disparate impact of differential privacy.
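The stability phenomenon the abstract describes can be illustrated empirically. The sketch below (not code from the paper; the data, model, and perturbation scale are hypothetical) evaluates the demographic parity gap of a linear classifier before and after a small parameter perturbation standing in for differential-privacy noise, and checks that the two nearby models attain similar fairness levels:

```python
# Illustrative sketch only: demographic parity gap of a linear classifier
# under a small parameter perturbation mimicking DP noise. All names and
# the synthetic data are assumptions for illustration, not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic features and a binary sensitive attribute.
n, d = 5000, 5
X = rng.normal(size=(n, d))
s = rng.integers(0, 2, size=n)  # sensitive group membership (0 or 1)

def demographic_parity_gap(theta):
    """|P(f(x)=1 | s=0) - P(f(x)=1 | s=1)| for the model f(x) = 1[x . theta > 0]."""
    preds = (X @ theta > 0).astype(float)
    return abs(preds[s == 0].mean() - preds[s == 1].mean())

theta = rng.normal(size=d)                       # "non-private" parameters
theta_priv = theta + 0.01 * rng.normal(size=d)   # small DP-like perturbation

gap = demographic_parity_gap(theta)
gap_priv = demographic_parity_gap(theta_priv)
print(abs(gap - gap_priv))  # small: fairness levels of nearby models stay close
```

On this synthetic data the change in the fairness gap is tiny, consistent with the Lipschitz-style bound: models with close parameters have close group fairness levels.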
