

Poster

Generalized Disparate Impact for Configurable Fairness Solutions in ML

Luca Giuliani · Eleonora Misino · Michele Lombardi

Exhibit Hall 1 #322

Abstract:

We make two contributions to the field of AI fairness over continuous protected attributes. First, we show that the Hirschfeld-Gebelein-Renyi (HGR) indicator, currently the only one available for this setting, is valuable but subject to a few crucial limitations regarding semantics, interpretability, and robustness. Second, we introduce a family of indicators that are: 1) complementary to HGR in terms of semantics; 2) fully interpretable and transparent; 3) robust over finite samples; 4) configurable to suit specific applications. Our approach also allows us to define fine-grained constraints that selectively permit certain types of dependence and forbid others. By expanding the options available for continuous protected attributes, our approach represents a significant contribution to the area of fair artificial intelligence.
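The abstract does not spell out the indicator family, but a minimal sketch can illustrate the general idea of an interpretable, degree-configurable dependence measure over a continuous protected attribute. Everything below (the function names, the polynomial feature map, the normalization, and the aggregation into a scalar) is an assumption made for illustration, not the authors' published definition:

```python
# Illustrative sketch only: a polynomial, "disparate impact"-style dependence
# indicator in the spirit described by the abstract. Names, normalization,
# and aggregation are assumptions, not the paper's actual formulation.
import numpy as np

def poly_dependence_coefficients(a, y, degree=3):
    """Least-squares coefficients of y on standardized powers of a.

    Each coefficient alpha_k measures the degree-k (linear, quadratic, ...)
    dependence of the target y on the protected attribute a, which is what
    makes this kind of indicator easy to read and to constrain per degree.
    """
    a = (a - a.mean()) / a.std()
    # Build the polynomial feature matrix [a, a^2, ..., a^degree].
    P = np.column_stack([a ** k for k in range(1, degree + 1)])
    P = (P - P.mean(axis=0)) / P.std(axis=0)  # standardize each column
    alpha, *_ = np.linalg.lstsq(P, y - y.mean(), rcond=None)
    return alpha

def poly_dependence_indicator(a, y, degree=3):
    """Aggregate the per-degree coefficients into a single scalar score."""
    return np.abs(poly_dependence_coefficients(a, y, degree)).sum()

# Usage: y depends quadratically on the protected attribute a.
rng = np.random.default_rng(0)
a = rng.normal(size=2_000)
y = 0.5 * a ** 2 + rng.normal(scale=0.1, size=a.size)
print(poly_dependence_coefficients(a, y))  # quadratic coefficient dominates
print(poly_dependence_indicator(a, y))
```

Under this reading, the abstract's fine-grained constraints would correspond to bounding individual coefficients, e.g. forbidding quadratic dependence (constraining alpha_2 toward zero) while leaving the linear term free; the per-degree decomposition is also what would make such an indicator interpretable, in contrast to a single opaque HGR score.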
