

Poster in Workshop: Humans, Algorithmic Decision-Making and Society: Modeling Interactions and Impact

Exploring Desiderata for Individual Fairness

Shai Ben-David · Tosca Lechner · Ruth Urner


Abstract:

Algorithmic fairness for automated decision-making systems has received much attention in recent years, with studies falling broadly into one of two camps: notions of (statistical) group fairness (GF), and notions of individual fairness (IF), that is, fairness as a right guaranteed to individuals. In this work, we review the latter notion for classification tasks and propose a formal framework for distinguishing individual from group fairness notions. We take an "axiomatic" approach and identify a list of desirable properties for such a notion. We analyze relationships between these requirements, showing that some of them are mutually exclusive. We discuss some of the existing approaches to individual fairness from the perspective of our framework. In particular, we address the common view of IF as a Lipschitzness requirement ("similar individuals should be treated similarly") and discuss some of its concerning drawbacks.
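For context, the Lipschitzness view of individual fairness referenced above is commonly stated, following Dwork et al. (2012), roughly as below; this is the standard formulation and the abstract's own formalization may differ. A (randomized) classifier $M$ mapping individuals to distributions over outcomes is considered individually fair with respect to a similarity metric $d$ on individuals and a distance $D$ on outcome distributions if

$$D\big(M(x), M(y)\big) \le d(x, y) \quad \text{for all individuals } x, y,$$

that is, individuals who are close under $d$ must receive nearly identical treatment.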
