

Poster
in
Workshop: Humans, Algorithmic Decision-Making and Society: Modeling Interactions and Impact

Risk Scores in Algorithmic Decision-Making as Statistical Fatalism

Sebastian Zezulka · Konstantin Genin


Abstract:

A fundamental problem in algorithmic fairness is determining whether machine learning algorithms will reproduce or exacerbate the structural inequalities reflected in their training data. Addressing this challenge requires two steps. First, fairness interventions on predictions in algorithmic decision-making must be evaluated by the causal effect their deployment has on the distribution of relevant social goods. Second, we propose the framework of prospective fairness, which requires anticipating these effects before an algorithmic policy is implemented. Extending this line of work, we advocate shifting the focus from predicting (fair) risk scores to estimating potential outcomes under the available policy decisions.
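To make the contrast concrete, here is a hypothetical sketch (not the authors' code) on fully synthetic data: one policy allocates a scarce intervention to the highest-risk individuals, while the other allocates it by the estimated difference between potential outcomes with and without treatment. All variable names and the data-generating process are illustrative assumptions.

```python
# Hypothetical sketch: risk-score targeting vs. potential-outcome targeting
# on synthetic data. Not the authors' method; purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Latent "need" and two potential outcomes per person:
# y0 = outcome without the intervention, y1 = outcome with it.
need = rng.uniform(0, 1, n)
y0 = (rng.uniform(0, 1, n) > need).astype(float)   # higher need -> worse baseline
lift = 0.6 * need                                  # intervention helps the needy most
y1 = np.clip(y0 + (rng.uniform(0, 1, n) < lift), 0.0, 1.0)

budget = n // 5  # resources to treat 20% of the population

# Policy A: treat those with the highest (noisy) predicted risk.
risk = need + rng.normal(0, 0.1, n)
treat_a = np.argsort(-risk)[:budget]

# Policy B: treat those with the largest (noisy) estimated effect y1 - y0.
effect_hat = (y1 - y0) + rng.normal(0, 0.1, n)
treat_b = np.argsort(-effect_hat)[:budget]

def realized_outcome(treated_idx):
    """Mean outcome in the population once the chosen group is treated."""
    y = y0.copy()
    y[treated_idx] = y1[treated_idx]
    return y.mean()

print("risk-score policy:       ", round(realized_outcome(treat_a), 3))
print("potential-outcome policy:", round(realized_outcome(treat_b), 3))
```

In this toy setup, ranking by estimated potential-outcome gains yields at least as good a population outcome as ranking by risk, illustrating why the abstract's shift from risk scores to potential outcomes can matter for the distribution of the relevant good.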
