

Poster in Workshop: Humans, Algorithmic Decision-Making and Society: Modeling Interactions and Impact

Controlling Counterfactual Harm in Decision Support Systems Based on Prediction Sets

Eleni Straitouri · Suhas Thejaswi · Manuel Gomez-Rodriguez


Abstract:

Decision support systems based on prediction sets help humans solve multiclass classification tasks by narrowing down the set of potential label values to a subset, namely a prediction set, and asking them to always predict a label value from the prediction set. While this type of system has proven effective at improving the average accuracy of human predictions, by restricting human agency it may also cause harm: a human who would have succeeded at predicting the ground-truth label of an instance on their own may fail when using the system. In this paper, our goal is to control, by design, how frequently a decision support system based on prediction sets may cause harm. To this end, we first characterize the above notion of harm using the theoretical framework of structural causal models. Then, we show that, under a natural monotonicity assumption, we can estimate how frequently a system may cause harm using only predictions made by humans on their own. Building upon this assumption, we introduce a computational framework, based on conformal risk control, to design decision support systems based on prediction sets that are guaranteed to cause harm less frequently than a user-specified value. We validate our framework using real human predictions from a human subject study and show that, in decision support systems based on prediction sets, there is a trade-off between accuracy and counterfactual harm.
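To make the calibration step concrete, below is a minimal sketch of how conformal risk control could be applied here. It assumes prediction sets of the form C_lambda(x) = {y : f_y(x) >= 1 - lambda}, where f is a classifier's softmax output, and uses a simple harm proxy licensed by the monotonicity assumption: a calibration instance counts as potentially harmed if the human alone predicted correctly but the set excludes the ground-truth label. The function names, the harm proxy, and the threshold grid are illustrative assumptions, not the authors' exact estimator.

    import numpy as np

    def harm_loss(softmax_row, true_label, human_correct, lam):
        # Harm proxy (monotonicity assumption): harm can only occur when
        # the human alone was correct but the prediction set excludes the
        # ground-truth label, forcing a different prediction.
        in_set = softmax_row[true_label] >= 1.0 - lam
        return float(human_correct and not in_set)

    def calibrate_lambda(softmax, labels, human_correct, alpha, grid=None):
        # Conformal risk control: return the smallest lambda whose empirical
        # harm estimate satisfies the finite-sample bound
        # (n * R_hat + B) / (n + 1) <= alpha, with B = 1 the maximum loss.
        # The loss is non-increasing in lambda, since larger sets exclude
        # fewer labels, so the first lambda that passes the check suffices.
        n = len(labels)
        if grid is None:
            grid = np.linspace(0.0, 1.0, 1001)
        for lam in grid:  # ascending: risk shrinks as sets grow
            r_hat = np.mean([harm_loss(s, y, h, lam)
                             for s, y, h in zip(softmax, labels, human_correct)])
            if (n * r_hat + 1.0) / (n + 1) <= alpha:
                return lam
        return 1.0  # full label set: the system never restricts the human

    def prediction_set(softmax_row, lam):
        # Labels whose softmax score clears the calibrated threshold.
        return np.flatnonzero(softmax_row >= 1.0 - lam)

A usage sketch on hypothetical calibration data (classifier scores, ground-truth labels, and per-instance indicators of whether the human succeeded unaided):

    rng = np.random.default_rng(0)
    softmax = rng.dirichlet(np.ones(10), size=500)
    labels = rng.integers(0, 10, size=500)
    human_correct = rng.random(500) < 0.7
    lam = calibrate_lambda(softmax, labels, human_correct, alpha=0.1)

The key design point is that the calibration only needs predictions made by humans on their own, which is exactly what the monotonicity assumption buys: no counterfactual observations of humans using the system are required to bound the harm frequency.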
