Poster in Workshop: Humans, Algorithmic Decision-Making and Society: Modeling Interactions and Impact
Designing Experimental Evaluations of Algorithmic Interventions with Human Decision Makers In Mind
Inioluwa Raji · Lydia T. Liu
Automated decision systems (ADS) are broadly deployed to inform or support human decision-making across a wide range of consequential contexts. An emerging approach to the assessment of such systems is through experimental evaluation, which aims to measure the causal impacts of the ADS deployment on decision-making and outcomes. However, various context-specific details complicate the goal of establishing meaningful experimental evaluations for algorithmic interventions. Notably, current experimental designs rely on simplifying assumptions about human decision-making in order to derive causal estimates. In reality, cognitive biases of human decision makers induced by experimental design choices may significantly alter the observed effect sizes of the algorithmic intervention. In this paper, we formalize and investigate various models of human decision-making in the presence of a predictive algorithmic aid. We show that each of these behavioral models produces dependencies across decision subjects and results in the violation of existing assumptions, with consequences for treatment effect estimation.
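The kind of cross-subject dependence the abstract describes can be illustrated with a small simulation. The model below is a hypothetical sketch, not the paper's formalism: a reviewer processes cases sequentially, and whenever a case is in the treated arm the ADS score is displayed, blended into the reviewer's judgment, and anchored on, so the reviewer's internal decision threshold drifts toward the last displayed score. Flipping a single subject's treatment assignment then changes the decisions made for later subjects, a violation of the no-interference assumption that naive treatment effect estimates rely on. All numbers and the anchoring rule are illustrative assumptions.

```python
# Hypothetical anchoring model of a human decision maker aided by an ADS.
# All parameters below are illustrative, not taken from the paper.

def decisions(treatment, judgments, ads_scores, w=0.6):
    """Binary decisions for each subject under a given treatment assignment.

    treatment[i] = 1 means the ADS score is shown for subject i.
    When shown, the score is blended into the reviewer's own judgment,
    and the reviewer's threshold drifts toward the displayed score
    (anchoring), linking later decisions to earlier assignments.
    """
    threshold = 0.5
    out = []
    for t, j, s in zip(treatment, judgments, ads_scores):
        signal = 0.5 * (j + s) if t else j      # aid blends in the score
        out.append(int(signal > threshold))
        if t:                                    # anchoring: threshold drift
            threshold = (1 - w) * threshold + w * s
    return out

judgments  = [0.55, 0.45, 0.60, 0.52, 0.48]     # reviewer's own read of each case
ads_scores = [0.90, 0.30, 0.70, 0.20, 0.80]     # algorithmic risk scores

base    = [0, 0, 0, 0, 0]
flipped = [1, 0, 0, 0, 0]                       # change only subject 0's arm

d_base = decisions(base, judgments, ads_scores)
d_flip = decisions(flipped, judgments, ads_scores)
changed = [i for i in range(1, 5) if d_base[i] != d_flip[i]]
print(changed)  # → [2, 3]: these subjects' decisions flip although their own arm is unchanged
```

Under this toy model, subject 0's assignment shifts the reviewer's threshold from 0.5 to 0.74, flipping the decisions for subjects 2 and 3, so a difference-in-means comparison across arms would no longer isolate the effect of a subject's own treatment.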