Towards Robust Human-AI Complementarity under Uncertainty
Abstract
Machine learning models are often intended to augment rather than replace human decision-makers by providing information that complements human judgement. Yet in practice, human decision-makers routinely fail to realize such complementary gains, even when models provide useful signal. In this work, we study how asymmetry in the quality of information available to a human decision-maker versus an AI affects the decision-maker's ability to extract complementary value from AI predictions. We show that a key factor is the correlation structure between human and AI prediction errors. In particular, when the AI's errors are \textit{negatively correlated} with the human's, the decision-maker can construct robust strategies that guarantee improvements in expected utility. We investigate empirically whether these conditions for complementarity arise in practice, using real-world forecasting benchmarks.
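The role of error correlation can be made concrete with a simple variance sketch (an illustrative calculation, not the paper's formal model): if the human's and AI's prediction errors $e_H$ and $e_A$ are zero-mean with variances $\sigma_H^2, \sigma_A^2$ and correlation $\rho$, then an equally weighted combination of the two predictions has error variance

```latex
\operatorname{Var}\!\left(\tfrac{1}{2}(e_H + e_A)\right)
  = \tfrac{1}{4}\left(\sigma_H^2 + \sigma_A^2 + 2\rho\,\sigma_H \sigma_A\right),
```

so negatively correlated errors ($\rho < 0$) drive the combined variance strictly below $\tfrac{1}{4}(\sigma_H^2 + \sigma_A^2)$, the value attained when the errors are independent; this cancellation is what makes complementary gains possible.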