Position: Uncertainty is a Strategic Signal in Human–AI Decision Making
Abstract
AI-assisted decision-making is subject to AI model uncertainty. Prior work has proposed making this uncertainty explicit to increase trust and transparency, but its behavioral role has rarely been examined. This position paper argues, from a game-theoretic perspective, that human–AI decision support should be viewed as a repeated mechanism in which AI uncertainty functions as a strategic signal that shapes how users adopt reliance policies over time. We formalize a framework in which the interface specifies uncertainty signals, users respond (e.g., by accepting or verifying the AI output), and the resulting outcomes shape their reliance policies. We use this repeated interaction to characterize near-separating reliance regimes. A first pilot study with 180 participants supports our position: the game-theoretic mechanism increased verification and sharply reduced blind acceptance of incorrect AI outputs. These initial results support treating human–AI interaction as a game-theoretic mechanism with uncertainty as a strategic signal, rather than as a static model property or a purely informational label.
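As a concrete illustration of the kind of repeated interaction the framework formalizes, the following is a minimal simulation sketch, not the paper's actual mechanism or study design: each round the AI emits a noisy "low"/"high" uncertainty signal, and the user's per-signal probability of verifying is nudged by the payoff difference between verifying and blindly accepting. All names, probabilities, and payoffs are illustrative assumptions.

```python
import random

# Minimal illustrative sketch of a repeated uncertainty-signaling interaction.
# All base rates, costs, and signal noise levels below are assumptions for
# illustration only, not the mechanism or parameters used in the paper.

def ai_round():
    """One AI recommendation: correct with some base rate, plus a noisy
    'low'/'high' uncertainty signal that is more often 'high' when wrong."""
    correct = random.random() < 0.8
    flag_high = (not correct and random.random() < 0.7) or (correct and random.random() < 0.2)
    return correct, ("high" if flag_high else "low")

def simulate(rounds=2000, verify_cost=0.2, error_cost=1.0, lr=0.02):
    # Reliance policy: probability of verifying, conditioned on the signal.
    p_verify = {"low": 0.5, "high": 0.5}
    for _ in range(rounds):
        correct, signal = ai_round()
        # Payoff of each response this round: verifying costs effort but
        # catches errors; blind acceptance is free only when the AI is right.
        verify_payoff = -verify_cost
        accept_payoff = 0.0 if correct else -error_cost
        # Nudge the policy toward verifying in proportion to how much better
        # (or worse) verifying would have paid off on this round.
        step = lr * (verify_payoff - accept_payoff)
        p_verify[signal] = min(1.0, max(0.0, p_verify[signal] + step))
    return p_verify

if __name__ == "__main__":
    # With an informative signal, verification concentrates on 'high' rounds
    # while 'low' rounds are accepted: a near-separating reliance regime.
    print(simulate())
```

Under these assumed parameters the policy drifts toward verifying almost always on "high"-uncertainty rounds and almost never on "low" rounds, which is the near-separating pattern the abstract refers to.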