Position: Safe Models Do Not Guarantee Safe Societies: The Case for Sociopolitical Risk
Abstract
Sociopolitical AI risks are threats to collective self-determination: a society's capacity to articulate its interests and realize them through its institutions. We argue that these risks emerge when general-purpose AI systems are integrated into society in ways that disproportionately amplify the scale, speed, and opacity of institutional operations, degrading institutions' capacity to function. Unlike model-level harms (toxicity, bias, discrimination), sociopolitical risks arise from widespread deployment rather than from individual outputs. And unlike existential risks involving loss of control or complete labor automation, they manifest at current capability levels, where AI augments rather than replaces human activity. In this position paper, we analyze how AI alters the conditions of governance: flooding government agencies with paralyzing volumes of input, concentrating control of critical infrastructure in ways that threaten sovereignty, and flattening public debate into artificial agreement while reinforcing existing biases.