Position: Solipsistic superintelligence is unlikely to be cooperative
Abstract
AI's central challenge is shifting from capability to coexistence. The dominant paradigm in AI research focuses on developing powerful agents under stationary-environment assumptions, treating the world as an exogenous source of feedback. This position paper argues that a solipsistic superintelligence---an extremely capable solver of stationary problems---is unlikely to be cooperative. Deployment induces endogenous nonstationarity: other agents adapt, producing best-response dynamics that reshape the very environment the AI was trained to navigate. The result is a train--test--deploy gap in which historical distributions diverge from deployment realities; the more aggressively a solipsistic superintelligence exploits historical regularities, the faster it renders them obsolete. Cooperation is therefore not an added capability but an equilibrium property, one that solipsistic superintelligence cannot guarantee. We call for a multi-agent-first research paradigm that treats strategic interdependence as a core design principle, together with dynamic evaluation: testbeds whose distributions are generated by adaptive counterparties, and metrics that prioritize equilibrium stability over single-score task success.