Position: Metaphysical Concepts in AI Should Be Judged by Their Consequences
Abstract
This position paper argues that answers to metaphysical puzzles in AI (such as ``Can LLMs be conscious?'' or ``What is AGI?'') should be judged by their practical consequences rather than their supposed truth. Our key position is that metaphysical concepts earn their value through the new research directions they open. Drawing on Pragmatism, we propose a two-step framework, ``productive confusion,'' to navigate conceptual confusions: first, clarify the different meanings a metaphysical concept has in ordinary language; then, use this understanding to invent new empirical research programs. We illustrate our framework with numerous examples and show how it inspires progress in cutting-edge AI research. We contrast our position with Scientific Realism (which supposes that science reveals ultimate truths) and Quietism (which dismisses metaphysical puzzles as useless). We end with a call to action that operationalizes our position for multiple stakeholders in the AI community, including researchers, decision makers, and reviewers.