AtomWorld: A Benchmark for Evaluating Spatial Reasoning in Large Language Models on Material Structures
Abstract
Large language models (LLMs) have shown promising potential in materials science, enabling tasks ranging from knowledge retrieval to property prediction. Existing materials science benchmarks focus mainly on perceptual or knowledge-based tasks, largely ignoring structure modelling, a core challenge in real scientific workflows. In practice, constructing and manipulating atomic structures is one of the most creative and least automated steps in materials research. In this work, we introduce AtomWorld, a benchmark designed to evaluate the abilities of LLMs to perform structure modifications. The benchmark includes ten fundamental actions under four widely used modelling categories, enabling verifiable evaluation metrics. We find that Gemini 2.5 Pro generally performs best, although success rates decrease markedly with increasing modelling complexity and are particularly low (below 12\% for rotation) for operations involving complex spatial relations. Our results suggest that contemporary LLMs are better suited as copilots for materials structure modelling than as fully unsupervised autonomous scientific agents. Beyond evaluation, AtomWorld also serves as a testbed and playground for developing future structure-aware models, including reinforcement learning and agentic approaches.