Position: Prioritize Identifying Structure, Not Complex Models, for Scientific Discovery
Abstract
Modern Machine Learning (ML) and Artificial Intelligence (AI) models, especially large language models (LLMs), are increasingly used to generate scientific hypotheses and mechanistic explanations from observational data. This position paper argues that in the high-dimensional proxy regimes where ML excels, mechanistic learning is generically underdetermined: many incompatible mechanisms induce essentially the same observational relationships on the support of the data, so predictive success and coherent explanation are insufficient evidence of mechanism discovery. This underdetermination becomes uniquely hazardous with LLMs, which tend to collapse large equivalence classes of explanations into a single fluent narrative. We propose concrete standards for ``mechanistic ML'': a mechanistic claim must (i) declare its identifying assumptions, (ii) pass mechanism-discriminating evaluations (interventions, invariances, derivative constraints), or (iii) report the surviving multiplicity, including explicit falsifiers and sensitivity to assumptions. These norms are necessary if LLM-centered workflows are to support science rather than merely simulate it.