Position: The Case for Theory-Level Autoformalization
Marcus Min ⋅ Deyuan He ⋅ Zhaoyu Li ⋅ Zixuan Yi ⋅ Sharad Malik ⋅ Aarti Gupta ⋅ Xujie Si ⋅ Osbert Bastani
Abstract
Autoformalization, the translation of informal natural language into formal, machine-verifiable languages, has been framed as a tool for generating training data for neural theorem provers, with most work focusing on individual statements. This position paper argues for theory-level autoformalization: formalizing complete theories, including axioms, definitions, theorems, proofs, tactics, and their interdependencies, as structured libraries. We examine the significance of this shift, address three alternative views, identify five open challenges, and propose three promising paths forward.