Learning to Prove Theorems via Interacting with Proof Assistants
Kaiyu Yang · Jia Deng

Tue Jun 11th 12:05 -- 12:10 PM @ Room 201

Humans prove theorems by relying on substantial high-level reasoning and problem-specific insights. Proof assistants offer a formalism that resembles human mathematical reasoning, representing theorems in higher-order logic and proofs as high-level tactics. However, human experts have to construct proofs manually by entering tactics into the proof assistant. In this paper, we study the problem of using machine learning to automate the interaction with proof assistants. We construct CoqGym, a large-scale dataset and learning environment containing 71K human-written proofs from 123 projects developed with the Coq proof assistant. We develop ASTactic, a deep learning-based model that generates tactics as programs in the form of abstract syntax trees (ASTs). Experiments show that ASTactic trained on CoqGym can generate effective tactics and can be used to prove new theorems not previously provable by automated methods. Code is available at https://github.com/princeton-vl/CoqGym.
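The tactic-style interaction the abstract describes can be illustrated with a short proof script. The sketch below is a toy example of my own, written in Lean syntax rather than Coq (which the paper actually uses), showing how a proof is built by entering high-level tactics rather than low-level logical steps:

```lean
-- A toy tactic proof: commutativity of conjunction (Lean 4 syntax).
-- ASTactic's task, in Coq, is to generate tactic invocations like
-- `intro` and `exact` below as abstract syntax trees.
theorem and_swap (p q : Prop) : p ∧ q → q ∧ p := by
  intro h          -- bring the hypothesis p ∧ q into scope as h
  exact ⟨h.2, h.1⟩ -- conclude by swapping its two components
```

Each tactic transforms the current proof goal; the proof assistant checks every step, so a learned model like ASTactic only needs to propose candidate tactics, not guarantee their correctness.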

Author Information

Kaiyu Yang (Princeton University)

Kaiyu Yang is a Ph.D. student in the Department of Computer Science at Princeton University, where he works with Prof. Jia Deng in the Princeton Vision & Learning Lab. He is interested in computer vision and artificial intelligence in general. His current research focuses on applying machine learning techniques to theorem proving — a long-standing research area that has been dominated by formal methods. Prior to that, he worked on human pose estimation, action detection, and visual relationship understanding. He received his master’s degree from the University of Michigan and his bachelor’s degree from Tsinghua University.

Jia Deng (Princeton University)
