Traditional automated theorem provers for first-order logic rely on speed-optimized search and many handcrafted heuristics designed to work across a wide range of domains. Machine learning approaches in the literature either depend on these traditional provers to bootstrap themselves, by leveraging their heuristics, or struggle due to the scarcity of existing proof data. The latter issue can be explained by the lack of a smooth difficulty gradient in theorem-proving datasets; large gaps in difficulty between theorems can make training hard or even impossible. In this paper, we adapt the idea of hindsight experience replay from reinforcement learning to the automated theorem proving domain, so as to exploit the intermediate data generated during unsuccessful proof attempts. We build a first-order logic prover by disabling all the smart clause-scoring heuristics of the state-of-the-art E prover and replacing them with a clause-scoring neural network trained with hindsight experience replay in an incremental learning setting. Clauses are represented as graphs and presented to transformer networks with spectral features. We show that provers trained in this way outperform previous machine learning approaches and compete with the state-of-the-art heuristic-based theorem prover E in its best configuration on the popular benchmarks MPTP2078, M2k, and Mizar40. The proofs generated by our algorithm are also almost always significantly shorter than E's proofs.
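The relabeling idea behind hindsight experience replay is easy to sketch: even when a proof attempt fails, every clause derived along the way has a valid derivation from the axioms, so each derived clause can be treated in hindsight as a goal that was reached, and the clauses on its derivation path become positive examples for the clause scorer. Below is a minimal Python sketch of this relabeling step under assumed data structures; the `Clause` record, its `parents` field, and the `hindsight_examples` function are illustrative, not the authors' implementation:

```python
import random
from dataclasses import dataclass

@dataclass(frozen=True)
class Clause:
    id: int
    parents: tuple = ()  # ids of premise clauses (hypothetical schema)

def derivation_ancestors(goal_id, by_id):
    """Return the ids of every clause on the derivation path of goal_id."""
    seen, stack = set(), [goal_id]
    while stack:
        cid = stack.pop()
        if cid not in seen:
            seen.add(cid)
            stack.extend(by_id[cid].parents)
    return seen

def hindsight_examples(derived, num_goals=8, rng=random):
    """Relabel a failed proof attempt: sample derived clauses as hindsight
    goals; clauses on a goal's derivation path become positives, the rest
    negatives, yielding (goal, clause, label) triples for the scorer."""
    by_id = {c.id: c for c in derived}
    examples = []
    for goal in rng.sample(derived, min(num_goals, len(derived))):
        useful = derivation_ancestors(goal.id, by_id)
        for c in derived:
            examples.append((goal.id, c.id, c.id in useful))
    return examples

# Tiny usage example: axioms 0 and 1, derived clauses 2..4.
trace = [Clause(0), Clause(1), Clause(2, (0, 1)), Clause(3, (2,)), Clause(4, (1,))]
data = hindsight_examples(trace, num_goals=2)
```

In the paper's setting, triples like these would supervise the graph-based transformer clause scorer; the sketch only illustrates how failed attempts yield training data rather than being discarded.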
Author Information
Eser Aygün (DeepMind)
Ankit Anand (DeepMind)
Laurent Orseau (DeepMind)
Xavier Glorot (DeepMind)
Stephen McAleer (CMU)
Vlad Firoiu (DeepMind)
Lei Zhang (DeepMind)
Doina Precup (DeepMind)
Shibl Mourad (DeepMind)
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Spotlight: Proving Theorems using Incremental Learning and Hindsight Experience Replay
  Thu. Jul 21st, 06:45 -- 06:50 PM, Room: Hall G
More from the Same Authors
- 2021: Gradient Starvation: A Learning Proclivity in Neural Networks
  Mohammad Pezeshki · Sékou-Oumar Kaba · Yoshua Bengio · Aaron Courville · Doina Precup · Guillaume Lajoie
- 2022: P25: Proving Theorems using Incremental Learning and Hindsight Experience Replay
  Shibl Mourad
- 2020 Poster: What can I do here? A Theory of Affordances in Reinforcement Learning
  Khimya Khetarpal · Zafarali Ahmed · Gheorghe Comanici · David Abel · Doina Precup
- 2019 Poster: An Investigation of Model-Free Planning
  Arthur Guez · Mehdi Mirza · Karol Gregor · Rishabh Kabra · Sebastien Racaniere · Theophane Weber · David Raposo · Adam Santoro · Laurent Orseau · Tom Eccles · Greg Wayne · David Silver · Timothy Lillicrap
- 2019 Poster: Per-Decision Option Discounting
  Anna Harutyunyan · Peter Vrancx · Philippe Hamel · Ann Nowe · Doina Precup
- 2019 Oral: Per-Decision Option Discounting
  Anna Harutyunyan · Peter Vrancx · Philippe Hamel · Ann Nowe · Doina Precup
- 2019 Oral: An Investigation of Model-Free Planning
  Arthur Guez · Mehdi Mirza · Karol Gregor · Rishabh Kabra · Sebastien Racaniere · Theophane Weber · David Raposo · Adam Santoro · Laurent Orseau · Tom Eccles · Greg Wayne · David Silver · Timothy Lillicrap
- 2018 Poster: IMPALA: Scalable Distributed Deep-RL with Importance Weighted Actor-Learner Architectures
  Lasse Espeholt · Hubert Soyer · Remi Munos · Karen Simonyan · Vlad Mnih · Tom Ward · Yotam Doron · Vlad Firoiu · Tim Harley · Iain Dunning · Shane Legg · Koray Kavukcuoglu
- 2018 Oral: IMPALA: Scalable Distributed Deep-RL with Importance Weighted Actor-Learner Architectures
  Lasse Espeholt · Hubert Soyer · Remi Munos · Karen Simonyan · Vlad Mnih · Tom Ward · Yotam Doron · Vlad Firoiu · Tim Harley · Iain Dunning · Shane Legg · Koray Kavukcuoglu