Zulip Chat Archive
Stream: general
Topic: AI and theorem proving continued
Jason Rute (Jan 04 2020 at 21:26):
I’m starting a new AI and Theorem Proving Topic. The previous topic got a bit large. There has been a lot of work over the last few months that I haven’t yet summarized, so here goes...
Jason Rute (Jan 04 2020 at 21:27):
IBM Research has gotten into machine learning and theorem proving, which I think is great. A Deep Reinforcement Learning based Approach to Learning Transferable Proof Guidance Strategies describes a way to train a machine learning agent on a corpus of formal mathematics that uses one vocabulary, and then apply the trained agent to another corpus that uses a different vocabulary. (Right now the logic has to be the same, but you can train on, say, an analysis library and apply the result to an algebra library.) The results look promising.
Jason Rute (Jan 04 2020 at 21:27):
Improving Graph Neural Network Representations of Logical Formulae with Subgraph Pooling, also by IBM Research, is on neural architectures for formulas. If your input data is highly structured, it is best to use a neural network architecture that matches that structure. For example, images are two-dimensional arrays which are usually invariant under small translations, hence convolutional neural networks work well for them. Similarly, the current best neural networks for working with formulas (or terms) are generally either graph neural networks or tree neural networks, since both capture certain aspects of the formula structure. This paper combines graph and tree neural networks to give an even better encoding of formulas. They use these new networks to achieve state-of-the-art results on two standard premise-selection benchmarks (HOLStep and Mizar). They plan to test this approach on formal theorem proving in the future.
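As a toy illustration (mine, not from the paper), here is how one might turn a term like f(g(x), x) into the node and edge lists a graph neural network consumes. Reusing a single node for the repeated subterm x is one bit of structure a graph encoding captures that a pure tree encoding does not:

```python
# Toy illustration: encode a term as a graph, reusing one node per
# distinct subterm, so f(g(x), x) becomes a DAG rather than a tree.

def term_to_graph(term, nodes=None, edges=None, cache=None):
    """term is a pair (symbol, [argument terms]); returns root id, nodes, edges."""
    if nodes is None:
        nodes, edges, cache = [], [], {}
    key = repr(term)
    if key in cache:                 # shared subterm: reuse its node
        return cache[key], nodes, edges
    symbol, args = term
    idx = len(nodes)
    nodes.append(symbol)
    cache[key] = idx
    for arg in args:
        child, nodes, edges = term_to_graph(arg, nodes, edges, cache)
        edges.append((idx, child))   # edge from operator to argument
    return idx, nodes, edges

# f(g(x), x): both occurrences of x map to the same node.
_, node_labels, edge_list = term_to_graph(("f", [("g", [("x", [])]), ("x", [])]))
print(node_labels)  # ['f', 'g', 'x']
print(edge_list)    # [(1, 2), (0, 1), (0, 2)]
```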
Jason Rute (Jan 04 2020 at 21:28):
There are five papers from the Innsbruck and Prague labs (which include Cezary Kaliszyk and Josef Urban). (I haven’t read through these papers completely, so my summaries will be light.)
Jason Rute (Jan 04 2020 at 21:28):
Deep Reinforcement Learning in HOL4 by Gauthier looks at a series of machine learning tasks for HOL4. None of these tasks is end-to-end theorem proving; instead, they are sub-tasks that would be used in theorem proving.
Jason Rute (Jan 04 2020 at 21:28):
Self-Learned Formula Synthesis in Set Theory is about synthesizing set theory terms which satisfy given formulas. Besides proof guidance and premise selection, an important task in theorem proving is coming up with terms. For example, to prove an existential statement, one has to give a witness term which satisfies the formula.
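For instance, in Lean (to pick the prover of this Zulip), a proof of an existential is literally a pair of a witness term and a proof that the witness satisfies the formula, and a synthesis procedure like this one is producing exactly that first component:

```lean
-- Proving an existential means exhibiting a concrete witness term (here 2)
-- together with a proof that it satisfies the formula.
example : ∃ n : ℕ, n + n = 4 :=
⟨2, rfl⟩
```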
Jason Rute (Jan 04 2020 at 21:28):
Can Neural Networks Learn Symbolic Rewriting? tries to use neural machine translation algorithms to learn symbolic rewriting.
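The framing is roughly: treat the term before a rewrite step as the source "sentence" and the term after as the target, then train a standard sequence-to-sequence model on such pairs. A made-up example of what that training data might look like (tokens and rules here are invented for illustration):

```python
# Hypothetical training pairs for "rewriting as translation": each source
# string is a term before a rewrite step, each target the term after.
pairs = [
    ("add x zero",        "x"),                  # x + 0 -> x
    ("mul x one",         "x"),                  # x * 1 -> x
    ("add ( add x y ) z", "add x ( add y z )"),  # associativity
]

# A seq2seq model is then trained to map source token sequences to targets.
for src, tgt in pairs:
    print(src.split(), "->", tgt.split())
```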
Jason Rute (Jan 04 2020 at 21:28):
Property Invariant Embedding for Automated Reasoning is another paper on encoding formulas via neural networks. Again, the goal is for the encoding not to depend on the particular vocabulary.
Jason Rute (Jan 04 2020 at 21:28):
Exploration of Neural Machine Translation in Autoformalization of Mathematics in Mizar is about autoformalization (translating informal math into formal math). They develop a corpus of informal and formal math, both aligned and non-aligned, and then apply state-of-the-art machine translation models to see how well they do. This is a worthy goal, and I imagine a lot more work will be done on this in the future.
Jason Rute (Jan 04 2020 at 21:29):
Next is something more directly usable by this group. I have been trying to understand how the DeepHOL neural prover communicates with the HOList implementation of HOL Light. In the GitHub repository Example communicating with HOList I explain how the API works and walk through the gRPC interface in a Jupyter notebook. This would be useful to anyone trying to reimplement HOList for a different back-end prover (like Lean in place of HOL Light).
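To give the flavor, a single round trip with the HOList server looks roughly like the sketch below. This assumes Python stubs generated from proof_assistant.proto in the deepmath repository and the HOList docker image listening on port 2000; the message and field names are from memory, so treat the notebook as the authoritative reference:

```python
# Rough sketch of one gRPC round trip with the HOList HOL Light server.
# Assumes stubs generated from proof_assistant.proto (deepmath repo);
# names are from memory -- check the linked notebook for the real thing.
import grpc

import proof_assistant_pb2
import proof_assistant_pb2_grpc

channel = grpc.insecure_channel("localhost:2000")
stub = proof_assistant_pb2_grpc.ProofAssistantServiceStub(channel)

# Goals are S-expression strings of HOL Light terms (truncated here).
request = proof_assistant_pb2.ApplyTacticRequest(
    goal=proof_assistant_pb2.Theorem(conclusion="(a (c ...) ...)"),
    tactic="REFL_TAC")
response = stub.ApplyTactic(request)

if response.HasField("error"):
    print("tactic failed:", response.error)
else:
    print("remaining subgoals:", response.goals)
```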
Jason Rute (Jan 04 2020 at 21:29):
Similarly, a group at the University of Central Florida showed how to extend DeepMath/HOList with different machine learning algorithms. They provide a Jupyter notebook walking through how to implement one’s own machine learning algorithm, and they also provide a tutorial paper.