Zulip Chat Archive

Stream: Machine Learning for Theorem Proving

Topic: Learning new tactics


Patrick Nicodemus (Apr 03 2023 at 00:02):

Crossposted from the Coq server -
In machine learning, there is a notion of "inductive bias": the structure of the permitted or assumed form of the solution has a huge effect on how easy or hard the pattern is to learn. You want your space of functions or hypotheses to be as small as possible, but not so small that you exclude the desired solution.
In theorem proving and proof search, I think this means configuring your arsenal of tactics so that it is best equipped for the class of problems you want to solve.

How do we choose a basic set of tactics so that it is well suited to solving our problems?

Two more specific questions in this vein:

  • Has anybody ever tried a proof-driven approach to creating tactics, where one takes an existing library of formal proofs (as terms) and then tries to work backwards from them to construct a (nontrivial) tactic script that builds them, in any tactic language that can be used to build terms? One can always just do "refine (term)", but I am hoping for a more interesting answer. What I'm imagining is an attempt to identify a library of basic tactics in a "data-driven" way: figuring out a concise set of tactics from which the proofs in Coq or Lean can be reconstructed. (A small Lean sketch follows this list.)
  • In reinforcement learning, as the theorem prover learns, I think it should also reformulate its existing set of tactics so that it is best equipped to attack the problems it has solved so far. How can we teach a machine learning algorithm to invent new tactics from basic building blocks, so that its learned skills can be extracted in a symbolic way? In a logic programming language, I think the kind of refactoring I am talking about would take the form of identifying the most prominent paths through the search tree and isolating them as standalone programs. In other words: how can we restructure the search tree itself as we learn, so that search becomes faster? (A second sketch follows the list.)
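
To make the first question concrete, here is a minimal Lean 4 sketch (Lean only for illustration; the analogous thing works in Coq/Ltac). The same proof of p ∧ q appears as a bare term, as the trivial one-step script, and as the kind of structured script one would hope to recover from the term:

```lean
-- The raw proof term, as it would be stored in a library.
example (p q : Prop) (hp : p) (hq : q) : p ∧ q :=
  ⟨hp, hq⟩

-- The trivial reconstruction: wrap the whole term in one step.
example (p q : Prop) (hp : p) (hq : q) : p ∧ q := by
  exact ⟨hp, hq⟩

-- A nontrivial reconstruction: a script whose steps mirror the
-- term's structure (the anonymous constructor becomes `constructor`,
-- the leaves become `exact`).
example (p q : Prop) (hp : p) (hq : q) : p ∧ q := by
  constructor
  · exact hp
  · exact hq
```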
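
And for the second question, a toy picture of "extracting a prominent path as a standalone program": if search keeps emitting the sequence "constructor, then assumption on every subgoal", that sequence can be reified as a named tactic. This only sketches the end product, not the learning procedure; the name split_and_close is made up:

```lean
-- Hypothetical extracted tactic: a frequently used sequence,
-- reified as a single named (and hence inspectable) tactic.
macro "split_and_close" : tactic => `(tactic| constructor <;> assumption)

-- The new tactic closes such goals in one symbolic step.
example (p q : Prop) (hp : p) (hq : q) : p ∧ q := by
  split_and_close
```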

Fabian Glöckle (Apr 03 2023 at 10:48):

You may be interested in the literature on "inductive program synthesis" (e.g. https://people.csail.mit.edu/asolar/SynthesisCourse/index.htm) coupled with machine learning (like https://www.cs.cornell.edu/~ellisk/documents/dreamcoder_with_supplement.pdf), and applications of the same to reasoning domains (like https://arxiv.org/abs/2211.08671). But generally, what you describe is to a large extent still an open question!

Patrick Nicodemus (Apr 05 2023 at 20:30):

Thank you very much!

Patrick Nicodemus (Apr 07 2023 at 02:20):

Program Synthesis and Semantic Parsing with Learned Code Idioms
https://arxiv.org/pdf/1906.10816.pdf


Last updated: Dec 20 2023 at 11:08 UTC