Zulip Chat Archive

Stream: Machine Learning for Theorem Proving

Topic: Incorporating neural networks into type inhabitation


Roman Bars (Mar 07 2021 at 09:41):

Suppose we want to build an "expert system" whose input is a type and whose output is a term of that type or a term of its negation. How could we incorporate a neural network (or machine learning more generally) into such a system?

My issue is that, in general, the minimum possible length of a string defining a term of a type cannot be bounded by a computable function of the length of the string defining the type (or else type inhabitation would be decidable). So an appropriate neural architecture is not clear to me.

Selsam and some others, I think, built a neural network that could solve Boolean satisfiability reasonably well. It was a graph neural network that took in the SAT problem to be solved, assigned weights to each node, and iterated message passing several times (generally, the more times it iterates, the higher the likelihood of finding a solution). But there the size of the output was clearly bounded by the size of the input, which is not the case in our setting.

Do any existing first-order/higher-order automated provers address this issue?

Jason Rute (Mar 07 2021 at 13:47):

There are a number of ways to use neural networks to solve type inhabitation. But before I get into them, here are a few basic observations:

  1. Neural networks are "good guessers" with "good intuition". Text-based neural networks also have a sort of "stream of consciousness"-like output. Given that, you should not expect neural networks to be perfect oracles for type inhabitation, but (in the presence of good training data) they should give reasonable guesses.
  2. NNs can be used to solve the problem in one shot, or used as part of a larger solution. I'll mention some examples below. If you solve it in one shot, you will likely need a Transformer (or maybe a GNN with a transformer-like attention mechanism).
  3. Type inhabitation (at least in DTT) is just theorem proving, so any paper about NNs and theorem proving is probably applicable here.
  4. Type inhabitation is also just program search, so any paper about NNs and program search is probably applicable here.
  5. NNs (and other machine learning algorithms) train with data from a distribution. So if all your training data is randomly generated types, then the algorithm may solve those well, but not solve long types with simple solutions like ℕ → ℕ → ℕ → ℕ → ℕ → ℕ → ℕ → ℕ → ℕ → ℕ → ℕ (a one-line inhabitant of exactly this type is sketched after this list). Conversely, if your training data comes from human proofs, it will perform better on similar examples.
  6. Getting training data can possibly be tricky. Your description is too vague for me to know exactly what you need, but one can either use human-curated training data (from a proof library), generate data synthetically, or use reinforcement learning in creative ways to build up a training dataset.
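
To illustrate point 5, here is a one-line inhabitant of that long type, written by hand in Lean 3 syntax (nothing machine-learned about it; it's just a reminder that the length of a type says little about the length of its shortest inhabitant):

```lean
-- the long type from point 5: ten arguments, and the inhabitant simply returns the first one
example : ℕ → ℕ → ℕ → ℕ → ℕ → ℕ → ℕ → ℕ → ℕ → ℕ → ℕ :=
λ a _ _ _ _ _ _ _ _ _, a
```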

Jason Rute (Mar 07 2021 at 13:47):

Now, onto examples. As I said, all of the theorem proving and program search literature is applicable. But let me break things down into a few categories.

  1. Tools which use a NN to guide a proof search (or, in your case, a term search). There are a number of papers which do this. Just look for recent papers by Josef Urban, Christian Szegedy, Markus Rabe, and Stan Polu on machine learning and theorem proving. (Also, I can't resist mentioning our GPT-f/PACT paper, which I think already solves your problem to some degree in Lean. I'll mention more below.) Basically, in all these algorithms one searches for a proof via a tree (or graph) search. However, to make the problem more tractable, one uses a neural network to guide the search and suggest next steps. Many of these systems use tactics, which, if you were working in say Lean, would be sufficient to construct a term in the end. Nonetheless, you could also have your neural network construct term proofs directly. Another related topic is neurally guided program search. For example, DreamCoder uses a neural network (actually two networks) to guide the construction of a computer program solving a particular task. Since terms are just computer programs, this is the same idea.
  2. Other tools which use a NN to guide a solver. As you already mentioned, NeuroSAT by Selsam et al. uses message passing in a graph neural network to solve satisfiability. It is like a search algorithm, but in this case the searching happens implicitly inside the neural network instead of explicitly in an external tree search.
  3. NNs which just solve the problem in one shot. Until recently, I don't think people thought it was reasonable to ask a NN to solve a problem like this directly, but we now know that large language models based on Transformers can indeed solve certain types of problems of this kind. For example, in Teaching Temporal Logics to Neural Networks, they show that a Transformer can directly solve SAT problems. (I would be curious to hear Dan Selsam's thoughts on this, since he has previously said he didn't think this was possible. One possibility is that this work isn't as good as NeuroSAT. Another possibility is that the message passing inside a Transformer is similar to the message passing in NeuroSAT, with the caveat that the Transformer, unlike NeuroSAT, can only do a bounded amount of message passing, but maybe that is still good enough.) Also, I think it was OpenAI that showed that a Transformer could write Python programs just from the type signature, name of the program, and doc string. Last, I'll show below that our Lean gptf tactic is capable of some one-shot term generation.

Jason Rute (Mar 07 2021 at 13:47):

As I've said, in many ways I think the Lean gptf tactic is what you are looking for, or at least a step in that direction. It is a tool which, given a goal in Lean, will suggest tactics. When put into a tree search, it can find whole proofs. I also just checked, and it can find "proofs" for types like ℕ → ℕ → ℕ, (ℕ × empty) → empty, etc. Also, while this isn't the intent of GPT-f, it can be used to generate term proofs in one shot by using gptf {pfx := "exact"}. In that case, it will output an exact tactic, which is the same as a term proof. It often finds shorter proofs this way. I would encourage you to play with it yourself (and discuss it on the #lean-gptf stream).
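
For concreteness, here is a minimal sketch of that interaction in Lean 3 syntax (it assumes the lean-gptf setup from the #lean-gptf stream is available; the pasted terms below are ordinary hand-written examples of the kind of thing it suggests, not recorded gptf output):

```lean
-- interactive use: invoke the tactic and read its suggestions in the goal window
-- (this step alone leaves the goal open until you accept one of the suggestions)
example : (ℕ × empty) → empty :=
begin
  gptf {pfx := "exact"},   -- asks for one-shot `exact ...` suggestions
end

-- terms of the kind it can suggest, pasted in as finished proofs
example : (ℕ × empty) → empty := λ p, p.2
example : ℕ → ℕ → ℕ := λ a _, a
```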

Jason Rute (Mar 07 2021 at 13:59):

Also, gptf is decent at finding simple existential witnesses, such as solving ∀ (n : ℕ), ∃ (m : ℕ), m > n or ∃ n : ℕ, 8 = n*2.
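
For reference, hand-written term proofs of those two examples look like this in Lean 3 syntax; the existential witnesses themselves (n + 1 and 4) are exactly the part a network has to guess:

```lean
-- witness m := n + 1, with nat.lt_succ_self proving n < n + 1
example : ∀ (n : ℕ), ∃ (m : ℕ), m > n :=
λ n, ⟨n + 1, nat.lt_succ_self n⟩

-- witness n := 4; 8 = 4 * 2 holds by computation
example : ∃ n : ℕ, 8 = n * 2 :=
⟨4, rfl⟩
```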

Jason Rute (Mar 07 2021 at 14:13):

@Roman Bars Looking back on your question, at first I thought it was more theoretical, but now I think it is more practical. It would help to know more about what you are looking for. What is the type theory you are working with? What kinds of types are you looking to find inhabitants for? That would help, but even without that information, here are four ideas:

  • If you are doing this in Lean or another ITP, I would encourage you to leverage the available tactics and tactic data for training your network. You can look at how the papers by Urban, Szegedy, Polu, and us do it.
  • If you are working in your own type theory, then consider making a simple tactic language. Every term constructor would need a corresponding tactic, e.g. intro for lambda terms, apply for function application, etc. Then you can use the tactics to partially build your terms and have the neural network guide the construction; see the sketch after this list. (Again, look at the papers on machine learning in ITPs.)
  • If you want to build terms directly, I'd look at the program synthesis research. They have techniques for using a NN to build partial programs which are then completed into whole programs.
  • If your terms are reasonably small and you have a lot of training data, then consider using a Transformer to solve them directly.
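
To make the tactic-language bullet concrete, here is a small hand-written Lean 3 sketch of how each tactic step contributes one constructor of the final term (in your own type theory you would implement the analogous tactics yourself):

```lean
-- each tactic step builds one piece of the final term
example {α β γ : Type} : (α → β) → (β → γ) → α → γ :=
begin
  intros f g a,   -- intro(s) build the lambdas:       λ f g a, _
  apply g,        -- apply builds an application:      λ f g a, g _
  exact f a,      -- exact fills the remaining hole:   λ f g a, g (f a)
end
```

The neural network's job in such a system is then to pick which of these tactic steps to try next at each open goal.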

Last updated: Dec 20 2023 at 11:08 UTC