Zulip Chat Archive

Stream: general

Topic: Has anyone tried making a "grammar" out of known proofs?


Daniel Donnelly (Apr 24 2021 at 09:21):

Out of all the work done on creating Lean proofs manually, has anyone yet created a state machine or a grammar that probabilistically describes "what comes next" in an elegant proof? If not, what is the closest concept that someone has worked on?

Here is a Qt Widgets C++ app I'm designing. The purpose is for automated theorem discovery (eventually, but that's way down the road :))

(attached screenshot: image.png)

It will build a state machine whose edges are regular expressions. So, for instance, matching a variable is done transparently / parametrically: when code uses the i-th occurring variable (as named by a coder), the system knows this, so if the state machine is used as a proof generator, it will know to use the i-th variable. Yes, this is very naive, and maybe it should be the i-th variable of the j-th type, or something, but I'm not at that part of the implementation yet, so don't judge me on this description.
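
Roughly this shape, as a toy Python sketch (the states, edges, and tactic-like tokens are all invented here for illustration, not taken from the actual app):

```python
import re

# Toy version of the idea: a state machine whose edges are regexes, with
# variables matched parametrically so the i-th occurring name is remembered.

EDGES = {
    # state -> list of (regex, next_state); both states and rules are made up
    "start":    [(re.compile(r"intro (\w+)"), "have_var")],
    "have_var": [(re.compile(r"apply (\w+)"), "done")],
}

def run(steps, state="start"):
    variables = []  # the i-th occurring variable, in order of appearance
    for step in steps:
        for regex, nxt in EDGES.get(state, []):
            m = regex.fullmatch(step)
            if m:
                if m.group(1) not in variables:
                    variables.append(m.group(1))
                state = nxt
                break
        else:
            return None  # no edge matched: this proof shape is rejected
    return state, variables

print(run(["intro h", "apply h"]))  # ('done', ['h'])
```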

Please be kind and only give positive or constructive feedback. Thanks.

Eric Wieser (Apr 24 2021 at 09:26):

If you haven't already heard of it, you may be interested in #lean-gptf, for which there is a dedicated stream.

Daniel Donnelly (Apr 24 2021 at 09:35):

Here is a video on GPTF: https://www.youtube.com/watch?v=zgGYyGkmWSw

Jason Rute (Apr 24 2021 at 10:11):

Also, more generally check out the stream #Machine Learning for Theorem Proving

Jason Rute (Apr 24 2021 at 10:14):

@Daniel Donnelly I recommend instead watching the newer video. It has the results in the paper and a demo of gptf. https://m.youtube.com/watch?v=EXpmbAfBNnw

Jason Rute (Apr 24 2021 at 10:16):

Also, here is the paper: https://arxiv.org/abs/2102.06203

Jason Rute (Apr 24 2021 at 10:26):

GPTf is the closest thing to what you have described, since it is a pure language model predicting the tactic that best matches a given goal. However, I should mention that @Daniel Selsam is working on a very general tactic framework that can combine tactics to automatically discover proofs. I have a lot of trouble describing his work because I don't fully understand it (and it isn't all published), but I'll post a list of resources.

Jason Rute (Apr 24 2021 at 10:29):

I’m on my phone so I can’t link the exact list of resources, but it is in the middle of #Machine Learning for Theorem Proving>reinforcement learning

Joachim Hauge (Apr 24 2021 at 10:53):

Wait does the bot prove what the user asks it to prove or does it have its own desires?

Jason Rute (Apr 24 2021 at 11:01):

The former. This is not science fiction.

Jason Rute (Apr 24 2021 at 11:02):

(Although, a better bot would be able to prove that a false goal is actually false, and suggest useful modifications which would make it true.)

Daniel Donnelly (Apr 24 2021 at 11:09):

What about groupoid approaches to language transformation? I thought one of type theory's tenets is that "mathematics is a groupoid", or something like that. I know for a fact that neural nets can't outperform a naive algorithm for at least certain problems, such as integer factorization. That raises the question: why not use a deterministic algorithm for a generative language model?

Joachim Hauge (Apr 24 2021 at 11:19):

Jason Rute said:

The former. This is not science fiction.

No, I mean they say the bot continues what was already written, so I wasn't sure whether it has to inhabit a given type or whether it picks its own goals.

Daniel Donnelly (Apr 24 2021 at 11:24):

@Joachim Hauge ideally it does both things, but when you're starting out coding such a thing, you're not sure which comes first, the chicken or the egg.

Jason Rute (Apr 24 2021 at 11:28):

@Joachim Hauge Sorry, I think I overreacted to what I thought was a question about Skynet. :) I would encourage you to look at the talk (the one I posted above). The objective of the agent is, for a given goal state (local context plus goal), to predict the full tactic command string that comes next. It does this by next-word prediction, using the same tools commonly used by machine translation models.
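
Schematically, something like this (a toy Python sketch: the `GOAL ... PROOFSTEP` prompt shape follows the paper, but the "model" below just replays a canned answer; the real system is a large Transformer with a proper tokenizer and search on top):

```python
# Autoregressive decoding of a tactic string from a pretty-printed goal,
# one token at a time, by next-word prediction.

class ToyModel:
    END = "<end>"
    canned = ["intro", "n,", "simp", END]  # fake "predictions"

    def __init__(self):
        self.step = 0

    def most_likely_next_token(self, tokens):
        tok = self.canned[self.step % len(self.canned)]
        self.step += 1
        return tok

def predict_tactic(model, goal_state, max_len=128):
    tokens = f"GOAL {goal_state} PROOFSTEP".split()
    n_prompt = len(tokens)
    while len(tokens) - n_prompt < max_len:
        nxt = model.most_likely_next_token(tokens)  # next-word prediction
        if nxt == model.END:
            break
        tokens.append(nxt)
    return " ".join(tokens[n_prompt:])

print(predict_tactic(ToyModel(), "⊢ ∀ (n : ℕ), n + 0 = n"))  # intro n, simp
```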

Jason Rute (Apr 24 2021 at 11:40):

@Daniel Donnelly Theorem proving is hard. It essentially boils down to a very complicated search procedure, and you need good heuristics, tricks, algorithms, etc. to guide this search. There is a vast literature out there on hand-written deterministic ATP algorithms. Most are for first-order logic, but there are hammer systems which can turn Lean's dependent type theory into first-order logic. Even then, one starts to realize that theorem proving in practice is as much an art as a science. One has to choose which few previously proven facts, out of the thousands in the library, to pass to the ATP. This is called premise selection, and to my knowledge it was where ML was first used in theorem proving. We have since discovered that ML works pretty well as an end-to-end solution for theorem proving. Of course, even GPTf relies on carefully written hand-crafted tactics. Other approaches, such as Daniel Selsam's, have the user supply high-level ideas and tactics and let the computer fill in the rest. I think time will tell which approaches are best, but I'm more optimistic about machine learning than about completely deterministic hand-written ATP systems for doing high-level math. Factorization is a special case and is not what most of interactive theorem proving is like.
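
As a toy illustration of what premise selection is doing (everything here is made up; real systems use learned rankers over much richer features than token overlap):

```python
# Toy premise selection: rank library facts by token overlap with the goal.

def tokens(s):
    return set(s.replace("(", " ").replace(")", " ").split())

def rank_premises(goal, library, k=2):
    g = tokens(goal)
    score = lambda lemma: len(g & tokens(lemma)) / len(tokens(lemma))
    return sorted(library, key=score, reverse=True)[:k]

library = [
    "add_comm : a + b = b + a",
    "mul_comm : a * b = b * a",
    "add_assoc : a + b + c = a + (b + c)",
]
print(rank_premises("x + y = y + x", library))  # add_comm ranks first
```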

Daniel Donnelly (Apr 24 2021 at 11:50):

@Jason Rute why are there no theorems on how to generate new theorems?

Scott Morrison (Apr 24 2021 at 11:51):

Depending on what you mean, the answer is either "there are, but they are boring", or "because it's art".

Daniel Donnelly (Apr 24 2021 at 11:57):

@Scott Morrison it is art, I agree. But there is also an art to creating new art.

Relatedly: suppose you took a reduced single-string context-free grammar of a statement. Not the smallest grammar (smallest grammars are likely hard to compute); you just reduce the statement string, so that for example `S -> aaaaaa` gets reduced to the grammar `G = {S -> AAA, A -> aa}`. In a reduced grammar, every substring occurs uniquely; you cannot find two occurrences of `ab`, for example, since "reduced" means a rule `B -> ab` would have been introduced to reduce the string. Now that each substring occurs uniquely, a morphism of the statement (a morphism of the grammar) can be well-defined as a substring substitution, for example `x + y` => `y + x`. Every substitution is invertible, so what you get is a groupoid where objects are single-string grammars and morphisms are substring substitutions. Just a thought...

The substitution maps themselves can be thought of as grammars, though not context-free ones. So what you get is a nice recursive use of grammars: you can have morphisms of the morphisms of grammars, and so on...
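
Concretely, one simple reading of "reduce" is to repeatedly abstract the most frequent adjacent pair, Re-Pair style. A toy Python sketch of that reading (one possible formalization, not a canonical definition):

```python
def reduce_grammar(s):
    # Replace the most frequent non-overlapping adjacent pair with a fresh
    # nonterminal, repeating until every pair occurs at most once.
    rules, seq = {}, list(s)
    names = iter("ABCDEFGHIJKLMNOPQRSTUVWXYZ")

    def occurrences(pair):
        # count left-to-right, skipping overlaps
        i = n = 0
        while i + 1 < len(seq):
            if (seq[i], seq[i + 1]) == pair:
                n, i = n + 1, i + 2
            else:
                i += 1
        return n

    while True:
        pairs = set(zip(seq, seq[1:]))
        best = max(pairs, key=occurrences, default=None)
        if best is None or occurrences(best) < 2:
            break
        nt = next(names)
        rules[nt] = list(best)
        out, i = [], 0
        while i < len(seq):
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == best:
                out.append(nt)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
    rules["S"] = seq
    return rules

print(reduce_grammar("aaaaaa"))  # {'A': ['a', 'a'], 'S': ['A', 'A', 'A']}
```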

@Jason Rute what are your thoughts on these grammar groupoids?

Jason Rute (Apr 24 2021 at 12:03):

@Daniel Donnelly There are a lot of interesting decision procedures for various fragments of math, like additive groups. Sometimes, with a lot of work, these get turned into tactics like `omega`. This takes a lot of effort to do. For some problems, like integer reasoning, the effort is worth it. For others, it is better to let a computer automatically discover the rules. (I'm also excited for a future where the computer will automatically discover the algorithm for the rules, but that is further off.)
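
For example, in Lean 3 with mathlib (a minimal sketch; `omega` decides linear arithmetic goals over ℤ and ℕ):

```lean
import tactic.omega

-- a linear integer-arithmetic goal discharged by the decision procedure
example (a b : ℤ) (h : 2 * a < b) : 2 * a + 1 ≤ b := by omega
```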

Daniel Donnelly (Apr 24 2021 at 12:07):

@Jason Rute the time is now. Time to crack this boiled egg open :D

Jason Rute (Apr 24 2021 at 12:12):

@Daniel Donnelly As for your grammar, it looks like a mathematical formalism of rewriting or something similar, but in general rewriting is really hard. There are special cases where we have explicit deterministic procedures for rewriting one statement into another exactly (like integer addition), but in general it is more of an art. Does your theory provide ways to discover the order in which to apply rewriting rules in practice?

Daniel Donnelly (Apr 24 2021 at 12:14):

@Jason Rute the shortest route is usually preferred, as are shorter, more elegant proofs. Good question though; working on it!

Daniel Donnelly (Apr 24 2021 at 12:25):

Firstly, you transform the problem statement into variables that don't collide with any variables of the lemmas you want to use. Then say you're given, for the sake of simplicity, not a full statement but the expression `S -> x + x + x + x`. Reduced, that looks like `G = {S -> A + A, A -> x + x}`. Apply a variable substitution into the variables of the lemmas, etc.: `G' = {S -> A + A, A -> a + a}`. A notation definition says that a + a = 2a. Treat that as a grammar transformation to get `G'' = {S -> A + A, A -> 2a}`. Then apply the inverse of a <-> x to get back into x-land: `G''' = {S -> A + A, A -> 2x}`. Now do the same thing with the A's and you get `G'''' = {S -> 2A, A -> 2x}`, but that grammar should be expanded since there is only one occurrence of A. You get `G5 = {S -> 22x}` (i.e. 2·2·x), and once literals get computed you get `G6 = {S -> 4*x}`. That doesn't answer your question though, so let me think for a bit.
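
The same derivation, phrased as ordered string rewriting (a toy Python sketch; the two rules are hand-specialized instances of the a + a = 2a notation, and choosing such rules automatically is exactly the open question):

```python
# Each rule is a (pattern, replacement) pair, applied exhaustively in order;
# the rule order encodes the proof strategy.

RULES = [
    ("x + x", "2*x"),      # a + a = 2a, at the leaves
    ("2*x + 2*x", "4*x"),  # same notation one level up, literals computed
]

def rewrite(expr, rules):
    changed = True
    while changed:
        changed = False
        for pattern, replacement in rules:
            if pattern in expr:
                expr = expr.replace(pattern, replacement)
                changed = True
    return expr

print(rewrite("x + x + x + x", RULES))  # 4*x
```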

Joachim Hauge (Apr 24 2021 at 12:50):

Daniel Donnelly said:

Jason Rute why are there no theorems on how to generate new theorems?

You can certainly enumerate all the true statements of reasonable logical theories. The issue, I think, is identifying when you have redundancy in your "database" of true statements.
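
For instance, a toy enumerator by forward chaining (a Python sketch over made-up propositional "statements"; the hard part, recognizing non-obvious redundancy, is precisely what it does not do):

```python
# Close a set of statements under modus ponens, using a set to suppress
# exact duplicates. Equivalent-but-differently-phrased facts slip through.

def enumerate_theorems(axioms, rounds=10):
    known = set(axioms)
    for _ in range(rounds):
        new = set()
        for imp in known:
            if " -> " in imp:
                lhs, rhs = imp.split(" -> ", 1)
                if lhs in known and rhs not in known:
                    new.add(rhs)
        if not new:
            break
        known |= new
    return known

print(sorted(enumerate_theorems({"p", "p -> q", "q -> r"})))
# ['p', 'p -> q', 'q', 'q -> r', 'r']
```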

Eric Wieser (Apr 24 2021 at 12:51):

You could argue that mathlib is full of "theorem-generating" statements; for instance, `add_comm` generates theorems about commutativity for the naturals, integers, reals, ...
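
In Lean 3 terms, something like:

```lean
import data.real.basic

-- one lemma, a family of theorems: `add_comm` specializes at each type
example (a b : ℕ) : a + b = b + a := add_comm a b
example (a b : ℤ) : a + b = b + a := add_comm a b
example (a b : ℝ) : a + b = b + a := add_comm a b
```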

Daniel Donnelly (Apr 24 2021 at 14:46):

Take as an example the statement that the group identity is unique. We have `(e : ident G) := e : G, forall x : G, x = e*x, x = x*e`, and similarly for `(d : ident G)`. So the statement `S : (e : ident G) (d : ident G) => e = d` first gets its involved definitions expanded. Then apply grammar reduction to get `g = {S -> (e I e J e) (d I dJd), B -> forall x H, C -> x = , D -> B C, F -> "H," , H -> :G, I -> F D, J -> x, Cx}`. Now, from the reduced grammar, computing the statement symmetry d <-> e is more efficient. But how do we conclude that d = e? That's what I'm stuck on. I think we should treat each equality involved as a grammar transformation; that's my next guess.
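
For comparison, the underlying mathematical fact is short when stated directly. A Lean 3 sketch, assuming nothing but a multiplication on `G`:

```lean
-- if e is a left identity and d is a right identity for the same `*`,
-- then e = d: evaluate each hypothesis at the other element
example {G : Type*} [has_mul G] (e d : G)
  (he : ∀ x : G, e * x = x) (hd : ∀ x : G, x * d = x) : e = d :=
(hd e).symm.trans (he d)
```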

Jason Rute (Apr 24 2021 at 15:23):

@Daniel Donnelly you may be interested in https://en.m.wikipedia.org/wiki/Knuth–Bendix_completion_algorithm

Jason Rute (Apr 24 2021 at 15:25):

(Note that for many special cases there are good algorithms, but in general the word problem for groups is undecidable.)

Kevin Buzzard (Apr 24 2021 at 16:41):

@Daniel Donnelly to stop Zulip eating all your `*`s and turning them into emphasis, which makes your mathematical text unreadable, enclose all code in single backticks (if inline) or triple backticks (if a code block).

Johan Commelin (Apr 24 2021 at 16:51):

See also #backticks


Last updated: Dec 20 2023 at 11:08 UTC