## Stream: general

### Topic: Lean GPT-f beta

#### Jesse Michael Han (Jan 18 2021 at 16:38):

Hi everyone! We (the LeanStep team) are excited to announce that we're releasing the interactive gptf tactic for beta testing by the Lean community: https://github.com/jesse-michael-han/lean-gptf. It was demo'd (link, link) during Jason's talk and is already responsible for some commits to mathlib (link, link). We'd love to hear your feedback on how to make this tool better for Lean users.

To participate in the beta, please register here to receive an OpenAI API key, and head over to the new #lean-gptf stream for discussion/troubleshooting/questions.
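For readers who haven't seen the tactic, a minimal sketch of how it is invoked inside a Lean 3 proof (the `gptf` tactic name comes from the repository linked above; the import path and the example goal here are illustrative assumptions, not taken from the announcement):

```lean
-- Sketch: calling the gptf tactic on an open goal (Lean 3 syntax).
-- The tactic queries the model and displays candidate next tactic
-- steps for the current goal, which you can then copy into the proof.
import tactic.gptf  -- assumed import path; check the repo's README

example (a b c : ℕ) : a + b + c = a + c + b :=
begin
  gptf  -- prints suggestions such as `ring` for this goal
end
```

In interactive use the suggestions appear in the goal window, and the user picks one to replace the `gptf` call.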

#### Jesse Michael Han (Jan 28 2021 at 06:12):

We found a way to train a noticeably better (and smaller) model, which is responsible for a dozen new PRs. The lean-gptf beta now supports using this new model by default. This one seems to really like chaining together lemma applications and tactics with semicolons, sometimes to surprising effect.
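The semicolon chaining mentioned here refers to Lean 3's `t₁; t₂` combinator, which runs `t₂` on every goal produced by `t₁`. A small hand-written illustration of the idiom (this specific example is mine, not a model output):

```lean
-- `split` produces two goals (for p and for q);
-- `; assumption` then closes both of them in one step.
example (p q : Prop) (hp : p) (hq : q) : p ∧ q :=
begin
  split; assumption
end
```

A single suggested line of this form can close several goals at once, which is why chained predictions are notable.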

If you haven't already, come sign up for an API key and join the beta to check it out!

#### Jesse Michael Han (Feb 15 2021 at 19:44):

We describe in a new paper our novel training methodology, instrumentation of Lean, and evaluation results on mathlib. We introduce PACT, a method for co-training on abundant data (e.g. synthetic tactic states, apply/exact steps, elaboration, etc.) extracted from proof terms in mathlib. We show that PACT boosts theorem proving performance from 32% to 48% on a held-out test split of theorems from mathlib. We hit a success rate of 37% on a collection of 2.8K theorems added after we last extracted our training data (between commit 33483a3de6 and 95454452f6). We also verified that PACT improves the ability of our models to successfully predict multiple tactic steps at once using the semicolon combinator, which is remarkable because the semicolon idiom doesn't show up in the PACT data at all. The new models are responsible for another dozen or so PRs, and so far there are 36 lean-gptf co-authored commits to mathlib.

The new models are available in the lean-gptf beta. If you haven't already, come sign up, subscribe to the #lean-gptf stream, and help us develop the future of interactive theorem proving! :robot:

#### Rob Lewis (Feb 15 2021 at 20:04):

Nice paper!

> Lean’s mathlib is one of the most active open-source software projects in the world

Surely this is a bit of an exaggeration? :smile:

#### Johan Commelin (Feb 15 2021 at 20:10):

I've wondered whether there are more detailed stats on how mathlib ranks here. I know that kubernetes is considered large, fast-moving, and fast-growing. It seems that their main repo grows at a rate about 5x to 7x that of mathlib.

#### Jesse Michael Han (Feb 15 2021 at 20:14):

good point, if we change "software projects" to "formalized mathematics software projects" then it should almost certainly be true :smile:

#### Rob Lewis (Feb 15 2021 at 20:17):

There are so few, "one of" might be true by default, hah. But yeah, I'd buy that

#### Rob Lewis (Feb 15 2021 at 20:17):

The AFP grows at ~300k loc/year: https://www.isa-afp.org/statistics.html

#### Rob Lewis (Feb 15 2021 at 20:18):

mathlib did ~260k in the past year

#### Rob Lewis (Feb 15 2021 at 20:19):

But in general there are so many variables in how you measure "growth" that I don't know how to make a fair comparison

Last updated: May 08 2021 at 04:14 UTC