Zulip Chat Archive
Stream: general
Topic: Are LLMs being trained with knowledge of tactic state?
Dan Abramov (Feb 22 2025 at 20:58):
This isn't in reference to any specific LLM — just curious what the state of the art is.
I've tried using LLMs a little bit (only online ones like Claude Sonnet and o3-mini-high) and they haven't been great. I'm assuming their knowledge of Lean is kind of general, based on whatever's available on the internet.
There are a couple of things I'm curious about:
- Do we know of any LLMs being specifically trained or fine-tuned on Lean code?
- Do such LLMs get the "tactic state" as one of their inputs?
The second question is particularly interesting to me. When I'm working on a proof in Lean, seeing "what's in the right panel" at every line is essential for my reasoning:
[screenshot: the Lean infoview showing the tactic state for the line under the cursor]
I might be wrong here, but intuitively it seems that LLM output would be better if the training data included the tactic state at every line of the proofs it was trained on.
Does my assumption make sense? Has anyone tried training LLMs on Lean code? Does including the tactic state as I described seem like a good idea, and has anyone tried that?
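For concreteness, here is a minimal sketch of what such training data could look like, assuming a JSONL prompt/completion fine-tuning format; the `[GOAL]`/`[TACTIC]` delimiters, the file name, and the example states are illustrative choices, not taken from any particular project:

```python
import json

# One record per tactic step, pairing the goal state shown in Lean's
# infoview with the tactic the proof author wrote next (toy examples).
steps = [
    {"state": "n : ℕ\n⊢ n + 0 = n", "tactic": "simp"},
    {"state": "⊢ ∀ (n : ℕ), n + 0 = n", "tactic": "intro n"},
]

# Serialize each step as a prompt/completion pair for supervised fine-tuning.
with open("tactic_pairs.jsonl", "w", encoding="utf-8") as f:
    for step in steps:
        record = {
            "prompt": f"[GOAL]\n{step['state']}\n[TACTIC]\n",
            "completion": step["tactic"],
        }
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
```

In practice such (state, tactic) pairs are extracted automatically from compiled proofs (e.g. from Mathlib) rather than written by hand.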
Kyle Miller (Feb 23 2025 at 01:57):
There's a #Machine Learning for Theorem Proving channel you might be interested in.
Kyle Miller (Feb 23 2025 at 02:00):
I'm working with people who have been training LLMs that guess tactics to use based on the current tactic state. I don't have anything to point you toward yet, though.
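To illustrate the general shape of that idea (a sketch, not a description of any specific project): a state-conditioned tactic model is typically wrapped in a proof search loop. Here `suggest_tactics` and `apply_tactic` are hypothetical stand-ins for the model call and for a Lean interaction layer:

```python
def suggest_tactics(state: str, n: int = 4) -> list[str]:
    """Stand-in for the model call: return up to n candidate tactics."""
    return ["simp", "intro n", "omega", "ring_nf"][:n]

def apply_tactic(state: str, tactic: str) -> str | None:
    """Stand-in: run `tactic` on `state` in Lean; return the new goal state,
    "" when all goals are closed, or None if the tactic fails."""
    return None  # replace with a real Lean interaction layer

def prove(state: str, depth: int = 8) -> list[str] | None:
    """Depth-first search over model-suggested tactics."""
    if depth == 0:
        return None
    for tac in suggest_tactics(state):
        new_state = apply_tactic(state, tac)
        if new_state == "":          # no goals remain: proof found
            return [tac]
        if new_state is not None:    # tactic applied: keep searching
            rest = prove(new_state, depth - 1)
            if rest is not None:
                return [tac] + rest
    return None
```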
Eric Taucher (Feb 23 2025 at 12:48):
This presentation by @Jason Rute for Lean Together 2025 is worth watching.
The Q&A topic is here.
Last updated: May 02 2025 at 03:31 UTC