Zulip Chat Archive
Stream: metaprogramming / tactics
Topic: Incremental elaboration of commands/caching
Ghilain Bergeron (Dec 04 2025 at 09:20):
Hey everyone.
I have been toying around with an idea of a command generating multiple theorems, each of which is proven by hand by the user.
Here's the gist of it (very simplified to reflect the user experience problems that I am facing):
import Lean

open Lean Expr Elab

elab tk:"my_command " tacs:tacticSeq+ : command => do
  let thms : Array (Name × Expr) := #[
    (`x, mkConst ``True),
    (`y, mkConst ``True)
  ]
  -- Simulate doing some stuff before generating the theorems
  -- How could we cache this? And is it a good idea?
  IO.sleep 3000
  let mut i := 0
  let mut missing := 0
  while _h : i < thms.size do
    let ⟨n, thm⟩ := thms[i]
    if let .some tac := tacs[i]? then
      let thmDecl : Declaration := .thmDecl {
        name := n
        levelParams := []
        type := thm
        value := ← Command.liftTermElabM do
          let e ← Term.elabTerm (← `(term| by $tac)) (.some thm) (catchExPostpone := false)
          Term.synthesizeSyntheticMVarsNoPostponing
          instantiateMVars e
      }
      Command.liftCoreM <| addDecl thmDecl false
    else
      missing := missing + 1
    i := i + 1
  if missing > 0 then
    throwErrorAt tk "There still are goals to discharge"
  else if tacs.size > thms.size then
    throwErrorAt tacs[i]! "No more goals to discharge"

my_command
  {
    apply True.intro
  }
  {
    iterate 250 apply id
    apply True.intro
  }
Here's the crux of the problem: when editing one of the proof scripts after my_command, the whole command seems to be re-run, so every edit of a proof script results in a 3s slowdown.
Is there any way to "cache" what the command is doing (thereby paying the cost of what the command does only once, not in subsequent edits)?
Maybe the tactics could be separated into their own commands? But in that case, how would I go about reporting errors about missing tactics/too many tactics?
Tate Rowney (Dec 05 2025 at 16:16):
Hi! I'm not sure there's a convenient way to do this in general, since modifying any part of a command in the editor will recompute the entire thing. If you were ok with splitting it into multiple commands (like having a my_command before each of the tacticSeqs), you could probably save some time by sharing a value of a simple type between them, just by adding the cached value as a constant to the environment. For example, if you wanted to just store a string:
import Qq.Macro

open Lean Tactic Elab Term Expr Command Qq

def cache (value : CommandElabM String) (name : Name := `_cached_value) : CommandElabM String := do
  if let some decl := (← getEnv).find? name then
    match decl.value? with
    | some (lit (Literal.strVal s)) => return s
    | _ => throwError "wrong type :("
  else
    let val ← value
    let decl := Declaration.defnDecl {
      name := name
      type := q(String)
      value := (fun (s : String) => q($s)) val
      levelParams := []
      hints := default
      safety := DefinitionSafety.safe
    }
    liftCoreM <| addDecl decl
    return val
Then within your command you could just do something like...
let cached_value : String ← cache (do -- thing that takes a long time
  IO.sleep 3000
  return "value you want to cache")
...which would only compute it the first time around. You could also keep track of the value of missing and such by updating the environment similarly.
The problem with using this for more general types is that at the meta level you get the cached constant back as an Expr, so recovering its actual value (as with the Literal.strVal match above) gets to be kind of annoying, but definitely possible.
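If you also wanted to remember something like missing between runs, a Nat variant of the same trick could work. Here is an untested sketch along the same lines (cacheNat and _cached_nat are made-up names; it stores the value as a raw natural number literal so it can be read back with a plain match):

import Lean

open Lean Elab Command

-- Hypothetical Nat variant of the String cache above: store the value as a
-- raw nat literal constant, and read it back by matching on that literal.
def cacheNat (value : CommandElabM Nat) (name : Name := `_cached_nat) : CommandElabM Nat := do
  if let some decl := (← getEnv).find? name then
    match decl.value? with
    | some (.lit (.natVal n)) => return n
    | _ => throwError "wrong type :("
  else
    let val ← value
    let decl := Declaration.defnDecl {
      name := name
      type := mkConst ``Nat
      value := mkRawNatLit val
      levelParams := []
      hints := default
      safety := DefinitionSafety.safe
    }
    liftCoreM <| addDecl decl
    return val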
I hope this helps!
Mirek Olšák (Dec 06 2025 at 20:07):
Some time ago, I was looking into that here:
https://github.com/Human-Oriented-ATP/lean-humanproof/blob/main/HumanProof/Incremental.lean
Mirek Olšák (Dec 06 2025 at 20:09):
There should be a thread about it here; I am trying to find it.
Mirek Olšák (Dec 06 2025 at 20:12):
Mirek Olšák (Dec 06 2025 at 20:57):
Perhaps I should have made the PR to core myself; I tried to push @Jovan Gerbscheid to do it (as he has more contribution experience), but he was not too motivated :sweat_smile:
Mirek Olšák (Dec 06 2025 at 22:55):
Mirek Olšák said:
Some time ago, I was looking into that here:
https://github.com/Human-Oriented-ATP/lean-humanproof/blob/main/HumanProof/Incremental.lean
Let me know if it still works (we were doing this about half a year ago) and if it can fit your project. You should also look at IncrementalTest.lean and CustomEval.lean in that repository. I see that in your case you are not exactly in a proof environment, so I am not sure it works the same way, but it could.
Ghilain Bergeron (Dec 09 2025 at 14:09):
Tate Rowney said:
Hi! I'm not sure there's a convenient way to do this in general, since modifying any part of a command in the editor will recompute the entire thing. If you were ok with splitting it into multiple commands (like having a my_command before each of the tacticSeqs), you could probably save some time by sharing a value of a simple type between them, just by adding the cached value as a constant to the environment. For example, if you wanted to just store a string:
[...]
Unfortunately, splitting the command into multiple subcommands doesn't seem possible to me. Since the command generates some theorems internally but waits for the user's proof scripts to actually add them to the environment (via addDecl), the user may simply never write any tacticSeqs afterwards, and so have nothing to prove and get the false sense that everything is fine (but it is not: the internal theorems have not been proven, and in fact not even generated!).
Maybe the UX of the command isn't the right one for this job (at least in Lean), but I can't think of another one that would solve all issues that I have (while still forcing users to discharge all internally generated theorems).
Ghilain Bergeron (Dec 09 2025 at 14:12):
Mirek Olšák said:
Mirek Olšák said:
Some time ago, I was looking into that here:
https://github.com/Human-Oriented-ATP/lean-humanproof/blob/main/HumanProof/Incremental.lean
Let me know if it still works (we were doing this about half a year ago) and if it can fit your project. You should also look at IncrementalTest.lean and CustomEval.lean in that repository. I see that in your case you are not exactly in a proof environment, so I am not sure it works the same way, but it could.
I am unsure how to apply this solution to my problem, unfortunately. Within a tactic sequence, the snapshot seems to "simply" check whether the syntax of a tactic changed, and then either re-execute it or replay the saved state.
In my case, the syntax of the whole command changes after even a simple edit in one of the proof scripts, so the whole command is replayed anyway.
I also tried to trick Lean by registering an environment extension (just as you did), but have not been successful at all (maybe because of the issue above?), quite sadly. My initial attempt with a simple IO.Ref did not succeed either.
Mirek Olšák (Dec 09 2025 at 14:53):
Can you split your code so that you have separate functions?
- One that does all the preprocessing and saves it into an environment extension
- One that processes a single tactic (possibly interacting with the environment extension)
Honestly, to me this is all also a bit of black magic; @Sebastian Ullrich may be the only one who properly understands what is going on. On the other hand, I don't see a fundamental reason why my approach should not work.
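Something along these lines, maybe (untested sketch, all names made up; a persistent environment extension plus one helper for each of the two steps):

import Lean

open Lean Elab Command

-- Hypothetical extension carrying the preprocessing result
-- (here: the generated goal names and statements).
initialize pendingGoalsExt : SimplePersistentEnvExtension (Name × Expr) (Array (Name × Expr)) ←
  registerSimplePersistentEnvExtension {
    name := `pendingGoalsExt
    addEntryFn := Array.push
    addImportedFn := fun as => as.foldl (· ++ ·) #[]
  }

-- The expensive preprocessing: run once by a dedicated command,
-- with the results saved into the extension.
def runPreprocessing : CommandElabM Unit := do
  IO.sleep 3000
  modifyEnv (pendingGoalsExt.addEntry · (`x, mkConst ``True))
  modifyEnv (pendingGoalsExt.addEntry · (`y, mkConst ``True))

-- Reads the saved goals back; the per-tactic processing can use this
-- without paying for the preprocessing again.
def pendingGoals : CommandElabM (Array (Name × Expr)) := do
  return pendingGoalsExt.getState (← getEnv)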
Ghilain Bergeron (Dec 10 2025 at 12:57):
Mirek Olšák said:
Can you split your code so that you have separate functions?
- One that does all the preprocessing and saves it into an environment extension
- One that processes a single tactic (possibly interacting with the environment extension)
Yes, this is basically what I ended up doing. Instead of a single my_command, I have two commands: one that does the heavy lifting and saves everything in an EnvExtension, and the other that simply tries to discharge all goals (read from the EnvExtension) with the given tactic sequences.
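In sketch form, the split looks roughly like this (made-up names, reduced to the toy example from my first message; the extension is the same idea as in the sketch above, repeated so this is self-contained):

import Lean

open Lean Expr Elab Command

-- Persistent extension carrying the generated goals from the first command
-- to the second.
initialize generatedThmsExt : SimplePersistentEnvExtension (Name × Expr) (Array (Name × Expr)) ←
  registerSimplePersistentEnvExtension {
    name := `generatedThmsExt
    addEntryFn := Array.push
    addImportedFn := fun as => as.foldl (· ++ ·) #[]
  }

-- Command 1: the heavy lifting, paid only when this command itself is edited.
elab "my_command_prepare" : command => do
  IO.sleep 3000
  modifyEnv (generatedThmsExt.addEntry · (`x, mkConst ``True))
  modifyEnv (generatedThmsExt.addEntry · (`y, mkConst ``True))

-- Command 2: discharges the saved goals with the user's tactic scripts.
elab tk:"my_command_discharge " tacs:tacticSeq+ : command => do
  let thms := generatedThmsExt.getState (← getEnv)
  let mut i := 0
  let mut missing := 0
  while _h : i < thms.size do
    let ⟨n, thm⟩ := thms[i]
    if let .some tac := tacs[i]? then
      let value ← liftTermElabM do
        let e ← Term.elabTerm (← `(term| by $tac)) (.some thm) (catchExPostpone := false)
        Term.synthesizeSyntheticMVarsNoPostponing
        instantiateMVars e
      liftCoreM <| addDecl (.thmDecl { name := n, levelParams := [], type := thm, value := value })
    else
      missing := missing + 1
    i := i + 1
  if missing > 0 then
    throwErrorAt tk "There still are goals to discharge"
  else if tacs.size > thms.size then
    throwErrorAt tacs[i]! "No more goals to discharge"

With this split, editing the tactic blocks should only re-elaborate my_command_discharge, so the expensive preprocessing is not paid again on every edit (even though the discharge command itself still re-runs as a whole).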
This is still a bit slow on big proof scripts, but I may be able to reuse some of your code so that each tactic sequence gets processed incrementally!
Last updated: Dec 20 2025 at 21:32 UTC