Zulip Chat Archive

Stream: Is there code for X?

Topic: calcify tactic?


Joachim Breitner (Sep 17 2023 at 15:47):

I really like calc proofs, they are pretty, easy to understand and easier to maintain (as you know what step broke when you change something). But they are sometimes a bit tedious to write out.

Do we have a tactic that works a bit like show_term, but produces a nice sequence of calc steps?
(I could imagine a tactic that mangles the proof term to right-associate all trans rules, distributes cong into trans, and finally writes out the top-level sequence of trans steps as explicit calc steps.)
If not, who would find it useful?

Eric Wieser (Sep 17 2023 at 15:50):

What do you mean by "distributes cong into trans"?

Joachim Breitner (Sep 17 2023 at 15:51):

something like congr_arg f (Eq.trans h1 h2) = Eq.trans (congr_arg f h1) (congr_arg f h2).
Or in prose: A proof that takes two rewriting steps under some expression can be turned into a proof that takes two top-level rewriting steps, each under some expression.
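For concreteness, that distribution rule is provable as stated (using core's congrArg); by definitional proof irrelevance both sides are the same proof, so rfl closes it:

example {α β : Sort _} (f : α → β) {a b c : α} (h₁ : a = b) (h₂ : b = c) :
    congrArg f (Eq.trans h₁ h₂) = Eq.trans (congrArg f h₁) (congrArg f h₂) := rfl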

Joachim Breitner (Sep 17 2023 at 15:54):

Of course, things can become arbitrarily complicated when the relation isn’t equality, or when there are dependent motives, I expect… but I could imagine it would work well for rewriting simp or rw calls into more explicit forms.
Very much anti-golfing, of course :-)

Joachim Breitner (Dec 27 2023 at 17:27):

I came back to this a bit, and just wanted to jot down some notes, mostly for my own reference later.

The bulk of the work seems to be to massage the proof terms produced by simp and rw into a long chain of Eq.trans. I started doing that with a manual traversal of the Expr syntax tree. This works somewhat:

/--
info: Try this: calc
  0 + n
  _ = n := (Nat.zero_add n)
  _ = 1 * n := (Eq.symm (Nat.one_mul n))
  _ = 1 * 1 * n := Eq.symm (congrFun (congrArg HMul.hMul (Nat.mul_one 1)) n)
-/
#guard_msgs in
example (n : Nat) : 0 + n = 1 * 1 * n := by
  calcify simp

but writing such code is rather tedious and error-prone.
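To give an idea of the kind of traversal involved, here is a rough sketch (not the actual calcify code) that handles only the re-association of nested Eq.trans; distributing congrArg over trans, Eq.mpr wrappers, etc. would be further cases of the same shape:

import Lean
open Lean

/-- Hypothetical sketch: right-associate nested `Eq.trans` applications so that
the top level of a proof term becomes a flat chain `h₁.trans (h₂.trans …)`,
ready to be printed as `calc` steps. -/
partial def rightAssocTrans (e : Expr) : Expr :=
  if e.isAppOfArity ``Eq.trans 6 then
    -- Eq.trans {α} {a b c} (h₁ : a = b) (h₂ : b = c) : a = c
    let args := e.getAppArgs
    let h₁ := rightAssocTrans args[4]!
    let h₂ := rightAssocTrans args[5]!
    if h₁.isAppOfArity ``Eq.trans 6 then
      -- re-associate: (g₁.trans g₂).trans h₂  ↝  g₁.trans (g₂.trans h₂)
      let gs := h₁.getAppArgs
      let inner := mkAppN e.getAppFn #[args[0]!, gs[2]!, args[2]!, args[3]!, gs[5]!, h₂]
      rightAssocTrans <| mkAppN e.getAppFn #[args[0]!, gs[1]!, gs[2]!, args[3]!, gs[4]!, inner]
    else
      mkAppN e.getAppFn #[args[0]!, args[1]!, args[2]!, args[3]!, h₁, h₂]
  else
    e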

It would be nice to have a more convenient engine for applying rewrites to (proof) terms. We almost have that in the form of simp, and wouldn’t it be nice if I could just declare a bunch of rules like

@[simp]
theorem un_of_eq_true (α : Type _) (a b : α) (h : a = b) :
    of_eq_true (Eq.trans (congrFun (congrArg Eq h) b) (eq_self b)) = h := rfl

and then run simp on a proof term to simplify it.
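Another rule of the same flavor (hypothetical name, only a sketch) would drop a trailing Eq.refl from a transitivity chain; again definitional proof irrelevance makes it rfl:

@[simp]
theorem un_trans_refl (α : Type _) (a b : α) (h : a = b) :
    Eq.trans h (Eq.refl b) = h := rfl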

But naturally, simp does not normally work on proof terms, and usually there is little point in doing so, so it refuses to touch them. I applied a little hack (branch joachim/simp-prop) of the form

--- a/src/Lean/Meta/Tactic/Simp/Main.lean
+++ b/src/Lean/Meta/Tactic/Simp/Main.lean
@@ -273,7 +273,7 @@ def getSimpLetCase (n : Name) (t : Expr) (b : Expr) : MetaM SimpLetCase := do
 partial def simp (e : Expr) : M Result := withIncRecDepth do
   checkSystem "simp"
   let cfg ← getConfig
-  if (← isProof e) then
+  if ! cfg.simpProofs && (← isProof e) then
     return { expr := e }
   if cfg.memoize then
     if let some result := (← get).cache.find? e then

With that flag set, it at least tries to simplify a proof term like

of_eq_true
         (Eq.trans (congrFun (congrArg Eq (Eq.trans (congrArg (HAdd.hAdd 0) (Nat.zero_add x)) (Nat.zero_add x))) x)
          (eq_self x))

but fails with

[Meta.Tactic.simp.discharge] un_of_eq_true, failed to discharge hypotheses
      0 + (0 + x) = x

I believe the problem lies somewhere between this code in tryTheoremCore

private def tryTheoremCore (lhs : Expr) (xs : Array Expr) (bis : Array BinderInfo) (val : Expr) (type : Expr) (e : Expr) (thm : SimpTheorem) (numExtraArgs : Nat) (discharge? : Expr → SimpM (Option Expr)) : SimpM (Option Result) := do
  let rec go (e : Expr) : SimpM (Option Result) := do
    if (← isDefEq lhs e) then
      unless (← synthesizeArgs thm.origin xs bis discharge?) do
        return none

and the fact that isDefEq does not assign some proof metavariables (according to the docs of docs#Lean.Meta.isAbstractedUnassignedMVar), because despite

[Meta.isDefEq]  of_eq_true
      (Eq.trans (congrFun (congrArg Eq ?h) ?b)
        (eq_self
          ?b)) =?= of_eq_true
      (Eq.trans (congrFun (congrArg Eq (Eq.trans (congrArg (HAdd.hAdd 0) (Nat.zero_add x)) (Nat.zero_add x))) x)
        (eq_self x)) 

it still thinks it has to solve the ?h hypothesis of the lemma, when I would expect the isDefEq to assign that metavariable.

Enough tinkering for now, maybe I’ll return to this in a few months when maybe I understand all this better.

Mario Carneiro (Dec 27 2023 at 17:46):

it still thinks it has to solve the ?h hypothesis of the lemma, when I would expect the isDefEq to assign that metavariable.

isDefEq doesn't assign proof variables, this is a known issue

Mario Carneiro (Dec 27 2023 at 17:49):

On the one hand it makes sense because of proof irrelevance: the answer is true without even assigning the variable. But on the other hand, assigning proof metavariables is also good to do, since otherwise they come back as subgoals and the like. And because of proof irrelevance, the only downside to assigning a proof metavariable is that it may make the metavariable depend on a local constant you did not want it to depend on.
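A minimal illustration of that proof irrelevance: any two proofs of the same proposition are definitionally equal, so the unifier can answer positively without committing to an assignment for a proof metavariable.

example (p : Prop) (h₁ h₂ : p) : h₁ = h₂ := rfl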

Joachim Breitner (Dec 27 2023 at 18:25):

Thanks!

Joachim Breitner (Dec 27 2023 at 18:39):

Are there other use cases that would benefit from assigning proof meta variables and might warrant a flag for this behavior?

Joachim Breitner (Jan 02 2024 at 11:35):

I tried to make a #mwe (with an unpatched Lean, so simplifying non-Prop terms) to see how simp lemmas are applied when proof metavariables aren’t assigned, but at least in this example they are:

opaque P : Nat → Prop
axiom Q (n : Nat) (h : P n) : Nat
axiom simp_lemma (n : Nat) (h : P n) : Q n h = n

set_option trace.Meta.Tactic.simp true
set_option trace.Meta.isDefEq true
set_option trace.Meta.isDefEq.assign true
example (h : P 0) (k : Nat) : Q 0 h + k = k := by
  simp [simp_lemma]

prints

[Meta.isDefEq]  Q ?n ?h =?= Q 0 h 
  []  ?n =?= 0 
    [] ?n [assignable] =?= 0 [nonassignable]
    [assign]  ?n := 0 
  []  ?h =?= h 
    [] ?h [assignable] =?= h [nonassignable]
    [assign]  ?h := h 
      [beforeMkLambda] ?h [] := h
      [checkTypes]  (?h : P 0) := (h : P 0) 
        []  P 0 =?= P 0 
          []  0 =?= 0

So this example is too simple, compared to the one above, in some crucial way.

Joachim Breitner (Jan 02 2024 at 11:59):

Ah, indeed, the example is too simple; this is better. With

opaque P : Nat → Prop

axiom foo {n : Nat} (h : P n) : Nat
axiom P_of_succ {n : Nat} (h : P n) : P (n + 1)
axiom foo_eq {n : Nat} (h : P n) : foo (P_of_succ h) = 5
set_option trace.Meta.Tactic.simp true
set_option trace.Meta.isDefEq true
set_option trace.Meta.isDefEq.assign true
set_option pp.proofs true
example (h : P n) : foo (P_of_succ h) = 10 := by
  simp only [foo_eq]

the simp lemma is not applied, because it cannot discharge P n, even though that could fall out of

[Meta.isDefEq]  foo (P_of_succ ?h) =?= foo (P_of_succ h) 
  []  ?n + 1 =?= n + 1 

Is it correct to say that in some (rare?) cases not unifying proofs prevents useful things from happening, but that overall the benefits of not unifying them outweigh this cost?

Joachim Breitner (Mar 01 2024 at 16:40):

I now have a prototype at https://github.com/nomeata/lean-calcify

Isak Colboubrani (Mar 01 2024 at 21:34):

This is really nice!

I was thinking about requesting exactly this feature the other day (I had missed this thread). It's great to see it implemented!

I hope to see this feature incorporated into Mathlib in the future.

Joachim Breitner (Mar 02 2024 at 07:36):

If you play around with it and help find the flaws, that would be great!

Joachim Breitner (Mar 12 2024 at 22:39):

I added support for ite_cong, so that it can do

/--
info: Try this: calc
    if 0 + 1 * x = 0 then x + (2 * x + n) else 0 + n
    _ = if 0 + x = 0 then x + (2 * x + n) else 0 + n :=
    _ = if x = 0 then x + (2 * x + n) else 0 + n :=
    _ = if x = 0 then 0 + (2 * x + n) else 0 + n :=
    _ = if x = 0 then 0 + (2 * 0 + n) else 0 + n :=
    _ = if x = 0 then 0 + n else 0 + n :=
    _ = if x = 0 then n else 0 + n := (ite_congr (Eq.refl (x = 0)) (fun a => Nat.zero_add n) fun a => Eq.refl (0 + n))
    _ = if x = 0 then n else n := (ite_congr (Eq.refl (x = 0)) (fun a => Eq.refl n) fun a => Nat.zero_add n)
    _ = n := ite_self n
-/
#guard_msgs in
example (x n : Nat) : (if 0 + (1 * x) = 0 then x + ((2 * x) + n) else 0 + n) = n := by
  calcify (simp (config := {contextual := true}))

(I removed some of the proofs in this paste because they are quite verbose.)

The implementation is very unsatisfying; lots of code for just one congruence lemma, and ideally it should support general congruence lemmas. But that seems quite a tough metaprogramming puzzle…
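(For the record, a tiny exploratory experiment, not part of calcify: Lean can generate a congruence theorem for a given head symbol, e.g. ite, via Lean.Meta.mkCongrSimp?, which is presumably what a general treatment would start from.)

import Lean
open Lean Meta

-- Print the auto-generated congruence theorem for `ite` (exploratory sketch only).
#eval show MetaM Unit from do
  let some thm ← mkCongrSimp? (← mkConstWithLevelParams ``ite)
    | throwError "no congruence theorem found"
  IO.println (← ppExpr thm.type)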

Isak Colboubrani (Mar 20 2025 at 23:37):

Are there any plans to include calcify in Mathlib? I find this tactic incredibly useful, and I suspect it would be widely adopted—especially by newcomers like me who favor the readability of calc-style proofs.

Kim Morrison (Mar 21 2025 at 00:24):

I don't think it needs to be in mathlib. It just needs to be added as a dependency. I'm not sure how @Joachim Breitner feels about the state of the code, from memory it was a quick proof of concept and may not be ready for Mathlib to depend on. Or it might be? Joachim is pretty busy already, so I would guess that if anyone wants to spend some time polishing the implementation, and getting it ready for a state that downstream projects could reliably depend on, he would be very happy.

Joachim Breitner (Mar 21 2025 at 10:03):

Given that it’s nice when it works, but not a disaster when it doesn’t, I think it may already be useful.

I was just hoping it wouldn’t have to be a hard dependency of a project like mathlib. Instead, I am more or less waiting until it becomes easy for users to locally add dev dependencies to their setup. I think this is on the lake roadmap, so hopefully by the end of this year?

Otherwise I’d say the most important thing to add to calcify is to do what try? and exact? do these days: Use expose_names if necessary, and also check if the syntax does successfully re-elaborate, and if it doesn’t, give a more helpful message.

Joachim Breitner (Mar 21 2025 at 10:03):

Isak Colboubrani said:

I find this tactic incredibly useful

In theory, or are you actually using it? Does it work reliably enough for you?


Last updated: May 02 2025 at 03:31 UTC