Zulip Chat Archive

Stream: new members

Topic: let rec in a proof using choose


LB (Mar 18 2024 at 07:29):

Hi. Let's say I need some auxiliary sequence to prove some theorem in Lean. This sequence would be defined inductively using choose.

Here is a minimal example, where, given a binary relation P, I try to build a sequence T with P Nat.zero (T Nat.zero), and P (T n) (T n.succ) for all n: Nat:

import Init.Classical

open Classical

axiom P (m n: Nat) : Prop
axiom HP (n: Nat) : ∃ p, P n p

noncomputable section
example : False := by

  let rec T (n: Nat) := match n with
  | Nat.zero => choose (HP Nat.zero)
  | Nat.succ m => choose (HP (T m))

  have T0 : T Nat.zero = choose (HP Nat.zero) := by
    -- rfl doesn't work
    sorry

  have T1 : P Nat.zero (T Nat.zero) := by
    -- this doesn't work either
    -- have := choose_spec (HP Nat.zero)
    -- exact this
    sorry

  sorry

end

As you can see, I am unable to prove even the most basic assertions about T. For instance, although T Nat.zero is by definition just choose (HP Nat.zero), I cannot prove that simple fact.

You could say that it would be better to build the sequence outside the proof, but in the actual proof, T depends on a large number of local hypotheses (some of which come from a by_cases) that would be very cumbersome to reproduce in a global context. So let's pretend that I really need to build T inside the proof.

What's the correct way to do this, assuming there is one?

Note: I don't want to use mathlib4, the main point of my work being to develop an alternative to mathlib for basic undergraduate calculus (using different ways of doing it). I may copy the mathlib way, if needed.

Bonus question: there is no noncomputable let rec, as far as I know. This forced me to put all the stuff above inside a noncomputable section. What is the equivalent of noncomputable def, but for a let?

Kyle Miller (Mar 18 2024 at 07:34):

You're looking for the choose tactic I think:

example : False := by
  choose T h using HP
  /-
  T : ℕ → ℕ
  h : ∀ (n : ℕ), P n (T n)
  -/

Kyle Miller (Mar 18 2024 at 07:35):

You're iterating T though, so it might take some more work than this. (Maybe it's just docs#Nat.iterate applied to T, evaluated at 0?)
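
A minimal sketch of that idea, assuming mathlib is available (for the choose tactic, the f^[n] iterate notation and Function.iterate_succ_apply') and reusing the P/HP axioms from the first message; S and the hypothesis names are made up here:

example : True := by
  choose T h using HP
  -- the desired sequence: T applied n+1 times to 0
  let S : Nat → Nat := fun n => T^[n + 1] 0
  have hS0 : P 0 (S 0) := h 0
  have hSsucc : ∀ n, P (S n) (S (n + 1)) := fun n => by
    show P (T^[n + 1] 0) (T^[n + 1 + 1] 0)
    rw [Function.iterate_succ_apply' T (n + 1) 0]
    exact h _
  trivial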

LB (Mar 18 2024 at 07:42):

Kyle Miller said:

You're looking for the choose tactic I think:

example : False := by
  choose T h using HP
  /-
  T : ℕ → ℕ
  h : ∀ (n : ℕ), P n (T n)
  -/

Thank you. I'll have a look at that.

Kyle Miller (Mar 18 2024 at 07:43):

This fails your "no mathlib" requirement since choose is a mathlib tactic though.

LB (Mar 18 2024 at 07:43):

Kyle Miller said:

This fails your "no mathlib" requirement since choose is a mathlib tactic though.

Not really a problem, and not the first time I find myself copying the mathlib way of doing things in certain circumstances...

I have literally hundreds of lines of code borrowed from mathlib in my library... :)

Kyle Miller (Mar 18 2024 at 07:47):

(Is there any way you can make your library a layer on top of mathlib? It could save a lot of work if you don't have to do everything from scratch.)

Kyle Miller (Mar 18 2024 at 07:47):

Also, let rec inside of proofs doesn't work. You don't have access to the definition until after the theorem is proved unfortunately.

LB (Mar 18 2024 at 07:52):

Okay. I may have missed that. I was hoping it would, as let does work (cumbersome, but it works).

Yes, you're right. I just tried a simpler version of T without choose, and it didn't work, as you said.

I guess I'm in for a very thorough study of inductive constructors...

Kyle Miller (Mar 18 2024 at 07:55):

By the way, here's something equivalent to choose T' h' using HP using only (I think) core tactics:

    have ⟨T', h'⟩ : ∃ (T' : Nat → Nat), ∀ n, P n (T' n) := by
      exists (fun n => choose (HP n))
      intro n
      apply choose_spec

LB (Mar 18 2024 at 07:58):

Kyle Miller said:

(Is there any way you can make your library a layer on top of mathlib? It could save a lot of work if you don't have to do everything from scratch.)

I've been thinking about that for a while... But I'm (at this point) too afraid of collisions. It seems dangerous to use tactics I have not defined myself, cause they would be based on theorems I have not proved in my library. Maybe when I'm secure enough in my knowledge of Lean 4.

LB (Mar 18 2024 at 07:59):

Kyle Miller said:

By the way, here's something equivalent to choose T' h' using HP using only (I think) core tactics:

    have ⟨T', h'⟩ : ∃ (T' : Nat → Nat), ∀ n, P n (T' n) := by
      exists (fun n => choose (HP n))
      intro n
      apply choose_spec

Yes! That's the sort of things I was envisioning. Thank you for the idea! I'll try to work this out (later, now I'm off to see my students...).

LB (Mar 18 2024 at 08:01):

It's becoming clearer every day that I should avoid using let in my future endeavours, and prefer to use have with existentials.

Kyle Miller (Mar 18 2024 at 08:03):

Note that let rec is a special construct, very different from let. It constructs an auxiliary definition while capturing the current scope, and it supports mutual recursion.
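
A minimal illustration of that scope capture, with made-up names: the nested go below becomes a top-level definition addAll.go, and the captured k becomes an extra parameter of it.

def addAll (k : Nat) (xs : List Nat) : List Nat :=
  let rec go (l : List Nat) : List Nat :=
    match l with
    | [] => []
    | x :: rest => (x + k) :: go rest  -- go uses k from the enclosing scope
  go xs

#check @addAll.go  -- Nat → List Nat → List Nat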

Kyle Miller (Mar 18 2024 at 08:04):

cause they would be based on theorems I have not proved in my library

You don't have to justify your project, but what's the goal? If you want a library that's useful for doing basic undergrad calculus, then there's no harm in using mathlib theorems to set it up (students wouldn't see the mathlib theorems). But if you're wanting to make a library whose proofs are all formed in a certain way, maybe so students can learn math from the construction of the library, then I could see you wanting to be more careful.

LB (Mar 18 2024 at 11:56):

Kyle Miller said:

cause they would be based on theorems I have not proved in my library

You don't have to justify your project, but what's the goal? If you want a library that's useful for doing basic undergrad calculus, then there's no harm in using mathlib theorems to set it up (students wouldn't see the mathlib theorems). But if you're wanting to make a library whose proofs are all formed in a certain way, maybe so students can learn math from the construction of the library, then I could see you wanting to be more careful.

My goal, or maybe I should say my goals, are a little different. For the last ten years or so, I have been teaching undergrad math to students with a technical mind. Moreover, the students changed, in part due to Covid restrictions, in part because the new generation of students is less willing to learn things. Lately I found myself simplifying things a lot for my students, and basically changing the course material while keeping the original goals.

Last year was a turning point, and I rewrote the whole theoretical base, to stay on solid ground. It was about the same time I heard about Lean (I had dabbled in Coq before).

There you have my primary goal: to use Lean to check that all the material on which I'm basing my teaching is still sound. Or change the material, if it is not.

I decided that I would stop the project as soon as I manage to prove that 3.14 < π < 3.15. One of the challenges is to find a definition for π. No hand-waving, no Riemann integration (primitives of continuous functions are allowed, integrals of continuous functions are allowed), no uniform continuity (I still do uniform continuity locally in the library), no Cauchy sequences. No uniform convergence. No monotone convergence, no dominated convergence. Power series are allowed, at zero only (no holomorphic functions), and you can differentiate power series on the open real domain of convergence only. All the stuff should be proved.

I also changed the way I teach limits: uniqueness isn't a requirement anymore. I have good reasons to think that the theory is working. This has to be checked, too.

Secondary goals: have fun + be able to contribute to mathlib at some point.

Kyle Miller (Mar 18 2024 at 15:13):

Interesting motivation, and it's nice to see Lean being used as a tool for thought.

In my thesis, I had included some theory of filters as background material, and there was some part of how I was developing it that I wasn't sure about. I reached for Lean to work through (and correct) some of the reasoning.

Mario Carneiro (Mar 18 2024 at 17:06):

By the way, my favorite definition for π (used in the metamath database) is "the first positive zero of the sin function", or (slightly easier to prove the key theorems about) "twice the first positive zero of the cos function", where sin and cos are defined as power series. You can use some simple bounds on the power series, plus the double angle formulas, to show that cos 1 > 0 and cos 2 < 0 (or sin 2 > 0 and sin 4 < 0) and then use the intermediate value theorem to prove that a root exists and 2 < π < 4.
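
A sketch of the shape of that definition in Lean, with hypothetical names: assume cos : ℝ → ℝ has been defined by its power series and exists_cos_root : ∃ x, 1 ≤ x ∧ x ≤ 2 ∧ cos x = 0 has been proved from cos 1 > 0, cos 2 < 0 and the intermediate value theorem. Then:

noncomputable def pi : ℝ := 2 * Classical.choose exists_cos_root

The bound 2 < pi < 4 then follows, since cos 1 > 0 and cos 2 < 0 force the chosen root to lie strictly between 1 and 2.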

Mario Carneiro (Mar 18 2024 at 17:10):

(Just checked mathlib, and it's using my favorite definition already. Coincidence?)

Mario Carneiro (Mar 18 2024 at 17:16):

I think that satisfies all your criteria. The main difference from mathlib is that you would probably define sin directly instead of in terms of the exp function if you want to avoid complex numbers

LB (Mar 18 2024 at 17:20):

Mine is the limit (at infinity) of f = 2*arctan, which is interesting in itself as:

  • it is easily built as a primitive of 2/(1+x^2)
  • cos and sin are easily built from the inverse of f
  • one easily gets that π = f(1/3) + f(1/2), which leads to easy approximations using the fact that for $x\ge 0$, one has $f(x)\le 2\sum_{k=0}^{n}\frac{(-1)^k\,x^{2k+1}}{2k+1}$ for $n$ even and $f(x)\ge 2\sum_{k=0}^{n}\frac{(-1)^k\,x^{2k+1}}{2k+1}$ for $n$ odd (which can be proved inductively without any theory of integration or power series, because the difference is monotonic)

LB (Mar 18 2024 at 17:21):

That's the one I use.

Mario Carneiro (Mar 18 2024 at 17:29):

(I suspect we have different definitions of "easy" :smile: ) That's certainly a theory I have not seen before. Seems difficult to prove stuff like the angle addition formula with cos defined as some algebra on an inverse of a primitive of another inverse function...

Mario Carneiro (Mar 18 2024 at 17:31):

Actually I am interested to know how specifically you build cos and sin from the inverse of f. Wouldn't you have to paste a bunch of curve sections together and fill in the holes?

Mario Carneiro (Mar 18 2024 at 17:33):

re: "without any theory of integration", didn't you just use primitives in the definition? FTR that's the thing that makes me disprefer this definition, you need a theory of derivatives and/or integrals, rather than just power series and limits

LB (Mar 18 2024 at 17:43):

The inverse function of f is defined on $(-\pi,\pi)$. $\cos$ and $\sin$ are built on top of that. You also get some algebraic properties using the definition of f. Some of them give $\cos(\pi-t)=-\cos(t)$, $\sin(\pi-t)=\sin(t)$, $\sin(t)>0$ for $t\in(0,\pi)$. Those can be extended to the periodic extensions, and provide us with a way to prove differentiability at $-\pi$ and $\pi$ using differentiability at $0$.

I'm not saying those definitions are good in a general sense, but I find them good in the restricted context of the course I give to my students (which starts with a theory of derivatives)... Moreover, my students know the formulas giving $\cos(t)$ and $\sin(t)$ in terms of $\tan\bigl(\frac{t}{2}\bigr)$ because they have to learn them for the computations of certain integrals.

Actually, addition formulas are difficult to find if you don't have a theory of derivatives, even for power series: the formula $\exp(a+b)=\exp(a)\times\exp(b)$ isn't so easy because of technicalities when taking limits.

Mario Carneiro (Mar 18 2024 at 17:47):

IIRC you need Mertens' theorem to prove that $e^{x+y}=e^x e^y$ with the power series definition
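
For reference, the computation in question under the power series definition: multiplying the two series and regrouping by total degree gives

$$\Bigl(\sum_{n\ge 0}\frac{x^n}{n!}\Bigr)\Bigl(\sum_{m\ge 0}\frac{y^m}{m!}\Bigr)=\sum_{n\ge 0}\,\sum_{k=0}^{n}\frac{x^k\,y^{n-k}}{k!\,(n-k)!}=\sum_{n\ge 0}\frac{(x+y)^n}{n!},$$

where the middle step is the Cauchy product, whose convergence to the product of the sums is what Mertens' theorem guarantees, and the last step is the binomial theorem.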

LB (Mar 18 2024 at 17:53):

Mario Carneiro said:

IIRC you need Mertens' theorem to prove that $e^{x+y}=e^x e^y$ with the power series definition

Yes, those are the technicalities I was talking about...

Using derivatives, things are much easier: you first show (without any theory of differential equations) that any real function $y$ such that $y''+y=0$ must be a linear combination of $\cos$ and $\sin$ (differentiate $z^2+z'^2$, where $z=y-a\cos-b\sin$ for suitable $a$ and $b$). Then you use that fact with $y : t\mapsto\cos(a+t)$.
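
Spelling out the hinted step: $(z^2+z'^2)' = 2z'z'' + 2zz' = 2z'(z''+z) = 0$, so $z^2+z'^2$ is constant; taking $a=y(0)$ and $b=y'(0)$ gives $z(0)=z'(0)=0$, hence $z^2+z'^2\equiv 0$ and $z\equiv 0$.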

LB (Mar 19 2024 at 22:12):

Update on my question: I didn't manage to build my own version of inductive local definitions inside tactic proofs using Nat.recOn and Nat.brecOn.

So I finally decided to build my auxiliary sequence T outside the proof, as a private noncomputable def T with all the hypotheses included as parameters of T, then use it inside the proof while passing the real hypotheses as arguments of T.

It works, but I would still very much like an explanation of why the let rec approach doesn't work. And perhaps of why, when you define a let rec in that type of context, the resulting term doesn't appear in my infoview, while let definitions do appear.

That sort of feedback would be very useful, just to be sure that the definition is sound and syntactically correct, for instance.

Should I create an issue in the repo? Or is there something I fail to understand about let recs?

Kyle Miller (Mar 20 2024 at 03:07):

Kyle Miller said:

Note that let rec is a special construct, very different from let. It constructs an auxiliary definition while capturing the current scope, and it supports mutual recursion.

The let tactic creates a local definition using the let expression. When you do let x := v it's sort of like doing refine let x := v; ?_ where ?_ becomes your new goal with the new binding.
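
A minimal illustration:

example : ∃ n : Nat, n = 2 := by
  let x := 2      -- adds x : Nat := 2 to the local context
  exact ⟨x, rfl⟩  -- x unfolds to 2 definitionally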

The let rec tactic creates an auxiliary global definition (that's why it doesn't appear in your context). It can be mutually recursive with the theorem you're proving itself. There is a separate step, once the proof/definition is done, where the "equation compiler" works out how the (mutual) recursion should be turned into expressions using the underlying recursors.

Kyle Miller (Mar 20 2024 at 03:26):

Maybe if a let rec in a proof doesn't refer to the theorem, and you try unfolding it, it could try to turn the so-called "pre-definition" into a definition via that equation compiler?

In any case, there's no worry about soundness. Everything will get checked by the kernel once the let rec has been completely processed.

LB (Mar 20 2024 at 17:38):

Thank you very much for your thorough explanation. I must admit that I was a bit bewildered by the exact meaning of "capturing the current scope".

That clarifies my secondary problem.

Yet, the principal one remains. How do you even use the let rec term? I cannot find any tactic that manages to prove anything about it.

When I define it "outside" a proof like this:

noncomputable def T (n: Nat) := match n with
  | Nat.zero => choose (HP Nat.zero)
  | Nat.succ m => choose (HP (T m))

I can prove that T 0 = choose (HP Nat.zero) for instance:

example: T 0 = choose (HP 0) := by rfl

or that P 0 (T 0):

example: P 0 (T 0) := by apply choose_spec (HP 0)

Both proofs work perfectly. Neither works when I define T as a let rec inside the proof of a(nother) theorem. I suspect that, again, it has something to do with "capturing the scope", like some sort of snake biting its own tail: in the context of finding a proof of the enclosing theorem, using an object that is defined from / in relation to the theorem itself would lead to a loop / non-well-founded recursion, but I'm not sure...

Kyle says that this comes from the fact that the object is only finalized after the proof, but the examples given in the documentation seem to use the construct without any problem (which is a good point, otherwise let rec couldn't be used for anything, at least inside a proof). Admittedly as a builder of terms (applied), and not as an object of study (destructed?), but what is the quality of let recs that makes them suitable for one use (while still not finalized) but not the other?

I guess the best way to understand all that would be to try to write out the constructed proof instead of using tactics. But the task seems overwhelming.

Anyway, back to the original question, which I leave here in case someone has found a way: how do you build a recursive object inside a proof, so that it can be used as a proof of its own existence? Like, a different API for building the same thing, knowing that I don't need it to interact with the whole scope? I suspect that the answer comes from using recOn, brecOn, or the mathlib4 tactic choose (except that choose, if I read correctly, doesn't seem to be well-equipped to build recursive objects, only direct functional ones).
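
For what it's worth, one construction along those lines does go through on the minimal example from the first message: use Nat.rec directly inside a plain let, so that no auxiliary definition (and hence no equation compiler) is involved. A sketch, assuming the P/HP axioms and open Classical from that message; whether it scales to the real use case is untested:

example : True := by
  let T : Nat → Nat := fun n =>
    Nat.rec (motive := fun _ => Nat) (choose (HP 0)) (fun _ ih => choose (HP ih)) n
  -- both facts hold by definitional unfolding of Nat.rec
  have T0 : P 0 (T 0) := choose_spec (HP 0)
  have Tsucc : ∀ n, P (T n) (T (n + 1)) := fun n => choose_spec (HP (T n))
  trivial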

Again, your efforts to put me on the right track are very much appreciated...

Kyle Miller (Mar 20 2024 at 19:14):

but what is the quality of let recs that makes them suitable for one use (while still not finalized) but not the other?

The documentation's examples are all creating functions that are only used, not reasoned about.

You can only reason about the definition once there are "equation lemmas", which can only be generated once the actual definition is generated, and that happens after the pre-definition is constructed (what you see as you write a definition or theorem is the pre-definition). The pre-definition is able to do unrestricted recursion, and it's during the transition from pre-definition to definition that the recursion is analyzed and converted into applications of recursors for the inductive types.

Kyle Miller (Mar 20 2024 at 19:18):

It seems like using let rec to define recursive definitions inside a theorem and getting equation lemmas for it, like what you'd need in your original message, would be a reasonable feature request.

You could create an issue on the Lean 4 repository for this feature, including that message's code. If you do, please post a link to the issue here.

Mario Carneiro (Mar 21 2024 at 02:34):

@Kyle Miller The challenge is that those let recs can potentially refer to or be referred to by other let recs and the definition itself, so part of the pre-definition -> definition transformation is analyzing the call graph to see where the mutual blocks have to be. We can't generate auxiliary definitions until after that process is completed, but elaboration happens before that point, so the equation lemmas won't be ready yet, and we can't put them in the environment during the definition process without committing to some recursion structure that isn't yet completely worked out.


Last updated: May 02 2025 at 03:31 UTC