Zulip Chat Archive

Stream: new members

Topic: How do I structure proofs related to closures?


Dan Abramov (Jul 28 2025 at 00:12):

In programming I use closures (including nested functions) quite a lot.

So it feels natural to write nested code like this:

  let Z := union (X.powerset.replace (P := fun X' oX' ↦
    oX' = Y.powerset.replace (P := fun Y' oY' ↦
      oY' = power X' Y'
    ) (by sorry)
  ) (by sorry))

(The exact code is not the point, I'm talking more about the structure.)

I have a function called power declared outside that does something like this:

  let power {A B: Set} (A' : A.powerset) (B' : B.powerset) :=
    have SA := (as_subset A').choose
    have SB := (as_subset B').choose
    SA ^ SB

  let Z := union (X.powerset.replace (P := fun X' oX' ↦
    -- ...
  ) (by sorry))

(I know it's probably wrong or doesn't make sense, that's not my point)

The reason I extracted it to top level is so I can try to prove something about it:

  let power {A B: Set} (A' : A.powerset) (B' : B.powerset) :=
    -- ...

  have power_spec {A B: Set} {A' : A.powerset} {B' : B.powerset} {F}
    (hF : F ∈ power A' B') := by
    unfold power at hF
    exact (powerset_axiom F).mp hF

  let Z := union (X.powerset.replace (P := fun X' oX' ↦
    -- ...
  ) (by sorry))

And the reason I did that is to give myself at least some chance to unpack the structure of the nested thing I had in the beginning. So maybe I'll write some mock proofs that try to extract things out of that structure.

  let power {A B: Set} (A' : A.powerset) (B' : B.powerset) :=
    -- ...

  have power_spec {A B: Set} {A' : A.powerset} {B' : B.powerset} {F}
    -- ...

  let Z := union (X.powerset.replace (P := fun X' oX' ↦
    -- ...
  ) (by sorry))

  -- Set up some garbage to get to the construction
  have x : Set := sorry
  have h : (x: Object) ∈ Z := by sorry
  have F: Object := sorry
  have hF : F ∈ x := by sorry
  unfold Z at h
  rw [union_axiom] at h
  obtain ⟨S, ⟨hS, hS'⟩⟩ := h
  rw [replacement_axiom] at hS'
  obtain ⟨T, hT⟩ := hS'
  simp only [EmbeddingLike.apply_eq_iff_eq] at hT
  subst hT
  rw [replacement_axiom] at hS
  obtain ⟨p, hp⟩ := hS
  simp only [EmbeddingLike.apply_eq_iff_eq] at hp
  subst hp
  apply power_spec at hF
  obtain ⟨f, hF⟩ := hF

And you can see I use power_spec at the end.

Don't think too hard about this specific code, it's probably garbage, my question is more about the principle.

In programming, the way I'd initially want to approach this would be to modularize related things. I.e. I kind of want to prove things related to the union close to where I take the union, things related to the replacement next to that replacement, and somehow compose those smaller proofs alongside the composition of the statement.

With a function definition, I understand how to nest structure (it's just a chain of nested functions, kind of like .map(foo => foo.map(...))). I'm not sure how to do the same for proofs about them. Is that even possible? Is that a bad idea? Is that trivial? Should I really be thinking about these things separately? Or should I try to extract those things at the top level as much as I can, sort of like I do in this other exercise? Are there different styles of doing this, or is there one recommended way?

Thanks!

Dan Abramov (Jul 28 2025 at 00:13):

Full (long and broken) code snippet:

/-- Exercise 3.4.7 -/
theorem SetTheory.Set.partial_functions' {X Y:Set} :
    ∃ Z:Set, ∀ F:Object, F ∈ Z ↔ ∃ X' Y':Set, X' ⊆ X ∧ Y' ⊆ Y ∧ ∃ f: X' → Y', F = f := by

  have as_subset {A: Set} (A' : A.powerset) : ∃ S: Set, A'.val = S ∧ S ⊆ A :=
    (mem_powerset A').mp A'.prop

  let power {A B: Set} (A' : A.powerset) (B' : B.powerset) :=
    have SA := (as_subset A').choose
    have SB := (as_subset B').choose
    SA ^ SB

  have power_spec {A B: Set} {A' : A.powerset} {B' : B.powerset} {F}
    (hF : F ∈ power A' B') := by
    unfold power at hF
    exact (powerset_axiom F).mp hF

  let Z := union (X.powerset.replace (P := fun X' oX' ↦
    oX' = Y.powerset.replace (P := fun Y' oY' ↦
      oY' = power X' Y'
    ) (by sorry)
  ) (by sorry))

  have x : Set := sorry
  have h : (x: Object) ∈ Z := by sorry
  have F: Object := sorry
  have hF : F ∈ x := by sorry
  unfold Z at h
  rw [union_axiom] at h
  obtain ⟨S, ⟨hS, hS'⟩⟩ := h
  rw [replacement_axiom] at hS'
  obtain ⟨T, hT⟩ := hS'
  simp only [EmbeddingLike.apply_eq_iff_eq] at hT
  subst hT
  rw [replacement_axiom] at hS
  obtain ⟨p, hp⟩ := hS
  simp only [EmbeddingLike.apply_eq_iff_eq] at hp
  subst hp
  apply power_spec at hF
  obtain ⟨f, hF⟩ := hF
  sorry

Not runnable because it's nowhere close to being runnable, and my question is about the general structure of a proof rather than about its behavior.

Robert Maxton (Jul 28 2025 at 00:27):

It's hard to work without an actual working example, but generally speaking, you prove things from the outside in -- you show that, I don't know, power_spec on union actually reduces to power_spec on each _.powerset.replace, and then that power_spec on _.powerset.replace reduces to power_spec on _.powerset + some side condition, and then .... etc, etc.

Dan Abramov (Jul 28 2025 at 00:29):

Yeah, I'd love to create a working example, the problem is precisely that I don't know what to write. :D

Robert Maxton (Jul 28 2025 at 00:29):

It's generally pretty tough to hoist things out of the middle of a nested structure; in simple cases you can do it, but once you encounter something with a dependent type, you (usually) stop being able to do 'localized' rewrites and are forced to do larger, distributed rewrites that change the structure of the whole expression in some correlated way, which rw and simp_rw are significantly less good at.

Dan Abramov (Jul 28 2025 at 00:31):

OK, so the best alternative is just a flat sequence of proofs that follow the nesting order of the nested structure, maybe written before it? (Or after it?) Or would you recommend flattening the structure itself, by declaring each chain link separately?

Robert Maxton (Jul 28 2025 at 00:33):

Well, you can bundle your flat sequence of proofs into a single lemma (arguably you're basically just creating a nested proof whose structure parallels the data structure, but that probably won't be how it looks if you're using tactics to build it).
The structure itself doesn't need to be flattened, but if it's complicated or again involves dependent types, it's a good idea to factor it into bits.
That's kinda true in general coding, but it's really true in a dependently typed language.

Dan Abramov (Jul 28 2025 at 00:41):

I guess I'm still fuzzy about the structure (i.e. how will this lemma be different from my random rw [specification_axiom], rw [blabla] descent in the messy part of the snippet). I think you're saying these would be divided into steps with explicit goals? Or maybe presented in a different order?

Dan Abramov (Jul 28 2025 at 00:42):

Thanks for sharing the point about dependent types. I'm not familiar enough with how this comes up in practice to fully appreciate the insight but it's good to know there are some fundamental constraints that help guide the choices.

Kyle Miller (Jul 28 2025 at 01:22):

Is there anything here that's specifically about nested functions? I'm not sure I understand the question exactly, but I see that you have a nested object, and maybe it's that you're wondering about creating a proof that somehow reflects the structure?

Sometimes it can work out. For example, if you have a predicate like docs#Set.Finite, then there are operations on predicates that reflect the corresponding set operations, e.g. docs#Set.Finite.union. That way if you have hs : Set.Finite s and ht : Set.Finite t, you can write hs.union ht to get that the union is finite.
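To make this concrete, here is a minimal sketch against plain Mathlib (not the book's SetTheory API), showing a proof term whose shape mirrors the set expression it is about:

```lean
import Mathlib.Data.Set.Finite

-- The proof `hs.union (ht.union hu)` nests exactly like `s ∪ (t ∪ u)`.
example {α : Type*} {s t u : Set α}
    (hs : s.Finite) (ht : t.Finite) (hu : u.Finite) :
    (s ∪ (t ∪ u)).Finite :=
  hs.union (ht.union hu)
```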

Proof note: power_spec is suspect, since it doesn't include the conclusion, and poking around I see that it's really hard to state the conclusion. Maybe there's missing API here around powersets? Somehow you want to have a set of sets, rather than a set of objects that can be reinterpreted as sets. Needing to write (as_subset A').choose seems a bit painful...

(Side note: I'm not sure exactly why people call functions "closures" — this is something that's mildly bugged me for a long time in programming languages. Maybe when lexical closures came from Scheme everyone was excited, and because the main focus was implementation, people started calling [lambda] functions by the word that has to do with implementation details? I guess it could also be that the runtime value in many systems is represented as a closure, so in a bit of synecdoche, the function (which is the syntax) is referred to as if it were the closure. There are no closures here in proofs, by the way. Any computations with functions are done purely syntactically using the Greek-letter reduction rules.)

Dan Abramov (Jul 28 2025 at 01:39):

Re: side note, I usually say "closure" when I mean lexical scoping in nested functions (which in runnable languages does usually need some mechanism to store the "closed over" values, which is not a thing here, I guess). I haven't learned whether the word strictly refers to a particular implementation. I probably wouldn't use it for every case of nested functions, but only when the inner one accesses something from the parent lexical scope. (My understanding is that the parent scope "closes" the binding, hence it's a "closure", so I use it in this structural sense rather than meaning a particular implementation trick.)

Dan Abramov (Jul 28 2025 at 01:47):

Proof note: power_spec is suspect, since it doesn't include the conclusion, and poking around I see that it's really hard to state the conclusion. Maybe there's missing API here around powersets? Somehow you want to have a set of sets, rather than a set of objects that can be reinterpreted as sets. Needing to write (as_subset A').choose is seems a bit painful...

I'm not sure about this unfortunately! I know for a fact that it is at least doable to solve it, but the solution I've looked at is pretty hard to read through and could probably be streamlined. How exactly to streamline this argument is exactly what I'm struggling with.

Dan Abramov (Jul 28 2025 at 01:50):

I started writing this instead

theorem SetTheory.Set.partial_functions' {X Y:Set} :
    ∃ Z:Set, ∀ F:Object, F ∈ Z ↔ ∃ X' Y':Set, X' ⊆ X ∧ Y' ⊆ Y ∧ ∃ f: X' → Y', F = f := by
  let asSubset {a : Object} {A : Set} (h: a ∈ A.powerset) : Set :=
    ((mem_powerset _).mp h).choose

  have asSubset_spec {a : Object} {A : Set} (h: a ∈ A.powerset) :
      a = ((asSubset h) : Object) ∧ asSubset h ⊆ A :=
    ((mem_powerset _).mp h).choose_spec

  let P := union (X.powerset.replace (P := fun ⟨_, hx⟩ oX' ↦
    oX' = Y.powerset.replace (P := fun ⟨_, hy⟩ oY' ↦
      oY' = (((asSubset hx) ^ (asSubset hy)): Set)
    ) (by sorry)
  ) (by sorry))

and then I immediately got lost again because I'm not sure how to layer on more lemmas.

Kyle Miller (Jul 28 2025 at 01:53):

This is a bit of theory building, but maybe something like this could help?

theorem SetTheory.Set.mem_powerset' {X Y : Set} :
    (X : Object) ∈ powerset Y ↔ X ⊆ Y := by
  rw [SetTheory.Set.mem_powerset]
  simp_rw [SetTheory.Set.coe_eq_iff, exists_eq_left']

def SetTheory.IsSet (x : Object) : Prop := ∃ S : Set, x = (S : Object)
def SetTheory.IsSet.mk (S : Set) : IsSet S := ⟨S, rfl⟩
noncomputable def SetTheory.IsSet.get {x : Object} (h : IsSet x) : Set := h.choose
@[simp] theorem SetTheory.isSet_get {x : Object} (h : IsSet x) : IsSet h.get := by
  exact ⟨h.get, rfl⟩
@[simp] theorem SetTheory.IsSet.get_mk (S : Set) : (IsSet.mk S).get = S := by
  have := (IsSet.mk S).choose_spec
  rw [SetTheory.Set.coe_eq_iff] at this
  exact this.symm
@[simp] theorem SetTheory.coe_get {x : Object} (h : IsSet x) : (h.get : Object) = x := by
  have := h.choose_spec
  exact this.symm

@[simp] theorem SetTheory.Set.isSet_powerset {X : Set} (A : X.powerset) : IsSet A := by
  obtain ⟨Y, h1, h2⟩ := (Set.mem_powerset A).mp A.property
  exact ⟨Y, h1⟩

@[simp] theorem SetTheory.Set.get_isSet_powerset_subset {X : Set} (A : X.powerset) :
    (isSet_powerset A).get ⊆ X := by
  rw [← mem_powerset', SetTheory.coe_get]
  exact A.property

The idea is to build up this IsSet.get theory rather than use Exists.choose directly.

Kyle Miller (Jul 28 2025 at 01:56):

  let power {A B: Set} (A' : A.powerset) (B' : B.powerset) :=
    (isSet_powerset A').get ^ (isSet_powerset B').get

With some more work, you could make

def SetTheory.Set.powerset' (X : Set) : Type u := { Y : Set // Y ⊆ X }

and use that instead, so that .val is a Set already, and then link this up into this IsSet stuff if it helps.

Dan Abramov (Jul 28 2025 at 01:57):

Ah, this is an interesting approach. I'll need some time to digest this because I haven't seen anything quite like this in the book so far.

Dan Abramov (Jul 28 2025 at 01:57):

But I guess there's also a general idea of having very small "if this, then that" lemmas and having a clear language to state them.

Dan Abramov (Jul 28 2025 at 01:59):

We also haven't gotten to quotients yet by that point so I'm trying to stay within the machinery I've seen used before. Though probably not super important.

Kyle Miller (Jul 28 2025 at 02:49):

I don't expect you to use this for the exercises, but I thought you'd like something that illustrates how in practice we tend to build up theories to make it easy to express things and then having small lemmas that specify all the equations that the things satisfy.

(By the way, {_ // _} is docs#Subtype, not quotients. The powerset' definition is creating a type of all sets that are a subset of X. This is an alternative to powerset; the function SetTheory.Set.toSubtype uses Subtype to convert Set into a Lean Type.)

Kyle Miller (Jul 28 2025 at 02:54):

One thought I had here is that it could be useful to have an IsSetSet (or have IsSet n X mean that X is a set of sets of sets of ... of sets of objects) and then have operations like SetTheory.Set.replace have a specialization that works with IsSetSet, where from within the function you receive the additional fact that the object is a set.

In the core library, there are functions like docs#List.attach that attach the fact that the elements of the list are elements of the list, which is used in conjunction with List.map for example, in case knowing that the elements are elements is essential to creating the new values. Maybe this is getting toward your question? I don't see a good way to do this sort of thing with SetTheory.Set though.
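A minimal sketch of that attach pattern in plain Lean (hypothetical example, not the book's API):

```lean
-- `l.attach : List {x // x ∈ l}` pairs each element with a proof of its
-- own membership, so the function passed to `map` can use that fact.
example (l : List Nat) (h : ∀ x ∈ l, x ≠ 0) : List Nat :=
  l.attach.map fun ⟨x, hx⟩ =>
    -- here `hx : x ∈ l`, so `h x hx : x ≠ 0` is available if needed
    x - 1
```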

I think one way of saying the issue is that usually in Lean we try to express things in a way where we know what the things are (we have a really strong type system after all!) but with SetTheory.Set it's more on the dynamic typing end of things — we don't know whether objects are sets or not, and we have to write these existentials to do "runtime" typechecking.

Robert Maxton (Jul 28 2025 at 04:49):

Dan Abramov said:

I guess I'm still fuzzy about the structure (i.e. how will this lemma be different from my random rw [specification_axiom], rw [blabla] descent in the messy part of the snippet). I think you're saying these would be divided into steps with explicit goals? Or maybe presented in a different order?

It probably won't (actually look different). One of the (many) reasons we have the infoview is, it's often pretty tough to intuit the structure of a proof just by looking at the code; most proofs will be lists of rws, simps, and the occasional have/let no matter what it's doing, with blocks only occurring when a single goal splits into multiple.

Robert Maxton (Jul 28 2025 at 05:06):

Dan Abramov said:

Thanks for sharing the point about dependent types. I'm not familiar enough with how this comes up in practice to fully appreciate the insight but it's good to know there are some fundamental constraints that help guide the choices.

Key example: Suppose I have a function of dependent type f : (a : α) → β a → ..., appearing in a proof as f x y .... I have a hypothesis h : x = x', so I try to rewrite by h.

But when I do that, I change the expected type of y; it used to be β x, and now it's β x'. Unless h is a definitional equality, y cannot be both a β x and a β x' at the same time, so at least one of these statements does not typecheck.

There are three general solutions to this problem.

  1. If y is actually secretly determined by x anyway, i.e. if y = g x for some function g, then I can consolidate my dependencies and rewrite by h everywhere at once, and the expression will still typecheck. rw is now smart enough to do this a lot of the time, and simp_rw will generally catch the rest.
  2. If I'm not that lucky, I might consider inserting a cast at the same time, replacing f x y with f x' (h ▸ y). This is often a bad idea, always awkward, and at any rate is generally considered beyond the scope of low-level rewriting tactics; if you want to take this approach you will either need specialized lemmas, specialized tactics and API, or 'manual' rewriting with change and convert_to.
  3. As an alternative to 2, I can break up the structure and the proof. When the proof is sufficiently simple and the parameters are similarly low-level/atomic or nearly-atomic types, I can simply recurse on my equality h (using cases), which basically forces the compiler to unify x and x' everywhere; it's a superpowered version of 1. But for Reasons (TM) this only works when the types of x and x' are essentially atomic, which means you have to prove simple lemmas about non-composite cases and then apply them separately in a more complex use case.
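A minimal sketch of the failure mode and of the cases/subst approach, with hypothetical names:

```lean
-- Rewriting only the first argument with `rw [h]` would produce `f x' y`
-- where `y : β x` no longer typechecks; `subst h` instead unifies `x`
-- and `x'` everywhere at once, after which the cast `h ▸ y` reduces away.
example {α : Type} {β : α → Type} (f : (a : α) → β a → Nat)
    (x x' : α) (h : x = x') (y : β x) : f x y = f x' (h ▸ y) := by
  subst h
  rfl
```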

Robert Maxton (Jul 28 2025 at 05:11):

(Reasons (TM), if you're curious and want to read ahead a bit:

The key thing here is that equality is an inductive type in Lean, and like any inductive type it has a recursor. Equality's recursor says that, if I have a term of type x = x' and I want to prove something about x, I can just prove it about x' instead; in other words, it's the primary mechanism by which you can "execute a substitution" in Lean, rather than just inserting a cast function that you then have to work around. The problem is that recursing on equality gets more complicated with composite types: most "type constructors" -- including things you might not think of as such, like the function-type constructor → : Type u -> Type v -> Type max u v := fun α β ↦ (α → β) -- turn out not to be injective, so knowing that two composite types are equal doesn't let you conclude that their individual parameters are equal, which introduces an ambiguity in the structure that makes explicit substitution impossible. So you have to prove the lemmas in simple cases, rather than performing the substitution 'on the fly' later; so you have to break up your structures and your proofs. )

Robert Maxton (Jul 28 2025 at 05:14):

Dan Abramov said:

Re: side note, I usually say "closure" when I mean lexical scoping in nested functions (which in runnable languages does usually need some mechanism to store the "closed over" values, which is not a thing here, I guess). I haven't learned whether the word strictly refers to a particular implementation. I probably wouldn't use it for every case of nested functions, but only when the inner one accesses something from the parent lexical scope. (My understanding is that the parent scope "closes" the binding, hence it's a "closure", so I use it in this structural sense rather than meaning a particular implementation trick.)

As a side note, if the specific thing you want is very literally "to work with locally-scoped variables in Lean", what you probably want are the tactics lift_lets and intro. lift_lets, as the name implies, takes all let := and have := statements in the internals of a term and lifts them out as far as possible, hopefully all the way to the top; intro can then add those definitions to the context directly.
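A tiny hedged sketch of that workflow (lift_lets is a Mathlib tactic; the example is hypothetical):

```lean
import Mathlib.Tactic.LiftLets

-- `lift_lets` hoists the `let` to the front of the goal, and `intro`
-- then puts `n : Nat := 2` into the local context as a definition.
example : (let n := 2; n + n) = 4 := by
  lift_lets
  intro n
  rfl  -- `n` unfolds to its value definitionally
```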

Kyle Miller (Jul 28 2025 at 05:52):

(@Robert Maxton extract_lets does both at once, and even works on local hypotheses, which intro can't do anything about.)

Robert Maxton (Jul 28 2025 at 05:52):

Kyle Miller said:

(Robert Maxton extract_lets does both at once, and even works on local hypotheses, which intro can't do anything about.)

(Ooh. I'll have to keep that in mind, thanks!)

Dan Abramov (Jul 28 2025 at 21:36):

For the record, I ended up approaching this exercise in a relatively simple way: https://leanprover.zulipchat.com/#narrow/stream/514017-Analysis-I/topic/.5BExercise.203.2E4.2E7.5D.20How.20to.20work.20with.20Props.20that.20have.20.60let_fun.60/near/531466204

The trick was to keep an existential in the construction (instead of using choice), and then methodically unwrap the construction outside-in.

I’m still digesting why this worked (and why I used to get stuck previously) but it does show I was overthinking the problem.


Last updated: Dec 20 2025 at 21:32 UTC