Zulip Chat Archive
Stream: Analysis I
Topic: Does 3.5.10 (singleton_iProd_equiv) have a simpler solution?
Dan Abramov (Aug 05 2025 at 00:56):
I stole @Rado Kirov's approach:
noncomputable abbrev SetTheory.Set.singleton_iProd_equiv (i:Object) (X:Set) :
    iProd (fun _:({i}:Set) ↦ X) ≃ X where
  toFun := fun t ↦ ((mem_iProd _).mp t.property).choose ⟨i, by simp⟩
  invFun := fun x ↦ ⟨tuple fun _ ↦ x, by rw [mem_iProd]; tauto⟩
  left_inv := by
    intro t
    apply Subtype.ext
    have hx := ((mem_iProd _).mp t.property).choose_spec
    simp only [hx, tuple]
    congr! with i'
    have hi' := (mem_singleton _ _).mp i'.property
    rw [hi']
  right_inv := by
    intro x
    dsimp only []
    generalize_proofs hf hi
    have := hf.choose_spec
    rw [tuple_inj] at this
    rw [← this]
Notably, I had to learn two (!) new tactics. The left inverse seems to require congr! (with a bang) and the right inverse seems to require generalize_proofs.
I'm wondering if there's any way to solve this without these tactics. I haven't needed them before, so I wonder what it is that forces their use here. (I've spent a bunch of time trying other things and couldn't get anything else to work.)
It seems a bit unfortunate if the exercise requires them, given that we've never "seen" them before.
Rado Kirov (Aug 05 2025 at 01:55):
congr! is easier to replace, assuming you are ok with ext. Ultimately you need something to show equality of functions, and congr!, ext, and funext all work, but each does more or less intermediate rewriting along the way.
left_inv := by
  intro t
  apply Subtype.ext
  have hx := ((mem_iProd _).mp t.property).choose_spec
  simp only [hx, tuple]
  rw [coe_of_fun_inj]
  ext j
  simp only
  rw [Subtype.val_inj]
  have hi' := (mem_singleton _ _).mp j.property
  rw [← Subtype.eta j j.prop]
  simp [hi']
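For readers following along, here is a minimal self-contained sketch (assuming only Mathlib, not the Analysis I repo's SetTheory.Set API) of the three ways to close an equality-of-functions goal:

import Mathlib.Tactic

-- All three tactics reduce an equality of functions to an equality of values.
example (f g : Nat → Nat) (h : ∀ n, f n = g n) : (fun n ↦ f n) = fun n ↦ g n := by
  funext n       -- goal becomes `f n = g n`
  exact h n

example (f g : Nat → Nat) (h : ∀ n, f n = g n) : (fun n ↦ f n) = fun n ↦ g n := by
  ext n          -- `ext` finds and applies `funext` itself
  exact h n

example (f g : Nat → Nat) (h : ∀ n, f n = g n) : (fun n ↦ f n) = fun n ↦ g n := by
  congr! with n  -- `congr!` also peels the lambdas, naming the bound variable
  exact h n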
Rado Kirov (Aug 05 2025 at 02:03):
generalize_proofs is my dark horse. I was completely stuck without it on some previous problems in 3.3, and someone on this forum first showed me how to use it. Now I throw it in every time I run into a messy goal with lots of ⋯, but it is so blunt and undirected that I feel a bit icky using it.
Ultimately, every time one uses choose in the term-producing definitions, you need a matching choose_spec in the proofs to go with it. So here we have choose on some proof h in toFun and invFun, so when working on the proofs we need to recover that h to pull out h.choose_spec. The two options are:
1) spell it out if it is short, as you have done with ((mem_iProd _).mp t.property).choose_spec
2) let the proof unfold and, when stuck, pull out the proofs with generalize_proofs a b c d, then apply a.choose_spec, b.choose_spec, etc. to whichever variable matches the proof we need to work with (see the sketch below)
What I have found frustrating is that sometimes I need multiple generalize_proofs calls, because the proofs get slightly rewritten along the way and often get very long and unreadable (they look unrecognizable from what I wrote when I hover over the ⋯).
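A toy sketch of this pattern (assuming only Mathlib; pick is a hypothetical definition, not from the Analysis I repo): a definition built with choose, its matching choose_spec (option 1), and generalize_proofs naming a proof buried in the goal (option 2).

import Mathlib.Tactic

-- A definition that uses `choose` in its term...
noncomputable def pick (h : ∃ n : Nat, 0 < n) : Nat := h.choose

-- Option 1: ...and its matching `choose_spec`, spelled out directly.
theorem pick_pos (h : ∃ n : Nat, 0 < n) : 0 < pick h := h.choose_spec

-- Option 2: when the proof is buried inside the goal, name it with
-- `generalize_proofs` first, then use its `choose_spec`.
example : 0 < pick ⟨1, Nat.one_pos⟩ := by
  unfold pick
  generalize_proofs hp   -- `hp : ∃ n, 0 < n`; goal is now `0 < hp.choose`
  exact hp.choose_spec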
Kyle Miller (Aug 05 2025 at 02:05):
(Unlike other ! tactics where there's some feature being enabled, congr! is just an improved version of congr. I've been hoping to merge congr! into congr one day...)
Dan Abramov (Aug 05 2025 at 02:12):
congr! is easier to replace, assuming you are ok with ext. Ultimately you need something to show equality of functions, and congr!, ext, and funext all work, but each does more or less intermediate rewriting along the way.
Do you have a good strategy to figure out the sequence of rewrites done manually? In this example I find it pretty tricky to see "in which direction" to move. It's not clear to me how you determined which rws to do.
Kyle Miller (Aug 05 2025 at 02:15):
congr doesn't work using rewrites, so that's a bit hard. It works "outside in". For example, given f a b c = g x y z, it applies a lemma that reduces it to the goals f = g, a = x, b = y, c = z. If you want, it's like it applies congr/congrFun/congrArg repeatedly, but it also can handle more difficult situations that these lemmas can't.
You can write congr n where n is a number to see how things go step by step. congr! accepts this too.
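A tiny illustration of that "outside in" behaviour (a sketch assuming Mathlib, not tied to the exercise):

import Mathlib.Tactic

example (f : Nat → Nat → Nat) (a b x y : Nat) (ha : a = x) (hb : b = y) :
    f a b = f x y := by
  congr        -- `f = f` closes by rfl; two goals remain
  · exact ha   -- ⊢ a = x
  · exact hb   -- ⊢ b = y
  -- `congr 1` instead would stop one level up, at `f a = f x` and `b = y`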
Kyle Miller (Aug 05 2025 at 02:16):
(I also failed at reading comprehension and thought you were quoting me, oops.)
Dan Abramov (Aug 05 2025 at 02:17):
Any help is welcome :D
More concretely, what worries me here is that the situation is very fragile, and one wrong move means congr! wouldn't have worked either. E.g. if there was no apply Subtype.ext earlier.
I'm not sure what's a better strategy than guessing here.
Rado Kirov (Aug 05 2025 at 02:20):
This is exactly what I did after I saw this message:
- pasted your proof
- started at congr!
- saw the state is ⊢ (fun i_1 ↦ ⟨↑(⋯.choose ⟨i, ⋯⟩), ⋯⟩) = fun i_1 ↦ ⟨↑(⋯.choose i_1), ⋯⟩
- started peeling the onion from the outside in
- first we have two functions, so we do ext j to give them an argument to simplify with
- then I see ↑⟨↑(⋯.choose ⟨i, ⋯⟩), ⋯⟩ = ↑⟨↑(⋯.choose j), ⋯⟩
- hover over the arrows, because I learned the hard way they can be like 5 different things; we are in luck, they are all the same .val, so we apply rw [Subtype.val_inj] to remove them from both sides. It turns out simp only also collapses the two into one, because they are defeq I guess. I often do simp? and rewrite with a simpler rw if it finds good ones.
- finally we are down to ⋯.choose ⟨i, ⋯⟩ = ⋯.choose j; I guess the things in the ⋯ are the same (but only generalize_proofs will make sure they are)
- notice the LHS is written with a full subtype pair while j is a subtype element, so I asked Claude for a theorem to break down j into ⟨j.val, j.prop⟩, which turned out to be Subtype.eta
- finally we are down to ⋯.choose ⟨i, ⋯⟩ = ⋯.choose ⟨↑j, ⋯⟩, which just means we need to prove i = ↑j and be done. You already had a proof for that.
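Two of those Subtype steps in isolation, as a sketch (assuming Mathlib, with toy predicates rather than the repo's sets):

import Mathlib.Tactic

-- Peeling a `↑` with `Subtype.val_inj`.
example (p : Nat → Prop) (a b : {n // p n}) (h : a = b) : a.val = b.val := by
  rw [Subtype.val_inj]  -- `↑a = ↑b ↔ a = b`
  exact h

-- Repackaging `j` as `⟨j.val, j.prop⟩` with `Subtype.eta`.
example (p : Nat → Prop) (j : {n // p n}) : (⟨j.val, j.prop⟩ : {n // p n}) = j :=
  Subtype.eta j j.prop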
Dan Abramov (Aug 05 2025 at 02:21):
Ah sick, thank you for the writedown! This is helpful.
Dan Abramov (Aug 05 2025 at 02:22):
So I guess the hardest part in this one is getting both sides to be function-ish.
Rado Kirov (Aug 05 2025 at 02:23):
It's part of it. I missed step 0: there was a rw [coe_of_fun_inj], because you actually start with ↑fun, not fun, on both sides.
Rado Kirov (Aug 05 2025 at 02:25):
Basically, at each step, if you see ⊢ F ... = F ... you can do something to remove the F: if F is some coercion, you find the corresponding _inj theorem; if it is a fun x => ..., you apply ext.
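That loop in miniature (a sketch assuming Mathlib, with toy functions rather than the repo's coercions):

import Mathlib.Tactic

example (p : Nat → Prop) (f g : Nat → {n // p n}) (h : ∀ n, f n = g n) :
    (fun n ↦ (f n).val) = fun n ↦ (g n).val := by
  funext n              -- outer `fun x ↦ _`: apply ext/funext
  rw [Subtype.val_inj]  -- outer `↑` coercion: use its `_inj` theorem
  exact h n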
Rado Kirov (Aug 05 2025 at 02:28):
Ah sick, thank you for the writedown! This is helpful.
No problem. I am self-learning Lean, so spelling out my current (likely flawed) mental model of the tactics is helpful for me too, as folks in the know can correct me.
Dan Abramov (Aug 05 2025 at 02:30):
This one worries me:
left_inv := by
  intro t
  have h := ((mem_iProd _).mp t.property)
  have hx := h.choose_spec
  -- apply Subtype.ext
  simp only [hx, tuple]
  congr! with i'
  have hi' := (mem_singleton _ _).mp i'.property
  rw [hi']
You can see that losing that apply only breaks things much further down (complaining about i' not being defined).
I guess in this case simp is not very appropriate and I should use simp_rw to enforce that the rewrite fails if not found?
left_inv := by
  intro t
  have h := ((mem_iProd _).mp t.property)
  have hx := h.choose_spec
  apply Subtype.ext
  rw [tuple] at hx
  simp_rw [hx]
  congr! with i'
  have hi' := (mem_singleton _ _).mp i'.property
  rw [hi']
Now if I comment out that apply, it breaks earlier (at simp_rw).
Kyle Miller (Aug 05 2025 at 02:31):
(btw, you can write ext instead of apply Subtype.ext)
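A one-line demonstration (assuming Mathlib, where Subtype.ext is registered as an @[ext] lemma):

import Mathlib.Tactic

example (p : Nat → Prop) (a b : {n // p n}) (h : a.val = b.val) : a = b := by
  ext       -- same effect as `apply Subtype.ext`; goal becomes `↑a = ↑b`
  exact h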
Dan Abramov (Aug 05 2025 at 02:31):
This apply Subtype.ext is annoying because it makes the goal look worse:
from
⊢ (fun x ↦ ⟨tuple fun x_1 ↦ x, ⋯⟩) ((fun t ↦ ⋯.choose ⟨i, ⋯⟩) t) = t
to
⊢ ↑((fun x ↦ ⟨tuple fun x_1 ↦ x, ⋯⟩) ((fun t ↦ ⋯.choose ⟨i, ⋯⟩) t)) = ↑t
So it seems tricky to guess that it's actually a good idea.
Kyle Miller (Aug 05 2025 at 02:33):
It occurred to me that it's possible to make use of a fancy feature to get Lean to extract chooses and auto-apply choose_spec. Here's a proof that doesn't mention any complicated-looking terms (e.g. no (mem_singleton _ _).mp i'.property). It's not necessarily any better, but in case it's interesting, here:
@[elab_as_elim]
theorem Exists.choose_ind {p q : α → Prop} {h : ∃ a, p a}
    (ind : ∀ a, p a → q a) :
    q h.choose :=
  ind _ h.choose_spec

/--
Example 3.5.10. I suspect most of the equivalences will require classical reasoning and only be
defined non-computably, but would be happy to learn of counterexamples.
-/
noncomputable abbrev SetTheory.Set.singleton_iProd_equiv (i:Object) (X:Set) :
    iProd (fun _:({i}:Set) ↦ X) ≃ X where
  toFun := fun t ↦ ((mem_iProd _).mp t.property).choose ⟨i, by simp⟩
  invFun := fun x ↦ ⟨tuple fun _ ↦ x, by rw [mem_iProd]; tauto⟩
  left_inv := by
    rintro ⟨t, ht⟩
    rw [mem_iProd] at ht
    obtain ⟨f, rfl⟩ := ht
    dsimp only
    congr! 2
    ext ⟨j, hj⟩
    rw [mem_singleton] at hj
    subst j
    congr!
    -- Now trying something fancy: let Lean figure out which choose we want
    -- using the elab-as-elim mechanism! This effectively does generalize_proofs,
    -- looking *only* for `choose` terms
    refine Exists.choose_ind ?_
    intro _ h
    rw [tuple_inj] at h
    exact h.symm
  right_inv := by
    ... omitted ...
Dan Abramov (Aug 05 2025 at 02:35):
Actually maybe I wouldn't get stuck here if this simp wasn't misbehaving:
left_inv := by
  intro t
  have h := ((mem_iProd _).mp t.property)
  have hx := h.choose_spec
  ext
  -- rw [tuple] at hx
  simp_rw [hx] -- tactic 'simp' failed, maximum recursion depth has been reached
  congr! with i'
  have hi' := (mem_singleton _ _).mp i'.property
  rw [hi']
Is there a way to show what the recursion is? I tried the debug thing it's suggesting and I didn't understand its output.
Rado Kirov (Aug 05 2025 at 02:35):
@Kyle Miller I can definitely use that in some abominations I had to write in later exercises - https://github.com/rkirov/analysis/blob/main/analysis/Analysis/Section_3_5.lean#L750 . Will give it a spin later.
Kyle Miller (Aug 05 2025 at 02:36):
More about elab_as_elim: Right before that refine, the goal is
⊢ ⋯.choose = f
and then right after it's
⊢ ∀ (a : {i}.toSubtype → X.toSubtype), tuple f = tuple a → a = f
Basically, elab_as_elim is a hint for how Lean should try to solve for the q argument. It finds any cases of Exists.choose _ from the goal, then generalizes it, making a lambda.
In this case it's pretty finicky because it's matching on proofs rather than values, so it won't always correctly solve for _, but maybe that's a Lean bug.
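A toy run of this in the same context as the snippet above (assuming Mathlib and the Exists.choose_ind theorem as stated), where the goal mentions h.choose directly:

example (h : ∃ n : Nat, 0 < n) : 0 < h.choose := by
  refine Exists.choose_ind ?_  -- goal becomes `∀ a, 0 < a → 0 < a`
  intro a ha
  exact ha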
Rado Kirov (Aug 05 2025 at 02:38):
So it seems tricky to guess that it's actually a good idea.
One way to justify it: you know you wrote ((mem_iProd _).mp t.property).choose in toFun, so its choose_spec variant will come into play in the proof.
So you can predict that and, right as you introduce t, write
have hx := ((mem_iProd _).mp t.property).choose_spec
but now this has ↑t, which seems hard to remove, so in reverse you introduce it into the goal.
Rado Kirov (Aug 05 2025 at 02:40):
Btw, my guess is that if you want to remove generalize_proofs you can reverse engineer the choose_spec you need, but in the other direction it is toFun ∘ invFun, so if you want to spell it out it is something like
((mem_iProd _).mp (⟨tuple fun _ ↦ x, by rw [mem_iProd]; tauto⟩).property).choose
which is a mouthful, and it is easier to let Lean reduce it a bit and then pull it out with generalize_proofs.
Dan Abramov (Aug 05 2025 at 03:03):
OK this one is stumping me completely.
left_inv := by
  intro t
  have h := ((mem_iProd _).mp t.property)
  have hx := h.choose_spec
  unfold tuple at *
  ext
  simp_rw [hx] -- This works
  -- rw [hx]
  congr! with i'
  have hi' := (mem_singleton _ _).mp i'.property
  rw [hi']
Why does simp_rw work here but rw doesn't?
They seem to have the same effect on the right side. On the left side, simp_rw flashes green (rw doesn't), but even with set_option pp.proofs true I can't see a meaningful difference.
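This isn't necessarily the exact failure here, but the classic case where simp_rw succeeds and rw fails is rewriting under a binder (a sketch assuming Mathlib):

import Mathlib.Tactic

example (f g : Nat → Nat) (h : ∀ n, f n = g n) :
    (fun n ↦ f n) = fun n ↦ g n := by
  -- rw [h]    -- fails: `f n` sits under the `fun n ↦ _` binder
  simp_rw [h]  -- rewrites under the binder, then closes the goal by rfl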
Rado Kirov (Aug 05 2025 at 03:46):
:hmm: also just adding simp only after rw [hx] fixes it. Good mystery.
Rado Kirov (Aug 05 2025 at 03:55):
Oh, what I see depends on whether the cursor is 1) one space before simp_rw [hx] 2) right at the beginning of simp_rw [hx] ?!?
Screenshot 2025-08-04 at 8.54.52 PM.png
Screenshot 2025-08-04 at 8.54.35 PM.png
Rado Kirov (Aug 05 2025 at 03:59):
Looks like a bug in what simp_rw shows. I think it shows the state after the simp while the cursor is still in front?
Aaron Liu (Aug 05 2025 at 10:08):
simp_rw does an extra simp only for you, and for some reason the position associated with that is in front of the tactic syntax.