Zulip Chat Archive
Stream: Is there code for X?
Topic: Bolzano-Weierstrass
Patrick Massot (Apr 01 2020 at 21:44):
I should be the one answering this question, but just to make sure: do we have anything that looks like Bolzano-Weierstrass? Specifically, do we know that every sequence with values in `Icc a b`
has a convergent subsequence? https://github.com/leanprover-community/mathlib/blob/33764abc51cbd699d43976085513444216639670/src/topology/sequences.lean#L21-L22 is not too encouraging, but there may be more elementary places to look. I didn't find anything in metric spaces.
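(For reference, a hedged Lean 3 sketch of the kind of statement being asked about; the phrasing is only an illustration of the question, not a claim about what mathlib contains, and the proof is left as `sorry`.)

```lean
import topology.instances.real
open set filter

-- Illustrative statement only: every sequence with values in `Icc a b` has a
-- subsequence converging to a point of `Icc a b`.
example (a b : ℝ) (u : ℕ → ℝ) (hu : ∀ n, u n ∈ Icc a b) :
  ∃ x ∈ Icc a b, ∃ φ : ℕ → ℕ, strict_mono φ ∧ tendsto (u ∘ φ) at_top (nhds x) :=
sorry
```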
Bhavik Mehta (Apr 01 2020 at 22:03):
This might not be what you're looking for, but infinite Ramsey, which I proved here, implies that any sequence in a linear order has a monotone subsequence, which would help with a big part of BW.
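(For reference, a hedged Lean 3 sketch of the monotone-subsequence statement being alluded to; the phrasing is illustrative only, not the form the Ramsey argument produces, and the proof is left as `sorry`.)

```lean
import order.basic

-- Illustrative statement only: any sequence in a linear order has a subsequence
-- along which it is monotone (nondecreasing or nonincreasing).
example {α : Type*} [linear_order α] (u : ℕ → α) :
  ∃ φ : ℕ → ℕ, strict_mono φ ∧
    ((∀ m n, m ≤ n → u (φ m) ≤ u (φ n)) ∨ (∀ m n, m ≤ n → u (φ n) ≤ u (φ m))) :=
sorry
```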
Patrick Massot (Apr 02 2020 at 09:24):
Thanks, but I think I'll try a more direct path.
Patrick Massot (Apr 02 2020 at 09:25):
However, I realize I have no idea how to do a very basic proof step in Lean. How do I define a sequence by induction in the middle of a proof? It doesn't seem possible to use the equation compiler in a `let`.
Bhavik Mehta (Apr 02 2020 at 11:51):
I had this exact question! For me, the right thing to use was Reid's crec construction
Chris Hughes (Apr 02 2020 at 12:04):
I think just `nat.rec_on` should be okay most of the time.
Patrick Massot (Apr 02 2020 at 15:11):
Thanks Bhavik, this `crec` looks promising. And https://leanprover.zulipchat.com/#narrow/stream/116395-maths/topic/Inductive.20construction is the thread I should have been looking for. Chris, `nat.rec_on` is not good enough here (or at least it's hugely inconvenient).
Mario Carneiro (Apr 02 2020 at 15:25):
`crec` is for when the type of the sequence you are constructing requires a partial proof of correctness of earlier parts of the sequence. It's a pretty advanced tool and doesn't sound like what you are describing, which is just how to write recursive functions in a proof.
Mario Carneiro (Apr 02 2020 at 15:25):
do you have an example?
Patrick Massot (Apr 02 2020 at 15:28):
Ok, maybe this discussion is too abstract.
Patrick Massot (Apr 02 2020 at 15:28):
Could you please prove:
```lean
example (ψ : ℕ → ℕ) (hψ : ∀ n, ψ n ≥ n) :
  ∃ f : ℕ → ℕ, (∀ n m, n < m → (ψ ∘ f) n < (ψ ∘ f) m) ∧ ∀ n, f n ≥ n :=
sorry
```
without using a special trick that would bypass the recursive definition issue.
Patrick Massot (Apr 02 2020 at 15:34):
The proof I have on paper is: set f(0) = 0 and then, assuming f(n-1) is defined, set f(n) = ψ(f(n-1)) + 1.
Mario Carneiro (Apr 02 2020 at 15:46):
Option 1: equation compiler hacks
```lean
example (ψ : ℕ → ℕ) (hψ : ∀ n, ψ n ≥ n) :
  ∃ f : ℕ → ℕ, (∀ n m, n < m → (ψ ∘ f) n < (ψ ∘ f) m) ∧ ∀ n, f n ≥ n :=
⟨λ n, match n with
  | 0 := 0
  | (n+1) := ψ (by exact _match n) + 1
  end,
 begin
   split; intros,
   { induction a; dsimp [(∘), _example._match_1],
     { exact hψ _ },
     { exact lt_trans a_ih (hψ _) } },
   { induction n, {apply le_refl},
     dsimp [_example._match_1],
     exact nat.succ_le_succ (le_trans n_ih (hψ _)) }
 end⟩
```
Mario Carneiro (Apr 02 2020 at 15:47):
Option 2: nat.rec_on
```lean
example (ψ : ℕ → ℕ) (hψ : ∀ n, ψ n ≥ n) :
  ∃ f : ℕ → ℕ, (∀ n m, n < m → (ψ ∘ f) n < (ψ ∘ f) m) ∧ ∀ n, f n ≥ n :=
begin
  refine ⟨λ n, nat.rec_on n 0 (λ n ih, ψ ih + 1), λ m n h, _, λ n, _⟩,
  { induction h; dsimp [(∘)],
    { exact hψ _ },
    { exact lt_trans h_ih (hψ _) } },
  { induction n, {apply le_refl},
    exact nat.succ_le_succ (le_trans n_ih (hψ _)) }
end
```
Mario Carneiro (Apr 02 2020 at 15:49):
Option 3: local def
```lean
def foo (ψ : ℕ → ℕ) : ℕ → ℕ
| 0 := 0
| (n+1) := ψ (foo n) + 1

example (ψ : ℕ → ℕ) (hψ : ∀ n, ψ n ≥ n) :
  ∃ f : ℕ → ℕ, (∀ n m, n < m → (ψ ∘ f) n < (ψ ∘ f) m) ∧ ∀ n, f n ≥ n :=
⟨foo ψ, begin
  split; intros,
  { induction a; dsimp [(∘), foo],
    { exact hψ _ },
    { exact lt_trans a_ih (hψ _) } },
  { induction n, {apply le_refl},
    dsimp [foo],
    exact nat.succ_le_succ (le_trans n_ih (hψ _)) }
end⟩
```
Patrick Massot (Apr 02 2020 at 15:49):
Option 1 says `invalid match/convoy expression, expected type is not known`. But this is using an old Lean + mathlib (frozen in early January).
Mario Carneiro (Apr 02 2020 at 15:50):
It works for me but you can see if this works
```lean
example (ψ : ℕ → ℕ) (hψ : ∀ n, ψ n ≥ n) :
  ∃ f : ℕ → ℕ, (∀ n m, n < m → (ψ ∘ f) n < (ψ ∘ f) m) ∧ ∀ n, f n ≥ n :=
⟨λ n, (match n with
  | 0 := 0
  | (n+1) := ψ (by exact _match n) + 1
  end : ℕ),
 begin
   split; intros,
   { induction a; dsimp [(∘), _example._match_1],
     { exact hψ _ },
     { exact lt_trans a_ih (hψ _) } },
   { induction n, {apply le_refl},
     dsimp [_example._match_1],
     exact nat.succ_le_succ (le_trans n_ih (hψ _)) }
 end⟩
```
Patrick Massot (Apr 02 2020 at 15:50):
Same error.
Mario Carneiro (Apr 02 2020 at 15:50):
This version should always work
```lean
example (ψ : ℕ → ℕ) (hψ : ∀ n, ψ n ≥ n) :
  ∃ f : ℕ → ℕ, (∀ n m, n < m → (ψ ∘ f) n < (ψ ∘ f) m) ∧ ∀ n, f n ≥ n :=
⟨λ n, match n : ℕ → ℕ with
  | 0 := 0
  | (n+1) := ψ (by exact _match n) + 1
  end,
 begin
   split; intros,
   { induction a; dsimp [(∘), _example._match_1],
     { exact hψ _ },
     { exact lt_trans a_ih (hψ _) } },
   { induction n, {apply le_refl},
     dsimp [_example._match_1],
     exact nat.succ_le_succ (le_trans n_ih (hψ _)) }
 end⟩
```
Patrick Massot (Apr 02 2020 at 15:52):
But option 2 works (option 3 is not good, it relies on having everything needed before starting a proof).
Patrick Massot (Apr 02 2020 at 15:52):
Thank you very much.
Patrick Massot (Apr 02 2020 at 15:52):
I really, really think we should have a tactic to help set this up.
Mario Carneiro (Apr 02 2020 at 15:52):
I think the best option is option 3
Mario Carneiro (Apr 02 2020 at 15:53):
Option 1 is just an obfuscated version of option 3
Mario Carneiro (Apr 02 2020 at 15:53):
because lean extracts the def into its own function anyway
Patrick Massot (Apr 02 2020 at 15:53):
Option 3 is not viable in general. We want to be able to build a sequence in the middle of a complicated proof. Extracting the required context for an auxiliary def could be a mess.
Mario Carneiro (Apr 02 2020 at 15:54):
Even if you are in the middle of the proof, you can just collect the hypotheses that are currently active and put them as parameters to the recursive function. This is what Lean does (note that `_example._match_1` in the first case has a ψ argument).
Patrick Massot (Apr 02 2020 at 15:54):
We need a tactic handling this, and all the cases that Bhavik and Reid were interested in.
Mario Carneiro (Apr 02 2020 at 15:54):
The `abstract` tactic does this for you.
Mario Carneiro (Apr 02 2020 at 15:54):
it's not an editor plugin though
Patrick Massot (Apr 02 2020 at 15:55):
What is the abstract tactic?
Mario Carneiro (Apr 02 2020 at 15:58):
Here is a version of option 2 which abstracts the definition after constructing it. Note that in the latter two goals the function has the name `foo._aux_1`, and this name lives on after the def is done.
```lean
def foo (ψ : ℕ → ℕ) (hψ : ∀ n, ψ n ≥ n) :
  ∃ f : ℕ → ℕ, (∀ n m, n < m → (ψ ∘ f) n < (ψ ∘ f) m) ∧ ∀ n, f n ≥ n :=
begin
  refine ⟨λ n, _, λ m n h, _, λ n, _⟩,
  abstract { exact nat.rec_on n 0 (λ n ih, ψ ih + 1) },
  { induction h; dsimp [(∘)],
    { exact hψ _ },
    { exact lt_trans h_ih (hψ _) } },
  { induction n, {apply le_refl},
    exact nat.succ_le_succ (le_trans n_ih (hψ _)) }
end

#print foo._aux_1
-- def foo._aux_1 : (ℕ → ℕ) → ℕ → ℕ :=
-- λ (ψ : ℕ → ℕ) (n : ℕ), nat.rec_on n 0 (λ (n ih : ℕ), ψ ih + 1)
```
Patrick Massot (Apr 02 2020 at 15:59):
I don't see how it helps. Now you have the auxiliary definition when you no longer need it, right?
Mario Carneiro (Apr 02 2020 at 16:00):
What it doesn't give you is the nice parsing with an equation compiler that `def` provides.
Mario Carneiro (Apr 02 2020 at 16:01):
What it accomplishes, which other tactics don't usually do, is creating a definition in the global environment from a term stuck in a local context.
Patrick Massot (Apr 02 2020 at 16:01):
But we don't want a def at all. We want to construct a sequence during a proof and then throw it away.
Mario Carneiro (Apr 02 2020 at 16:02):
We do want a def, because a def has equations and stuff that you want recursive functions to have
Mario Carneiro (Apr 02 2020 at 16:02):
If the recursion is small enough, like in this example, option 2 is tolerable, but the function is this big expression that makes things harder to see
Patrick Massot (Apr 02 2020 at 16:03):
That's why we need a tactic.
Mario Carneiro (Apr 02 2020 at 16:04):
For bigger recursive functions, the equation compiler packs the recursor behind a constant; you only unfold it through `dsimp` or `rw`, and it automatically refolds the recursive call into the constant.
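(A toy illustration of that interface, using a made-up definition `bar` rather than anything from this thread:)

```lean
-- `bar` is compiled by the equation compiler; `simp`/`dsimp` with the definition
-- name use its generated equation lemmas to unfold one recursion step.
def bar : ℕ → ℕ
| 0       := 1
| (n + 1) := bar n + 2

example (n : ℕ) : bar (n + 1) = bar n + 2 := by simp [bar]
```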
Patrick Massot (Apr 02 2020 at 16:04):
Just like we needed the `choose` tactic. The tactic will need to create both the function and the corresponding properties in the context.
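(For comparison, a minimal example of what the existing `choose` tactic does; the example statement here is made up, but the tactic call is standard mathlib usage.)

```lean
import tactic

-- `choose` turns a ∀/∃ hypothesis into a function together with its defining
-- property, both added to the local context.
example {α : Type*} (P : α → α → Prop) (h : ∀ x, ∃ y, P x y) :
  ∃ f : α → α, ∀ x, P x (f x) :=
begin
  choose f hf using h,  -- f : α → α,  hf : ∀ x, P x (f x)
  exact ⟨f, hf⟩,
end
```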
Mario Carneiro (Apr 02 2020 at 16:05):
It can't be in the local context because defeq matters here
Mario Carneiro (Apr 02 2020 at 16:05):
the function is supposed to definitionally unfold a certain way
Patrick Massot (Apr 02 2020 at 16:06):
How does `abstract` help with that?
Mario Carneiro (Apr 02 2020 at 16:06):
It creates a def for the call to the recursor
Mario Carneiro (Apr 02 2020 at 16:06):
and then that def now satisfies the requisite defeq lemmas
Sebastien Gouezel (Apr 02 2020 at 16:07):
Just for the record, I needed to construct a sequence in the middle of the proof of the Baire theorem, with complicated assumptions that it wouldn't make sense to pull out of the proof, and I used `nat.rec_on` (see https://github.com/leanprover-community/mathlib/blob/a88356ff00b46ca1ea498c282e14c245e1424f79/src/topology/metric_space/baire.lean#L129). It worked perfectly well, but if you see a better way to refactor it I am interested. I also used it to prove completeness of the set of compact subsets for the Hausdorff distance, see https://github.com/leanprover-community/mathlib/blob/a88356ff00b46ca1ea498c282e14c245e1424f79/src/topology/metric_space/closeds.lean#L126. So this comes up pretty often in analysis, and a nice tactic would definitely be useful.
Mario Carneiro (Apr 02 2020 at 16:07):
I should mention that `abstract` is very rarely used and somewhat "raw". Some more work needs to go into it to make it more generally useful.
Patrick Massot (Apr 02 2020 at 16:07):
Mario, why can't you do that in a tactic if `abstract` can do it?
Mario Carneiro (Apr 02 2020 at 16:07):
You can; some work is needed.
Mario Carneiro (Apr 02 2020 at 16:08):
abstract is doing the equivalent of taking a term and copying it into a new def
Patrick Massot (Apr 02 2020 at 16:08):
Is `abstract` written in Lean or C++?
Mario Carneiro (Apr 02 2020 at 16:09):
the only smarts it has is identifying which variables from the current context need to be in the new def to make it typecheck
Mario Carneiro (Apr 02 2020 at 16:09):
It's a relatively small lean wrapper around some C++
Patrick Massot (Apr 02 2020 at 16:10):
So we need to change Lean, right?
Mario Carneiro (Apr 02 2020 at 16:10):
Actually, I stand corrected, almost all of the smarts are in Lean:
```lean
meta def abstract (tac : tactic unit) (suffix : option name := none)
  (zeta_reduce := tt) : tactic unit :=
do fail_if_no_goals,
   gs ← get_goals,
   type ← if zeta_reduce then target >>= zeta else target,
   is_lemma ← is_prop type,
   m ← mk_meta_var type,
   set_goals [m],
   tac,
   n ← num_goals,
   when (n ≠ 0) (fail "abstract tactic failed, there are unsolved goals"),
   set_goals gs,
   val ← instantiate_mvars m,
   val ← if zeta_reduce then zeta val else return val,
   c ← mk_aux_decl_name suffix,
   e ← add_aux_decl c type val is_lemma,
   exact e
```
Mario Carneiro (Apr 02 2020 at 16:10):
The C++ core bit is `add_aux_decl` at the end.
Patrick Massot (Apr 02 2020 at 16:11):
Is there hope then?
Mario Carneiro (Apr 02 2020 at 16:12):
This still isn't really what you are after, since it does nothing to make writing recursive functions easy. It doesn't call the equation compiler, and it doesn't even generate a regular equation for the term, which breaks `dsimp [foo._aux_1]`.
Mario Carneiro (Apr 02 2020 at 16:12):
I think some of the new APIs in community Lean can help with this.
Patrick Massot (Apr 02 2020 at 16:13):
Ok, this definitely seems to be above my tactic-writing skills. Let's just say that Sébastien, Bhavik and I would be very grateful if some expert could make this dream come true.
Mario Carneiro (Apr 02 2020 at 16:15):
I think the best option is to make option 1 have nicer syntax and skip the hackery, like so
```lean
example (ψ : ℕ → ℕ) (hψ : ∀ n, ψ n ≥ n) :
  ∃ f : ℕ → ℕ, (∀ n m, n < m → (ψ ∘ f) n < (ψ ∘ f) m) ∧ ∀ n, f n ≥ n :=
let foo : ℕ → ℕ
    | 0 := 0
    | (n+1) := ψ (foo n) + 1 in
begin
  split; intros,
  { induction a; dsimp [(∘), foo],
    { exact hψ _ },
    { exact lt_trans a_ih (hψ _) } },
  { induction n, {apply le_refl},
    dsimp [foo],
    exact nat.succ_le_succ (le_trans n_ih (hψ _)) }
end
```
Patrick Massot (Apr 02 2020 at 16:17):
No, this is still assuming you know everything needed upfront.
Mario Carneiro (Apr 02 2020 at 16:17):
No, it can be in the middle of a proof too
Mario Carneiro (Apr 02 2020 at 16:18):
It might be a bit messy to exit and re-enter tactic mode but I do this all the time when I want to use match
Mario Carneiro (Apr 02 2020 at 16:19):
Here's a horrible idea:
```lean
example (ψ : ℕ → ℕ) (hψ : ∀ n, ψ n ≥ n) :
  ∃ f : ℕ → ℕ, (∀ n m, n < m → (ψ ∘ f) n < (ψ ∘ f) m) ∧ ∀ n, f n ≥ n :=
⟨let foo : ℕ → ℕ
   | 0 := 0
   | (n+1) := ψ (foo n) + 1
 end,
 begin
   split; intros,
   { induction a; dsimp [(∘), foo],
     { exact hψ _ },
     { exact lt_trans a_ih (hψ _) } },
   { induction n, {apply le_refl},
     dsimp [foo],
     exact nat.succ_le_succ (le_trans n_ih (hψ _)) }
 end⟩
```
That `let foo ... end` is just a named match.
Mario Carneiro (Apr 02 2020 at 16:21):
in the middle of a proof:
```lean
example (ψ : ℕ → ℕ) (hψ : ∀ n, ψ n ≥ n) :
  ∃ f : ℕ → ℕ, (∀ n m, n < m → (ψ ∘ f) n < (ψ ∘ f) m) ∧ ∀ n, f n ≥ n :=
begin
  use (let foo : ℕ → ℕ
       | 0 := 0
       | (n+1) := ψ (foo n) + 1
       end),
  split; intros,
  { induction a; dsimp [(∘), foo],
    { exact hψ _ },
    { exact lt_trans a_ih (hψ _) } },
  { induction n, {apply le_refl},
    dsimp [foo],
    exact nat.succ_le_succ (le_trans n_ih (hψ _)) }
end
```
Mario Carneiro (Apr 02 2020 at 16:23):
We could also steal Coq's notation and call it `fix` instead.
Patrick Massot (Apr 02 2020 at 16:24):
I swear I tried this `let` + `|` thing.
Mario Carneiro (Apr 02 2020 at 16:24):
It doesn't work
Mario Carneiro (Apr 02 2020 at 16:24):
it's a proposed syntax
Mario Carneiro (Apr 02 2020 at 16:24):
conveniently unused as of now
Patrick Massot (Apr 02 2020 at 16:24):
Oh ok.
Mario Carneiro (Apr 02 2020 at 16:25):
I'm more interested in whether this solves your problems
Patrick Massot (Apr 02 2020 at 16:26):
But really this looks too much like programming. My dream tactic would open a goal for `foo 0`, a goal for `foo (n+1)`, just like induction does, and then give me all the required lemmas and definitional equalities.
Mario Carneiro (Apr 02 2020 at 16:26):
You can do that today
Mario Carneiro (Apr 02 2020 at 16:26):
just define a function by induction
Mario Carneiro (Apr 02 2020 at 16:27):
If you are defining a function and want all the lemmas to look nice, I think there is a decent argument that you should write the function in term mode like this, so that you actually have to write down what the lemmas look like
Mario Carneiro (Apr 02 2020 at 16:28):
For example, did you know that
```lean
def foo : ℕ → ℕ
| 0 := 0
| (n+1) := 1

#print prefix foo.equations
-- foo.equations._eqn_1 : foo 0 = 0
-- foo.equations._eqn_2 : ∀ (n : ℕ), foo (n + 1) = 1

def foo' : ℕ → ℕ
| 0 := 0
| (nat.succ n) := 1

#print prefix foo'.equations
-- foo'.equations._eqn_1 : foo' 0 = 0
-- foo'.equations._eqn_2 : ∀ (n : ℕ), foo' (nat.succ n) = 1
```
produce different equation lemmas?
Mario Carneiro (Apr 02 2020 at 16:29):
the equation compiler uses the way you write the cases as a cue for how to state the lemmas
Mario Carneiro (Apr 02 2020 at 16:35):
I'm struggling to figure out what the UI for an equation compiler tactic would be. I guess something like:
```lean
equation_compile [0, n+1] { { /- P 0 -/ }, { /- P (n+1) -/ } },
```
but all the pieces seem to be placed badly
Patrick Massot (Apr 02 2020 at 16:40):
I'm not sure actually.
Patrick Massot (Apr 11 2020 at 22:11):
I have a variation on this question that is also related to handling finite stuff. Here is my new challenge:
```lean
example {γ : Type*} [decidable_eq γ] {P : γ → set γ → Prop}
  (h : ∀ t, finite t → ∃ c, P c t) :
  ∃ u : ℕ → γ, ∀ n, P (u n) (u '' Iio n) :=
sorry
```
Patrick Massot (Apr 11 2020 at 22:14):
The math proof is an obvious strong induction: apply assumption `h` to the part of the sequence that has already been constructed.
Patrick Massot (Apr 12 2020 at 09:08):
I should have posted that in the new members stream using a pseudonym :sad:
Mario Carneiro (Apr 12 2020 at 09:16):
MWE?
Mario Carneiro (Apr 12 2020 at 09:17):
```lean
import data.set.intervals data.set.finite
open set

example {γ : Type*} [decidable_eq γ] {P : γ → set γ → Prop}
  (h : ∀ t, finite t → ∃ c, P c t) :
  ∃ u : ℕ → γ, ∀ n, P (u n) (u '' Iio n) :=
sorry
```
Kenny Lau (Apr 12 2020 at 09:42):
```lean
import data.set.intervals data.set.finite data.nat.basic
open set

instance (n : ℕ) : fintype (Iio n) :=
fintype.of_finset (finset.range n) $ by simp

theorem nat.strong_rec_on_beta {P : ℕ → Sort*} {ih} {n : ℕ} :
  (nat.strong_rec_on n ih : P n) = ih n (λ m hmn, (nat.strong_rec_on m ih : P m)) :=
sorry -- I give up

example {γ : Type*} [decidable_eq γ] {P : γ → set γ → Prop}
  (h : ∀ t, finite t → ∃ c, P c t) :
  ∃ u : ℕ → γ, ∀ n, P (u n) (u '' Iio n) :=
⟨λ n, @nat.strong_rec_on (λ _, γ) n $ λ n ih,
   classical.some $ h (range $ λ m : Iio n, ih m.1 m.2) (finite_range _),
 λ n, begin
   refine nat.strong_induction_on n (λ n ih, _),
   rw nat.strong_rec_on_beta,
   convert classical.some_spec (h _ _),
   ext x,
   split,
   { rintros ⟨m, hmn, rfl⟩, exact ⟨⟨m, hmn⟩, rfl⟩ },
   { rintros ⟨⟨m, hmn⟩, rfl⟩, exact ⟨m, hmn, rfl⟩ }
 end⟩
```
Kevin Buzzard (Apr 12 2020 at 11:33):