Zulip Chat Archive

Stream: new members

Topic: Coarse notions on metric spaces: Review+Help


bottine (Feb 02 2022 at 07:27):

I've started trying to write down this quasi-isometry stuff that I want to attack. The first step, I thought, is probably to write down some properties of metric spaces. I've hit some roadblocks, and thought it could be useful to get some feedback.
Here is the code:

import data.int.interval
import topology.algebra.ordered.compact
import topology.metric_space.emetric_space


universes u

open filter function set fintype function emetric_space
open_locale nnreal ennreal


section space

  variables (α : Type u) [pseudo_emetric_space α]

  def is_locally_finite :=
    ∀ x : α, ∀ r : ℝ≥0, (emetric.ball x r).finite

  def is_uniformly_locally_finite_with (k : ℝ≥0 → ℕ) :=
    is_locally_finite α ∧ ∀ x : α, ∀ r : ℝ≥0, ((emetric.ball x r).to_finset).card ≤ (k r)

  def is_uniformly_locally_finite :=
    ∃ k : ℝ≥0 → ℕ, is_uniformly_locally_finite_with α k

  def is_coarsely_connected_with (K : ℝ≥0) :=
    ∀ x x' : α,
      ∃ n : ℕ, ∃ w : fin (n+1) → α,
        w (fin.mk 0 (nat.zero_lt_succ n)) = x ∧
        w (fin.mk n (by simp)) = x' ∧    -- not sure how to make that clean
        ∀ (i j : fin n), edist (w i) (w j) ≤ K * |j - i|

  def is_coarsely_geodesic_with (K k : ℝ≥0) :=
    ∀ x x' : α,
      ∃ n : ℕ, ∃ w : fin (n+1) → α,
        w (fin.mk 0 (nat.zero_lt_succ n)) = x ∧
        w (fin.mk n (by simp)) = x' ∧    -- lol
        ∀ (i j : fin n), edist (w i) (w j) ≤ K * |j - i| ∧ (|j - i| : ℝ≥0) ≤ k * (edist (w i) (w j))

end space

section subset

variables {α : Type u} [pseudo_emetric_space α]

  def is_coarsely_dense_in_with (ε : ℝ≥0) (s t : set α) :=
    ∀ x (hx : x ∈ t), ∃ y (hy : y ∈ s), edist x y ≤ ε

  def is_coarsely_dense_with (ε : ℝ≥0) (s : set α) :=
    is_coarsely_dense_in_with ε s s -- (set.univ α) TODO !!! replace s by set.univ α in such a way that it typechecks

  def is_coarsely_dense_in (s t : set α) :=
    ∃ (ε : ℝ≥0), is_coarsely_dense_in_with ε s t

  def is_coarsely_dense (s : set α) :=
    ∃ (ε : ℝ≥0), is_coarsely_dense_with ε s

  def is_net (s : set α) := is_coarsely_dense s

  def is_coarsely_separated_with (δ : ℝ≥0) (s : set α) :=
    ∀ x (hx : x ∈ s) y (hy : y ∈ s), x ≠ y → edist x y > δ
  -- We take > δ to get a better "duality" between coarsely separated and dense

  def is_coarsely_separated (s : set α) :=
    ∃ δ : ℝ≥0, δ > 0 ∧ is_coarsely_separated_with δ s
  -- We need to assume δ > 0 because δ = 0 would just imply that s (with the induced metric) is an emetric space
  -- rather than a pseudo-emetric space

  def is_coarsely_separated_net_in_with (δ ε : ℝ≥0) (s t : set α) :=
    is_coarsely_separated_with δ s ∧ is_coarsely_dense_in_with ε s t

  def is_coarsely_separated_net_with (δ ε : ℝ≥0) (s : set α) :=
    is_coarsely_separated_with δ s ∧ is_coarsely_dense_with ε s

  def is_coarsely_separated_net_in (s t : set α) :=
    (is_coarsely_separated s) ∧ (is_coarsely_dense_in s t)

  def is_coarsely_separated_net (s : set α) :=
    (is_coarsely_separated s) ∧ (is_coarsely_dense s)



  -- Facts related to density
  namespace is_coarsely_dense_in_with

    lemma refl (s : set α) :
      is_coarsely_dense_in_with 0 s s
    := begin
      rintros x xs,
      use x,
      split,
      { exact xs, },
      { simp, }
    end

    lemma trans {ε ε' : ℝ≥0} {r s t : set α} :
      is_coarsely_dense_in_with ε r s → is_coarsely_dense_in_with ε' s t →
      is_coarsely_dense_in_with (ε + ε') r t
    := begin
      rintros r_in_s s_in_t,
      rintros z zt,
      rcases s_in_t zt with ⟨y, ys, yd⟩,
      rcases r_in_s ys with ⟨x, xr, xd⟩,
      use x,
      split,
      { exact xr, },
      {
        calc edist z x ≤ (edist z y) + (edist y x) : edist_triangle z y x
                   ... ≤ ε           + ε'          : sorry -- by linarith [xd,yd] -- best way here?
      }
    end

    lemma weaken {ε ε' : ℝ≥0} {s s' t t' : set α} :
      is_coarsely_dense_in_with ε s t → s ⊆ s' → t' ⊆ t → ε ≤ ε' →
      is_coarsely_dense_in_with ε' s' t'
    := begin
      intros s_in_t s_sub_s' t'_sub_t ε_le_ε',
      rintros z zt',
      have zt : z ∈ t, from t'_sub_t zt',
      cases s_in_t zt with x xgood,
      cases xgood with xs xd,
      have xs' : x ∈ s', from s_sub_s' xs,
      use x,
      split,
      { exact xs' },
      {
        calc edist z x ≤ ε  : xd
                   ... ≤ ε' : ennreal.coe_le_coe.mpr ε_le_ε'
      }
    end

  end is_coarsely_dense_in_with

  namespace coarsely_separated_net

    def is_max_coarsely_separated_in_with (δ : ℝ≥0) (s S : set α) :=
      s ⊆ S ∧
      is_coarsely_separated_with δ s ∧
      (∀ t : set α, s ⊆ t → t ⊆ S → is_coarsely_separated_with δ t → s = t)


    lemma max_coarsely_separated_in_is_net (δ : ℝ≥0) (s S : set α) :
      is_max_coarsely_separated_in_with δ s S → is_coarsely_separated_net_in_with δ δ s S
    := begin
      rintros ⟨s_sub_S, s_sep, s_max⟩,
      split,
      { exact s_sep },
      {
        rintros x xS,
        let t := s.insert x,
        by_contradiction H,
        push_neg at H,
        have x_notin_s : x ∉ s, {
          by_contradiction x_in_s,
          -- and then?
          have lol : edist x x > δ, from H x_in_s,
          sorry, -- and then?

        }, -- use h telling us that x is far from all elements of s
        have s_sub_t : s ⊆ t, from subset_insert x s,
        have s_ne_t : s ≠ t, from ne_insert_of_not_mem s x_notin_s,
        have t_sub_S : t ⊆ S, from insert_subset.mpr ⟨xS, s_sub_S⟩,
        have : is_coarsely_separated_with δ t,
        {
          rintros x xt,
          rintros y yt,

          -- apply forall_insert_of_forall,
          -- actually need set.forall_insert_of_forall
          sorry,
        },
        exact s_ne_t (s_max t s_sub_t t_sub_S this),
      }
    end

    lemma exists_max_coarsely_separated_in_with (δ : ℝ≥0) (S : set α) :
      ∃ s : set α, is_max_coarsely_separated_in_with δ s S
    := begin
      sorry -- Zorn
    end

    lemma exists_coarsely_separated_net_in_with (δ : ℝ≥0) (S : set α) :
      ∃ s ⊆ S, is_coarsely_separated_net_in_with δ δ s S
    := begin
      rcases exists_max_coarsely_separated_in_with δ S with ⟨s, s_sub_S, s_sep, s_max_sep⟩,
      use s,
      split,
      { exact s_sub_S },
      { exact max_coarsely_separated_in_is_net δ s S ⟨s_sub_S, s_sep, s_max_sep⟩ },
    end
  end coarsely_separated_net

end subset

There are a few sorrys that I'd be glad to get help on. Also, since I'd eventually like to have it merged, any further substantive criticism/discussion would be welcome. I can also describe my further plans!

Johan Commelin (Feb 02 2022 at 07:42):

I don't know anything about metric spaces. But I do know that mathlib heavily relies on filters, and I expect that a bunch of these concepts that you are defining would have to be connected to the filter library so that these developments play well together.

Also, since I'd eventually like to have it merged, any further substantive criticism/discussion would be welcome.

Minor stylistic issue: we don't indent declarations inside sections/namespaces. So the word def or lemma always appears flush left. (There are some very old files in mathlib that don't do this. We should fix that.)

bottine (Feb 02 2022 at 07:44):

Minor stylistic issue: we don't indent declarations inside sections/namespaces. So the word def or lemma always appears flush left. (There are some very old files in mathlib that don't do this. We should fix that.)

Noted!

bottine (Feb 02 2022 at 07:45):

Concerning filters, I'm not really sure: As far as I know/see it, these coarse notions on metric spaces don't have much topological content to them… but maybe I'm misled here

Johan Commelin (Feb 02 2022 at 07:50):

You should also prove all the different implications between these notions. And take advantage of dot-notation in the names.
If you prove is_foo → is_bar, then call the lemma is_foo.is_bar. Because then you can use it as follows: if you have an assumption h : is_foo somewhere, and you want is_bar, you can simply write h.is_bar.
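For instance, a minimal sketch of the pattern (the two predicates here are made up purely for illustration):

import tactic

def is_foo (n : ℕ) : Prop := 1 < n
def is_bar (n : ℕ) : Prop := 0 < n

-- prove `is_foo → is_bar` and name it with the `is_foo.` prefix:
lemma is_foo.is_bar {n : ℕ} (h : is_foo n) : is_bar n :=
lt_trans zero_lt_one h

-- now dot-notation works on any `h : is_foo n`:
example {n : ℕ} (h : is_foo n) : is_bar n := h.is_bar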

bottine (Feb 02 2022 at 07:56):

Should trivial implications also be proved? As in, if I define a property extending another one, does the "weakening" deserve a lemma? Similarly, I don't know how many variations on a definition should be provided (say being dense relative to another set instead of just being dense, or the _with versions, etc). In a sense, I feel like it can be a waste to define every single minor variation and implication, if I don't know that I will use them.

Johan Commelin (Feb 02 2022 at 07:57):

bottine said:

Should trivial implications also be proved? As in, if I define a property extending another one, the "weakening" deserves a lemma?

Library users will thank you if you do that for them.

Johan Commelin (Feb 02 2022 at 07:58):

If you look at things like continuous you will see that there are variations continuous_at, continuous_within and same for deriv etc... And systematic ways of passing from one to the other. It takes a bit of effort to set this up, but afterwards it is pleasant to use.
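For example, one of those systematic passage lemmas in action (real mathlib declarations, shown only to illustrate the pattern):

import topology.basic

example {α β : Type*} [topological_space α] [topological_space β]
  {f : α → β} (h : continuous f) (x : α) : continuous_at f x :=
h.continuous_at  -- docs#continuous.continuous_at, global → pointwise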

Patrick Massot (Feb 02 2022 at 08:53):

There are some things in this that make sense for any topological space. There are also many things that make sense in any uniform space, but I'm not sure this is worth the trouble. I will help you clean this but I don't have time right now. Please make sure you follow the style guide and repost your code.

Daniel Roca González (Feb 02 2022 at 08:54):

The right generality is not uniform spaces, but coarse spaces.
Coarse structures (like the ones induced by metric spaces) are "dual" to uniform spaces: this means that rather than being filters they are "cofilters", so to speak. One can go about it by defining them via the "cocoarse filter" instead: in the metric case, these are the subsets of X × X whose complements have bounded diameter. I actually defined that a while ago if you think it's good to take that approach.

Daniel Roca González (Feb 02 2022 at 08:56):

import order.filter.lift
import topology.subset_properties
import topology.uniform_space.basic
import topology.metric_space.basic
import data.real.nnreal
import tactic.rewrite_search.frontend

open set filter topological_space

open_locale uniformity topological_space big_operators filter nnreal ennreal

set_option eqn_compiler.zeta true

universes u
variables {α : Type*} {β : Type*} {γ : Type*} {δ : Type*} {ι : Sort*}

def coid_rel {α : Type*} := {p : α × α | p.1 ≠ p.2}

@[simp] theorem nmem_coid_rel {a b : α} : (a, b) ∈ @coid_rel α ↔ a ≠ b := iff.rfl

@[simp] theorem coid_rel_subset {s : set (α × α)} : s ⊆ coid_rel ↔ ∀ a, (a, a) ∉ s :=
by simp [subset_def]; exact forall_congr (λ a,
begin
  split,
  { show (∀ (b : α), (a, b) ∈ s → ¬a = b) → (a, a) ∉ s,
    contrapose, simp, },
  { show (a, a) ∉ s → (∀ (b : α), (a, b) ∈ s → ¬a = b),
    intros not_diag b in_s,
    have := not_diag,
    by_contradiction,
    rw h at in_s,
    contradiction, }
end)

@[simp] theorem co_of_coid_id_rel : (@coid_rel α)ᶜ = id_rel :=
begin
  ext v,
  unfold coid_rel id_rel,
  simp,
end

/-- The composition of corelations -/
def cocomp_rel {α : Type u} (r₁ r₂ : set (α×α)) := {p : α × α | ∀ z : α, (p.1, z) ∈ r₁ ∨ (z, p.2) ∈ r₂}

localized "infix ` □ `:55 := cocomp_rel" in coarse

@[simp] theorem mem_cocomp_rel {r₁ r₂ : set (α×α)}
  {x y : α} : (x, y) ∈ r₁ □ r₂ ↔ ∀ z, (x, z) ∈ r₁ ∨ (z, y) ∈ r₂ := iff.rfl

@[simp] theorem not_mem_cocomp_rel {r₁ r₂ : set (α×α)}
  {x y : α} : (x, y) ∉ r₁ □ r₂ ↔ ∃ z, (x, z) ∉ r₁ ∧ (z, y) ∉ r₂ :=
begin
  split,
  {
    intro not_in_comp,
    simp at not_in_comp,
    cases not_in_comp with z g,
    by_contra h,
    simp at h,
    have either_r : (x, z) ∈ r₁ ∨ (z, y) ∈ r₂,
    {
      exact or_iff_not_imp_left.mpr (h z),
    },
    contradiction,
  },
  {
    intro exists_in_neither,
    cases exists_in_neither with z hz,
    simp,
    use z,
    exact not_or_distrib.mpr hz,
  }
end

@[simp] lemma co_of_cocomp_eq_comp_of_co {r₁ r₂ : set(α×α)} :
  (r₁ □ r₂)ᶜ = r₁ᶜ ○ r₂ᶜ :=
begin
  ext v,
  cases v with x y,
  simp,
  push_neg,
  exact iff.rfl,
end

@[simp] lemma not_comp_iff_cocomp_of_co {r₁ r₂ : set(α×α)}
  : (r₁ ○ r₂)ᶜ = r₁ᶜ □ r₂ᶜ :=
begin
  apply compl_inj_iff.mp,
  simp,
  exact boolean_algebra.to_core (set (α × α)), -- why is this needed?
end

@[simp] theorem swap_coid_rel : prod.swap '' coid_rel = @coid_rel α :=
set.ext $ assume ⟨a, b⟩, by simp [image_swap_eq_preimage_swap]; rw eq_comm


theorem monotone_cocomp_rel [preorder β] {f g : β → set (α×α)}
  (hf : monotone f) (hg : monotone g) : monotone (λ x, (f x) □ (g x)) :=
begin
  unfold monotone,
  intros a b lt,
  unfold cocomp_rel,
  simp,
  intros a' b' h z,
  cases h z,
  {
    left,
    apply hf lt,
    exact h_1,
  },
  {
    right,
    apply hg lt,
    exact h_1,
  }
end



@[mono]
lemma cocomp_rel_mono {f g h k : set (α×α)} (h₁ : h ⊆ f) (h₂ : k ⊆ g) : h □ k ⊆ f □ g :=
begin
  unfold cocomp_rel, simp,
  intros a b h z,
  have := h z,
  tauto,
end


lemma prod_mk_nmem_cocomp_rel {a b c : α} {s t : set (α×α)} (h₁ : (a, c) ∉ s) (h₂ : (c, b) ∉ t) :
  (a, b) ∉ s □ t :=
begin
  simp,
  use c,
  push_neg,
  split,
  assumption',
end

@[simp] lemma coid_cocomp_rel {r : set (α×α)} : coid_rel □ r = r :=
begin
  apply compl_inj_iff.mp,
  rw co_of_cocomp_eq_comp_of_co,
  simp,
end

lemma cocomp_rel_assoc {r s t : set (α×α)} :
  (r □ s) □ t = r □ (s □ t) :=
begin
  apply compl_inj_iff.mp,
  repeat {rw co_of_cocomp_eq_comp_of_co},
  exact comp_rel_assoc,
end

lemma subset_cocomp_self {α : Type*} {s : set (α × α)} (h : s ⊆ coid_rel) : s □ s ⊆ s :=
λ ⟨x, y⟩ xy_in, by {
  simp at h,
  simp at xy_in,
  have := xy_in y,
  cases this,
  assumption,
  have := h y,
  contradiction,
}


structure coarse_space (α : Type u) :=
(cocoarse   : filter (α × α))
(corefl     : cocoarse ≤ 𝓟 coid_rel)
(symm       : tendsto prod.swap cocoarse cocoarse)
(cocomp     : cocoarse ≤ cocoarse.lift' (λ s, s □ s))

def coarse_space.mk' {α : Type u} (U : filter (α × α))
  (corefl : coid_rel ∈ U)
  (symm : ∀ r ∈ U, prod.swap ⁻¹' r ∈ U)
  (cocomp : ∀ r ∈ U, r □ r ∈ U) : coarse_space α :=
⟨U, λ r ru, mem_of_superset corefl ru, symm,
  begin
    intros v h,
    rw [mem_lift'_sets] at h,
    rcases h with ⟨w, hw, hcomp⟩,
    have : w □ w ∈ U, by {exact cocomp w hw},
    apply mem_of_superset this, exact hcomp,
    refine monotone_cocomp_rel _ _, tidy,
  end⟩

lemma coarse_space.eq :
  ∀ {u₁ u₂ : coarse_space α}, u₁.cocoarse = u₂.cocoarse → u₁ = u₂
| ⟨u₁, _, _, _⟩ ⟨u₂, _, _, _⟩ h := by { congr, exact h }

attribute [class] coarse_space


section coarse_space
variables [coarse_space α]

def cocoarse (α : Type u) [s : coarse_space α] : filter (α × α) :=
  @coarse_space.cocoarse α s

localized "notation `𝓒'` := cocoarse" in cocoarse

def coarse (α : Type u) [s : coarse_space α] : set (set (α×α)) :=
  compl '' (𝓒' α).sets

localized "notation `𝓒` := coarse" in cocoarse

lemma mem_coarse {s : set (α×α)} : s ∈ 𝓒 α ↔ sᶜ ∈ 𝓒' α :=
begin
  split,
  {rintro h, rcases h with ⟨t, h_h_left, rfl⟩, simpa,},
  {rintro h, use sᶜ, simpa,}
end

lemma cocoarse_le_corefl : 𝓒' α ≤ 𝓟 coid_rel :=
@coarse_space.corefl α _

lemma corefl_mem_cocoarse :
  coid_rel ∈ 𝓒' α :=
begin
  have := @coarse_space.corefl α _,
  simpa,
end

lemma mem_cocoarse_of_eq {x y : α} {s : set (α × α)} (hx : x ≠ y) :
  ∃ s ∈ 𝓒' α, (x, y) ∈ s :=
begin
  use coid_rel,
  split, { exact corefl_mem_cocoarse },
  simpa,
end

lemma symm_le_cocoarse : map (@prod.swap α α) (𝓒' _) ≤ (𝓒' _) :=
(@coarse_space.symm α _)

lemma cocoarse_le_cocomp : 𝓒' α ≤ (𝓒' α).lift' (λ s : set (α×α), s □ s) :=
(@coarse_space.cocomp α _)

lemma tendsto_swap_cocoarse : tendsto (@prod.swap α α) (𝓒' α) (𝓒' α) :=
symm_le_cocoarse

lemma cocomp_mem_cocoarse_sets {s : set (α × α)} (hs : s ∈ 𝓒' α) :
  s □ s ∈ 𝓒' α :=
begin
  apply cocoarse_le_cocomp,
  rw mem_lift'_sets, use s, split,
  {assumption},
  {intros x h, assumption,},
  {refine monotone_cocomp_rel _ _, tidy,}
end


end coarse_space


lemma mem_cocoarse_dist' [metric_space α] {s : set (α×α)} : s ∈ (⨅ r ∈ {r : ℝ | r ≥ 0}, 𝓟 ({p : α×α | dist p.1 p.2 ≤ r}ᶜ)) ↔ (∃ r ≥ 0, ∀ {p : α×α}, p ∉ s → dist p.1 p.2 ≤ r) :=
begin
  rw mem_binfi_of_directed _ _,
  show directed_on ((λ (i : ℝ), 𝓟 {p : α × α | dist p.fst p.snd ≤ i}ᶜ) ⁻¹'o ge) {r : ℝ | r ≥ 0},
  by {
    intros r₁ h₁ r₂ h₂,
    use max r₁ r₂,
    simp, tauto,
  },
  show {r : ℝ | r ≥ 0}.nonempty,
  by {use 1, simp,},
  simp [subset_def],
  split,
  repeat {
    rintro ⟨r, _, h⟩, use r,
    split, assumption,
    intros a b, contrapose, simp, apply h,
  },
end



instance coarse_of_bounded_space [metric_space α] : coarse_space α :=
{ cocoarse := (⨅ r ∈ {r : ℝ | r ≥ 0}, 𝓟 ({p : α×α | dist p.1 p.2 ≤ r}ᶜ)),
  corefl := by {
    have : (1 : ℝ) > (0 : ℝ),
    by linarith,
    have lm : 𝓟 ({p : α × α | dist p.fst p.snd ≤ 1}ᶜ) ≤ 𝓟 coid_rel,
    by {
      rw [principal_le_iff],
      intros V hV,
      rw mem_principal at hV,
      refine subset.trans _ hV,
      intro p, cases p with x y,
      simp only [not_le, mem_set_of_eq, nmem_coid_rel, ne.def, mem_compl_eq],
      intro v,
      have : dist x y ≠ 0,
      by linarith,
      rwa dist_ne_zero at this,
    },
    refine trans _ lm,
    rw filter.le_def,
    simp only [mem_cocoarse_dist'],
    intros s h, use 1,
    simp only [true_and, zero_le_one, prod.forall, ge_iff_le, mem_compl_eq],
    simp [compl_subset_comm] at h,
    intros a b in_s,
    have := h in_s, simp at this, assumption,
  },
  symm := by {
    intros f hf,
    simp only [ge_iff_le, mem_map],
    simp only [mem_cocoarse_dist'] at *,
    simp at *,
    conv in (∃ (r : ℝ), 0 ≤ r ∧ ∀ (x y : α), (y, x) ∉ f → dist x y ≤ r)
    begin
      congr, funext,
      rw forall_swap,
      find (dist _ _) {rw dist_comm},
    end,
    tauto,
  },
  cocomp := by {
    intro s,
    rw filter.mem_lift'_sets,
    show monotone (λ (s : set (α × α)), s □ s), by {exact monotone_cocomp_rel monotone_id monotone_id},
    rintro ⟨t, ⟨h, cocomp_subset⟩⟩,
    have comp_subset : sᶜ ⊆ tᶜ ○ tᶜ, by {rw compl_subset_compl, simpa,},
    rw mem_cocoarse_dist' at *,
    rcases h with ⟨r, r_ge_0, t_has_diam⟩,
    use 2*r,
    have two_r_ge_0 : 2 * r ≥ 0, by linarith,
    refine ⟨two_r_ge_0, _⟩,
    rintro p in_compl_s,
    have := comp_subset in_compl_s,
    rcases this with ⟨z, xz_in_compl_t, zy_in_compl_t⟩,
    have : dist p.fst z ≤ r ∧ dist z p.snd ≤ r,
    by exact ⟨t_has_diam xz_in_compl_t, t_has_diam zy_in_compl_t⟩,
    have := dist_triangle p.1 z p.2,
    linarith,
  }
}

open_locale cocoarse

lemma mem_cocoarse_dist [metric_space α] {s : set (α×α)} : s ∈ 𝓒' α ↔ (∃ (r : ℝ), r ≥ 0 ∧ ∀ {a b : α}, (a, b) ∉ s → dist a b ≤ r) :=
begin
  have unfold_prod : (∃ (r : ℝ), 0 ≤ r ∧ ∀ {a b : α}, (a, b) ∉ s → dist a b ≤ r) = (∃ r ≥ 0, ∀ {p : α×α}, p ∉ s → dist p.1 p.2 ≤ r),
  by simp,
  rw [unfold_prod, mem_cocoarse_dist'],
  tauto,
end


lemma mem_coarse_dist [metric_space α] {s : set (α×α)} : s ∈ 𝓒 α ↔ (∃ (r : ℝ), r ≥ 0 ∧ ∀ {a b : α}, (a, b) ∈ s → dist a b ≤ r) :=
begin
  rw mem_coarse,
  rw mem_cocoarse_dist,
  simp,
end

theorem cocoarse_basis_dist [metric_space α] :
  (𝓒' α).has_basis (λ r : ℝ, r ≥ 0) (λ r, {p : α×α | dist p.1 p.2 > r}) :=
begin
  rw filter.has_basis_iff, intro t,
  rw mem_cocoarse_dist, split,
  {
    rintro ⟨r, compl_t_bounded_by_r⟩, use r,
    rw [compl_subset_compl, subset_def],
    simpa,
  },
  {
    rintro ⟨r, r_ge_0, coball_subset_t⟩, use r,
    rw [compl_subset_compl, subset_def] at coball_subset_t,
    simp [mem_compl_iff] at *,
    tauto,
  }
end

bottine (Feb 02 2022 at 08:58):

@Patrick Massot Great, thanks! Shall I repost here?

bottine (Feb 02 2022 at 09:00):

@Daniel Roca González Interesting… do you have literature on the subject? Also, does this greater generality make my code redundant? How do you see both fitting together? I'm somewhat wary of "generality creep", and would rather have something basic up and running than chase the best interface, though it's probably also better not to rush towards the first construction I stumble upon.

bottine (Feb 02 2022 at 09:04):

(I'd particularly appreciate literature on this duality with uniform spaces.)

Kevin Buzzard (Feb 02 2022 at 09:26):

@bottine is this work just for a project you're doing to learn Lean or are you thinking of ultimately PRing things to mathlib?

bottine (Feb 02 2022 at 09:30):

I'd hope to PR, eventually, if I manage to write PR-able code. This current code is a mix of learning the basics and trying to set up grounds for doing GGT.

Patrick Massot (Feb 02 2022 at 10:13):

I think we should try to prove a couple of things with both the low-tech version and the fancy version and see whether this makes any difference. If the fancy version is harder to use then we should definitely stick to the low-tech one. In the case of uniform spaces many metric proofs are actually easier in the fancy setup, because it gets rid of irrelevant stuff.

Patrick Massot (Feb 02 2022 at 10:39):

You should also make sure @Sebastien Gouezel is aware of your efforts. He formalized a lot of metric geometry in Isabelle (including Gromov hyperbolicity).

Sebastien Gouezel (Feb 02 2022 at 10:42):

Yes, I am subscribed to this thread :-)

Rémi Bottinelli (Feb 05 2022 at 09:21):

OK, I'm slowly cleaning up the space.lean, coarse_lipschitz.lean, coarse_antilipschitz.lean files here: https://github.com/bottine/mathlib/tree/quasi_isometry/src/topology/metric_space/coarse . The plan is to have everything needed to easily show the equivalence between the different definitions of quasi-isometries.

Patrick Massot (Feb 05 2022 at 09:55):

Did you read the style guide? It would help discussing interesting things if you could fix indentation, braces, dangling colon-equal etc.

Daniel Roca González (Feb 05 2022 at 10:23):

I've also been doing a skeleton for the abstract coarse space version and proven that metric spaces are coarse. (A "coarse space" is an abstract definition of a space with a notion of sets of bounded diameter).

Rémi Bottinelli (Feb 05 2022 at 10:28):

Patrick Massot said:

Did you read the style guide? It would help discussing interesting things if you could fix indentation, braces, dangling colon-equal etc.

I did. I meant to ping you once things are clean but to share my progress in the meantime. Sorry for the confusion!

Daniel Roca González (Feb 05 2022 at 10:29):

One way to talk about quasi-isometries would be as follows: in general a coarse equivalence is not a QI, but a coarse equivalence between geodesic spaces is; and most coarse spaces are coarsely equivalent to a geodesic space, so we get QI equivalences for free. The problem is that the proof of this may be out of reach (we need topological versions of graphs).

Daniel Roca González (Feb 05 2022 at 10:29):

https://github.com/leanprover-community/mathlib/tree/coarse_structure_ggt/src/topology/coarse

Rémi Bottinelli (Feb 05 2022 at 10:33):

@Daniel Roca González Re. "a coarse equivalence between geodesic spaces is [QI]": if that is because the "coarse control functions" between your spaces are then necessarily affine, then I guess you don't need geodesicity, but just that any distance is realized by a "coarse walk", no?

Daniel Roca González (Feb 05 2022 at 10:38):

If by a "coarse walk" you mean a QI path from A to B, then yes I think so. If you mean a coarse map from A to B then maybe not.

Rémi Bottinelli (Feb 05 2022 at 10:40):

I mean that there exists some K>0 such that for any two points x,y in the space at distance d, there exists a sequence of something like d/K points each at distance at most K from their predecessor/successor, starting with x, and ending with y.
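In rough Lean terms, something like this (a hedged sketch: the name, the use of fin, and the exact cardinality bound are just one possible rendering of the above):

import topology.metric_space.basic

-- hypothetical: "any two points are joined by a K-coarse walk of about d/K steps"
def has_coarse_walks (α : Type*) [pseudo_metric_space α] : Prop :=
∃ K : ℝ, 0 < K ∧ ∀ x y : α, ∃ n : ℕ, ∃ w : fin (n + 1) → α,
  w 0 = x ∧ w (fin.last n) = y ∧
  (n : ℝ) * K ≤ dist x y + K ∧                        -- "something like d/K points"
  ∀ i : fin n, dist (w i.cast_succ) (w i.succ) ≤ K    -- consecutive points are K-close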

Daniel Roca González (Feb 05 2022 at 10:44):

Yes, I'm pretty sure that's the same as being QI to a geodesic space, not coarse equivalent.
A coarse map is a map which is proper and bornologous, which in a metric space means:

lemma metric.coarse_proper_iff (f : α → β) :
  coarse.proper f ↔ (∀ b : set β, emetric.diam b ≠ ⊤ → emetric.diam (f ⁻¹' b) ≠ ⊤) :=
sorry

lemma metric.bornologous_iff (f : α → β) : coarse.bornologous f
  ↔ (∀ (R : ℝ≥0), ∃ (S : ℝ≥0), ∀ x y, dist x y < R → dist (f x) (f y) < S) :=
sorry

Daniel Roca González (Feb 05 2022 at 10:54):

yeah, I think that would make things easier, you're right. I'll see if it's actually enough.

Rémi Bottinelli (Feb 05 2022 at 10:55):

Well, if the principal goal is GGT, then it's reasonable to work with the assumption that all spaces are coarsely geodesic/graphs/whatever, so I wouldn't reject the idea of dropping the pedestrian approach I tried.

Daniel Roca González (Feb 05 2022 at 11:00):

yes, that sounds good. I'll try to prove that being coarsely geodesic is enough.

Daniel Roca González (Feb 05 2022 at 11:02):

@Patrick Massot what do you think of the definition of coarsely geodesic in Rémi's branch? Can it be simplified in some way? It's quite convoluted, but I can't see how to simplify it...
https://github.com/bottine/mathlib/blob/quasi_isometry/src/topology/metric_space/coarse/space.lean

Rémi Bottinelli (Feb 05 2022 at 11:08):

Re the definition I used: I think it would be cleaner to say that α is coarsely geodesic if there exists (K,L,k,l) such that for any pair of points x, y, there exists a (K,L,k,l) quasi-isometric embedding from a path graph to the space with end points x and y. This is closest to "coarsifying" the strict definition, I guess.

Rémi Bottinelli (Feb 05 2022 at 11:08):

But I thought it might make more sense to have a definition without needing to define quasi-isometric embeddings first.

Daniel Roca González (Feb 05 2022 at 12:07):

No, I do think the definition should use QI embeddings

Patrick Massot (Feb 05 2022 at 13:47):

Rémi, you need to learn about

import data.fin.basic

variables n : ℕ

#check fin.last n

#check (0 : fin $ n+1)

Rémi Bottinelli (Feb 06 2022 at 11:24):

@Patrick Massot I tried to follow most of the guidelines wrt alignment: https://github.com/bottine/mathlib/blob/quasi_isometry/src/topology/metric_space/coarse/ (see only files spaces.lean, and coarse_(anti)lipschitz.lean, but haven't documented the code).

Rémi Bottinelli (Feb 06 2022 at 11:31):

If possible, I'd like some feedback on the following:

  • The definition of is_uniformly_locally_finite_with doesn't type-check, and I'm not sure how to make it do so: how do I tell .card that I really have a finset?
  • Generally, how do I decide on having _with variants or not, and how do I deal with the doubling of code this entails?
  • In max_coarsely_separated_in_is_net, I haven't yet managed to finish the proof, if someone wants to fill it in!
  • Same for the exists_max_coarsely_separated_in_with.
  • In files coarsely_(anti)lipschitz.lean, I have plenty of proofs by calculations that are relatively trivial, but filling in the steps is often tedious and results in long terms. Am I doing something wrong? It essentially all follows from monotonicity of all functions involved.
  • How to cleanly deal with implicit arguments coherently?
  • Similarly, is there a way to deal with coercion more efficiently than I do?

And then, I'll want to define quasi-isometric embeddings and quasi-isometries. I think I now have close to all lemmas necessary to show that all reasonable definitions are equivalent. Is there any preference of which definition to use?

To be honest, this code feels very messy, and I'm not sure how to make it properly organized.

Patrick Massot (Feb 06 2022 at 14:27):

For you first question, you can use:

variables (α : Type u) [pseudo_emetric_space α]

class is_locally_finite :=
(fintype_ball : ∀ x : α, ∀ r : ℝ≥0, fintype (ball x r))

attribute [instance] is_locally_finite.fintype_ball

def is_uniformly_locally_finite_with [is_locally_finite α] (k : ℝ≥0 → ℕ) :=
∀ x : α, ∀ r : ℝ≥0, card (ball x r) ≤ k r


def is_uniformly_locally_finite [is_locally_finite α] :=
∃ k : ℝ≥0 → ℕ, is_uniformly_locally_finite_with α k

Patrick Massot (Feb 06 2022 at 14:28):

But really is_locally_finite is a strange name since your definition doesn't seem to be equivalent to local finiteness as a topological space.

Patrick Massot (Feb 06 2022 at 14:29):

The natural definition would have an existential quantifier on the radius, so that the definition says that every point has some finite neighborhood.

Patrick Massot (Feb 06 2022 at 14:31):

An easy solution if you insist on using this definition is to rename it finite_balls_space or something like this.

Patrick Massot (Feb 06 2022 at 14:31):

In topology the natural definition is

class loc_finite_space (α : Type u) [topological_space α] : Prop :=
(has_basis : ∀ x : α, (𝓝 x).has_basis (λ s : set α, s ∈ 𝓝 x ∧ finite s) id)

lemma loc_finite_space_of_bases {α : Type u} [topological_space α]
  {ι : Sort*} {p : ι → Prop} {s : α → ι → set α}
  (h : ∀ x, (𝓝 x).has_basis p (s x)) (h' : ∀ x i, p i → finite (s x i)) :
  loc_finite_space α :=
⟨begin
  intro x,
  apply (h x).to_has_basis,
  { intros i pi,
    exact ⟨s x i, ⟨(h x).mem_of_mem pi, h' x i pi⟩, by refl⟩ },
  { rintros U ⟨U_in, hU⟩,
    rcases (h x).mem_iff.mp U_in with ⟨i, pi, hi⟩,
    exact ⟨i, pi, hi⟩ }
end⟩

Patrick Massot (Feb 06 2022 at 14:33):

About _with variants, it really depends on whether you need this in math. There is no Lean-only answer.

Patrick Massot (Feb 06 2022 at 14:43):

I also did your missing proof:

    have : is_coarsely_separated_with δ t,
    { rintros z (rfl | zs) y (rfl | ys),
     { exact λ h, (h rfl).elim },
     { exact λ hzy, H ys },
     { intro hzy,
       rw edist_comm,
       exact H zs },
     { exact s_sep zs ys } },

Patrick Massot (Feb 06 2022 at 14:46):

I exhausted my time budget, I'll return to this later if nobody else beats me to it.

Rémi Bottinelli (Feb 06 2022 at 14:51):

Patrick Massot said:

But really is_locally_finite is a strange name since your definition doesn't seem to be equivalent to local finiteness as a topological space.

Indeed, I see now that the naming is not ideal. The problem is that in the case of an arbitrary metric space, the definition of "finite balls" is not as clear as in the case of graphs, for which this is the same as "locally finite".

Rémi Bottinelli (Feb 06 2022 at 14:55):

And thanks for your time and the snippet :)

Yaël Dillies (Feb 06 2022 at 15:15):

Also, please stop taking common names in the root namespace :smile:
I swear I had no clue what docs#locally_finite was about before I went to see the docs.

Daniel Roca González (Feb 06 2022 at 15:26):

@Rémi Bottinelli for locally compact metric spaces (which are the ones we'll be interested in) your definition should be equivalent to the usual topological one: you can use compact sets instead of balls in the locally compact case

Daniel Roca González (Feb 06 2022 at 15:26):

docs#locally_finite is a genuinely different thing, since it's a property for families of sets, not of spaces.

Yaël Dillies (Feb 06 2022 at 15:33):

Yeah of course. I'm mentioning it for the name, not for the meaning.

Yaël Dillies (Feb 06 2022 at 15:34):

People would have probably been quite upset if I called docs#locally_finite_order locally_finite.

Rémi Bottinelli (Feb 06 2022 at 15:37):

Yaël Dillies said:

Also, please stop taking common names in the root namespace :smile:
I swear I had no clue what docs#locally_finite was about before I went to see the docs.

Mmh, is that an actual problem? I thought the folder hierarchy induced its own namespacing, making it OK

Rémi Bottinelli (Feb 06 2022 at 15:44):

@Daniel Roca González I'm not sure what you're aiming at. In "your definition should be", are you saying that whatever definition we take for "locally finite", we want it to agree with the topological definition ? I think this is already the case if the space is locally compact: any ball can be covered by finitely many balls of a given radius, hence is finite, so we're good. Are we?

Alex J. Best (Feb 06 2022 at 15:45):

The folder heierachy is more just for organising things, it has no effect on namespace.
The sort of issue we encounter is if you have a bunch of things imported because you are working on, say, manifolds which have a lot of prerequisites. We need to be able to write things unambiguously
For instance what should locally_finite G mean for some topogical group?
It could mean that G is locally finite as a topological space, or it could mean that G is locally finite as a group.
The way we solve this is with explicit namespaces in the code. So

namespace topological_space
def locally_finite ...
end topological_space

means that the locally_finite defined there is really called topological_space.locally_finite. If someone then wants to work on topology, they open topological_space, and then locally_finite refers only to that notion.

The folder structure has no effect on namespaces though

Rémi Bottinelli (Feb 06 2022 at 15:47):

Ah, I see, thanks for the explanation!

Rémi Bottinelli (Feb 06 2022 at 15:49):

But, say, I define "coarse lipschitz" maps at some point: coarse_lipschitz. I presume those fit in the root namespace because the name is pretty unambiguous, or should everything be namespaced, essentially?

Daniel Roca González (Feb 06 2022 at 15:50):

@Rémi Bottinelli yes, that's what I want to say: I don't think it should have a different name, because in all the cases we're interested in, it's equivalent

Daniel Roca González (Feb 06 2022 at 15:51):

rather, if we want to phrase it in a different way, we should use an iff lemma in the setting we want: see docs#metric.continuous_iff for an example of something like this

Rémi Bottinelli (Feb 06 2022 at 15:54):

And re namespaces: shall I just wrap it all in a namespace coarse … end coarse namespace? Does having this namespace mean I could/should drop the "coarse"/"coarsely" adjectives in my names?

Yaël Dillies (Feb 06 2022 at 15:59):

I personally prefer more descriptive names over namespaces.

Rémi Bottinelli (Feb 06 2022 at 16:01):

[o/t: I don't know if thumbsup is the proper "ack" emoji/reaction, or if it's interpreted here as approbation]

Kevin Buzzard (Feb 06 2022 at 16:05):

I usually use it to mean "I agree"

Rémi Bottinelli (Feb 06 2022 at 16:07):

ah, mmh, then maybe I shall use the octopus, whatever this one means :)

Reid Barton (Feb 06 2022 at 16:42):

The topology library got a pass on namespacing to some extent by being first (and by being important) but there are some names in it (like embedding maybe?) that should probably be namespaced

Alex J. Best (Feb 06 2022 at 16:49):

Rémi Bottinelli said:

ah, mmh, then maybe I shall use the octopus, whatever this one means :smile:

https://leanprover.zulipchat.com/#narrow/stream/113489-new-members/topic/octopus/near/199551941 :grinning: :octopus:

Patrick Massot (Feb 06 2022 at 20:27):

I'm back with a bit of time. I'm reading a bit more the space file. Note that mathlib's tradition is that stupid lemmas can have obfuscated proofs so you can write:

lemma refl (s : set α) : is_coarsely_dense_with_in 0 s s :=
λ x xs, ⟨x, xs, by simp⟩

Patrick Massot (Feb 06 2022 at 20:28):

Then the next proofs can be compressed without losing any readability:

lemma trans {ε ε' : ℝ≥0} {r s t : set α}
  (r_in_s : is_coarsely_dense_with_in ε r s) (s_in_t : is_coarsely_dense_with_in ε' s t) :
  is_coarsely_dense_with_in (ε + ε') r t :=
begin
  rintros z zt,
  rcases s_in_t zt with ⟨y, ys, yd⟩,
  rcases r_in_s ys with ⟨x, xr, xd⟩,
  use [x, xr],
  calc edist z x ≤ edist z y + edist y x : edist_triangle z y x
             ... ≤ ε'        + ε        : add_le_add yd xd
             ... = ε + ε'               : add_comm _ _
end

lemma weaken {ε ε' : ℝ≥0} {s s' t t' : set α}
  (s_in_t : is_coarsely_dense_with_in ε s t)
  (s_sub_s' : s ⊆ s') (t'_sub_t : t' ⊆ t) (ε_le_ε' : ε ≤ ε') :
  is_coarsely_dense_with_in ε' s' t' :=
begin
  rintros z zt',
  have zt : z ∈ t, from t'_sub_t zt',
  rcases s_in_t zt with ⟨x, xs, xd⟩,
  have xs' : x ∈ s', from s_sub_s' xs,
  use [x, xs'],
  calc edist z x ≤ ε  : xd
             ... ≤ ε' : ennreal.coe_le_coe.mpr ε_le_ε'
end

Patrick Massot (Feb 06 2022 at 20:30):

If you want to rather improve readability then you can add type ascriptions and use obtain:

lemma weaken {ε ε' : ℝ≥0} {s s' t t' : set α}
  (s_in_t : is_coarsely_dense_with_in ε s t)
  (s_sub_s' : s ⊆ s') (t'_sub_t : t' ⊆ t) (ε_le_ε' : ε ≤ ε') :
  is_coarsely_dense_with_in ε' s' t' :=
begin
  rintros (z : α) (zt' : z ∈ t'),
  have zt : z ∈ t, from t'_sub_t zt',
  obtain ⟨x : α, xs : x ∈ s, xd : edist z x ≤ ε⟩ := s_in_t zt,
  have xs' : x ∈ s', from s_sub_s' xs,
  use [x, xs'],
  calc edist z x ≤ ε  : xd
             ... ≤ ε' : ennreal.coe_le_coe.mpr ε_le_ε'
end

Patrick Massot (Feb 06 2022 at 20:32):

Things like

  split,
  { exact s_sep},

in max_coarsely_separated_in_is_net costs one level of indent for very little gain. You can use refine ⟨s_sep, _⟩

Patrick Massot (Feb 06 2022 at 20:35):

The lines

  have x_notin_s : x ∉ s,
   { by_contradiction x_in_s,

prove that you are not quite yet used to how negation is handled in Lean. And I really wish by_contradiction would refuse to do that. By definition, x ∉ s means x ∈ s → false. So your first line should be intro x_in_s.
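A minimal illustration of the point (a toy example, not from the file):

import data.set.basic

example {α : Type*} (s : set α) (x : α) (h : ∀ y ∈ s, y ≠ x) : x ∉ s :=
begin
  intro x_in_s,          -- the goal `x ∉ s` unfolds to `x ∈ s → false`
  exact h x x_in_s rfl,  -- so a plain `intro` starts the proof, no contradiction needed
end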

Patrick Massot (Feb 06 2022 at 20:39):

The proof

lemma symm {K : ℝ≥0} {f₀ f₁ : α → β} :
  are_close_with K f₀ f₁ → are_close_with K f₁ f₀ :=
begin
  intro acw,
  intro x,
  specialize acw x,
  rw edist_comm (f₀ x) (f₁ x),
  exact acw,
end

should be

lemma symm {K : ℝ≥0} {f₀ f₁ : α → β} :
  are_close_with K f₀ f₁ → are_close_with K f₁ f₀ :=
begin
  intros acw x,
  rw ← edist_comm,
  exact acw x,
end

Patrick Massot (Feb 06 2022 at 20:41):

For

lemma trans {K L : ℝ≥0} {f g h : α → β} (c : are_close_with K f g) (d : are_close_with L g h) :
  are_close_with (K + L) f h :=
begin
  intro x,
  let y := f x,
  let z := g x,
  let w := h x,
  calc edist y w ≤ edist y z + edist z w : edist_triangle _ _ _
             ... ≤ K + L                 : add_le_add (c x) (d x)
             ... = (K + L)               : by simp,
end

it is enough and more readable to write

lemma trans {K L : ℝ≥0} {f g h : α → β} (c : are_close_with K f g) (d : are_close_with L g h) :
  are_close_with (K + L) f h :=
λ x, calc edist (f x) (h x) ≤ edist (f x) (g x) + edist (g x) (h x) : edist_triangle _ _ _
                        ... ≤ K + L : add_le_add (c x) (d x)

Patrick Massot (Feb 06 2022 at 20:42):

Did you know that lambda can destructure stuff as in

lemma symm (f₀ f₁ : α → β) : are_close f₀ f₁ → (are_close f₁ f₀) :=
λ ⟨K₀, cw₀⟩, ⟨K₀, are_close_with.symm cw₀⟩

Patrick Massot (Feb 06 2022 at 20:42):

Beware it can't do the rfl magic of rintro
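For instance (a toy example): the following closes only because the rintro pattern substitutes the equation on the fly; a bare λ pattern has no analogue of this.

import tactic

example {α : Type*} (a : α) : ∀ x : α, x = a → x = a :=
by { rintro x rfl, refl }  -- the `rfl` pattern rewrites `x` to `a` immediately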

Patrick Massot (Feb 06 2022 at 20:48):

I hope you can learn stuff from some compression.

lemma close_composites_of_dense {ε : ℝ≥0} {s : set α} (cdw : is_coarsely_dense_with ε s) :
  ∃ retract : α → subtype s,
    are_close_with ε (coe ∘ retract) (@id α) ∧
    are_close_with ε (retract ∘ coe) (@id (subtype s)) :=
begin
  -- First we restate `cdw` in terms the axiom of choice likes:
  have cdw' : ∀ x : α, ∃ y : subtype s, edist x y ≤ ε,
  { intro x,
    rcases cdw (mem_univ x) with ⟨y, ys, yd⟩,
    exact ⟨⟨y, ys⟩, yd⟩ },
  choose ret good using cdw',
  use ret,
  split;
  { intros x,
    rw edist_comm,
    exact good x }
end

Patrick Massot (Feb 06 2022 at 20:49):

lemma range_of_coarse_identity {C : ℝ≥0} {f : α → α} (close : are_close_with C id f) :
  is_coarsely_dense_with C (range f) :=
λ x hx, ⟨f x, mem_range_self x, close x⟩

Patrick Massot (Feb 06 2022 at 20:52):

I'm moving to the coarse_lipschitz file. You should learn about docs#le_rfl which is a version of docs#le_refl with an implicit argument

Patrick Massot (Feb 06 2022 at 20:56):

lemma iff_range_restriction_is (K L : ℝ≥0) (f : α → β) :
  coarse_lipschitz_with K L f ↔ coarse_lipschitz_with K L (range_factorization f) :=
begin
  split,
  { intro clw,
    intros x y,
    --rw range_factorization_coe,
    rw subtype.edist_eq _ _,
    rw range_factorization_coe,
    rw range_factorization_coe,
    apply clw,},
  { intro clw,
    intros x y,
    rw range_factorization_coe f x,
    rw range_factorization_coe f y,
    rw subtype.edist_eq _ _,
    apply clw,},
end

could be

lemma iff_range_restriction_is (K L : ℝ≥0) (f : α → β) :
  coarse_lipschitz_with K L f ↔ coarse_lipschitz_with K L (range_factorization f) :=
by simp only [coarse_lipschitz_with, subtype.edist_eq, range_factorization_coe]

Patrick Massot (Feb 06 2022 at 20:59):

lemma of_coarse_lipschitz_images
  {K L : ℝ≥0} {f : α → β} (clf : coarse_lipschitz_with K L f)
  {ε : ℝ≥0} {s t : set α} (cdiw : is_coarsely_dense_with_in ε s t) :
  is_coarsely_dense_with_in (L + K*ε) (f '' s) (f '' t) :=
begin
  rintros _ ⟨x, xt, rfl⟩,
  rcases cdiw xt with ⟨y, ys, yd⟩,
  use [f y, mem_image_of_mem f ys],
  calc edist (f x) (f y) ≤ L + K * edist x y  : clf _ _
                     ... ≤ L + K * ε          : le_add_mul le_rfl le_rfl yd
                     ... ≤ (L + K * ε : ℝ≥0)  : by simp
end

Patrick Massot (Feb 06 2022 at 21:01):

lemma iff_range_restriction_is {f : α → β} :
  coarse_lipschitz f ↔ coarse_lipschitz (range_factorization f) :=
⟨λ ⟨K, L, clwf⟩, ⟨K, L, (coarse_lipschitz_with.iff_range_restriction_is K L f).mp clwf⟩,
 λ ⟨K, L, clwf⟩, ⟨K, L, (coarse_lipschitz_with.iff_range_restriction_is K L f).mpr clwf⟩⟩

Patrick Massot (Feb 06 2022 at 21:03):

I'm not sure what to say about painful computations. The main issue here is the classic one: linarith doesn't handle nnreal and ennreal. But you can still simplify a bit, as in

lemma comp
  {Kf Lf : ℝ≥0} {f : α → β} (clf : coarse_lipschitz_with Kf Lf f)
  {Kg Lg : ℝ≥0} {g : β → γ} (clg : coarse_lipschitz_with Kg Lg g) :
  coarse_lipschitz_with (Kg * Kf) (Lg + Kg * Lf) (g ∘ f) :=
λ x y, calc
edist ((g ∘ f) x) ((g ∘ f) y) ≤ Lg + Kg * (edist (f x) (f y)) : clg (f x) (f y)
                          ... ≤ Lg + Kg * (Lf + Kf * edist x y) : le_add_mul le_rfl le_rfl (clf x y)
                          ... = (Lg + Kg * Lf) + (Kg * Kf) * edist x y : by ring
                          ... = _ : by simp

Rémi Bottinelli (Feb 07 2022 at 06:08):

Woah, thanks a lot!

Patrick Massot (Feb 07 2022 at 06:54):

Note there is no mathematical content to all those modifications. I've blindly applied usual Lean tricks. I couldn't tell what this is all about, I didn't read any statement.

Rémi Bottinelli (Feb 07 2022 at 07:10):

I assume I should mostly read pieces of mathlib to get more of these tricks/literacy?

Kevin Buzzard (Feb 07 2022 at 07:15):

You should just keep writing code and then other people will look at it and tell you tricks. At least, that's what I did. I found mathlib impossible to read and very intimidating as a learner.

Rémi Bottinelli (Feb 07 2022 at 07:15):

Works for me :)

Rémi Bottinelli (Feb 07 2022 at 07:40):

Patrick Massot said:

lemma iff_range_restriction_is {f : α → β} :
  coarse_lipschitz f ↔ coarse_lipschitz (range_factorization f) :=
⟨λ ⟨K, L, clwf⟩, ⟨K, L, (coarse_lipschitz_with.iff_range_restriction_is K L f).mp clwf⟩,
 λ ⟨K, L, clwf⟩, ⟨K, L, (coarse_lipschitz_with.iff_range_restriction_is K L f).mpr clwf⟩⟩

Any reason this is not by simpa only [coarse_lipschitz_with.iff_range_restriction_is], besides variety for pedagogical purposes?

Rémi Bottinelli (Feb 07 2022 at 07:43):

(I say simpa because simp doesn't seem to work)

Kevin Buzzard (Feb 07 2022 at 07:50):

I bet the longer proof compiles much quicker. It's swings and roundabouts probably (some advantages, some disadvantages). The longer proof doesn't fire up the tactic framework at all.

Rémi Bottinelli (Feb 07 2022 at 07:53):

I assume there is a kind of proof refinement process, where you start with "whatever you came up with that typechecks" and then refine the code until you get something striking a balance between readable, robust and efficient.

Kevin Buzzard (Feb 07 2022 at 08:00):

I think that when it comes to mathlib, robust is at the top of the list, and readable at the bottom (although not everyone likes the latter decision)

Patrick Massot (Feb 07 2022 at 08:00):

Rémi Bottinelli said:

I assume I should mostly read pieces of mathlib to get more of these tricks/literacy?

I hope you will already learn a lot by carefully comparing what you originally wrote with my version. This is much more efficient than reading efficient proofs that you didn't try to write yourself. This is precisely the reason why I did this yesterday.

Patrick Massot (Feb 07 2022 at 08:01):

Rémi Bottinelli said:

Patrick Massot said:

lemma iff_range_restriction_is {f : α → β} :
  coarse_lipschitz f ↔ coarse_lipschitz (range_factorization f) :=
⟨λ ⟨K, L, clwf⟩, ⟨K, L, (coarse_lipschitz_with.iff_range_restriction_is K L f).mp clwf⟩,
 λ ⟨K, L, clwf⟩, ⟨K, L, (coarse_lipschitz_with.iff_range_restriction_is K L f).mpr clwf⟩⟩

Any reason this is not by simpa only [coarse_lipschitz_with.iff_range_restriction_is] beside variety for pedagogical purposes ?

No, in that case I simply noticed your proof wasn't a real tactic proof (no rw or simp) so it could be turned into a proof term. And also the proof sounded like something you wouldn't bother mentioning on paper so it was eligible for obfuscation.

Kevin Buzzard (Feb 07 2022 at 08:02):

I am constantly doing this (golfing code) with students too -- they show me some code and I reply with some code which is much more compressed and often does the same thing in the same way. It's miserable having to write 40 lines doing something which looks trivial in math land, but if you only have to write 5 lines then you feel a lot better about it

Patrick Massot (Feb 07 2022 at 08:03):

Rémi Bottinelli said:

(I say simpa because simp doesn't seem to work)

That's presumably because simpa tries reflexivity after doing its work. There was a recent conversation about that where Mario, the author of simpa, was very surprised to learn that everybody was abusing this property, hence completely blurring the intended semantics of simpa
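A toy example of the intended use:

import tactic

example (n : ℕ) (h : n + 0 = 2) : n = 2 :=
by simpa using h  -- simp normalises `h` to `n = 2`, which then closes the goal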

Rémi Bottinelli (Feb 07 2022 at 08:04):

OK, I'll try and attack the third file coarse_antilipschitz.lean with these new tricks. I think you gave me plenty enough to munch on, @Patrick Massot , thanks!

Patrick Massot (Feb 07 2022 at 08:04):

Note also that you'll quickly learn how to smell proof inefficiency. There were many proofs in your file where I could see, without reading anything carefully, that they were wasting a lot of lines doing nothing.

Kevin Buzzard (Feb 07 2022 at 08:11):

Patrick you should mark some of my student projects!

Kevin Buzzard (Feb 07 2022 at 08:11):

Have have have have have refl

Rémi Bottinelli (Feb 07 2022 at 12:50):

https://github.com/bottine/mathlib/blob/31bfb8e72f8a1579ff5ea99b755dc5df42960b07/src/topology/metric_space/coarse/coarse_antilipschitz.lean#L335

Rémi Bottinelli (Feb 07 2022 at 12:51):

I'm having a "technical" problem here: the term `dense` has type `is_coarsely_dense_with_in L (range f) (univ)` and I want to return something of type `is_coarsely_dense_with L (range f)`, which by definition is just `is_coarsely_dense_with_in L (range f) (univ)`. I would have thought this is a "defeq" and lean would figure this out, but it seems not. What am I doing wrong?

Yaël Dillies (Feb 07 2022 at 13:00):

I don't see how that's defeq. true → p is not defeq to p. You probably want to write the lemma whatever_in_univ_iff (and the other direction whatever.whatever_in). See docs#monotone_on_univ and docs#monotone.monotone_on.
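A hedged sketch of that pattern in this thread's vocabulary (names hypothetical; if the `univ` version really is the literal definition, both directions are definitional, otherwise they become one-line proofs as for monotone_on):

lemma is_coarsely_dense_with_in_univ_iff {ε : ℝ≥0} {s : set α} :
  is_coarsely_dense_with_in ε s univ ↔ is_coarsely_dense_with ε s :=
iff.rfl

lemma is_coarsely_dense_with_in.is_coarsely_dense_with {ε : ℝ≥0} {s : set α}
  (h : is_coarsely_dense_with_in ε s univ) : is_coarsely_dense_with ε s :=
h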

Rémi Bottinelli (Feb 07 2022 at 13:09):

mmh, maybe my question was misguided and I wasn't understanding the error… let me see

Rémi Bottinelli (Feb 07 2022 at 13:11):

yeah, I think the error comes from somewhere else. Thanks for the pointers though!

Rémi Bottinelli (Feb 08 2022 at 05:55):

hah, the culprit was A COMMA, I'm ashamed!

Daniel Roca González (Feb 08 2022 at 08:27):

Classic!

Rémi Bottinelli (Feb 11 2022 at 13:52):

great day: I managed to use Zorn's lemma successfully!

Rémi Bottinelli (Feb 11 2022 at 13:54):

it wasn't even that hard

Rémi Bottinelli (Feb 12 2022 at 08:47):

OK, what would a good unit for a PR be? I'm close to having one notion of quasi-isometry formalised (quasi-isometric embedding with dense range). In my opinion, assuming the code were clean, this would be a good "semi-atomic" PR, since it defines the basic notions on a space plus one "useful" notion: quasi-isometries (along with it being an equivalence relation).

The WIP code is here.

Some questions:

  • There are multiple reasonable definitions of a quasi-isometry: should one define them all with descriptive names, and show equivalence? Which one should be the canonical one? For now I have chosen one, and am just planning on having fun with unicode arrows to distinguish them.
    I think that with the tools defined in my code, showing all equivalences will not be much work, but is it worth it?

  • I'm unsure about when to define structures vs properties: should we have predicates saying that a given function is a quasi-isometry, or bundle it somewhere? Similarly, iiuc, equivalence relations follow the refl, symm, trans nomenclature, while "arrows" follow something like self, inv, comp. Is that a good rule to go by, in the sense that if an arrow can be inverted, but only non-canonically, one should not use inv?

  • At some point, I need to use that "being close" for maps is a congruence relation when all (or sufficiently many) maps are coarsely Lipschitz. I feel like using simplification rules properly and tagging the congruence properties of "closeness" would make things easy to write. The problem is that I think I would need to define a type of "lipschitz maps" on which these congruence rules hold, but it seems a bit heavy to define a new class just for this…

Johan Commelin (Feb 12 2022 at 09:27):

Re "good unit for a PR": what you describe sounds reasonable. And the file you link to is currently 237 lines, which also seems a good size.
I can't really comment on your other questions atm.

Patrick Massot (Feb 12 2022 at 09:37):

For quasi-isometry, you should choose one definition, and write lemma proving it is equivalent to other definitions when you need it. Compare with docs#continuous and then docs#continuous_iff_continuous_at and docs#continuous_iff_is_closed.

Patrick Massot (Feb 12 2022 at 09:40):

The question about bundling is more subtle. It depends how you'll want to use them. Do you primarily want to be able to say that some pre-existing function is a quasi-isom or manipulate the set of quasi-isometries? If unsure then you can start unbundled and then later introduce bundled one. In topology (in mathlib) continuous functions are unbundled almost everywhere, but we do have bundled ones when we want to discuss topologies on spaces of continuous functions. @Anne Baanen may have a preprint for you to read.

Yaël Dillies (Feb 12 2022 at 09:41):

https://arxiv.org/abs/2202.01629

Patrick Massot (Feb 12 2022 at 09:41):

Thanks Yaël! I didn't know there was a public version

Yaël Dillies (Feb 12 2022 at 09:42):

Anne thought I might like the gory details :grinning:

Kevin Buzzard (Feb 12 2022 at 09:45):

@Yaël Dillies you should post a link to the abstract not the pdf -- at least this is what ArXiv tells everyone (I think there's even a Twitter bot now which nags people about this). The pdfs sometimes get tidied up and the links break; they're then regenerated when someone clicks from the abstract page

Yaël Dillies (Feb 12 2022 at 09:46):

Oh okay!

Rémi Bottinelli (Feb 12 2022 at 10:19):

Johan Commelin said:

Re "good unit for a PR": what you describe sounds reasonable. And the file you link to is currently 237 lines, which also seems a good size.
I can't really comment on your other questions atm.

Note that this file uses content of each of space.lean (which I guess should be renamed to basics) and coarse_(anti)lipschitz.lean, so that the whole of it is way larger than 237 lines (not deep content, though).

Rémi Bottinelli (Feb 12 2022 at 10:24):

Patrick Massot said:

The question about bundling is more subtle. It depends how you'll want to use them. Do you primarily want to be able to say that some pre-existing function is a quasi-isom or manipulate the set of quasi-isometries? If unsure then you can start unbundled and then later introduce bundled one. In topology (in mathlib) continuous functions are unbundled almost everywhere, but we do have bundled ones when we want to discuss topologies on spaces of continuous functions. Anne Baanen may have a preprint for you to read.

Noted. Honestly, I don't really have a clear picture on how I plan to use those definitions exactly… I guess it's when you need to actually use them that you know which way is better.

Rémi Bottinelli (Feb 12 2022 at 10:36):

RE that paper: doesn't lean4 use a different mechanism for dealing with those, which makes /something/ linear time rather than exponential as in lean3?

Patrick Massot (Feb 12 2022 at 10:52):

https://arxiv.org/abs/2001.04301

Kevin Buzzard (Feb 12 2022 at 12:51):

@Rémi Bottinelli the answer is that PRs of 100 lines are the sort of thing which is preferred, so if your code needs big imports but you have things working then you have evidence that your definitions are usable, so start PRing them in 50-150 line chunks and be prepared for reviews of the form "I believe this definition should be changed (and this will break your code)"; hopefully you have enough evidence to rebut them now. Be warned that the PR process is arduous because the maintainers have extremely high and exacting standards about mathlib's code, an approach which makes it harder to get stuff in mathlib but which has served the community extremely well.

Rémi Bottinelli (Feb 12 2022 at 13:15):

OK, that makes sense, thanks! I'll take some time to let things settle down, clean the code some more, and then have a look at the PR procedure guide.

Rémi Bottinelli (Feb 12 2022 at 13:22):

Can I already ask for "write access to non-master branches of the mathlib repository" ? my github handle is bottine.

Patrick Massot (Feb 12 2022 at 13:25):

done

Rémi Bottinelli (Feb 12 2022 at 13:26):

Thanks :)

Kevin Buzzard (Feb 12 2022 at 13:31):

Now you have no excuse not to start :-)

Rémi Bottinelli (Feb 13 2022 at 08:55):

https://github.com/leanprover-community/mathlib/pull/12010

Rémi Bottinelli (Feb 13 2022 at 08:55):

here goes: I hope it's all reasonably OK

Kevin Buzzard (Feb 13 2022 at 09:00):

It's well over 150 lines (three times as long :-( ) so you might have real problems getting anyone to review it. Can't you just PR the first third somehow?

Rémi Bottinelli (Feb 13 2022 at 09:02):

Three times as long is a no-go? I probably can, but then it's very low content-wise. I assume this should be made as an update to this specific PR?

Yaël Dillies (Feb 13 2022 at 09:03):

Also, you have many lines over 100 characters. You'll need to wrap them for style.

Rémi Bottinelli (Feb 13 2022 at 09:04):

Yeah, it's the comments: my reasoning was that since there would anyway be something to change, I might as well cut the comments in the end.

Yaël Dillies (Feb 13 2022 at 09:05):

It's really not that hard to keep your comments well trimmed if you're on VScode with the 100 characters guide on.

Yaël Dillies (Feb 13 2022 at 09:05):

You're introducing many new definitions, some of which you're not using at all. I'd suggest concentrating on fewer and developing more API about them.

Yaël Dillies (Feb 13 2022 at 09:06):

For example, docs#is_antichain is a really simple notion but I still managed to write 100 lines about it.

Rémi Bottinelli (Feb 13 2022 at 09:07):

But more API means getting past the 150 lines "soft limit" too, no?

Yaël Dillies (Feb 13 2022 at 09:07):

That's why you should introduce less definitions :wink:

Yaël Dillies (Feb 13 2022 at 09:09):

No joke, you have 20 new definitions and one new typeclass. A typical PR adds maybe 3 or 4 of them simultaneously at most.

Rémi Bottinelli (Feb 13 2022 at 09:09):

I'm unsure now: should I retract the PR, or just update it according to the feedback I get? I don't want to push bad stuff and make people sick of it too fast.

Kevin Buzzard (Feb 13 2022 at 09:09):

People don't review large PRs because they have 15 minutes for reviews in their spare time and think "golly I'm not looking at that one". Definitions are really hard to get right and need expert opinions. A long PR with lots of new definitions is asking for trouble. Everyone puts it off and then it starts rotting.

Yaël Dillies (Feb 13 2022 at 09:10):

First, remove all definitions for which you proved no lemma. We definitely won't take those.

Yaël Dillies (Feb 13 2022 at 09:11):

Second, if there are many left, remove some according to your common sense and keep them for a second PR.

Kevin Buzzard (Feb 13 2022 at 09:12):

Yeah close the PR and make a smaller one with <= 150 lines of non comment code and ideally <= 100. Also take a look at what the linters say -- nobody will review until your file compiles and lints, and 100+ character lines will stop it linting. The PR process is very hard. There are currently over 300 PRs in progress and it's really important to keep things super-easy for the maintainers, especially if you're a new contributor whose code will need some attention.

Yaël Dillies (Feb 13 2022 at 09:13):

I'm not sure the PR itself needs closing. Changes can be done in place. You can always move the surplus stuff to another branch.

Rémi Bottinelli (Feb 13 2022 at 09:15):

Mmh… Well, I closed it just before you replied, so I guess I'll leave it closed now that it's done. In any case I got some very actionable feedback, which is nice :)

Rémi Bottinelli (Feb 13 2022 at 09:16):

Thanks to the both of you, and back to the drawing board.

Kevin Buzzard (Feb 13 2022 at 09:17):

Every definition comes with a cost. Every definition wraps up something in extra wrapping and lean has to figure out whether to unwrap the wrapping or not, and this is hard. Every definition makes terms bigger and more complicated, and this is bad. So every definition has to come with an API which tells lean and users how to use the definition. If you can possibly get away without making a definition, don't make it. If you are making a definition then you need to surround it with theorems. Definitions are costly

Rémi Bottinelli (Feb 13 2022 at 11:49):

Does that mean that whenever you have a definition, say of the form lipschitz_with ε, and a non-qualified version lipschitz := ∃ ε, lipschitz_with ε, you should stick to one variant if you don't have a good reason to use both?

Yaël Dillies (Feb 13 2022 at 11:51):

Exactly

Rémi Bottinelli (Feb 13 2022 at 11:51):

OK, then there is a lot to cut down already!

Rémi Bottinelli (Feb 13 2022 at 13:25):

OK, I've shortened it to around 150 lines without comments, and a bit more than 200 with comments. I'll let it sit for a while since I may be rushing things.

Rémi Bottinelli (Feb 14 2022 at 07:35):

Thinking about it all, I have a few more "theoretical" questions, if you people don't mind:

  • RE "definition sparsity": I naturally tend to make a lot of definitions, e.g. just as forms of alias, and to be able to name things. For instance, in the old version of the code, I had a def is_max_blah, which I now inlined in the two functions using it. You might argue that if there are only two places where it's used, this is proof that it doesn't need a def, but at the same time I find it makes things less transparent. So, is there a way to use such "thin" definitions? Does this "definition sparsity" rule stem from an engineering rationale (mathlib being so big a project that some hard rules have to be adopted in order to keep things manageable)?
  • Branches: I see that every time I push to my branch on the mathlib repo, things get rebuilt, which might indicate that these branches are not meant for quick and dirty development. Is that the case?

Patrick Massot (Feb 14 2022 at 07:47):

You can always use local notations as a thin definition.
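For example, a hedged sketch in the setting of this thread (reusing its is_coarsely_separated_with; the notation name is made up):

import topology.metric_space.emetric_space
open_locale nnreal

variables {α : Type*} [pseudo_emetric_space α] (δ : ℝ≥0)

-- a thin alias via notation: nothing new for Lean to unfold, purely display/input
local notation `δ_separated` := is_coarsely_separated_with δ

example (s : set α) (h : δ_separated s) : is_coarsely_separated_with δ s := h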

Patrick Massot (Feb 14 2022 at 07:49):

About branches, you can use the building power. That's part of the community infrastructure. Specifically the machines are paid by the Hoskinson center. It's a bit silly to push every three lines, but you can use this infrastructure when you modify a file which is low in the hierarchy and triggers a lot of rebuild.

Rémi Bottinelli (Feb 14 2022 at 08:00):

Noted, thanks!

Rémi Bottinelli (Feb 14 2022 at 08:18):

@Patrick Massot In your opinion, how should I deal with this definition, for instance?

Rémi Bottinelli (Feb 14 2022 at 08:20):

I'm using it to have a nice "bundled" object to reason about when using Zorn.

Yaël Dillies (Feb 14 2022 at 09:05):

I think this definition should be internal to the proof.

Rémi Bottinelli (Feb 14 2022 at 10:30):

Indeed, it makes things much more transparent, and allows some shortening: https://github.com/leanprover-community/mathlib/blob/quasi_isometry/src/topology/metric_space/coarse/basic.lean .

Rémi Bottinelli (Feb 14 2022 at 11:03):

In the doc entry for rcases, there is a part about extracting a witness to the equality with some name, but I'm not sure how it works. What syntax should I use?

Kyle Miller (Feb 14 2022 at 11:08):

That's for the non-interactive rcases tactic. The interactive syntax is over at tactic#rcases, see the second-to-last paragraph (rcases h : e with PAT)
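A tiny usage example (toy types, just to show the syntax):

import tactic

example (f : ℕ → option ℕ) (n : ℕ) : true :=
begin
  rcases h : f n with _ | m,
  { trivial },  -- here `h : f n = none`
  { trivial },  -- here `h : f n = some m`
end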

Rémi Bottinelli (Feb 14 2022 at 11:10):

Oh, oops, thanks for clearing that up!

Rémi Bottinelli (Feb 15 2022 at 05:47):

I reopened the PR: hope it's more appropriate now :)


Last updated: Dec 20 2023 at 11:08 UTC