Zulip Chat Archive

Stream: general

Topic: naming challenge


Floris van Doorn (Aug 01 2021 at 14:43):

How should I (re)name this concept?

/--
An element `a : α` is `regular` (name subject to change) if `x ↦ a + x` is order-reflecting.
We will make a separate version of lemmas that require `[contravariant_class α α (+) (≤)]` with
`regular` assumptions instead. These can then be easily instantiated to specific types, like
`ennreal`, where we can replace the assumption `regular x` by `x ≠ ∞`.
-/
def regular {α} [has_le α] [has_add α] (a : α) : Prop :=
∀ ⦃x y⦄, a + x ≤ a + y → x ≤ y
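
For illustration, a hedged sketch of the plan in the docstring (assuming the `regular` definition above, and that `le_of_add_le_add_left` is the mathlib lemma stated with the contravariant hypothesis): in the presence of `contravariant_class α α (+) (≤)`, every element is `regular`, so lemmas stated with `regular` hypotheses apply to such types for free.

example {α : Type*} [preorder α] [has_add α]
  [contravariant_class α α (+) (≤)] (a : α) : regular a :=
λ x y h, le_of_add_le_add_left h  -- lemma name recalled from mathlib; treat it as an assumption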

Kalle Kytölä (Aug 01 2021 at 15:37):

How about cancellable? Or add_le_cancellable (if the relation is ≤ and the operation is +)? Or rel_cancellable (if the same name is to be used for many relations)?

I am not very familiar yet with the naming conventions, but I thought having various suggestions to consider can hardly hurt :grinning_face_with_smiling_eyes:.

Kalle Kytölä (Aug 01 2021 at 15:45):

The term "regular" would of course be following a time-honored tradition in math naming conventions for just about anything... By which I mean: I hope mathlib will not adopt it in this case! :smile:

Floris van Doorn (Aug 02 2021 at 00:00):

I think I'll go for add_le_cancellable. Thanks for the suggestions.

Floris van Doorn (Aug 02 2021 at 16:15):

@Eric Wieser pointed out in #8503 that Lean actually already has a notion of docs#is_left_regular (added by @Damiano Testa). Interestingly, that definition doesn't have @[to_additive]. We could try to make these definitions more closely connected, for example by calling them both is_[left|right]_[mul|add]_[le_]_cancellable or both is_[left|right]_[mul|add]_[le_]_regular.

Damiano Testa (Aug 03 2021 at 03:49):

(deleted)

Damiano Testa (Aug 03 2021 at 03:51):

I like the idea of connecting the two notions: it is a common trick to prove injectivity by showing strict monotonicity!

As for the name, I chose regular because I was thinking of using this for regular elements and regular sequences in commutative algebra. For this same reason, it had not occurred to me to do the same for additive. Also, at the time of the PR I was less "Lean-mature" and was not aware of the to_additive attribute.

In fact, following up on a comment by Eric, it seems to make sense to define co(ntra)variant as ∀ (m), analogue_of_mul_le_cancellable_with_explicit_m_input.
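
For illustration, a hedged sketch of that reformulation against the `regular` definition from the top of this thread, assuming mathlib's unbundled `contravariant` is `∀ (m) {n₁ n₂}, r (μ m n₁) (μ m n₂) → r n₁ n₂`:

example {α : Type*} [has_le α] [has_add α] :
  contravariant α α (+) (≤) ↔ ∀ a : α, regular a :=
⟨λ h a x y hxy, h a hxy, λ h a x y hxy, h a hxy⟩  -- both directions just reassociate binders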

Anne Baanen (Aug 16 2021 at 12:58):

Anyone have a nice idea how to give these two definitions distinct, but not too long, names in #8514?

/-- Given `P 0`, `P 1`, and `P (p ^ k)` for positive prime powers, and a way to extend `P a` and
`P b` to `P (a * b)` when `a, b` are coprime, you can define `P` for all natural numbers. -/
@[elab_as_eliminator]
def rec_on_coprime' {P : ℕ → Sort*} (hp : ∀ p n : ℕ, prime p → 0 < n → P (p ^ n))
  (h0 : P 0) (h1 : P 1) (h : ∀ a b, coprime a b → P a → P b → P (a * b)) : ∀ a, P a :=
rec_on_prime_pow h0 h1 $ λ a p n hp' hpa ha,
  h (p ^ n) a ((prime.coprime_pow_of_not_dvd hp' hpa).symm)
  (if h : n = 0 then eq.rec h1 h.symm else hp p n hp' $ nat.pos_of_ne_zero h) ha

/-- Given `P 0`, `P (p ^ k)` for all prime powers, and a way to extend `P a` and `P b` to
`P (a * b)` when `a, b` are coprime, you can define `P` for all natural numbers. -/
@[elab_as_eliminator]
def rec_on_coprime {P : ℕ → Sort*} (h0 : P 0) (hp : ∀ p n : ℕ, prime p → P (p ^ n))
  (h : ∀ a b, coprime a b → P a → P b → P (a * b)) : ∀ a, P a :=
rec_on_coprime' (λ p n h _, hp p n h) h0 (hp 2 0 prime_two) h
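
For illustration, a hedged usage sketch of the second definition (assuming, as the unqualified prime and coprime suggest, that it lives in the nat namespace next to rec_on_prime_pow; the final name may well change, see below):

example (P : ℕ → Prop) (h0 : P 0)
  (hp : ∀ p n : ℕ, nat.prime p → P (p ^ n))
  (h : ∀ a b : ℕ, nat.coprime a b → P a → P b → P (a * b)) (a : ℕ) : P a :=
nat.rec_on_coprime h0 hp h a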

Eric Wieser (Aug 16 2021 at 13:54):

Swapping the order of the arguments in the first one so that hp comes after h1 would make it easier to tell what the difference is!

Eric Wieser (Aug 16 2021 at 13:55):

rec_on_prime_coprime and rec_on_pos_prime_coprime?

Floris van Doorn (Mar 01 2022 at 12:15):

Anyone have any ideas to name the following linear maps?

def name_me1 (L : E →L[𝕜] E' →L[𝕜] F) : E →L[𝕜] (E'' →L[𝕜] E') →L[𝕜] (E'' →L[𝕜] F) :=
(continuous_linear_map.compL 𝕜 E'' E' F).comp L

def name_me2 (L : E →L[𝕜] E' →L[𝕜] F) : (E'' →L[𝕜] E) →L[𝕜] E' →L[𝕜] (E'' →L[𝕜] F) :=
(precompR E'' (flip L)).flip

Eric Wieser (Mar 01 2022 at 12:39):

Can you include the simps lemma they generate too?

Floris van Doorn (Mar 01 2022 at 12:56):

(ᾰ : E) : (name_me1 E'' L) ᾰ = (compL 𝕜 E'' E' F) (L ᾰ)
(ᾰ : E) (f : E'' →L[𝕜] E') : ((name_me1 E'' L) ᾰ) f = (L ᾰ).comp f
(ᾰ : E) (f : E'' →L[𝕜] E') (ᾰ_1 : E'') : (((name_me1 E'' L) ᾰ) f) ᾰ_1 = (L ᾰ) (f ᾰ_1)

Floris van Doorn (Mar 01 2022 at 13:03):

I'm thinking of something like precompose_right / precompose_left, since we're precomposing the function E'' →L[𝕜] E' on the right/left argument.

Moritz Doll (Mar 01 2022 at 13:15):

Maybe compL₂ for the first one to be consistent with docs#linear_map.compl₂? Your second map should correspond to docs#linear_map.compr₂.

Floris van Doorn (Mar 01 2022 at 14:11):

Those are indeed very close; my maps have the second linear map bundled.
flip (name_me1 L) f is basically the same as linear_map.compl₂ L f (up to continuity and semilinearity).

Moritz Doll (Mar 01 2022 at 14:32):

I think it would be a good idea if the API of bilinear maps and continuous bilinear maps were as close as possible, but I also think that the bundled definitions are better than linear_map.compl₂ L f (btw, please change your linear map from L to B; it is really confusing with the L from the clm notation).

Moritz Doll (Mar 01 2022 at 14:33):

but there is already diverging notation with continuous_linear_map.compL and linear_map.lcomp.

Moritz Doll (Mar 01 2022 at 14:33):

(deleted)

Eric Wieser (Mar 01 2022 at 15:37):

Sometimes I wonder if we're shooting ourselves in the foot with this heavy point-free approach, and would do better to just write \lam a f x, L a (f x) and use some linearity tactic to find the linearity proof automatically.

Patrick Massot (Mar 01 2022 at 20:57):

Indeed all this doesn't exist on paper, and we certainly want to keep it that way on paper.

Floris van Doorn (Mar 02 2022 at 10:52):

This sounds like some abstract nonsense that does exist in cartesian closed categories (possibly with some extra structure).

Eric Rodriguez (May 13 2022 at 15:54):

/poll What statement would people expect for the name equiv.subtype_congr?
{α} {p q : α → Prop} (e : {x // p x} ≃ {x // q x}) (f : {x // ¬p x} ≃ {x // ¬q x}) : perm α
(e : α₁ ≃ α₂) (p : α₂ → Prop) : {a₁ // (p ∘ e) a₁} ≃ {a₂ // p a₂}

Eric Rodriguez (May 13 2022 at 15:55):

answer: docs#equiv.subtype_congr
bonus: what would you call the other? (I'm making the one of these that isn't in mathlib already, and I'm not sure what name to give it if not that)

Eric Wieser (May 13 2022 at 16:11):

docs#equiv.subtype_equiv is one of the other ones

Eric Wieser (May 13 2022 at 16:13):

I think the current equiv.subtype_congr should be called equiv.subtype_piecewise?

Eric Rodriguez (May 13 2022 at 16:13):

I was rewriting option 3 as I didn't think of it, d'ah!

Eric Wieser (May 13 2022 at 16:14):

Or maybe even just equiv.piecewise

Eric Rodriguez (May 13 2022 at 16:15):

oh, I think my proof is nicer so I'll PR that (no non-terminal simps :]). I currently don't have a good computer though, so I will refactor the names when I get to one

Gabriel Ebner (May 13 2022 at 16:20):

Option 1 looks like a perm.subtype_piecewise to me. I would expect equiv.subtype_piecewise to be {α β} {p : α → Prop} {q : β → Prop} (e : { a // p a } ≃ { b // q b }) (f : { a // ¬ p a } ≃ { b // ¬ q b }) : α ≃ β.
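
For illustration, a hedged sketch of how such an equiv could be assembled from existing mathlib pieces, going through the decomposition equiv.sum_compl (the decidability instances are an assumption of this particular packaging):

example {α β : Type*} {p : α → Prop} {q : β → Prop}
  [decidable_pred p] [decidable_pred q]
  (e : {a // p a} ≃ {b // q b}) (f : {a // ¬ p a} ≃ {b // ¬ q b}) : α ≃ β :=
(equiv.sum_compl p).symm.trans ((e.sum_congr f).trans (equiv.sum_compl q))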

Eric Rodriguez (May 13 2022 at 16:23):

oh, and option 2 is docs#equiv.subtype_equiv_of_subtype

Eric Rodriguez (May 13 2022 at 16:26):

#14125

Eric Rodriguez (May 21 2022 at 15:02):

Eric Rodriguez said:

oh, and option 2 is docs#equiv.subtype_equiv_of_subtype

what should this be called? I don't like this name but not sure what a better one would be

Yaël Dillies (May 21 2022 at 15:31):

I rarely see stuff in the equiv.subtype namespace :stuck_out_tongue:

Chris Hughes (May 23 2022 at 13:32):

Eric Wieser said:

Sometimes I wonder if we're shooting ourselves in the foot with this heavy point-free approach, and would do better to just write \lam a f x, L a (f x) and use some linearity tactic to find the linearity proof automatically.

I worked on this a little bit recently. If you use every bound variable exactly once then the map will be linear. I have some very untidy code taking lambda terms and turning them into terms using linear composition and swap, which swaps the arguments of a bilinear map.

Yaël Dillies (May 23 2022 at 13:36):

You can replace "exactly once" by "at most once", right? Not for linearity, but for affine maps, yes.

Chris Hughes (May 23 2022 at 13:42):

Chris Hughes said:

Eric Wieser said:

Sometimes I wonder if we're shooting ourselves in the foot with this heavy point-free approach, and would do better to just write \lam a f x, L a (f x) and use some linearity tactic to find the linearity proof automatically.

I worked on this a little bit recently. If you use every bound variable exactly once then the map will be linear. I have some very untidy code taking lambda terms and turning them into terms using linear composition and swap.

I wonder how much Lean 4 will help us with this. I also want to be able to write down functors with lambda calculus, but making sure I end up with a term I can actually use.

Eric Wieser (May 23 2022 at 15:41):

I would have thought we could just copy the design of continuity

Eric Wieser (May 23 2022 at 15:41):

Which would mean using docs#is_linear_map a lot more

Chris Hughes (May 23 2022 at 16:13):

Eric Wieser said:

I would have thought we could just copy the design of continuity

I'm not at all sure that this is the same problem. You don't need a library of tagged lemmas: the condition is that every bound variable is used exactly once, and that never changes. The linearity of the functions is part of the type, so there's no need to search for lemmas. And I don't think it can be solved just by applying lemmas; I'm not sure what lemma you would apply first to prove linearity of something like \la f g x y, g y (f x).
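
For illustration, a hedged sketch of the inner part of that example with f and g fixed (the hard part Chris describes is linearity in f and g as well): λ x y, g y (f x) assembled from comp and linear_map.flip, which swaps the arguments of a bilinear map.

example {R X Y N P : Type*} [comm_semiring R]
  [add_comm_monoid X] [module R X] [add_comm_monoid Y] [module R Y]
  [add_comm_monoid N] [module R N] [add_comm_monoid P] [module R P]
  (f : X →ₗ[R] N) (g : Y →ₗ[R] N →ₗ[R] P) : X →ₗ[R] Y →ₗ[R] P :=
g.flip.comp f  -- x ↦ (y ↦ g y (f x))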

Eric Wieser (May 23 2022 at 16:15):

I'm thinking of a more general case like showing \la p, matrix.trace (f p.1 + g p.2) is linear

Chris Hughes (May 23 2022 at 16:30):

Okay. I think that you need a combination of both. The method I had in mind would prove that \la p1 p2, trace (add (f p1) (g p2)) is linear if trace, add, f and g were all bundled linear maps. But you need something else to figure out that addition is bilinear and to case split on p.

Chris Hughes (May 23 2022 at 16:32):

So I guess you need something like continuity, but because the category of modules is monoidal closed you can add more things that continuity won't currently do, like proving linearity of \la f g x y, g y (f x)

Eric Wieser (May 23 2022 at 16:39):

Not really a case split on p, just noting that prod.fst is a linear map

Chris Hughes (May 24 2022 at 06:38):

It's a bit more complicated than that, because p appears twice in the term, which usually means the map isn't linear. But somehow this one works, and to be honest I'm not sure exactly what general rule this is a special case of that makes it linear despite p appearing twice.

Eric Wieser (May 24 2022 at 07:50):

Isn't it just the rule that f x + g x is linear in x if both f and g are?
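
In bundled form, that rule is just addition of linear maps, which mathlib already has; a minimal sketch:

example {R M N : Type*} [semiring R] [add_comm_monoid M] [module R M]
  [add_comm_monoid N] [module R N] (f g : M →ₗ[R] N) : M →ₗ[R] N :=
f + g  -- (f + g) x is definitionally f x + g x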

Yaël Dillies (May 24 2022 at 08:15):

Isn't the rule "appears exactly once per summand"?

Eric Wieser (May 24 2022 at 08:18):

My thinking was that the above would be found in the same way that docs#continuous.add is found today

Eric Wieser (May 24 2022 at 08:19):

There's no need for a notion of summands

Chris Hughes (May 24 2022 at 09:44):

Yaël Dillies said:

Isn't the rule "appears exactly once per summand"?

It works I guess because addition is actually linear rather than bilinear. There's a map of type (A →ₗ[R] B) → (A →ₗ[R] C) → (A →ₗ[R] B × C), called linear_map.prod. So if there was a version of add with type A × A →ₗ[R] A, you could rewrite \la p, trace (f p.1 + g p.2) as \la p, trace (add (linear_map.prod (f.comp prod.fst) (g.comp prod.snd) p)) and that can be proven linear using the rule about using every bound variable exactly once.
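
For illustration, a hedged sketch of that rewriting, with a generic bundled map t standing in for trace (to avoid pinning down matrix.trace's exact signature) and with the hypothetical bundled addition A × A →ₗ[R] A built here as fst + snd:

example {R A B C : Type*} [semiring R]
  [add_comm_monoid A] [module R A] [add_comm_monoid B] [module R B]
  [add_comm_monoid C] [module R C]
  (t : A →ₗ[R] R) (f : B →ₗ[R] A) (g : C →ₗ[R] A) : B × C →ₗ[R] R :=
let add_map : A × A →ₗ[R] A := linear_map.fst R A A + linear_map.snd R A A in
-- λ p, t (f p.1 + g p.2), written point-free
t.comp (add_map.comp ((f.comp (linear_map.fst R B C)).prod (g.comp (linear_map.snd R B C))))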

Chris Hughes (May 24 2022 at 09:54):

So I would prefer to use a rewrite rule instead of something similar to continuous_add to rewrite addition in terms of bundled linear maps, because this allows the more powerful automation to be used afterwards. The earlier example of \lam a f x, L a (f x) probably can't be proven linear using the same approach as the continuity tactic; linearity is really a harder problem than continuity because the space of linear maps is itself a vector space.


Last updated: Dec 20 2023 at 11:08 UTC