Zulip Chat Archive
Stream: general
Topic: map from mult gp to add gp
Kevin Buzzard (Feb 12 2022 at 18:44):
OK so it's finally happened. Kummer theory relates the additive group to the multiplicative group when the base field is a number field containing all the $$n$$th roots of unity, Selmer group calculations give you maps in the proof of the Mordell-Weil theorem with an additive group, and I think that for my gang of PhD students at Imperial working on cool things they are going to need some kind of solution to the issue that right now mathlib has two ugly ways to define a group homomorphism from an additive group to a multiplicative group (G ->* multiplicative A and additive G ->+ A) and no non-ugly way. I've been putting this discussion off for years (e.g. with perfectoids we used multiplicative valuations, Maria has been using multiplicative \Z here there and everywhere) but I don't think this can go on forever. The idea of splitting up the concept of an additive and multiplicative monoid is brilliant because it makes things like rings fit very nicely into the typeclass system, but I think that for the sake of my students' sanity we're going to have to figure out a way of doing this.
Two options spring to mind:
1) make a new typeclass mul_to_add_monoid_hom G A and then either duplicate a ton of stuff or write a tactic which does it for us
2) Continue to run away from the problem and write horrible-looking code with multiplicative (zmod n) everywhere.
Neither looks fun but this is now a real problem. I think the option "define the group law on an elliptic curve to be multiplication" has to be ruled out because we'd be a laughing stock. Same comment goes for group law on a cohomology group, and definition of an abelian category. Maybe morally what's going on is that should somehow have multiplication as the group law (because that's the group law on ) but history won't let us do that.
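[Editor's note: a minimal Lean 3 sketch of the two "ugly" encodings mentioned above; the import path and the instance assumptions are my own choices, not taken from the thread.]
import algebra.hom.group algebra.group.type_tags

variables (G A : Type) [group G] [add_group A]

-- option 1: regard the additively-written target `A` as multiplicative
example : Type := G →* multiplicative A
-- option 2: regard the multiplicatively-written source `G` as additive
example : Type := additive G →+ A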
Yaël Dillies (Feb 12 2022 at 18:47):
Do we care about being the laughing stock?
Kevin Buzzard (Feb 12 2022 at 18:48):
Yes
Kevin Buzzard (Feb 12 2022 at 18:48):
because then it's harder to learn. We need to stick to usual mathematical notation.
Kevin Buzzard (Feb 12 2022 at 18:51):
I guess one thing that could be done is to simply define the map as a function, and then carry around the axiom f (a * b) = f a + f b and rewrite as necessary. But I'm sure this will get old. We will want to talk about the short exact sequence for example, and the group law magically changes between multiplication and addition in the middle. Hmm, for this one we surely have to go with additive.
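[Editor's note: a tiny sketch of what "carrying the axiom around" looks like in practice; the hypothesis name and the concrete statement are made up for illustration.]
import algebra.group.basic

-- The "unbundled" approach: a bare function plus a hypothesis, rewritten by hand.
example {G A : Type} [group G] [add_group A]
  (f : G → A) (hf : ∀ a b, f (a * b) = f a + f b) (x y z : G) :
  f (x * y * z) = f x + f y + f z :=
by rw [hf, hf]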
Eric Rodriguez (Feb 12 2022 at 19:01):
This came up in the Galois thing, too. We wanted to filter the automorphisms of a cyclo extension through the roots of unity group, and we'd have to mess around with this
Kevin Buzzard (Feb 12 2022 at 19:09):
Right now @David Ang is just putting multiplicative everywhere.
Adam Topaz (Feb 12 2022 at 21:27):
Are you sure multiplicative is the approach you want to take, as opposed to additive? These are global sections on an étale sheaf of abelian groups, and we usually write those additively...
Adam Topaz (Feb 12 2022 at 21:34):
(Of course, this is the global sections of $$\mathbb{G}_m$$ (THE multiplicative group!), so maybe an exception is in order.)
Adam Topaz (Feb 12 2022 at 21:57):
In Lean4, is it possible to create a class called, say, CommGroup where instances can also include notations for the operation and the unit?
Eric Wieser (Feb 12 2022 at 22:15):
Don't forget notation for the inverse and the division and the nat-power and the int-power and the ...
Adam Topaz (Feb 12 2022 at 22:16):
Sure. Let's just think about monoids for now.
Yaël Dillies (Feb 12 2022 at 22:17):
Does that mean you expect lemma statements to not visually match where you'll use them?
Adam Topaz (Feb 12 2022 at 22:21):
Maybe. Is that so awful? I would prefer to have one lemma called op_assoc or something that can be applied to both addition in Z and multiplication in
Eric Wieser (Feb 12 2022 at 22:23):
Can we do something like this?
namespace hidden
class has_unit (tag : name) (R : Type*) :=
(unit : R)
class has_op (tag : name) (R : Type*) :=
(op : R → R → R)
instance has_unit.to_has_one (R : Type*) [has_unit `additive R] : has_one R :=
⟨has_unit.unit `additive⟩
instance has_op.to_has_mul (R : Type*) [has_op `additive R] : has_mul R :=
⟨has_op.op `additive⟩
class monoid (n : name) (R : Type*) extends has_op n R, has_unit n R :=
(unit_op : ∀ x, op unit x = x)
(op_unit : ∀ x, op x unit = x)
(op_assoc : ∀ x y z, op (op x y) z = op x (op y z))
export monoid (unit_op op_unit op_assoc)
notation `mul_monoid` := monoid (name.mk_string "multiplicative" name.anonymous)
notation `add_monoid` := monoid (name.mk_string "additive" name.anonymous)
example {R} [add_monoid R] (x : R) : x * 1 = x :=
begin
dunfold has_mul.mul has_one.one, -- TODO
rw op_unit,
end
end hidden
Eric Wieser (Feb 12 2022 at 22:23):
That is, store a name for the notation as an argument in the typeclass
Yaël Dillies (Feb 12 2022 at 22:25):
Dumb stuff, but are you not getting a name conflict with docs#unit?
Adam Topaz (Feb 12 2022 at 22:25):
But this still would mean that additive monoids and multiplicative monoids are different classes because the name parameter would be different
Yaël Dillies (Feb 12 2022 at 22:26):
How does that matter? You can be "name-polymorphic".
Adam Topaz (Feb 12 2022 at 22:27):
Yeah okay
Eric Wieser (Feb 12 2022 at 22:28):
We'd need some special support in rw to avoid the line marked TODO
Eric Wieser (Feb 12 2022 at 22:28):
Or we'd need to entirely remove has_zero and has_one, and have them just be notation for has_unit _
Yaël Dillies (Feb 12 2022 at 22:30):
This sounds like a very bad intersection with the OfNat conundrum.
Adam Topaz (Feb 12 2022 at 22:30):
What I would really like is to be able to pass the notation itself as a parameter
Adam Topaz (Feb 12 2022 at 22:32):
I.e. to be able to write
variables (M : Type) [monoid (*) 1 M]
example (a b : M) : M := a * 1 * b
Eric Wieser (Feb 12 2022 at 22:32):
Yeah, but that scales really badly to [group (*) 1 (has_inv.inv) (/) (^) (^) R] and ring is even worse
Yaël Dillies (Feb 12 2022 at 22:33):
I would vouch against that. Soon, docs#boolean_algebra will have around 10 notations.
Adam Topaz (Feb 12 2022 at 22:33):
You can have default values for parameters btw
Yaël Dillies (Feb 12 2022 at 22:34):
Writing add_group as a shorthand for group (+) 0 has_neg.neg sounds like a better idea already.
Eric Wieser (Feb 12 2022 at 22:35):
Yeah, but then you have to write [has_add G] [has_zero G] [has_neg G] [add_group G]
Adam Topaz (Feb 12 2022 at 22:35):
Why?
Eric Wieser (Feb 12 2022 at 22:36):
Because (+) doesn't exist until you have the has_add instance lying around
Yaël Dillies (Feb 12 2022 at 22:36):
because the notation typeclasses are typeclasses... :sad:
Adam Topaz (Feb 12 2022 at 22:36):
In Lean4 you could presumably have a macro that expands add_group M to group (+) ...
Eric Wieser (Feb 12 2022 at 22:36):
It would need to expand it to include the [has_add G] too
Yaël Dillies (Feb 12 2022 at 22:37):
Again, that's not the problem. The problem is that (+) doesn't refer to anything.
Eric Wieser (Feb 12 2022 at 22:37):
Using `additive instead of (+) circumvents the problem because it generates the notation typeclasses from the names rather than depending on the notation typeclasses
Adam Topaz (Feb 12 2022 at 22:38):
Okay, this is all hypothetical anyway. I don't know if it's possible to pass in notation as a variable in Lean4.
Eric Wieser (Feb 12 2022 at 22:38):
One obvious problem with the "name-polymorphism" option of my proposal is that you need some way to pick a convenient notation inside the name-polymorphic lemma
Yaël Dillies (Feb 12 2022 at 22:39):
Also, you definitely don't want to call all your lemmas op_op_eq_op
Eric Wieser (Feb 12 2022 at 22:39):
Sure, you'd keep around to_additive and have it generate mul and add from the op versions
Adam Topaz (Feb 12 2022 at 22:45):
I don't know how this idea would play with the notation classes. Here is the simplest example of what I would like to be possible
variables (M : Type) [has_binary_op "*" M]
example (a b : M) : M := a * b
And not to rely on has_mul whatsoever.
Adam Topaz (Feb 12 2022 at 22:46):
(yes I know has_mul is called Mul in Lean4 but you get the idea...)
Eric Wieser (Feb 12 2022 at 23:06):
Right, that's sort of similar to the approach I was trying to hit. As I mention in my message above though, that's only half the problem - how do you write statements about monoids in general if you haven't chosen a notation for them?
Kyle Miller (Feb 12 2022 at 23:12):
@Eric Wieser Your name tagging idea made me think about a metaprogramming approach. What if to_additive were revamped to generate multiplicative and additive lemmas for each generic monoid lemma?
In files about generic monoids, you could have local infix ` * ` := monoid.op so you can work with notation. Since names are being modified by the to_additive machinery, we don't have to worry about how ring would have conflicting op_assoc axioms for its additive and multiplicative structures.
We could then also generate lemmas about ->*, ->+, *->+, and +->* homomorphisms if the to_additive machinery could consider all four possible mul/add structures for the two arguments. (Presumably directed by some arguments to to_additive so this doesn't get too out of hand.)
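[Editor's note: a rough Lean 3 illustration of the workflow described above; every name here (generic_monoid, the ⋆ infix, op_rev_assoc) is hypothetical, not an existing mathlib declaration. The idea is to state a lemma once against a generic operation with local notation and let a revamped to_additive generate the mul and add versions.]
universe u

-- Hypothetical "generic monoid" class; a revamped `to_additive` would turn
-- lemmas about it into `mul` and `add` versions.
class generic_monoid (M : Type u) :=
(op : M → M → M)
(unit : M)
(op_assoc : ∀ a b c : M, op (op a b) c = op a (op b c))
(unit_op : ∀ a : M, op unit a = a)
(op_unit : ∀ a : M, op a unit = a)

-- Local notation so that generic lemmas stay readable.
local infix ` ⋆ `:70 := generic_monoid.op

lemma op_rev_assoc {M : Type u} [generic_monoid M] (a b c : M) :
  a ⋆ (b ⋆ c) = (a ⋆ b) ⋆ c :=
(generic_monoid.op_assoc a b c).symm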
Adam Topaz (Feb 12 2022 at 23:13):
@Eric Wieser make it polymorphic in the notation variable
Eric Wieser (Feb 12 2022 at 23:14):
The thing is that "files about generic monoids" are actually "all files that currently use @[to_additive]", so I think doing that globally in each of those files might not work out so well
Eric Wieser (Feb 12 2022 at 23:14):
Adam Topaz said:
Eric Wieser make it polymorphic in the notation variable
Yes, indeed - but then you can't use * in your lemma statement any more, which is what I was trying to point out.
Adam Topaz (Feb 12 2022 at 23:14):
With this approach, the whole to_additive
game wouldn't even exist!
Eric Wieser (Feb 12 2022 at 23:15):
Adam Topaz said:
With this approach, the whole to_additive game wouldn't even exist!
It does if we want mul or add in our lemma names as Kyle I think is suggesting.
Eric Wieser (Feb 12 2022 at 23:15):
At any rate, my point is that "everywhere that currently uses @[to_additive] would end up being a notation-polymorphic lemma", and so we'd need to enable local notation for each one of those lemmas
Kyle Miller (Feb 12 2022 at 23:15):
And we do want mul and add versions because rewriting gets hard when the function is a variable...
Eric Wieser (Feb 12 2022 at 23:16):
If rewriting gets hard, then it will be hard in all the polymorphic cases where add and mul can't save us
Adam Topaz (Feb 12 2022 at 23:17):
I guess one solution is to have automation that takes a lemma written with * and that would make a lemma with arbitrary notation
Kyle Miller (Feb 12 2022 at 23:17):
@Adam Topaz That's basically what I was proposing, though the other way.
Adam Topaz (Feb 12 2022 at 23:18):
Yeah I see now!
Kyle Miller (Feb 12 2022 at 23:18):
A reason I was thinking the generic monoid seems useful is that then it puts mul and add on equal footing when you're configuring code generation.
Kyle Miller (Feb 12 2022 at 23:19):
And then you could also extend it with more notation typeclasses and generate more lemmas.
Yaël Dillies (Feb 12 2022 at 23:19):
But there comes my question again. What names will you use? You're forbidding yourself both conventions so as to avoid conflicts.
Kyle Miller (Feb 12 2022 at 23:20):
(It reminds me of how in some older languages with generics, like Ada, you have to declare that you're instantiating the generic variables. In this case, monoid gets instantiated as add_monoid and mul_monoid, perhaps.)
Adam Topaz (Feb 12 2022 at 23:20):
Why do the names matter?
Adam Topaz (Feb 12 2022 at 23:21):
Just write assoc instead of add_assoc
Yaël Dillies (Feb 12 2022 at 23:21):
because, against all odds, I'm not a computer :upside_down:
Kyle Miller (Feb 12 2022 at 23:21):
@Yaël Dillies For generic monoids, op-based seems fine. But then if there's to_additive-like metaprogramming, that all gets turned into mul and add versions.
Eric Wieser (Feb 12 2022 at 23:21):
Adam Topaz said:
Just write assoc instead of add_assoc
This doesn't scale too well for things like mul_mul_mul_comm, which after stripping away the operation name becomes the same as mul_comm.
Yaël Dillies (Feb 12 2022 at 23:22):
Okay, and what about docs#sub_eq_add_neg? Your heuristic breaks as soon as two operations are involved.
Kyle Miller (Feb 12 2022 at 23:22):
Maybe we just use mul-based naming :shrug:
Kyle Miller (Feb 12 2022 at 23:23):
so mul, div, inv for generic monoids.
Yaël Dillies (Feb 12 2022 at 23:23):
Then you will get name conflicts with the multiplicative lemmas.
Eric Wieser (Feb 12 2022 at 23:23):
And have generic_monoid.mul_mul_mul_comm vs mul_mul_mul_comm for the generic vs specialized versions?
Eric Wieser (Feb 12 2022 at 23:25):
I wonder if we could have something like:
lemma mul_left_comm
{R} (tag : name := `multiplicative) [generic_comm_monoid tag R] (x y z : R) :
x * (y * z) = y * (x * z) := sorry
where the default value of tag is used to inform the notation in the local scope of the lemma
Eric Wieser (Feb 12 2022 at 23:28):
In particular it would be nice to be able to use this in things like file#group_theory/eckmann_hilton to have two simultaneous monoid structures on a type with locally distinct notations for each
Kevin Buzzard (Feb 12 2022 at 23:35):
I should say that I'm more than happy to wait for Lean 4 if that helps. We can keep ploughing on with the multiplicative hack until then, nothing is blocked, it's just getting a bit nasty
Jireh Loreaux (Feb 13 2022 at 02:58):
Stupid question, but why can't you just have one generic operation and just use different locales for different notation. Don't we basically use locales to unlock different notations anyway?
Jireh Loreaux (Feb 13 2022 at 03:16):
Oh nevermind, you couldn't use both locales at the same time.
Yury G. Kudryashov (Feb 13 2022 at 03:21):
Exactly, the problem is that we want rings.
Damiano Testa (Feb 13 2022 at 11:36):
Since this is still on the drawing board (and you can always want more!), would it be possible to also include the opposite operation? I find \op somewhat clunkier and being able to simply pass (swap (*)) to a statement could be easier.
Damiano Testa (Feb 13 2022 at 11:37):
This would also preempt a to_right tactic from being written... :smile:
Eric Wieser (Feb 13 2022 at 11:44):
I've never seen a case where to_right would help; do you have one in mind?
Damiano Testa (Feb 13 2022 at 11:56):
A lot of the lemmas about monotonicity of multiplication on the left get reproven for multiplication on the right. The proofs are always the "same" and should also follow by the left lemma applied to the \opp monoid, but are simply repeated. I'm not at a computer now, but lemmas in algebra.order.monoid_lemmas should give plenty of examples!
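[Editor's note: a hedged illustration of the left/right pair meant here, assuming the mathlib lemmas mul_le_mul_left' and mul_le_mul_right' and the covariant-class instances available for an ordered_comm_monoid; the exact typeclass assumptions in the library may differ slightly.]
import algebra.order.monoid

-- Left and right versions of the same monotonicity fact, stated (and in
-- mathlib, proved) separately rather than deduced one from the other.
example {α : Type*} [ordered_comm_monoid α] {b c : α} (h : b ≤ c) (a : α) :
  a * b ≤ a * c :=
mul_le_mul_left' h a

example {α : Type*} [ordered_comm_monoid α] {b c : α} (h : b ≤ c) (a : α) :
  b * a ≤ c * a :=
mul_le_mul_right' h a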
Eric Rodriguez (Feb 13 2022 at 13:09):
Eric Wieser said:
Yeah, but then you have to write [has_add G] [has_zero G] [has_neg G] [add_group G]
I still think this is the way forwards, fwiw. We need macros that expand stuff like [is_cyclotomic_extension {n} K L], because even without this happening we get crazy long variable lines that aren't how mathematicians work
Eric Rodriguez (Feb 13 2022 at 13:09):
Have we got anything like the proposed [[]] notation in Lean4?
Sebastien Gouezel (Feb 13 2022 at 14:09):
Eric Rodriguez said:
Eric Wieser said:
Yeah, but then you have to write [has_add G] [has_zero G] [has_neg G] [add_group G]
I still think this is the way forwards, fwiw.
No, it is important to keep things bundled (not always, but still a lot) because of performance problems otherwise. See for instance https://arxiv.org/abs/2202.01629, Section 10.
Eric Rodriguez (Feb 13 2022 at 14:38):
Won't it be a constant slowdown at most? With only the amount of relevant [has_notation] typeclasses
Eric Rodriguez (Feb 13 2022 at 14:39):
Regardless, this is really interesting, we should definitely have a thread for all the ITP preprints!
Anne Baanen (Feb 13 2022 at 16:00):
Eric Rodriguez said:
Won't it be a constant slowdown at most? With only the amount of relevant [has_notation] typeclasses
Term size is exponential in the length of an "unbundled inheritance" chain, so quadratic for this proposal: let's say we want an instance of add_group (ℤ × ℤ × ... × ℤ), with n copies of ℤ. Then we get a term like prod.add_group ℤ^n int.add_group (prod.has_add ℤ^n int.has_add (prod.has_add ℤ^(n-1) int.has_add _) (prod.has_zero ℤ^n int.has_zero (prod.has_zero ℤ^(n-1) int.has_zero _) (prod.has_add ℤ^(n-1) int.has_add _) (prod.has_neg ℤ^n int.has_neg (prod.has_neg ℤ^(n-1) int.has_neg _) (prod.add_group ℤ^(n-1) _ _ _ _). For each prod.add_group ℤ^i we get some constant term size, 3 different prod.has_$op ℤ^i, and one prod.add_group ℤ^(i-1). For each prod.has_$op ℤ^i we get some constant term size and i different prod.has_$op ℤ^j.
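[Editor's note: spelling out the counting above as a small recurrence (my own back-of-the-envelope restatement, not from the thread). Write S(i) for the size of a has_$op ℤ^i instance term and T(i) for the size of an add_group ℤ^i instance term. Then
S(i) = c₁ + S(i−1), so S(i) = Θ(i), and T(i) = c₂ + 3·S(i) + T(i−1), so T(n) = Θ(n²).]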
Yakov Pechersky (Feb 13 2022 at 16:01):
How often do we have large chains of prod as opposed to using pi.add_group?
Anne Baanen (Feb 13 2022 at 16:02):
prod is an easy example since the recursive behaviour is easy to explain, but the same goes for heterogeneous definitions of the form set (finsupp (multiplicative (fin n → ℤ)^op))
Anne Baanen (Feb 13 2022 at 16:08):
I'd expect that for these examples quadratic depth is not too bad, and increasing the term size here moves some complexity away from the unfolded term size, which (as I understand from diagnosing a class of timeouts where big terms appear in the type) is what matters in certain parts of the type checker. So I'm not opposed to unbundling notation/data from axioms/proofs.
Anne Baanen (Feb 13 2022 at 16:15):
One option that we should consider is bundling all the data into one structure and all the proofs into another. So you'd have an add_group_data ℤ instance containing +, -, 0, and an add_group_laws ℤ instance containing add_assoc, add_zero, add_neg. That makes refactors like adding an nsmul field to monoids a bit less painful: in the unbundled data case, you'd need to add a has_smul ℕ M everywhere you write monoid M.
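[Editor's note: a rough Lean 3 sketch of this data/laws split; the class names add_group_data and add_group_laws and the exact field list are hypothetical, following the message above, and this is not how mathlib is actually set up.]
universe u

-- All the data (operations and constants) in one class...
class add_group_data (α : Type u) extends has_add α, has_zero α, has_neg α

-- ...and all the proofs in a separate, Prop-valued class.
class add_group_laws (α : Type u) [add_group_data α] : Prop :=
(add_assoc : ∀ a b c : α, a + b + c = a + (b + c))
(zero_add : ∀ a : α, 0 + a = a)
(add_zero : ∀ a : α, a + 0 = a)
(add_left_neg : ∀ a : α, -a + a = 0)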
Eric Rodriguez (Feb 13 2022 at 16:20):
I like this add_group_data approach; I think the [[]]s will be essential for that though
Damiano Testa (Feb 13 2022 at 20:46):
I also like this very much. It also aligns with how I would think of a ring: a set/type with a bunch of operations, satisfying some conditions. I imagine that working flexibly with (+), (*), their swaps, adding opposites/inverses as needed will become easier!
Patrick Massot (Feb 13 2022 at 20:54):
Unfortunately this discussion is not really about how we'd like to manipulate things. It's really about performance of the type class mechanism. I certainly don't understand this technology well enough to write any meaningful comment here.
Oliver Nash (Feb 15 2022 at 10:39):
I also don't have a good answer here but this has reminded me that I was messing around with alternatives to to_additive a few months ago. I've just pasted a very old script I had lying around into this gist.
Oliver Nash (Feb 15 2022 at 10:41):
I think it's basically the same idea as what Eric pasted above, namely we parameterise our operations so the key definition is:
variables {Ω : Type*} (α : Type*)
class has_op (ω : Ω) := (op : α → α → α)
class has_unit (ω : Ω) := (unit : α)
and then a group is defined as:
variables (ω₁ : Ω)
open_locale mul_group_notation
class group extends has_op α ω₁, has_unit α ω₁ :=
(mul_assoc : ∀ a b c : α, (a * b) * c = a * (b * c))
(mul_unit : ∀ a : α, a * Ι = a)
(unit_mul : ∀ a : α, Ι * a = a)
(inv_left : ∀ a : α, ∃ b, b * a = Ι)
(inv_right : ∀ a : α, ∃ b, a * b = Ι)
It sort-of works. I haven't tried to see if it could really scale up, but I'd love it if something like this could.
Oliver Nash (Feb 15 2022 at 10:42):
But it does at least give very basic to_additive-type magic without actually duplicating any lemmas, as well as allowing multiple operations and notation:
class distrib :=
(left_distrib : ∀ a b c : α, a * (b + c) = (a * b) + (a * c))
(right_distrib : ∀ a b c : α, (a + b) * c = (a * c) + (b * c))
Oliver Nash (Feb 15 2022 at 10:49):
I also have another mathematical example where this comes up that I just learned last week. Apparently algebraists studying nilpotent groups G like to regard the lower central series as a filtration and then take the associated graded group LG. However LG is Abelian and furthermore carries a natural Lie bracket (coming from the commutator of two group elements) making it a Lie ring, so we really want to land in the additive world. I have no plans to formalise this, and I think it wouldn't be too hard to pass from multiplicative to additive classes, but it is another example (albeit exotic) I think.
Adam Topaz (Feb 15 2022 at 13:02):
That example is not exotic at all. It's related to Malcev completions which are quite important (take Deligne's paper on P1 minus three points, for example).
Oliver Nash (Feb 15 2022 at 13:44):
Interesting, thanks!
Adam Topaz (Feb 15 2022 at 13:52):
BTW, if you interpret it properly, Kevin's example of Kummer theory can be seen as a special case of this. Namely, if you look at the isomorphism induced by the Kummer map , assume for simplicity that then taking Pontryagin duals you obtain an isomorphism . The cup-product in Galois cohomology is then related to the 2-step nilpotent part of this Lie algebra (well, the one arising from the mod-n central descending series) obtained from the absolute Galois group of , which (using some strong theorems, like the Merkurjev-Suslin theorem) can be related to the arithmetic of .
Adam Topaz (Feb 15 2022 at 13:56):
I guess one doesn't really need Galois cohomology for this... the same holds for group cohomology of any group. For example, group homology is a thing, and you can ask what and looks like.
Kyle Miller (Feb 15 2022 at 18:17):
In knot theory, the Alexander module of a knot is as a -module, where and where is the universal abelian cover of . In group theory land, (as an additive group) is isomorphic (as a multiplicative group) to with the earlier action being conjugation by , where is the fundamental group of .
In general, the derived series is a filtration, and each quotient is an abelian group and a -module (by conjugation), and is known as a higher Alexander module of .
Adam Topaz (Feb 15 2022 at 18:26):
@Kyle Miller Barry Mazur would say these are the same picture.
Kevin Buzzard (Feb 16 2022 at 09:05):
At the kind of mathematical level we're working at in mathlib I've found it fascinating that multiplication and addition have coexisted in different worlds and we've never needed to cross this divide at all. There's a natural pecking order for notation -- we have distribs to make * distribute over +, but we never need to make + distribute over *. Valuations were the first example where I thought we'd have to deal with this, but we formalised the theory of adic spaces in the perfectoid project and I thought it was striking that Huber used multiplicative notation for his target monoids, meaning that the axiom was still v(xy)=v(x)*v(y). However in this thread we're seeing examples where mathematicians really do pass from the multiplicative world to the additive world. I don't know whether this is some sort of profound thing or just a fact of life. Do we call something multiplication because it distributes over some addition? Do we call something addition because some multiplication distributes over it? Cohomology has an addition because we use cohomology rings, although the product is often denoted with a ∪.
Michael Stoll (Jul 22 2022 at 12:29):
Here is some related discussion.
My use case is Gauss sums, where one has to simultaneously consider a morphism ψ from the additive group of a (finite) ring R into the multiplicative monoid of another ring R' ("additive character") and another morphism χ from the multiplicative monoid of R into that of R' ("multiplicative character"), which are multiplied and summed over all elements of R. It looks like I will have to define this in the form ∑ a : R, (χ a) * (ψ (of_add a)), which is clumsy and ugly...
Kevin Buzzard (Jul 22 2022 at 12:43):
Maybe it's time we had this? What are the objections, other than "we'll have to write some boilerplate"? The advantages are that we don't have to write what Michael just pasted above.
Sebastien Gouezel (Jul 22 2022 at 12:46):
Since it will only ever be needed between groups or monoids (I mean, no ring version or field version or whatever), I don't think it will ever get too complex or too boilerplaty. So I think it's indeed a good idea to get it.
Damiano Testa (Jul 22 2022 at 12:48):
I also (and again!) support this! We may even try to get degree (with the appropriate no-zero-divisors assumption) to fit in there!
Eric Wieser (Jul 22 2022 at 12:49):
I can see us ending up with zero_one_hom, one_zero_hom, add_mul_hom, mul_add_hom, monoid_add_monoid_hom, add_monoid_monoid_hom, and another 6 versions for equivs
Eric Wieser (Jul 22 2022 at 12:49):
That's quite a lot of boilerplate
Eric Wieser (Jul 22 2022 at 12:50):
Even if we don't care about all 12 we certainly care about 4.
Kevin Buzzard (Jul 22 2022 at 12:50):
Is that an argument against or just an observation? I mean making LTE involved writing tens of thousands of lines of code which was quite a lot of code.
Eric Wieser (Jul 22 2022 at 12:51):
Sort of an objection to the "not too boilerplaty" claim
Kevin Buzzard (Jul 22 2022 at 12:51):
But do we care if we make another definition and then write some boilerplate?
Eric Rodriguez (Jul 22 2022 at 13:07):
Eric Wieser said:
I can see us ending up with zero_one_hom, one_zero_hom, add_mul_hom, mul_add_hom, monoid_add_monoid_hom, add_monoid_monoid_hom, and another 6 versions for equivs
We probably don't need zero_one and one_zero, that's 2 down already.
Eric Rodriguez (Jul 22 2022 at 13:07):
I'm not sure if we'd even need add_mul_hom + mul_add_hom
Sebastien Gouezel (Jul 22 2022 at 13:12):
No need to add all the possible ones, only those that are genuinely useful. And I guess this should only be monoid_add_monoid_hom and add_monoid_monoid_hom. More can be added later if uses show up.
Eric Wieser (Jul 22 2022 at 13:21):
Eric Rodriguez said:
I'm not sure if we'd even need add_mul_hom + mul_add_hom
I can see us ending up with them eventually, given the interest in non-unital rings
Eric Wieser (Jul 22 2022 at 13:21):
Sebastien Gouezel said:
No need to add all the possible ones, only those that are genuinely useful. And I guess this should only be monoid_add_monoid_hom and add_monoid_monoid_hom. More can be added later if uses show up.
Indeed; but my thought is that this is a sign of either not having a scalable design, or just needing some automation to generate boilerplate for us
Eric Wieser (Jul 22 2022 at 13:22):
And it's easier to write that automation / do the redesign if we have n existing versions to clean up instead of n+12
Sebastien Gouezel (Jul 22 2022 at 13:29):
Eric Wieser said:
Eric Rodriguez said:
I can see us ending up with them eventually, given the interest in non-unital rings
Precisely, it's not a ring thing, it's a group thing, and used in very specific contexts. That's why I imagine we will only need these 2 versions.
Eric Wieser (Jul 22 2022 at 13:36):
I'd argue it's a monoid thing not a group thing, and so therefore applies to rings too. We indeed don't need special cases for rings, but for instance if we introduce a power operation on non-unital monoids/rings by pnat, then that operation would be an add_mul_hom in the second argument.
Michael Stoll (Jul 25 2022 at 15:03):
I'm sitting in a lecture by Hendrik Lenstra at the Park City Mathematics Institute, and he just defined a map $$\mathbb{Z}^t \to K^\times,\ (n_i)_i \mapsto a_1^{n_1} \cdots a_t^{n_t}$$ (where $K$ is a number field), whose kernel he wants to compute...
Eric Rodriguez (Jul 25 2022 at 15:24):
Michael Stoll (Jul 25 2022 at 16:17):
(I probably had a typo in my first attempt at LaTeX.)
Adam Topaz (Jul 26 2022 at 03:48):
Here's a bit of code playing around with notation in Lean4
class Op (α : Type u) where op : α → α → α
macro "[[ Op " a:term " using " n:str "]]" : command => do
`(variable [Op $a]
local infix:65 $n:str => (λ a b : $a => Op.op a b))
variable (α : Type u)
[[ Op α using " × " ]]
variable (β : Type u)
[[ Op β using " ++ " ]]
example (a b : α) : α := a × b
example (a b : β) : β := a ++ b
Kevin Buzzard (Jul 26 2022 at 07:32):
The question is can we do rings like this or will typeclass inference get confused by the two Ops?
Michael Stoll (Aug 03 2022 at 22:24):
I have started to write (= copy and modify) code for homomorphisms from additive to multiplicative monoids and vice versa.
So far, I took algebra/hom/group.lean and extracted three files out of it: add_mul_hom.lean (definition of homomorphisms from additive to multiplicative monoids and stuff that only relies on the definition), mul_add_hom.lean (same for the other direction) and hom_comp.lean (stuff that involves both types, like composition). I also converted (most of) equiv.lean into a file equiv_mixed.lean that has stuff on isomorphisms. What still needs to be done is to produce the corresponding versions of group_instances.lean and units.lean (everything inside algebra/hom). (Plus perhaps further material that lives elsewhere.)
What would be a reasonable way to proceed? I could make a PR with what I have so far (labeled as WIP), so that people can comment on it, and we can also discuss questions that arise here. Does this sound reasonable, or are there other preferred ways of proceeding in similar cases?
Jireh Loreaux (Aug 03 2022 at 23:11):
So, I'm going to second Eric's suggestion that this is too much boilerplate to actually undertake. That pnat instance he mentioned is definitely something on my todo list. I will completely agree that we need some way to talk about group homs between additive and multiplicative groups (for instance, in my use case, the index map from the (operator algebraic) $K_1$ group to the $K_0$ group is a multiplicative to additive group hom). But @Yury G. Kudryashov convinced me at LftCM that there is just too much duplication. Perhaps with the right to_additive-like automation, it could be feasible, but that still feels like the wrong solution to me.
Michael Stoll (Aug 03 2022 at 23:35):
I think that to_additive-like automation would be difficult, since this would require figuring out which muls etc. to turn into adds etc., depending on the kind of structure on the various monoids present (there can be up to four; the latter occurs when proving associativity of composition).
Probably something like switch_add_mul would work, which takes a statement/proof involving one "mixed" homomorphism of type mul to add (say) and turns it into the corresponding statement for add to mul.
What is the problem with duplication as long as somebody writes the code?
Alex J. Best (Aug 04 2022 at 00:05):
The code has to be maintained afterwards; ideally any change to the original API should be reflected in the duplicated one, which is tricky as often people making changes aren't aware of the duplicated version
Alex J. Best (Aug 04 2022 at 00:07):
to_additive type automation may not be flawless, but I'd say it's worth exploring to reduce the manual effort. The original to_additive has edge cases too but it is still remarkably useful
Michael Stoll (Aug 04 2022 at 00:15):
I assume the "right" way would be to write everything in terms of generic structures (not tied to additive or multiplicative notation) and then have automation that produces the 2^n variants of the generic statement (when n objects are involved). This should actually be quite possible via some kind of macro expansion (which I think Lean4 is supposed to provide).
Kevin Buzzard (Aug 04 2022 at 07:14):
I think it's dangerous to say "we don't want to do it this way which works, we should do it this way which is a dream which exists only in my head". Michael has a problem which needs to be solved and he has a solution. I know CS people are against code duplication in general but they're also against big monorepos in general and we've shown that actually a big monorepo can work fantastically. Who cares if the duplicated code gets out of sync? Category theory contains files which are literally two copies of the same code, one for limits and one for colimits. Separating additive and multiplicative groups is a fundamental design decision in mathlib, made because we want type class inference to infer monoid laws, and type class inference wants there to be one answer but rings have two laws. It's very easy to speculate how things could have been if rings had been built in another way, but they weren't, and the code duplication that Michael is suggesting is simply a consequence we are at some point going to have to face. Nobody was complaining when Scott was making PRs which were twice as long as some theoretical automation could have made them because the duplication was a practical solution to a clearly existing problem. I think the same is true here.
Eric Wieser (Aug 04 2022 at 14:41):
and the code duplication that Michael is suggesting is simply a consequence we are at some point going to have to face
The problem is, the longer we put off facing it, the more it grows and the more likely we never end up facing it, which means:
- Newcomers to mathlib always have to write tonnes of boilerplate, because while we already have hundreds of boilerplate lemmas for add_monoid_hom, we won't have them for mul_add_hom
- We're increasingly faced with refactors which are no longer possible to make because there's just too much boilerplate that needs changing.
Kevin Buzzard (Aug 04 2022 at 15:07):
Yes I see that refactors would be problematic. But I can't imagine how automation can do this job, because how can it distinguish the source * from the target *? Is that sort of problem possible? If f : G ->* H and the lemma is f (a * b) = f a * f b you want the automation to spit out f (a * b) = f a + f b.
For me the question of a refactor, or of automation, are problems which can possibly be solved tomorrow (the first might never happen, the second is probably a nontrivial project), but making a bundled structure f : G -> A with f (g * h) = f g + f h is a problem which can easily be solved today and which we need today.
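[Editor's note: a hedged Lean 3 sketch of the kind of bundled structure being asked for here; the name mul_add_hom and the decision to also bundle a map_one' field are my own, and this is not an existing mathlib declaration.]
import algebra.group.defs

/-- A bundled homomorphism from a multiplicatively-written monoid to an
additively-written one: it sends `*` to `+` and `1` to `0`. -/
structure mul_add_hom (G A : Type*) [monoid G] [add_monoid A] :=
(to_fun : G → A)
(map_one' : to_fun 1 = 0)
(map_mul' : ∀ g h : G, to_fun (g * h) = to_fun g + to_fun h)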
Eric Wieser (Aug 04 2022 at 15:12):
Automation could have you define the structures / write the lemmas in the generic way, then spit out the various specialized versions
Eric Wieser (Aug 04 2022 at 15:13):
Kevin Buzzard said:
because how can it distinguish the source * from the target *?
One is @has_mul.mul G _inst_1, the other is @has_mul.mul H _inst_2
Kevin Buzzard (Aug 04 2022 at 16:21):
So moving forward how about we write the code duplication today and leave the tactic-writing until Lean 4?
Michael Stoll (Aug 04 2022 at 17:16):
But note that you need a version of the automation that can deal with more than two monoids (or similar), so it will have to be able to generate 8 or 16 versions in some cases.
Jireh Loreaux (Aug 04 2022 at 18:22):
I think if Michael really wants to write an initial draft of this by hand, then that's fine; but I think before we merge anything like this we'll need to have a hard think about how to deal with these problems. My fear (maybe unfounded, but obviously shared by several people here) is that we will mire ourselves in tedious work and / or that the library will become disjointed because of failure to keep the multiplicative/additive and mixed parts in sync.
At the very least, we need some sort of back-of-the-envelope calculation to estimate how many defs and lemmas we're talking about here. I get the feeling that it may be a few thousand. To me, this reeks of trying to do convergence without filters, which would be a nightmare; but maybe I'm wrong about mul/add.
Kevin Buzzard (Aug 04 2022 at 20:43):
I think that one reason I'm so much more keen on the idea of making a mul_add_hom structure is that my naive model of the result is just one file which is handy in a few places but just sits there in the background not really doing much. Please don't take my huge enthusiasm for this structure as some kind of demand! It's more a "what is there not to like about this idea?" situation -- I really don't get what people are fussing about. Just one file, imported occasionally in the (rare!) situations where it really makes life easier, and who cares if it's out of sync; if it actually becomes a problem then OK we just make the tactic (note of course that any tactic-making now is a really bad idea as it actively makes the port to Lean 4 harder), but just one random file imported a few times to make some constructions less horrible -- how can this be problematic? My impression is that everyone else seems to be gazing years into the future and worrying about things not scaling. That's not why we want this structure! It's hard to imagine that it will ever need to be built upon too much, and people will add API if and when they need it. Note that we've got to LTE without ever needing it, so it's clear (to my surprise) that this structure only has some kind of niche usage, it's not going to take over the algebra hierarchy! I just don't get what people are worried about. Michael are you envisaging a few thousand lemmas to do what you want to do??
Jireh Loreaux (Aug 04 2022 at 21:09):
Kevin, I shared your feeling until I talked to Yury at breakfast at LftCM (I thought maybe you were even sitting there, but perhaps I'm misremembering). I don't remember all the details, but essentially, you're probably going to need: map and comap, kernels and images, composition lemmas, etc. (I think he went on naming several much more far reaching things).
Where did I ballpark a few thousand lemmas? Well, algebra.hom.group alone (where monoid_hom is defined) likely has 100-200 lemmas, and this doesn't even touch subgroups (kernel, image etc.). You will also need either new hom classes, or else just write the map_add_mul, map_zero_one lemmas manually. I think it's reasonable to hit a thousand lemmas doing this, and I've almost surely missed a bunch of things which will be necessary.
Now, despite my relatively newfound skepticism, if this is implemented and it seems to work, then great! All I'm advocating for is that we design carefully; not doing so can bite us.
Jireh Loreaux (Aug 04 2022 at 21:12):
Remember also, naming is hard to keep straight. Right now, (almost) every time a lemma name is changed, to_additive takes care of renaming the additive one. For all of these, you have to manually keep the names in sync, not to mention the statements. Maybe because it is only about groups, the naming is relatively stable at this point, I have no idea.
Michael Stoll (Aug 04 2022 at 21:25):
Naming is going to be a problem when there are three or more monoids involved.
E.g., there are eight ways to compose homomorphisms. Fixing the type of the second one (in order of application; it is the first argument to comp), there are still two compositions to be defined: say, the second homomorphism g is from multiplicative to additive. Then we can (pre-)compose g with an f that is either a hom. from additive to multiplicative or a multiplicative hom.
It would be desirable if all of this would be called comp in some appropriate namespace. But since there are two different comps, we then cannot just simply use monoid_to_add_monoid_hom.comp. Perhaps monoid_to_add_monoid_hom.add.comp and monoid_to_add_monoid_hom.mul.comp?
Then we could write g.add.comp f or g.mul.comp f. (Of course, it would be more desirable to just write g.comp f, but I think this would require the possibility to overload names, which, as far as I know, Lean does not provide.)
Michael Stoll (Aug 04 2022 at 21:25):
This would have to be taken care of by the to_additive analogues.
Michael Stoll (Aug 04 2022 at 21:27):
It gets even worse with comp_assoc. This involves four monoids, so we need names like monoid_hom.add.mul.comp_assoc if we do it in a similar way.
Michael Stoll (Aug 04 2022 at 21:28):
(It would be even better if there were a general comp_assoc lemma that applies whenever stuff gets composed that can be coerced to functions and for which composition makes sense. The proofs of these sixteen lemmas are all the same.)
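[Editor's note: to illustrate the last remark, once the bundled homs are unfolded to plain functions all sixteen associativity statements reduce to the single fact below, proved by rfl; this is my own illustration, and the bundled versions would additionally go through the coercion to functions.]
-- Associativity of composition of plain functions: one proof, by `rfl`.
example {α β γ δ : Type} (f : α → β) (g : β → γ) (h : γ → δ) :
  (h ∘ g) ∘ f = h ∘ (g ∘ f) :=
rfl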
Kevin Buzzard (Aug 04 2022 at 21:38):
But do you actually need these things in your use case? You don't even need a map which sends multiplication to addition, right? Such things seem to be much rarer.
Kevin Buzzard (Aug 04 2022 at 21:42):
Recently I played with the idea of defining contravariant functors directly (no mention of op) and here you probably really would need all these extra things, but boy was it glorious not having to deal with all the ops!
Michael Stoll (Aug 04 2022 at 21:51):
For my use case, I probably really don't need much. But I assume there will be more applications (also for mul_to_add), and so I would support the notion that it should be done "right", by which I would understand that you can do everything you can do with homomorphisms between monoids of the same kind also with homomorphisms between monoids of different kinds. And I agree that the best way of doing that is to beef up to_additive such that it produces the necessary versions for all combinations.
Michael Stoll (Aug 04 2022 at 21:52):
Logarithms and exponentials are not that uncommon and would be fairly typical examples.
Kevin Buzzard (Aug 04 2022 at 21:56):
I want to argue that logarithm is uncommon, because it's not defined on the negative reals, so as it stands in mathlib (where it takes junk values) it doesn't send * to +. It would work for positive reals but we don't even have a type for these in Lean, so they can't be common either. I think the exp direction is much more common.
Michael Stoll (Aug 04 2022 at 21:58):
"not that uncommon" was referring to mathematics, not necessarily mathlib :smile:
Kevin Buzzard (Aug 04 2022 at 22:02):
If you'd asked me 5 years ago I would have said that maps that take + to * and * to + were all over the place and that any system which randomly separated out "groups with group law +" from "groups with group law *" was completely crazy. Now I have to admit that I was wrong.
Michael Stoll (Aug 04 2022 at 22:05):
Yes, it does work surprisingly well.
Michael Stoll (Aug 04 2022 at 22:06):
Has anybody been trying to consider pow x or zpow x as a monoid/group homomorphism? That would appear pretty natural (and perhaps underscores Kevin's point that add_to_mul is more common than mul_to_add).
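[Editor's note: the additive-to-multiplicative behaviour in question is just the pow_add law; a small check of my own (mathlib's packaging of the map n ↦ x ^ n is the one mentioned in Junyan's reply further down).]
import algebra.group_power.basic

-- For a fixed `x`, `n ↦ x ^ n` turns addition of exponents into multiplication.
example {M : Type*} [monoid M] (x : M) (m n : ℕ) :
  x ^ (m + n) = x ^ m * x ^ n :=
pow_add x m n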
Jireh Loreaux (Aug 04 2022 at 22:18):
@Kevin Buzzard Regarding both directions of maps: in the 6-term cyclic exact sequence (due to Bott periodicity) in operator algebraic K-theory corresponding to a short exact sequence 0 → J → A → A / J → 0, which is:
K₀(J) → K₀(A) → K₀(A / J) e→ K₁(J) → K₁(A) → K₁(A / J) δ→ K₀(J)
(wrap this around)
Then the map I labeled e is an add → mul and δ is a mul → add, because K₀ is additive but K₁ is multiplicative.
Mario Carneiro (Aug 05 2022 at 04:44):
Not sure if this has been mentioned already, but when one of these maps appears on the side in some complex construction, I think it is not at all a bad idea to just use A -> multiplicative B homs instead of doing thousands of boilerplate lemmas for a new class. It is only when these show up often enough that the boilerplate associated with working with them is higher than the general API that we should consider doing all the combinatorial explosion of lemmas
Johan Commelin (Aug 05 2022 at 04:58):
@Mario Carneiro Yes, that's what Michael has been using so far. Search for add_char in mathlib if you want to see details (-;
Mario Carneiro (Aug 05 2022 at 04:59):
In particular, if you have that as a backup plan then it means you don't have to go nuts on the combinatorial explosion, you can just do the particular special cases that happen to be very common
Johan Commelin (Aug 05 2022 at 05:03):
Certainly if the new notion would also unfold into an A →* multiplicative B.
Michael Stoll (Aug 09 2022 at 17:33):
The solution I have come up with for my use case is the following. Define
import tactic.basic
import field_theory.finite.galois_field
import number_theory.cyclotomic.primitive_roots
import field_theory.finite.trace
universes u v
variables (R : Type u) [add_monoid R] (R' : Type v) [comm_monoid R']
@[derive [comm_monoid, inhabited]]
def add_char : Type (max u v) := (multiplicative R) →* R'
as is currently done in number_theory.legendre_symbol.add_character, and then define the coercion to a function as
open multiplicative
instance add_char.has_coe_to_fun : has_coe_to_fun (add_char R R') (λ x, R → R') :=
{ coe := λ ψ x, ψ.to_fun (of_add x) }
so that it includes the conversion from R to multiplicative R.
Then I can write, e.g., ψ (a * x) (for ψ : add_char R R' and a x : R), and it is what I want it to be (i.e., ψ.to_fun (of_add (a * x)) and not ψ.to_fun (of_add a * of_add x) = ψ.to_fun (of_add (a + x))).
It is then easy to show API lemmas like
lemma map_zero_one (ψ : add_char R R') : ψ 0 = 1 := ...
lemma map_add_mul (ψ : add_char R R') (x y : R) : ψ (x + y) = ψ x * ψ y := ...
lemma map_nsmul_pow (ψ : add_char R R') (n : ℕ) (x : R) : ψ (n • x) = (ψ x) ^ n := ...
and use them in the proofs, so that there is no need to even see an of_add or to_add anywhere (except in the proofs of these API lemmas).
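[Editor's note: with this in place, the Gauss-sum expression from the July message needs no of_add. A hedged usage sketch, assuming the imports and the add_char definition above are in scope; the fintype/comm_ring assumptions and the use of a plain monoid hom χ in place of a multiplicative character are my own choices, and this is not mathlib's actual gauss_sum.]
open_locale big_operators

-- The sum `∑ a, χ a * ψ a` now reads exactly like the mathematics.
example {F F' : Type*} [comm_ring F] [fintype F] [comm_ring F']
  (χ : F →* F') (ψ : add_char F F') : F' :=
∑ a : F, χ a * ψ a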
Michael Stoll (Aug 09 2022 at 17:34):
I'm going to PR this next, once #15888 is merged (which should be soon).
Jireh Loreaux (Aug 09 2022 at 18:34):
That's very nice!
Junyan Xu (Aug 09 2022 at 18:38):
Has anybody been trying to consider pow x or zpow x as a monoid/group homomorphism?
docs#powers_hom and docs#zpowers_hom
Kevin Buzzard (Aug 09 2022 at 18:42):
Oh that's a great solution!
Jireh Loreaux (Aug 09 2022 at 18:48):
@Michael Stoll How generally can this be made to work? That is, does it solve all our problems? Are there limitations you have encountered?
Michael Stoll (Aug 09 2022 at 19:01):
I haven't tried to see how general this is. At least for what I wanted, it's sufficient.
I guess it would not work that well when you want to compose homomorphisms, say you have M →+ N and N +→* P (using +→* for a homomorphism from additive to multiplicative) and want to produce M +→* P.
Michael Stoll (Aug 09 2022 at 19:55):
It is possible (to get compositions as above), but one has to use to_multiplicative, e.g. as in the following.
ψ' := ψ.char.comp (algebra.trace (zmod p) F).to_add_monoid_hom.to_multiplicative
But maybe it is a good compromise...
Jireh Loreaux (Aug 09 2022 at 20:00):
In terms of the simplicity of the solution and the integration with existing API, I think this is really quite spectacular. Nice work!
Eric Wieser (Aug 09 2022 at 21:59):
I think this is already the solution we use for docs#add_valuation.has_coe_to_fun
Eric Wieser (Aug 09 2022 at 22:00):
Although your version of inserting the of_add is certainly nicer
Michael Stoll (Aug 11 2022 at 18:30):
This is now #16016 (also a nice number...).
Michael Stoll (Aug 13 2022 at 07:25):
It would be nice if someone could have a look at #16016. @Johan Commelin @Jireh Loreaux @Eric Wieser @Kevin Buzzard ?
Michael Stoll (Aug 14 2022 at 19:16):
"Lint mathlib" fails on #16016 after the latest push with error message
curl: (35) OpenSSL SSL_connect: Connection reset by peer in connection to github.com:443
elan: command failed: curl -sSfL -o /dev/null -w %{url_effective} https://github.com/leanprover/elan/releases/latest
curl: (22) The requested URL returned error: 404
elan: command failed: curl -sSfL https://github.com/leanprover/elan/releases/download/latest/elan-x86_64-unknown-linux-gnu.tar.gz -o /tmp/tmp.97CY0BiNCz/elan-init.tar.gz
Error: Process completed with exit code 1.
What can I do about this?
Michael Stoll (Aug 14 2022 at 19:25):
I'm re-running the failed jobs for a second time now, and at least it doesn't fail after a few seconds. I'll keep my fingers crossed...
Michael Stoll (Aug 14 2022 at 20:04):
OK, #16016 is green again. Can it go on the merge queue now? @Johan Commelin @Riccardo Brasca
See also the discussion here.
Last updated: Dec 20 2023 at 11:08 UTC