Zulip Chat Archive
Stream: condensed mathematics
Topic: polyhedral lattice
Johan Commelin (Jan 20 2021 at 20:49):
For the statement of theorem 9.5 (and key lemma 9.8) we need polyhedral lattices. These seem like somewhat tricky gadgets to get a smooth definition working, because we have parts that live over the integers, the rationals, and the reals. I'll try some experiments in the next few days.
If you want to try things out, the definition is given just before theorem 9.5.
Johan Commelin (Jan 25 2021 at 16:13):
I've tried to push a definition to the polyhedral_lattice.lean file.
Damiano Testa (Jan 25 2021 at 17:04):
I would definitely be interested in this!
In fact, I decided to start with lemma 9.7 and have made some small progress, though nothing too serious.
I think that getting the foundations right for polyhedral lattices will be very useful!
Damiano Testa (Jan 25 2021 at 17:05):
(By the way, there is a lot of polyhedral geometry going into Gröbner bases, so for someone looking to implement such a tactic... this is a great place to start!)
Johan Commelin (Jan 25 2021 at 17:40):
I've now also pushed a statement of lemma 9.8
Johan Commelin (Jan 25 2021 at 17:41):
I have no idea whether the definition I chose for polyhedral lattice is useful. We'll have to play with it to find out.
Aaron Anderson (Jan 25 2021 at 18:01):
Johan Commelin said:
I've tried to push a definition to the polyhedral_lattice.lean file.
In what branch?
Johan Commelin (Jan 25 2021 at 18:04):
master
Aaron Anderson (Jan 25 2021 at 18:05):
Or rather, in what project? I assume this isn't the master branch of mathlib itself.
Johan Commelin (Jan 25 2021 at 18:06):
https://github.com/leanprover-community/lean-liquid
Damiano Testa (Jan 26 2021 at 03:50):
In my small experience working with ℤ, ℚ and ℝ, I found that replacing the coercions by explicit maps between generic comm_ring/field instances was very helpful. It gave lots of control over which identities or inequalities to use at each step.
Peter Scholze (Jan 26 2021 at 21:00):
By the way, a terminology question: I made up the term "polyhedral lattice" by analogy with "euclidean lattice" (and "hermitian lattice" etc.), but I assume that the concept already has a name(?). It would probably be good to switch to standard terminology (especially if this is merged to mathlib).
Peter Scholze (Jan 26 2021 at 21:00):
Similar comments also apply to, say, "pseudo-normed abelian group".
Peter Scholze (Jan 26 2021 at 21:18):
for any casual reader, here are the definitions in the lecture notes:
A "polyhedral lattice" is a finite free abelian group equipped with a norm that is given as the maximum of finitely many linear functions on with rational coefficients. (In particular, is a polyhedron with "rational" faces.)
A "pseudo-normed abelian group" is an abelian group together with subsets for all satisfying , and . In particular, for . (I do not ask that and although that is usually satisfied. I also do not ask that .)
Johan Commelin (Jan 26 2021 at 21:19):
I just realized that I think I managed to formalise an incorrect version of polyhedral_lattice
(besides the mistake caught by Adam).
@Peter Scholze can you explain what you mean by "a set of elements that generates the norm"?
Johan Commelin (Jan 26 2021 at 21:21):
are those the extremal vertices of the "unit convex set"?
Peter Scholze (Jan 26 2021 at 21:21):
Yes
Peter Scholze (Jan 26 2021 at 21:21):
Assumed integral, up to rescaling
Johan Commelin (Jan 26 2021 at 21:22):
I'll sleep a bit more on what is a suitable way of phrasing this in Lean.
Johan Commelin (Jan 26 2021 at 21:22):
I think if we literally write ∥x∥ = max_i f_i(x), then it will not be very usable.
Peter Scholze (Jan 26 2021 at 21:23):
I think one could also just keep track of those finitely many elements, instead of the norm
Johan Commelin (Jan 26 2021 at 21:23):
That's probably easier. I tried to write a definition like that... but that first approach was too naive.
Peter Scholze (Jan 26 2021 at 21:23):
Yes, I think the reals are a red herring here
Peter Scholze (Jan 26 2021 at 21:23):
in some sense
Johan Commelin (Jan 26 2021 at 21:24):
Johan Commelin said:
I think if we literally write ∥x∥ = max_i f_i(x), then it will not be very usable.
Not because ∥x∥ = max_i f_i(x) doesn't work, but because it will push us towards statements that use the max everywhere. Which doesn't seem to be what we want.
Johan Commelin (Jan 26 2021 at 21:25):
Anyway, I'll go to bed before I fire up Lean again :grinning:
Peter Scholze (Jan 26 2021 at 21:25):
Good night!
Peter Scholze (Jan 26 2021 at 21:25):
I'll also leave for tonight, see you all!
Heather Macbeth (Jan 27 2021 at 01:56):
Peter Scholze said:
A "pseudo-normed abelian group" is an abelian group together with subsets for all satisfying , and . In particular, for . (I do not ask that and although that is usually satisfied. I also do not ask that .)
Is this the same as the following data on an abelian group?
- an "extended seminorm", i.e. a map with which satisfies and the triangle inequality ; plus
- a set satisfying , if and only if , and the slightly odd condition that if we have equality in the triangle inequality () then and imply
It's fiddly to check, but I would imagine constructing such data from the filtration by observing that for all the set is an interval, then defining to be its (= left endpoint) and to contain according to whether the interval contains its left endpoint.
If this is a correct characterization, I'd be curious to know whether the data of the set is ever used in the constructions about pseudo-normed abelian groups, or whether in fact only the extended seminorm matters.
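(Unpacking the proposed dictionary, purely as an illustration of the message above and assuming each set {c : x ∈ M_{≤c}} really is an up-interval, one would set)

$$\|x\| := \inf\{c \in [0,\infty) : x \in M_{\le c}\}, \qquad S := \{x \in M : \|x\| < \infty \text{ and } x \in M_{\le \|x\|}\},$$
$$\text{and conversely} \qquad M_{\le c} = \{x : \|x\| < c\} \cup \{x \in S : \|x\| = c\}.$$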
Damiano Testa (Jan 27 2021 at 05:41):
For what it's worth, I also agree that we should probably only keep track of the linear functions on Λ, rather than forcing the coefficients to go up to ℝ. This way, we can also go through ℚ easily, when we need to. In terms of formalizing, I think this would make statements and proofs easier.
In fact, in my experience, real points of rational polyhedra/polytopes are very rarely actually useful. Most of what they do is confuse me and make me think that every real polytope can be approximated by a rational one, which is not true!
Johan Commelin (Jan 27 2021 at 05:58):
Snap! Thanks for setting me straight there. Do you have an easy example of a real polytope that can't be approximated by a rational one? Would like to add it to my internal list of "useful counterexamples".
Damiano Testa (Jan 27 2021 at 06:01):
I can give you a simple example of what goes wrong. If you take a cube, and move slightly up one of the vertices on the bottom face, then you break one square face into two triangles. Such things can mess stuff up a lot!
Damiano Testa (Jan 27 2021 at 06:02):
The main point is what "approximate" means: you cannot preserve the face structures.
Johan Commelin (Jan 27 2021 at 06:05):
I'll have to play with the example a bit more. But thanks for pointing this out
Bryan Gin-ge Chen (Jan 27 2021 at 06:33):
Johan Commelin said:
Snap! Thanks for setting me straight there. Do you have an easy example of a real polytope that can't be approximated by a rational one? Would like to add it to my internal list of "useful counterexamples".
There's some discussion of this in this MO question. Apparently non-rational polytopes exist only in ℝ^d with d ≥ 4, though there are non-rational polyhedral surfaces in ℝ^3: see this paper by Ziegler.
Johan Commelin (Jan 27 2021 at 06:35):
Thanks!
Damiano Testa (Jan 27 2021 at 06:46):
The way I think about it is that non-triangular faces impose non-trivial equations on the locations of the vertices of your polytope. The content of these results is that if you pile up enough such equations, then the rational solutions are no longer dense in the real solutions. In the most extreme cases, the set of rational solutions will be empty, while there will be real solutions.
The final step is some form of Mnëv universality that tells you that you can encode sufficiently complicated algebraic equations simply using the requirement that some points are/aren't coplanar.
In the discussion above, transcendental real numbers do not play a role: algebraic numbers suffice. I do not know whether there are any exotic real polytopes that are not algebraic, but this would seem very surprising: after all, you can "specialize generically" transcendental numbers... I am not sure!
Johan Commelin (Jan 27 2021 at 07:21):
Anyway, the current definition in the repo is very wrong. This discussion has been helpful.
Johan Commelin (Jan 27 2021 at 07:24):
I replaced the defn with some sorrys. We'll have to experiment to find a defn that is usable and correct.
Damiano Testa (Jan 27 2021 at 07:50):
In my experience with lattices and polytopes, what I often had to do was to consider two cones at the same time, one in the initial lattice and one in its dual. These cones would be dual, that is, the pairing between any element of one and any element of the other would always be non-negative. Maybe we could try to formalize pairs of lattices with a pairing between them and then consider pairs of dual cones.
In "mathematical practice", the two cones serve the purpose of bounding each other: enlarging one means that you may have to shrink the other. The eventual goal was always (in my cases) to prove that they were the exact duals of each other, namely, that the supporting hyperplanes of one were contained in the other and conversely.
Peter Scholze (Jan 27 2021 at 09:47):
@Heather Macbeth Hmm, let me explain why I phrased the definition in that way. The main reason is that internally in a topos, a pseudo-normed thing is very different from a normed thing, as there are problems with taking the infimum. In more down-to-earth terms, think about the case where M is a topological vector space like a Smith space (= Waelbroeck space), that is, the dual of a Banach space with respect to the compactly generated weak-∗ topology. This can be written as a similar such union M = ⋃_c M_{≤c}, where each M_{≤c} is compact Hausdorff. There is a corresponding map ∥·∥ : M → [0,∞), but it is not continuous, as we did not endow M with the norm topology. Maybe a better question is: what is this kind of norm that you have on a Smith space called?
Johan Commelin (Feb 05 2021 at 11:27):
class polyhedral_lattice (Λ : Type*) [normed_group Λ] :=
(tf : torsion_free Λ)
(polyhedral' [] : ∃ (s : finset (Λ →+ ℝ)) (hs : s.nonempty),
∀ x : Λ, ∥x∥ = finset.max' (s.image $ λ f, f x) (hs.image _))
Does this definition look ok :up: ?
Johan Commelin (Feb 05 2021 at 11:30):
It doesn't mention Λ ⊗ ℝ, but instead talks directly about a norm on Λ.
Peter Scholze (Feb 05 2021 at 11:30):
I think one wants Λ to be finitely generated? And why do you need nonempty? In order to ensure that it is a reasonable norm, one would need a much stronger hypothesis anyway, but maybe the fact that the norm is of this form already implies everything. But then it should also imply nonemptiness.
Johan Commelin (Feb 05 2021 at 11:31):
And it claims that there is a finite (nonempty) set s of additive maps Λ →+ ℝ, such that the norm of every x : Λ is the maximum of the values f(x), for f ∈ s.
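(In symbols, the polyhedral' field asserts the existence of a finite nonempty set $s$ of additive maps $f : \Lambda \to \mathbb{R}$ with)

$$\|x\| = \max_{f \in s} f(x) \quad \text{for all } x \in \Lambda.$$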
Peter Scholze (Feb 05 2021 at 11:31):
Or just to be able to take the maximum?
Johan Commelin (Feb 05 2021 at 11:31):
@Peter Scholze I think you're in the wrong thread...
Peter Scholze (Feb 05 2021 at 11:31):
Whoops sorry!
Peter Scholze (Feb 05 2021 at 11:31):
Can one change the threads of messages?
Peter Scholze (Feb 05 2021 at 11:32):
OK done :-)
Johan Commelin (Feb 05 2021 at 11:33):
So, hs : s.nonempty is indeed there so that max' makes sense.
Peter Scholze (Feb 05 2021 at 11:33):
Is it implicit in normed_group that norm 0 implies element 0?
Peter Scholze (Feb 05 2021 at 11:34):
Then I think it's almost OK, but you should replace the Λ →+ ℝ by maps to ℚ instead of ℝ.
Johan Commelin (Feb 05 2021 at 11:35):
Ooh, you are right!
Johan Commelin (Feb 05 2021 at 11:35):
And I'll add fin.gen, I guess?
Peter Scholze (Feb 05 2021 at 11:35):
I think so
Peter Scholze (Feb 05 2021 at 11:35):
Then it looks like a good definition to me
Johan Commelin (Feb 05 2021 at 11:35):
class polyhedral_lattice (Λ : Type*) [normed_group Λ] :=
(tf : torsion_free Λ)
(fg : module.finite ℤ Λ)
(polyhedral' [] : ∃ (s : finset (Λ →+ ℚ)) (hs : s.nonempty),
∀ x : Λ, ∥x∥ = finset.max' (s.image $ λ f, f x) (hs.image _))
Peter Scholze (Feb 05 2021 at 11:37):
It looks to me like that should be a formalization that's up to the job. I'd be surprised if we actually need that Λ is finite free as an abelian group; just torsion-free and finitely generated should be the information we actually use. And the norm is always accessed via such a set.
Peter Scholze (Feb 05 2021 at 11:38):
OK, lunch time for me! See you around.
Johan Commelin (Feb 05 2021 at 11:45):
instance int : polyhedral_lattice ℤ :=
{ fg := sorry, -- module.finite.self has the wrong instances :sad:
  tf := λ m hm n h,
  begin
    rw [← nsmul_eq_smul, nsmul_eq_mul, mul_eq_zero] at h,
    simpa only [hm, int.coe_nat_eq_zero, or_false, int.nat_cast_eq_coe_nat] using h
  end,
  polyhedral' :=
  begin
    refine ⟨{int.cast_add_hom ℚ, -int.cast_add_hom ℚ}, finset.insert_nonempty _ _, _⟩,
    intro x,
    simp only [finset.image_insert, rat.cast_neg, finset.image_singleton,
      add_monoid_hom.neg_apply, rat.cast_coe_int, int.coe_cast_add_hom],
    sorry
  end }
Another two candy sorrys.
Johan Commelin (Feb 05 2021 at 11:49):
If someone wants to debug what it takes to make module.finite.self work, that would be great. The trouble is that it's considering ℤ as a ℤ-module in two ways: once because it's an additive group, the other because it's the base ring.
Johan Commelin (Feb 05 2021 at 11:50):
Ooh, convert does the job :octopus: :muscle:
Johan Commelin (Feb 05 2021 at 12:29):
Peter Scholze said:
Is it implicit in normed_group that norm 0 implies element 0?
So in mathlib, this is true. But for polyhedral lattices, I guess you want this in general, right? Otherwise the norm could be 0 on all of Λ, and then Λ wouldn't be a profinitely filtered pseudo-normed group, right?
Peter Scholze (Feb 05 2021 at 13:49):
I definitely want the polyhedral lattices to be "separated", yeah
Heather Macbeth (Feb 05 2021 at 15:13):
@Johan Commelin For the sorry,
polyhedral' :=
begin
refine ⟨{int.cast_add_hom ℚ, -int.cast_add_hom ℚ}, finset.insert_nonempty _ _, _⟩,
intro x,
simp only [finset.image_insert, rat.cast_neg, finset.image_singleton,
add_monoid_hom.neg_apply, rat.cast_coe_int, int.coe_cast_add_hom],
convert (finset.max'_insert (x:ℝ) {-(x:ℝ)} _).symm,
rw finset.max'_singleton,
rw max_comm,
refl,
end }
where
section for_finset
variables {α : Type*} [linear_order α]
lemma finset.max'_insert (a : α) (s : finset α) (H : s.nonempty) :
(insert a s).max' (s.insert_nonempty a) = max (s.max' H) a :=
sorry
end for_finset
Johan Commelin (Feb 05 2021 at 15:16):
@Heather Macbeth Feel free to push this to master.
Heather Macbeth (Feb 05 2021 at 15:42):
Will do ... just finished the finset sorry
Johan Commelin (Feb 05 2021 at 15:42):
Ok, thanks!
Johan Commelin (Feb 06 2021 at 06:06):
We now have
namespace polyhedral_lattice
open pseudo_normed_group
variables (Λ : Type*) (r' : ℝ≥0) (M : Type*)
variables [normed_group Λ] [polyhedral_lattice Λ]
variables [profinitely_filtered_pseudo_normed_group_with_Tinv r' M]
instance : discrete_topology Λ :=
sorry
lemma filtration_finite (c : ℝ≥0) : (filtration Λ c).finite :=
sorry
instance filtration_fintype (c : ℝ≥0) : fintype (filtration Λ c) :=
(filtration_finite Λ c).fintype
instance : profinitely_filtered_pseudo_normed_group Λ :=
{ compact := λ c, by apply_instance, -- compact of finite
  continuous_add' := λ _ _, continuous_of_discrete_topology,
  continuous_neg' := λ _, continuous_of_discrete_topology,
  continuous_cast_le := λ _ _ _, continuous_of_discrete_topology,
  .. (show pseudo_normed_group Λ, by apply_instance) }
instance : profinitely_filtered_pseudo_normed_group_with_Tinv r' (Λ →+ M) :=
sorry
end polyhedral_lattice
Johan Commelin (Feb 06 2021 at 15:13):
What is the shortest way of proving that Λ is discrete? Should we prove that a countable normed group is discrete?
Johan Commelin (Feb 06 2021 at 15:15):
Also, I wonder what is the best way to topologize Λ →+ M in Lean:
instance add_monoid_hom.profinitely_filtered_pseudo_normed_group :
  profinitely_filtered_pseudo_normed_group (Λ →+ M) :=
{ topology := λ c, sorry,
  t2 := sorry,
  td := sorry,
  compact := sorry,
  continuous_add' := sorry,
  continuous_neg' := sorry,
  continuous_cast_le := sorry,
  .. (show pseudo_normed_group (Λ →+ M), by apply_instance) }
Johan Commelin (Feb 06 2021 at 16:15):
Does mathlib even know that ℤ is countable?
Shing Tak Lam (Feb 06 2021 at 16:51):
Is it docs#equiv.int_equiv_nat ?
Bryan Gin-ge Chen (Feb 06 2021 at 16:55):
Or possibly docs#denumerable.int ?
Johan Commelin (Feb 09 2021 at 12:45):
I pushed a little bit of progress here
Johan Commelin (Feb 09 2021 at 12:45):
Now we need to prove that a bunch of maps are continuous. But all the data has been filled in.
Johan Commelin (Feb 09 2021 at 12:46):
We are getting dangerously close to the statement of 9.5 :octopus:
Johan Commelin (Feb 09 2021 at 16:14):
I pushed some more wip
Johan Commelin (Feb 09 2021 at 16:15):
If people are looking for some relatively easy sorrys, then see src/polyhedral_lattice/pseudo_normed_group.lean
Riccardo Brasca (Feb 09 2021 at 16:26):
What prevents M_{≤c} from being equal to M for all c?
Riccardo Brasca (Feb 09 2021 at 16:33):
I'm asking because I don't see why lemma filtration_finite (c : ℝ≥0) : (filtration Λ c).finite is true.
Johan Commelin (Feb 09 2021 at 18:19):
For polyhedral lattices this cannot happen, but for arbitrary M it can.
Johan Commelin (Feb 09 2021 at 18:19):
I do think that this is the sorry that will require the most work, though.
Riccardo Brasca (Feb 09 2021 at 19:01):
OK, I didn't realize that the structure of pseudo_normed_group on Λ comes from normed_group Λ. Then it is clear.
Riccardo Brasca (Feb 09 2021 at 19:39):
I've done 0 < ε. Can I just push it? (I have to golf it a little bit and then it should be ok.)
Johan Commelin (Feb 09 2021 at 19:54):
sure!
Riccardo Brasca (Feb 09 2021 at 20:00):
pushed
Johan Commelin (Feb 09 2021 at 20:03):
I also pushed
Johan Commelin (Feb 09 2021 at 20:29):
I pushed more wip. @Riccardo Brasca I'll stop working on this file for a bit.
Riccardo Brasca (Feb 09 2021 at 20:30):
OK, I'm doing the last sorry in discrete_topology Λ; that seems easy.
Johan Commelin (Feb 09 2021 at 21:15):
I killed the other one
Riccardo Brasca (Feb 09 2021 at 21:15):
discrete_topology Λ is now sorry-free... but I agree that the real difficulty is hidden in filtration_finite.
Johan Commelin (Feb 09 2021 at 21:16):
oops, you just beat me
Johan Commelin (Feb 09 2021 at 21:20):
Another todo is to prove that Λ →+ M is isomorphic to M.
Johan Commelin (Feb 09 2021 at 21:20):
by providing profinitely_filtered_pseudo_normed_group_with_Tinv_homs in both directions
Johan Commelin (Feb 09 2021 at 22:02):
There are now a couple of other sorrys that should all be easy. But I'm off to bed.
Johan Commelin (Feb 09 2021 at 22:03):
The .finite sorry is not a roadblock for the statement of 9.5. So we can postpone that for now.
Johan Commelin (Feb 10 2021 at 08:15):
I moved this sorry to a new file on topology; it is orthogonal to the statement of 9.5.
Johan Commelin (Feb 12 2021 at 16:13):
/-- A finite family `x : ι → Λ` generates the norm on `Λ`
if for every `l : Λ`,
there exists a scaling factor `d : ℕ`, and coefficients `c : ι → ℕ`,
such that `d • l = ∑ i, c i • x i` and `d * ∥l∥ = ∑ i, (c i) * ∥x i∥`.
-/
def generates_norm {Λ ι : Type*} [normed_group Λ] [fintype ι] (x : ι → Λ) :=
∀ l : Λ, ∃ (d : ℕ) (c : ι → ℕ),
(d • l = ∑ i, c i • x i) ∧ ((d : ℝ) * ∥l∥ = ∑ i, (c i : ℝ) * ∥x i∥)
Does this definition make sense? :up:
Is the fundamental property that we want from our polyhedral lattices that they admit such a family of elements? Should we turn it into the defining property?
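(A sanity check, not taken from the repo: ℤ with its usual norm should satisfy generates_norm for the family $x = (1, -1)$ indexed by a two-element type. For $l : \mathbb{Z}$ one can take $d = 1$ and coefficients $c = (l, 0)$ if $l \ge 0$, resp. $c = (0, -l)$ if $l < 0$, so that)

$$1 \cdot l = c_0 \cdot 1 + c_1 \cdot (-1), \qquad 1 \cdot \|l\| = |l| = c_0 \cdot \|1\| + c_1 \cdot \|-1\|.$$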
Johan Commelin (Feb 12 2021 at 16:15):
@Peter Scholze Does the proof ever explicitly use the rational linear functions defining the norm? It seems to me that 9.8 is the crucial place where the polyhedral property is used, and there it seems to be used in the form of a "family that generates the norm" only.
Peter Scholze (Feb 12 2021 at 17:38):
I think the idea is right, but I haven't yet math-processed the details of what you've written.
Peter Scholze (Feb 12 2021 at 17:40):
OK, I think that sounds right
Patrick Massot (Feb 12 2021 at 17:46):
Processing time: 2 minutes...
Patrick Massot (Feb 12 2021 at 17:46):
Friday evening is always a bit slow for me too.
Peter Scholze (Feb 12 2021 at 17:47):
Just finished teaching my class, and am in another zoom meeting ;-)
Johan Commelin (Feb 12 2021 at 20:19):
Just thinking out loud here: what do we need from polyhedral lattices? The answers should inform which point of view is best for the definition.
- Lemmas 9.7 and 9.8. These use the generates_norm kind of statement that I wrote above.
- ℤ is a polyhedral lattice. This should be easy no matter which definition we use.
- Products/powers of polyhedral lattices are polyhedral lattices.
- If Λ is a polyhedral lattice, then Λ with a rescaled norm is also a polyhedral lattice.
- Quotients of polyhedral lattices, at least in the following special form: Λ^{⊕N}/Λ, where Λ embeds into Λ^{⊕N} diagonally.
Peter Scholze (Feb 12 2021 at 20:20):
Actually, products are less important here; the key thing is coproducts. So you want a definition that's nicely compatible with colimits, which points to a "generated by ..." perspective.
Johan Commelin (Feb 12 2021 at 20:28):
I think the "generated by" perspective works quite well if we need to construct instances for the constructions listed above
-
If and are polyhedral lattices with norm generated by and respectively, then has norm generated by
-
If has norm generated by , then has norm generated by
-
For the quotient I haven't yet worked out what happens. I need to make some tea first (-;
Peter Scholze (Feb 12 2021 at 20:31):
It's a quotient! Just project generators :-)
Peter Scholze (Feb 12 2021 at 20:36):
Ah, I just realized that your definition is invariant under rescaling the norm by any real, while we want the norm to be rational on Λ. (This is not really important, but I think for the precise statement of 9.8 it is required.)
Johan Commelin (Feb 12 2021 at 20:36):
Another fact that we need is ∥n • l∥ = n * ∥l∥ for n : ℕ and l : Λ.
Johan Commelin (Feb 12 2021 at 20:38):
So we need to add a condition that ∥x i∥ ∈ ℚ for the generators, or something like that.
Johan Commelin (Feb 12 2021 at 20:38):
Alternatively we could even have a function qnorm that factors the norm through ℚ.
Peter Scholze (Feb 12 2021 at 20:43):
I think(?) ∥n • l∥ = n * ∥l∥ follows from the given condition, but yeah, adding ∥x i∥ ∈ ℚ for the generators seems like a good idea.
Peter Scholze (Feb 12 2021 at 20:43):
I think it isn't necessary to go as far as changing what a norm is; there's really only one small place where the condition will be important.
Johan Commelin (Feb 18 2021 at 11:15):
I've pushed an attempt at changing the definition to the branch poly-latte-2
Johan Commelin (Feb 18 2021 at 11:16):
I'm coming around to the idea that maybe we should record a poset of submonoids. I think this might be what @Damiano Testa wanted to do
Damiano Testa (Feb 18 2021 at 11:31):
I suspect that what you are saying is very similar to what I had in mind. What I usually think is that, when creating submonoids in this context, you normally define them by inequalities arising from taking "scalar products" with a few fixed elements of the lattice.
In this setting, it is usually simpler to create them in pairs. You build one in a lattice, and the corresponding one in the dual lattice is the submonoid of all elements with non-negative pairing against the first. This construction is entirely symmetric (modulo the order_dual stuff, I guess) and so you can play each one against the other.
The lattices of submonoids will be in an inclusion-reversing duality.
Am I talking about the same thing that you are referring to, Johan?
Johan Commelin (Feb 18 2021 at 12:33):
@Damiano Testa I think you are. Does anybody know a Bourbaki-style reference for these kinds of objects?
Damiano Testa (Feb 18 2021 at 15:15):
I know a few references, but none that I would call "Bourbaki-style". Many of the references that I know end up talking about toric varieties. I wonder if Fulton's book might have a reasonable blueprint for what we want...
Johan Commelin (Feb 20 2021 at 10:12):
We were exploring some basics on the toric branch yesterday...
Johan Commelin (Feb 20 2021 at 10:13):
My experiments so far seem to suggest that we probably can't really take shortcuts with these polyhedral lattices.
Johan Commelin (Feb 20 2021 at 10:13):
So we should just take a step back and thoroughly develop the basics
Kevin Buzzard (Feb 20 2021 at 16:51):
The toric branch doesn't compile for me -- I get four errors in the file 2021_02_19_toric/toric.lean. @Damiano Testa is this to be expected?
Kevin Buzzard (Feb 20 2021 at 16:53):
e.g.
type mismatch at application
mul_pos ra0
term
ra0
has type
ra ≠ 0
but is expected to have type
0 < ?m_3
on line 44.
Damiano Testa (Feb 20 2021 at 16:53):
Kevin, yes, sorry, this is to be expected: we had decided to use regular elements, which are on their way to mathlib, but not there yet...
Damiano Testa (Feb 20 2021 at 16:54):
Indeed, we decided to replace a positivity assumption by a non-zero-divisor one...
Kevin Buzzard (Feb 20 2021 at 16:57):
I am totally confused. If R=R_0 is not an integral domain then the saturation as you defined it isn't a submodule, e.g. the saturation of {0} as an ideal of Z/10Z is just all the zero divisors, so it contains 2 and 5 but not 7, right?
Kevin Buzzard (Feb 20 2021 at 16:58):
Basically my question is: I have some time, how can I help?
Kevin Buzzard (Feb 20 2021 at 16:59):
You can always make the toric branch of the liquid project depend on a branch of mathlib (even one for which the PR isn't accepted), so you can have regular elements if you want.
Kevin Buzzard (Feb 20 2021 at 17:00):
You just make the toml of the toric branch have a different commit of mathlib than the toml of the master branch
Kevin Buzzard (Feb 20 2021 at 17:02):
I think saturated submodules are closed under arbitrary intersection, so you can just define the saturation to be the intersection of the saturated modules containing the given module
Damiano Testa (Feb 20 2021 at 17:08):
Our idea was to use R for the base ring, likely \Z or \Q or \R, and R0 would be the nonnegative elements in there. However, we decided that the order was probably a red herring and so we were going to instead use R0 for a type mapping to the nonnegative elements.
Damiano Testa (Feb 20 2021 at 17:09):
And replace the positivity assumptions on elements of R with regularity of elements of R0
Kevin Buzzard (Feb 20 2021 at 17:12):
got it
Kevin Buzzard (Feb 20 2021 at 17:12):
shall I fix it all up?
Damiano Testa (Feb 20 2021 at 17:12):
Do you want to work on it now? I have some time as well
Damiano Testa (Feb 20 2021 at 17:12):
Together, I meant!
Kevin Buzzard (Feb 20 2021 at 17:13):
sure
Kevin Buzzard (Feb 20 2021 at 17:13):
discord?
Damiano Testa (Feb 20 2021 at 17:14):
Sure, let me try to see if i can join with my phone and computer at the same time!
Kevin Buzzard (Feb 20 2021 at 22:53):
OK so the revised approach (with R0 -> R a semiring hom, R0 being thought of as the "non-negative elements" of R, and the model being ℕ → ℤ) works really well. There's a missing is_scalar_tower instance which I'm about to PR and which slowed us down a bit, and Damiano had to leave before he told me how to prove the last lemma, so there is a chance that it's false in this generality; it's the statement that the dual of the dual of an R0-saturated submodule is itself. In fact surely this isn't true; this needs some non-degeneracy condition on f. I've pushed to the toric branch.
Damiano Testa (Feb 21 2021 at 06:17):
I am looking at this right now: I think that we will need to assume something more now.
Damiano Testa (Feb 21 2021 at 06:18):
I am trying to see if adding one further layer of duals around the equality works (i.e. dual flip.dual dual S = dual S).
Damiano Testa (Feb 21 2021 at 06:19):
But in any case, the pairing that I have in mind the whole time is non-degenerate, while at the moment it could be anything at all, even zero, as Kevin remarks!
Damiano Testa (Feb 21 2021 at 06:31):
Sorry, my mistake!
Damiano Testa (Feb 21 2021 at 06:42):
Ok, this seems to work:
lemma dual_dual_dual_of_saturated {S : submodule R₀ M} (Ss : S.saturated R₀) :
  f.dual_set P₀ (f.flip.dual_set P₀ (f.dual_set P₀ S)) = f.dual_set P₀ S :=
le_antisymm (λ m hm n hn, hm _ ((le_dual_set_iff f).mpr rfl.le hn)) (λ m hm n hn, hn m hm)
This is really looking like we should prove that these duals are a closure operation (which might be called a Galois connection in Lean).
So, what we seem to be missing now is the assumption that f.dual_set is appropriately "injective". My guess is that this is the assumption that the pairing f is non-degenerate.
I pushed to toric, since I may have to leave soon.
It may be time now to start proving statements about the case in which P = R (i.e. the target of the duality map is the base ring R), possibly also assuming that R₀ injects into R.
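(A general remark on the triple-dual lemma above: writing $D(S) = \{n \mid \forall m \in S,\ f(m,n) \in P_0\}$ and $D'(T) = \{m \mid \forall n \in T,\ f(m,n) \in P_0\}$ (my paraphrase of dual_set and flip.dual_set), these form an antitone Galois connection, so)

$$S \subseteq D'(D(S)), \qquad T \subseteq D(D'(T)), \qquad \text{hence} \qquad D\bigl(D'(D(S))\bigr) = D(S):$$

applying the antitone $D$ to $S \subseteq D'(D(S))$ gives one inclusion, and the other is the second containment with $T = D(S)$. No non-degeneracy is needed for this particular equality; it is the two-step statement $D'(D(S)) = S$ that requires extra hypotheses.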
Damiano Testa (Feb 21 2021 at 07:04):
When I push, I get errors from CI, but I cannot understand where the problem is. It seems to be in files that I did not touch: I only modified toric and the toml. The latter I changed without understanding what I did, so my guess is that this is the source of the problem.
Kevin Buzzard (Feb 21 2021 at 09:00):
Why did you touch the toml?
Damiano Testa (Feb 21 2021 at 09:07):
Because when I pulled toric, there were differences in the toml that I did not see, and when I pushed I think that I updated it back with the wrong one... Sorry
Damiano Testa (Feb 21 2021 at 09:08):
I am not really comfortable with git
Kevin Buzzard (Feb 21 2021 at 09:16):
Fair enough. If you were comfortable with git it would be easy to fix :-) I'll fix it when I'm at a computer
Kevin Buzzard (Feb 21 2021 at 09:24):
All of my commits yesterday also have a red X so it doesn't look like your fault
Kevin Buzzard (Feb 21 2021 at 14:43):
It looks to me like my bumping of mathlib has broken things, in the sense that some PRs have been merged which means that right now the repo doesn't compile with mathlib master; the problem is not with the toric file but with other files. The simplest solution is probably to bump mathlib on master and then merge that into the toric branch. Unfortunately I don't have time to do this today :-/
Damiano Testa (Feb 21 2021 at 15:05):
Ok, I am relieved that it may not have been my fault: I know very little about git, and I always feel a little uneasy using it. However, I was getting confident that I could use the basic push/pull commands with very few issues!
Adam Topaz (Feb 21 2021 at 16:14):
@Damiano Testa @Kevin Buzzard I'm playing around with the toric branch right now, but I'm really confused. Why should dual_dual_of_saturated be true at all (even under some nondegeneracy assumption on f)? The dual_set is defined in terms of P_0, but saturated and saturation don't mention P_0 at all! What happens if I take P_0 = \top?
Damiano Testa (Feb 21 2021 at 16:18):
When the proofs start being more substantial, P will be R (i.e. the integers \Z) and P_0 will be the naturals \N. The pairing modules M and N are going to be a module and its dual, and the pairing the natural evaluation map. By then, I hope that the API being developed right now will be useful and will start making more sense!
Damiano Testa (Feb 21 2021 at 16:19):
Of course, right now it is possible that there are some extraneous assumptions/issues, but hopefully the end goal is clearer now!
Adam Topaz (Feb 21 2021 at 16:20):
Okay, but say M is a vector space, f is the perfect pairing between M and its dual N, P is the base field and P_0 = P.
If you take any proper subspace of M, its dual set is everything, so the lemma cannot hold true, right?
Adam Topaz (Feb 21 2021 at 16:20):
I guess I'm saying that one should rethink the definition of saturated to include P_0 somehow.
Damiano Testa (Feb 21 2021 at 16:21):
The flexibility of having generic R_0 and R is so that we can apply this more freely, also to ℚ≥0, ℝ≥0, and so on.
Adam Topaz (Feb 21 2021 at 16:22):
Oh, I completely agree that it will be useful to keep P_0 around.
Damiano Testa (Feb 21 2021 at 16:24):
At the moment, I suspect that the map R_0 -> R may also need to have no kernel: this is a case that I have not yet thought about.
Damiano Testa (Feb 21 2021 at 16:26):
However, try seeing if the lemmas make sense when R=P=Z, R_0=P_0=N, since this is what we really care about!
Adam Topaz (Feb 21 2021 at 16:27):
I'm suggesting this definition:
def saturated' (s : submodule R₀ M) : Prop :=
∀ (m : M), (∀ (n : N), f m n ∈ P₀) → m ∈ s
Adam Topaz (Feb 21 2021 at 16:28):
I.e it's "saturated" with respect to f
and P_0
Adam Topaz (Feb 21 2021 at 16:30):
At least this way, if P_0 = \top then being saturated would ensure that the submodule is the whole module to begin with.
Adam Topaz (Feb 21 2021 at 16:32):
This also ensures that the left kernel with respect to the induced pairing taking values in P/P_0 must be contained in the module (of course, this only makes sense in the case where everything is a ring, not just a semiring).
Damiano Testa (Feb 21 2021 at 16:33):
Hmm, I need to think about this: the "saturation" that I had in mind was with respect to multiplication by R_0, not with respect to the bilinear map...
Adam Topaz (Feb 21 2021 at 16:34):
I think saturated' implies saturated?
Damiano Testa (Feb 21 2021 at 16:34):
The two certainly should play well with each other, but for the moment, this has not been an issue.
Damiano Testa (Feb 21 2021 at 16:36):
I guess that saturated' might imply saturated in the case in which the pairing is "non-degenerate", but for the moment, this has not been an assumption. For instance, we still have not had to assume that f is not identically zero. (Of course, I do not especially care about this case, but not having to assume it is a plus!)
Damiano Testa (Feb 21 2021 at 16:37):
I think that saturated' and saturated play with different notions of "saturation", one using the multiplication and one using the pairing. We will want them to be compatible at some point, but for the moment, I think that the weaker one is what makes this part of the proofs work.
Damiano Testa (Feb 21 2021 at 16:38):
However, I think that the last lemma, which has a sorry, has gotten to a stage where we are going to need some further assumption, since I think that, at the level of generality where it is stated, it is not true.
Damiano Testa (Feb 21 2021 at 16:39):
It may be that with saturated' it is true, but I have not thought about this yet. However, just like you say, my idea was to add conditions on the pairing to get this to be true and provable!
Adam Topaz (Feb 21 2021 at 16:45):
Hmm... okay, I now think saturated' is not quite right.
Damiano Testa (Feb 21 2021 at 16:51):
I think that, once there is a reasonable definition of "non-degenerate" for the pairing f, something along the lines of saturated' will also be true.
Damiano Testa (Feb 21 2021 at 16:52):
but with the "positivity" of P_0
, the condition "pairing with all the elements gives something positive", will not play well with having the possibility of pairing with a vector and its opposite.
Damiano Testa (Feb 21 2021 at 16:59):
Something closer to what you are suggesting might be introducing a "positive" structure M_0 on M and one, N_0, on N, and then pairing only elements from these "positive sublattices". However, we are going to want to have polyhedra inside there, so we will not want to fix a global positive sublattice, but allow the positivity to vary depending on context.
Maybe. If I understand correctly what you are suggesting! :smile:
Adam Topaz (Feb 21 2021 at 17:00):
I'm still not sure what I'm actually suggesting ;)
Adam Topaz (Feb 21 2021 at 17:00):
I'm just trying to catch up in understanding what the approach is here.
Damiano Testa (Feb 21 2021 at 17:03):
Ok, so I think that keeping in mind the example in which
R = P = ℤ, R₀ = P₀ = ℕ, M = ℤ ^ n, N = Hom (ℤ ^ n, ℤ), f = natural pairing,
might be clarifying.
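(In that example, and assuming dual_set is the predicate "∀ m ∈ S, f m n ∈ P₀" as in the lemma quoted earlier, the dual set is exactly the classical dual cone from toric geometry:)

$$S^{\vee} = \{\varphi \in \mathrm{Hom}(\mathbb{Z}^n, \mathbb{Z}) : \varphi(m) \ge 0 \text{ for all } m \in S\},$$

i.e. the lattice points of the dual cone $\sigma^{\vee}$ of the cone $\sigma$ generated by $S$.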
Damiano Testa (Feb 21 2021 at 17:04):
The lemmas that are now stated and proved happen to not require many of the hypotheses that are implicit in that example.
Damiano Testa (Feb 21 2021 at 17:05):
Part of the motivation for not assuming this from the very beginning is that we do not want to commit to integrality, and will also want to extend scalars to the rationals/reals.
Damiano Testa (Feb 21 2021 at 17:06):
Besides, the proofs so far are very close to term mode, so it is not surprising that they are
- very general
- maybe not very advanced mathematically
- going to require further build-up to be more meaningful!
Damiano Testa (Feb 21 2021 at 17:08):
Ah, one more thing: using is_regular is a way of avoiding mention of an underlying order on the ring R. We had started with inequalities, but it turned out to be not so effective, which is why we shifted to regular (i.e. non-zero-divisor) elements. However, if it helps with the intuition, this assumption can be seen as a positivity assumption, in the final context mentioned above.
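(For reference, this is the standard notion, as in mathlib's is_regular: an element $a$ is left regular when multiplication by it is injective, and regular when it is both left and right regular; in a commutative ring this just says $a$ is a non-zero-divisor:)

$$a \text{ left regular} \;\iff\; (x \mapsto a x) \text{ is injective} \;\iff\; (ab = ac \implies b = c \text{ for all } b, c).$$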
Johan Commelin (Feb 22 2021 at 07:29):
Thanks a lot for all the work here!
Johan Commelin (Feb 22 2021 at 16:23):
The poly-latte-2 branch contains some progress in the direction of direct sums of polyhedral lattices. This is now going more smoothly than before, thanks to a helpful pointer by Peter.
Peter Scholze (Feb 22 2021 at 16:25):
(deleted)
Adam Topaz (Feb 22 2021 at 19:08):
I'm looking at the first sorry in toric.lean. I'm confused about the intuition for has_extremal_ray. How should I think about this definition?
Adam Topaz (Feb 22 2021 at 19:08):
The docstring says: An extremal ray r is a cyclic submodule that can only be "reached" by vectors in r. What?
Adam Topaz (Feb 22 2021 at 19:09):
I also changed the definition of is_cyclic to use @Heather Macbeth's nice notation:
def is_cyclic (s : submodule R₀ M) : Prop := ∃ m : M, (R₀ ∙ m) = s
Adam Topaz (Feb 22 2021 at 19:23):
Should has_extremal_ray also assume that r is contained in s?
Damiano Testa (Feb 22 2021 at 20:08):
The intuition for an extremal ray is an edge of a cone: two vectors of the cone that have their sum contained in the ray must both belong to the ray. Does this help?
Adam Topaz (Feb 22 2021 at 20:09):
Sure, but should it be contained in s?
Damiano Testa (Feb 22 2021 at 20:16):
I think that you are right, yes, we forgot that condition: if r is not contained in s, then the condition can be vacuously satisfied by having no pair of elements of s ever sum into r!
Adam Topaz (Feb 22 2021 at 20:16):
I mean should the definition be:
def has_extremal_ray (s r : submodule R₀ M) : Prop :=
r ≤ s ∧ r.is_cyclic ∧ ∀ {x y : M}, x ∈ s → y ∈ s → x + y ∈ r → (x ∈ r ∧ y ∈ r)
Adam Topaz (Feb 22 2021 at 20:16):
Okay. I'll fix the definition then. I think I'll also make it into a structure if you're okay with that....
Adam Topaz (Feb 22 2021 at 20:17):
structure has_extremal_ray (s r : submodule R₀ M) : Prop :=
(incl : r ≤ s)
(is_cyclic : r.is_cyclic)
(mem_mem_of_sum_mem : ∀ {x y : M}, x ∈ s → y ∈ s → x + y ∈ r → (x ∈ r ∧ y ∈ r))
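(A concrete example of the intended behaviour, with the ℤ/ℕ model from earlier so that submodules are ℕ-submonoids of $\mathbb{Z}^2$: let $s$ be the $\mathbb{N}$-span of $(1,0)$ and $(1,1)$. Then $r = \mathbb{N}\cdot(1,0)$ satisfies all three fields: $r \le s$, $r$ is cyclic, and if $x, y \in s$ with $x + y \in r$ then the second coordinates of $x$ and $y$ are naturals summing to $0$, hence both vanish and $x, y \in r$. By contrast, $r' = \mathbb{N}\cdot(2,1)$ is cyclic and contained in $s$, but $(1,0) + (1,1) = (2,1) \in r'$ while $(1,0) \notin r'$, so $r'$ is not an extremal ray of $s$.)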
Damiano Testa (Feb 22 2021 at 20:17):
I do not really know the implications of having a def/structure, so sure!
Damiano Testa (Feb 22 2021 at 20:17):
Thanks!
Johan Commelin (Mar 12 2021 at 14:03):
Peter Scholze said:
Ah, I just realized that your definition is invariant under rescaling the norm by any real, while we want the norm to be rational on Λ. (This is not really important, but I think for the precise statement of 9.8 it is required.)
I just checked, but in the statement and proof of 9.8, we never make any use of the condition
(rational : ∀ l : Λ, ∃ q : ℚ, ∥l∥ = q)
Does this mean we did something wrong? Here is the statement that we formalized:
lemma lem98 [fact (r' < 1)] (N : ℕ) (hN : 0 < N) :
  ∃ d, ∀ c (x : Λ →+ Mbar r' S) (hx : x ∈ filtration (Λ →+ Mbar r' S) c),
    ∃ y : fin N → (Λ →+ Mbar r' S),
      (x = ∑ i, y i) ∧
      (∀ i, y i ∈ filtration (Λ →+ Mbar r' S) (c/N + d)) :=
This depends on lem97, which we didn't formalize yet. But lem97 doesn't mention polyhedral lattices at all.
Peter Scholze (Mar 12 2021 at 14:51):
You are right, it doesn't matter. Sorry!
Peter Scholze (Mar 12 2021 at 14:53):
I really appreciate how readable the statements of these lemmas are!
Peter Scholze (Mar 12 2021 at 14:53):
Ah, wait, one question: where is the quantification over S?
Johan Commelin (Mar 12 2021 at 14:54):
It's in a comment above the code :sweat_smile:
Peter Scholze (Mar 12 2021 at 14:54):
It seems to appear out of the blue, but it's critical that, say, d here is independent of S, so the quantification over S should happen after the quantification over d.
Johan Commelin (Mar 12 2021 at 14:54):
But the constant d doesn't depend on S; of course Lean doesn't know this yet.
Johan Commelin (Mar 12 2021 at 14:54):
-- the `d` below doesn't and mustn't depend on `S`
-- in the future we may want to reformulate to make this explicit
Peter Scholze (Mar 12 2021 at 14:55):
OK
Peter Scholze (Mar 12 2021 at 14:55):
It is absolutely critical that the constants in 9.5 are independent of S, otherwise the whole thing is completely vacuous.
Peter Scholze (Mar 12 2021 at 14:55):
So maybe your current formalization of 9.5 is not actually correct?
Johan Commelin (Mar 12 2021 at 14:56):
Ok, I will refactor this asap. And I understand that if we want to take a projective limit over S, then the constants should not depend on S.
Johan Commelin (Mar 12 2021 at 15:05):
Done
Peter Scholze (Mar 12 2021 at 15:06):
Great, thanks!
Johan Commelin (Mar 15 2021 at 15:00):
Here is what we seem to need about polyhedral lattices:
- We need to settle on a definition, and define the category of polyhedral lattices: https://github.com/leanprover-community/lean-liquid/issues/11
  I think morphisms should be normed_group_hom (maybe norm-nonincreasing?).
- We need to define the following construction: given Λ and some N, we want Λ^{⊕N} with a rescaled coproduct norm (= sum of norms of the components, rescaled by a factor depending on N).
  Part of this is already done in src/polyhedral_lattice/direct_sum.lean
- We need to turn Hom(Λ, M) into a functor, at least in the first component.
  See https://github.com/leanprover-community/lean-liquid/issues/10
- We need to take certain quotients of powers of Λ: see also https://leanprover-community.github.io/liquid/index.html#cosimplicial-lattice and assemble them into a cosimplicial polyhedral lattice https://github.com/leanprover-community/lean-liquid/issues/18
  For this latter part, we need (co)simplicial objects, which are now in mathlib!
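(For the second bullet, a plausible normalization is the following; the factor $1/N$ is illustrative and the repo fixes its own constant:)

$$\|(l_1, \dots, l_N)\| := \frac{1}{N} \sum_{i=1}^{N} \|l_i\|,$$

so that, for instance, the diagonal element $(l, \dots, l)$ has the same norm as $l$.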
Johan Commelin (Mar 15 2021 at 15:33):
@Peter Scholze Do you think it's ok to drop the rationality hypothesis, given that we didn't use it in 9.8? As far as I can see it isn't used anywhere else either.
Peter Scholze (Mar 15 2021 at 15:38):
Yes, I think you can drop it.
Johan Commelin (Mar 16 2021 at 09:36):
I'm working on making Hom functorial, and I think morphisms of polyhedral lattices should be norm-nonincreasing.
Peter Scholze (Mar 16 2021 at 09:36):
Sure
Johan Commelin (Mar 17 2021 at 06:31):
Johan Commelin said:
I'm working on making Hom functorial, and I think morphisms of polyhedral lattices should be norm-nonincreasing.
This is done now
Johan Commelin (Mar 17 2021 at 10:05):
From the bullet list above, the first three points are now done.
Johan Commelin (Mar 17 2021 at 10:21):
@Peter Scholze Should the maps between the varying lattices be norm-nonincreasing? In that case, should there be an extra rescaling tacked on, that rescales the norms?
Peter Scholze (Mar 17 2021 at 10:23):
Hmm, the norms ought to be defined so that all maps are by definition norm-nonincreasing
Peter Scholze (Mar 17 2021 at 10:25):
It's just the quotient norm induced from the coproduct.
Johan Commelin (Mar 17 2021 at 10:28):
Maybe I'm confused about what the simplicial maps do, but I was thinking that they would come from maps like the diagonal Λ → Λ^{⊕N}, l ↦ (l, …, l), and that seems norm-increasing.
Peter Scholze (Mar 17 2021 at 10:29):
Hmm, no, they should just be inclusions into one summand
Peter Scholze (Mar 17 2021 at 10:30):
It's just something very general about Cech nerves in the category of polyhedral lattices, with norm-nonincreasing maps
Johan Commelin (Mar 17 2021 at 18:02):
The Cech conerve is now mostly done. The sorry for which I don't know directly how to proceed is showing that the norm is generated by finitely many elements. But I haven't really thought about it yet. If you project norm generators down into the quotient, you still need to understand the quotient norm.
(But first I have to put some kids in bed.)
Johan Commelin (Mar 17 2021 at 18:03):
This is in polyhedral_lattice/cech.lean
Johan Commelin (Mar 17 2021 at 18:19):
Another issue is that we need to prove that the quotients are finite free. At some point we just want to know that fin.gen. + torsion-free <=> there exists a finite basis. Once we know that, we don't have to worry about which of the two definitions is best suited for the various use cases.
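(The equivalence mentioned here is the structure theorem for finitely generated modules over a PID, specialized to ℤ:)

$$\Lambda \text{ finitely generated and torsion-free over } \mathbb{Z} \;\iff\; \Lambda \cong \mathbb{Z}^{n} \text{ for some } n \ge 0,$$

so "finitely generated + torsion-free" and "admits a finite basis" can be used interchangeably.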
Kevin Buzzard (Apr 05 2021 at 13:03):
First question: Why in polyhedral_lattice/finsupp.lean are we letting ι be a finite type but then considering maps ι →₀ Λ, i.e. finitely supported maps, rather than just maps ι → Λ (which are the same thing but are surely easier to work with)? Is it just so that doing sums is easier? I've not seen this idiom before.
If we were using finsum then we could just work with maps (but AFAIK finsum is still on the PR queue).
Kevin Buzzard (Apr 05 2021 at 13:12):
Second question: In polyhedral_lattice/direct_sum.lean we have a proof that a finite direct sum of polyhedral lattices is a normed group, and a commented-out half-finished proof that the direct sum is a polyhedral lattice, with a comment that this argument is now superseded by a finsupp argument. However in polyhedral_lattice/finsupp.lean we only have a proof that a finite power Λ^m of a polyhedral lattice is a polyhedral lattice. Do we not use general direct sums?
Kevin Buzzard (Apr 05 2021 at 13:20):
Third question: I just noticed that the integers with the norm sending n to an irrational multiple of |n| is a polyhedral lattice for us (but probably not for Peter, if I'm reading the definition after the statement of 9.4 of analytic.pdf correctly).
Johan Commelin (Apr 05 2021 at 13:35):
Kevin Buzzard said:
First question: Why in polyhedral_lattice/finsupp.lean are we letting ι be a finite type but then considering maps ι →₀ Λ, i.e. finitely supported maps, rather than just maps ι → Λ (which are the same thing but are surely easier to work with)? Is it just so that doing sums is easier? I've not seen this idiom before.
If we were using finsum then we could just work with maps (but AFAIK finsum is still on the PR queue).
We could/should probably delay the fintype assumption until we start rescaling the norm by the cardinality of ι.
My reason for working with finsupp is that we want to formalise coproducts, with the coproduct norm.
It seemed natural to reserve function types, aka Π, for products with the product norm (= sup norm).
Kevin Buzzard (Apr 05 2021 at 13:38):
There's a fintype assumption in the definition of a polyhedral lattice, so an arbitrary direct sum of polyhedral lattices is probably not even a polyhedral lattice.
Johan Commelin (Apr 05 2021 at 13:38):
direct_sum.lean was my first attempt. But we don't need dependent direct sums, so finsupp is good enough. I didn't bother finishing it. I should probably have added a warning at the top of the file.
Johan Commelin (Apr 05 2021 at 13:39):
Kevin Buzzard said:
Third question: I just noticed that the integers with the norm sending n to an irrational multiple of |n| is a polyhedral lattice for us (but probably not for Peter, if I'm reading the definition after the statement of 9.4 of analytic.pdf correctly).
Because it is not rational? It turns out we don't need the rationality condition. At some point we thought we needed it for combinatorial lemma 9.8. But it turns out we don't need it. So I removed the axiom.
Last updated: Dec 20 2023 at 11:08 UTC