Zulip Chat Archive

Stream: Proofs from the book

Topic: Amsterdam


Moritz Firsching (Nov 11 2024 at 16:11):

Dhyan Aranha said:

Hi all, we are running a lean learning seminar here at the University of Amsterdam and our group would like to work on formalizing things from "proofs from the book". I was wondering if it would be possible to have a separate channel for us to discuss/work collaboratively?

Thanks @Johan Commelin for creating this channel!

I propose to use this Topic "Amsterdam" for those in Amsterdam to figure out what part to work on, and then if needed open more targeted topics.

Dhyan Aranha (Nov 12 2024 at 09:34):

I skimmed a little bit through the book; if we want to take a chapter and start working, here are a couple I like:

1) Chapter 10, Hilbert's Third problem: I think this is nice because it also depends on results from chapter 8 on irrationality so maybe it would be more interesting to make a blueprint for it.

2) Chapter 24, Van der Waerden's permanent conjecture: A cursory search with moogle didn't turn up anything in mathlib.

Also if we follow Moritz's other idea and work on something with a lot of TODO's then I think Monsky's Theorem could be a nice project as well!

Moritz Firsching (Nov 12 2024 at 10:08):

Nice plans!

for 10) Not sure how difficult it would be to formalize polyhedra for this, which is not in mathlib yet?!

for 24) This seems doable with what is in mathlib right now on first sight, I added the statement just now:
https://github.com/mo271/FormalBook/pull/75

Joseph Myers (Nov 13 2024 at 04:36):

Polyhedra and polygons are one of those concepts where the same term is used informally to refer to what would be many different formal concepts. Is a polyhedron a solid body (a set of points, and if so, closed or open?)? Its surface? A combinatorial structure relating vertices, edges and faces (e.g. an abstract polytope) with some associated geometry? Does it need to be convex? If considering a surface, can it be self-intersecting? Can a polyhedron be disconnected, or empty, or unbounded? If considering a surface, is that just a set of points, or is it a continuous map from an index type such as a sphere where the choice of map matters? Does the polyhedron go in a real affine space? A real vector space (I don't really think that more restricted version is a good idea, though using an affine space limits what you can use from convexity at present)? An affine space for a real inner product space (this is probably the one you want for Hilbert's third problem, since you're dealing with rearranging solid bodies under isometries, but a lot of things about polyhedra won't care so much about the metric, only at most the topology)? An affine space over some more general ring (if you do that, it's even less clear what the right definitions are)?

I've thought a bit about these things given that I'll eventually want sets in the plane with locally polygonal boundary for aperiodic monotiles formalization. Also when working out what would need to be added to mathlib to be able to state all past IMO problems cleanly and idiomatically, various things about polygons come up frequently (and at least one past problem requires the concept of a convex polyhedron and its vertices as well).

Joseph Myers (Nov 13 2024 at 04:38):

In practice, if you want to do something with polyhedra you probably need to formalize whatever concept (making it as general as possible) best fits your use case, and expect that eventually people will formalize other related concepts as well and add API linking them together.

Lenny Taelman (Nov 13 2024 at 09:46):

From the two mentioned above, I would have a preference for ch 24 (permanent conjecture).

My personal favourite would be Chapter 22 (dissection of square in odd number of triangles). It is probably a bit more ambitious. It involves both some really nice algebra (valuations) and combinatorics (Sperner's lemma). And I don't see any obstacles right away. I think I convinced myself that the combinatorial part does not implicitly use something about planar graphs, I hope I did not miss anything.

Dhyan Aranha (Nov 13 2024 at 11:35):

One of the reasons for doing a project like this is to get some experience writing blueprints/working with GitHub. If we were to go in the direction of Chapter 22 (Monsky's theorem), I think Moritz has already laid out a blueprint for it.

I was wondering if it would make sense to clone the part of the repository pertaining to this chapter, make our own small blueprint website and work with it ourselves and when we are finished we resubmit/PR it back to Moritz's repository?

I'd be very happy to hear any opinions/advice about this!

Moritz Firsching (Nov 13 2024 at 11:37):

I think Monsky is going to be quite fun!

Moritz Firsching (Nov 13 2024 at 11:40):

Regarding making a clone of the part of the repo or not:
I think it would be easier to just work in the main one? I mean I'm happy to also add anybody interested as admin to the repo, so you can also review and approve stuff yourself and not be blocked by me. The idea should be that the bar to submitting something should be as low as possible.
I think the normal setup is to just make a fork of the main repo, push to a branch there and open a pull request.

Moritz Firsching (Nov 13 2024 at 11:46):

Regarding formalizing polytopes: I agree with everything @Joseph Myers says. Eventually V- and H-representations and an equivalence between them should be done. Also, one probably wants this to integrate nicely with hyperplane arrangements and oriented matroids, once all of those concepts are there.
Personally, I only care about convex polytopes, but chapter 10/Dehn's theorem is of course an example that also works for non-convex polytopes.

Moritz Firsching (Nov 13 2024 at 12:59):

Perhaps one could build on https://github.com/Jun2M/Main-theorem-of-polytopes, see discussion here: https://leanprover.zulipchat.com/#narrow/stream/217875-Is-there-code-for-X.3F/topic/Main.20Theorem.20of.20Polytope.20Theory/near/394635860

Alex Kontorovich (Nov 13 2024 at 15:20):

We ran into similar questions some time ago with @Vláďa Sedláček and @Ian Jauslin already in the case of polygons in synthetic geometry. Does Mathlib have a reasonable structure for them? (We came up with several options, and were exploring which worked better than others in practice...)

Joseph Myers (Nov 13 2024 at 21:29):

For the use case of stating geometry problems talking about e.g. "Let ABCDE be a convex pentagon" (I've reviewed all IMO problems since 2006 with any geometrical elements to see what we still need to add to mathlib to state them cleanly), I'd be inclined towards a very minimal definition of polygons

structure Polygon (P : Type*) (n : ℕ) where
  points : Fin n → P

where doing anything useful with the definition requires P at least to be an affine space. And then there would be a series of definitions for various common polygon properties with lemmas proved for whatever properties are needed for each lemma. Such properties include e.g. "convex", "no three consecutive vertices collinear", "no two consecutive vertices equal", "boundary is a simple closed curve", "vertices are coplanar", "vertices concyclic", "polygon circumscribes a circle", "polygon is equilateral", "polygon is regular (flag-transitive under isometries)". And there would also be various functions yielding types other than Prop, such as the interior (union of bounded connected components of the complement?) as a set of points, the union of the interior with the boundary (not the same as the closure of the interior in degenerate cases) as a set of points, the boundary as a continuous function from AddCircle to P, the boundary as a set of points, the perimeter of the polygon (sum of side lengths).
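
As a rough illustration of what one of those derived functions might look like on top of the minimal structure, here is a sketch only (the perimeter definition and the typeclass assumptions below are illustrative choices, not a concrete proposal):

import Mathlib

structure Polygon (P : Type*) (n : ℕ) where
  points : Fin n → P

-- The perimeter of a polygon: the sum of its side lengths. Indices are taken
-- modulo n, so the last vertex connects back to the first; `NeZero n` is only
-- there so that the numeral `1 : Fin n` makes sense.
def Polygon.perimeter {P : Type*} [MetricSpace P] {n : ℕ} [NeZero n]
    (poly : Polygon P n) : ℝ :=
  ∑ i : Fin n, dist (poly.points i) (poly.points (i + 1))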

Yao Liu (Nov 14 2024 at 01:27):

P at least to be an affine space

and P should be 2 dimensional? or do skew polygons come up often enough?

Joseph Myers (Nov 14 2024 at 03:21):

"vertices are coplanar" is one of the common properties I listed. It's much less likely to be relevant whether the ambient space P is two-dimensional. Polygons as faces of polyhedra may well be convenient to talk about without needing to use subtypes for a two-dimensional subspace with the face.

Yao Liu (Nov 14 2024 at 12:58):

Ha, I missed that. Sounds good, though it may be extra work to define the interior. Other possibilities for P may include hyperbolic space and lattices, which have a large group acting on them to move polygons around.

Jonas van der Schaaf (Nov 15 2024 at 20:28):

I think that Monsky might actually be more difficult than expected: there is not enough Lean code to construct a non-trivial valuation on ℝ, in particular to extend the 2-adic valuation. I think that one would have to prove Chevalley's theorem, which is quite non-trivial.

I guess you could just sorry the existence of such a valuation and work with it, but this feels like a bit of a shame.

Kevin Buzzard (Nov 15 2024 at 20:59):

Funnily enough I went to a talk today by @Andrew Yang announcing a proof of what an algebraic geometer would call Chevalley's theorem. Is it the same one?

Damiano Testa (Nov 15 2024 at 21:53):

Oh, I am really happy to hear that Chevalley's Theorem is approaching: what I consider to be my first mathlib project was precisely about Chevalley's Theorem! I did not get very far, but I love the result and I learned a lot about formalisation!

Jonas van der Schaaf (Nov 16 2024 at 08:55):

There are many "Chevalley's theorems", so I'm guessing it's probably not the same. I'm talking about extending a subring of a field to a valuation ring lying over some fixed prime. See theorem 2.1 or theorem 3.11 for a statement and proof.

Kevin Buzzard (Nov 16 2024 at 09:19):

Yes it's a different one -- this was about image of a constructible set. Your theorem looks much easier to formalise than what Andrew did! It's Zorn and ten lines of algebra -- what's the problem with that?

Lenny Taelman (Nov 16 2024 at 09:41):

The book has an elementary proof, avoiding Chevalley and avoiding much field theory. The crux is that for our purposes the valuation group can be any ordered abelian group.

Jonas van der Schaaf (Nov 16 2024 at 09:59):

I see that a variant of Chevalley was merged into mathlib 2 days ago, I just didn't have the latest version of mathlib. Sorry!

Thomas Koopman (Nov 19 2024 at 13:37):

Jan, Giacomo and I tried to formalize the statement for the existence of the valuation. However we got stuck when we were trying to introduce the existence of a group G.
We basically want to say there exists a valuation, but for this valuation we need a group; we'd rather not specify the group in the theorem, because it feels neater. To solve this we ideally would want to start the theorem with "There exists a group G ...". For now we have introduced a group as a variable and then stated the theorem, but this probably isn't going to work because it's not true for an arbitrary group. Do you have any suggestions on how to tackle this? We have pushed it to our branch Appendix_Jan_Giacomo_Thomas.

Lenny Taelman (Nov 19 2024 at 14:58):

Are you asking how to existentially quantify over groups? I think ∃ (G : Type) (gp : Group G), 1 = 1 should work. Given a type G, Group G is the type of group structures on G.
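
For example, as a fully self-contained toy version of that pattern (with Unit as the witness group, and 1 = 1 just standing in for the property you actually want to state):

import Mathlib

-- Existentially quantifying over a type together with a group structure on it.
-- `Unit` with its trivial group structure serves as a witness.
example : ∃ (G : Type) (_ : Group G), 1 = 1 :=
  ⟨Unit, inferInstance, rfl⟩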

Jan Hendriks (Nov 19 2024 at 15:21):

Yes, this was indeed our question. Thanks for the answer; we could not find how to quantify over typeclasses in Lean, but this works indeed.

Johan Commelin (Nov 19 2024 at 16:13):

Isn't this because valuations in mathlib are pretty general? I don't know the proof in question, but I would imagine that it just wants a rank one valuation. Is that right?

Lenny Taelman (Nov 20 2024 at 07:30):

Monsky's argument requires the existence of a valuation lifting the 2-adic one, without any constraints on the value group. I guess the original proof used standard theorems to conclude that a rank one valuation exists. The book adheres to the philosophy of trying to be as self-contained as possible, and therefore gives an elementary proof of the existence of a 2-adic valuation on the real numbers, at the expense of having no control over the value group. (Another variant would be to extend the 2-adic valuation to the finitely generated subfield of R generated by the vertices of the triangulation).

Johan Commelin (Nov 20 2024 at 07:31):

Ok. Understood. But then you likely want to quantify over docs#GroupWithZero, right?

Kevin Buzzard (Nov 20 2024 at 08:23):

Maybe even totally ordered groups with zero?

Dhyan Aranha (Nov 20 2024 at 17:01):

I think our teams are all modifying the same chapter_22.lean file in their various branches of our fork. Do people think it'd be cleaner if instead we divided this chapter into smaller chapters, say one for each group? (I just wonder if this would avoid some of the merging problems we had on Monday with the Surnames-Branch...)

Moritz Firsching (Nov 20 2024 at 19:08):

I'm happy to help with the merging/rebasing. I think usually it is best to merge early and often.

Dhyan Aranha (Nov 21 2024 at 05:49):

Moritz Firsching said:

I'm happy to help with the merging/rebasing. I think usually it is best to merge early and often.

do I just add you as a collaborator to the fork then? (sorry for all the stupid questions... before doing this I only ever used GitHub to write math papers)

Moritz Firsching (Nov 21 2024 at 07:17):

You can just open a pull request directly from a fork. So for example here: https://github.com/mo271/FormalBook/compare/main...dhyan-aranha:FormalBook:main
And then you can choose a branch where the stuff that should be added lives on your side. While making this pull request, there is a checkbox "Allow edits and access by maintainers" (or something like this), which is checked by default. Keep this checked and I will be able to work out merge conflicts if there are any.
Just give it a try and if something goes wrong we can fix it...

Jan Hendriks (Nov 21 2024 at 18:45):

Kevin Buzzard said:

Maybe even totally ordered groups with zero?

Yes, this is indeed what we did. We now ran into a related problem: we are trying to prove the existence of a valuation v for which v(1/2) > 1. This is the theorem:

theorem valuation_on_reals : ∃ (Γ₀ : Type) (locg : LinearOrderedCommGroupWithZero Γ₀)
  (v : Valuation ℝ Γ₀), (v (1/2)) > 1

For this we will use a lemma (which we did not yet prove) that states the existence of a valuation subring B that is a subring of the reals and that does not contain 1/2. This ring B will give a value group (B.ValueGroup) and a valuation (B.valuation) that suffices for the theorem. B.ValueGroup is an instance of an ordered commutative group with zero, so we want to do

use B.ValueGroup

and this reduces the goal to

∃ (locg : LinearOrderedCommGroupWithZero B.ValueGroup) (v : Valuation ℝ B.ValueGroup), (v (1/2)) > 1

B.ValueGroup is an instance of a linearly ordered commutative group with zero. Does anyone have a suggestion how we can tell this to Lean? This is probably easy, but we were not able to find it on the internet.

Malvin Gattinger (Nov 21 2024 at 20:00):

Is this what you want?

import Mathlib.Algebra.Order.GroupWithZero.Canonical
import Mathlib.RingTheory.Valuation.Basic
import Mathlib.Data.Real.Basic
import Mathlib.RingTheory.Valuation.ValuationSubring

lemma B_Exists :
    ∃ (K : Type) (f : Field K) (B : ValuationSubring K),
      True -- replace this with other properties you want?
      := by
  sorry

theorem valuation_on_reals :
    ∃ (Γ₀ : Type)
      (locg : LinearOrderedCommGroupWithZero Γ₀)
      (v : Valuation ℝ Γ₀),
        (v (1/2)) > 1 := by
  -- use the Lemma:
  have fromLemma := B_Exists
  rcases fromLemma with ⟨K, f, B, B_whateverProperty⟩
  use B.ValueGroup -- this works now?
  sorry

(I added the imports so that this is self-contained and you can now click on the top right icon in the code to open it in the live lean website.)

I guess I don't know enough math to fully understand your question. I also notice that there is docs#CommGroupWithZero and docs#LinearOrderedCommGroupWithZero but no OrderedCommGroupWithZero, which is what you want to say about B? Or are "ordered" and "linearly ordered" the same here?

Malvin Gattinger (Nov 21 2024 at 20:07):

I'll stop speculating after this, but maybe you need to show the instance for B in that lemma then?

import Mathlib.Algebra.Order.GroupWithZero.Canonical
import Mathlib.RingTheory.Valuation.Basic
import Mathlib.Data.Real.Basic
import Mathlib.RingTheory.Valuation.ValuationSubring

lemma B_Exists :
    ∃ (K : Type) (f : Field K) (B : ValuationSubring K)
      (B_is_LOCGWZ : LinearOrderedCommGroupWithZero B.ValueGroup),
      True -- replace this with other properties you want?
      := by
  sorry

theorem valuation_on_reals :
    ∃ (Γ₀ : Type)
      (locg : LinearOrderedCommGroupWithZero Γ₀)
      (v : Valuation ℝ Γ₀),
        (v (1/2)) > 1 := by
  -- use the Lemma:
  have fromLemma := B_Exists
  rcases fromLemma with ⟨K, f, B, B_is_LOCGWZ, B_whateverProperty⟩
  use B.ValueGroup -- this works now?
  use B_is_LOCGWZ
  sorry

Kevin Buzzard (Nov 21 2024 at 22:13):

Isn't the idea that you start with ℚ, for which the 2-adic valuation has this property (taking values in the exotically-named type ℤₘ₀, i.e. integers, with group law written * and all regarded as positive, and then add a "bottom" element 0), and then use some axiom of choice nonsense to extend the valuation to ℝ and allow the target type to grow if necessary?

Johan Commelin (Nov 22 2024 at 04:28):

@Jan Hendriks you should be able to write use inferInstance

Johan Commelin (Nov 22 2024 at 04:28):

I hope that works

Jan Hendriks (Nov 22 2024 at 07:50):

use inferInstance works indeed, thanks.

Jan Hendriks (Nov 22 2024 at 08:04):

Kevin Buzzard said:

Isn't the idea that you start with ℚ, for which the 2-adic valuation has this property and then use some axiom of choice nonsense to extend the valuation to ℝ and allow the target type to grow if necessary?

This will probably also work, but for Monsky's theorem we only need a (non-archimedean) valuation on ℝ with the property that v(1/2) > 1. The existence of such a valuation is stated as a lemma with a proof in the appendix of chapter 22 of Proofs from the Book and we wanted to prove this lemma. Maybe we are doing some things that are essentially already in mathlib, but it will still be a good exercise.

Jonas van der Schaaf (Nov 22 2024 at 12:31):

(deleted)

Dhyan Aranha (Nov 23 2024 at 07:11):

What's a good definition for a line in the plane (maybe Sperner's team has thought about it)? Here is a possible definition I guess:

def line (L : ℝ →ₗ[ℝ] ℝ²) : Prop := L ≠ 0

I wanted to use this definition to formalize the statement that there exists no "rainbow lines" which I think would look something like

lemma no_rainbow_lines (Γ₀ : Type) (locg : LinearOrderedCommGroupWithZero Γ₀) (v : Valuation ℝ Γ₀) :
 ¬ ∃ (L : ℝ →ₗ[ℝ] ℝ²) (hL : line L), Function.Surjective (painter Γ₀ locg v ∘ L)

(painter is just the function that assigns a point in the plane a color). I didn't try and prove it yet, but I thought I'd ask here first because maybe there is something obviously asinine about all this...

Johan Commelin (Nov 23 2024 at 07:37):

Your current definition of line is parameterizing the line. I guess you could also describe lines as the kernel of a linear map ℝ² → ℝ. Which version is the prettiest to work with depends a bit on how you want to use them.
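
Something like this, as a rough sketch (kernelLine is a made-up name; taking a level set rather than literally the kernel also covers lines that miss the origin):

import Mathlib

local notation "ℝ²" => EuclideanSpace ℝ (Fin 2)

-- The kernel of a nonzero functional f is a line through the origin;
-- more generally a level set {x | f x = c} is an affine line.
def kernelLine (f : ℝ² →ₗ[ℝ] ℝ) (c : ℝ) : Set ℝ² := {x | f x = c}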

Dhyan Aranha (Nov 23 2024 at 08:11):

I think if we establish the connection between area of a triangle and the determinant then probably the definition in terms of the kernel is the best...right now (imo) we do something hacky and avoid the determinant by just writing down a formula for the area of a triangle.

Dhyan Aranha (Nov 23 2024 at 11:46):

Dhyan Aranha said:

What's a good definition for a line in the plane (maybe Sperner's team has thought about it)? Here is a possible definition I guess:

def line (L : ℝ →ₗ[ℝ] ℝ²) : Prop := L ≠ 0

I wanted to use this definition to formalize the statement that there exists no "rainbow lines" which I think would look something like

lemma no_rainbow_lines (Γ₀ : Type) (locg : LinearOrderedCommGroupWithZero Γ₀) (v : Valuation ℝ Γ₀) :
 ¬ ∃ (L : ℝ →ₗ[ℝ] ℝ²) (hL : line L), Function.Surjective (painter Γ₀ locg v ∘ L)

(painter is just the function that assigns a point in the plane a color). I didn't try and prove it yet, but I thought I'd ask here first because maybe there is something obviously asinine about all this...

unfortunately this was asinine :(, I need to also consider lines which do not go through the origin. (doh!)

Joseph Myers (Nov 23 2024 at 13:49):

A line is an AffineSubspace whose direction has finrank equal to 1. (There's also line[ℝ, P, Q] notation you can use for the affine span of two points, which is the line through them when those two points are different.)
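
Spelled out, that is roughly the following (a sketch only; IsLine is a made-up name, the finrank naming is today's Mathlib convention, and the example is deliberately left as a sorry):

import Mathlib

local notation "ℝ²" => EuclideanSpace ℝ (Fin 2)

-- A line is an affine subspace whose direction is one-dimensional.
def IsLine (L : AffineSubspace ℝ ℝ²) : Prop :=
  Module.finrank ℝ L.direction = 1

-- The affine span of two distinct points (this is what the `line[ℝ, P, Q]`
-- notation denotes) should then be such a line.
example (P Q : ℝ²) (h : P ≠ Q) : IsLine (affineSpan ℝ {P, Q}) := by
  sorry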

Dhyan Aranha (Nov 24 2024 at 14:56):

I'm trying to connect the formula we used for the area of a triangle with the determinant, but I'm running into the following dumb issue:

First I define my matrix:

def my_matrix (X Y Z : ℝ²): Matrix (Fin 3) (Fin 3) ℝ :=
![![X 0, X 1, 1], ![Y 0, Y 1, 1], ![Z 0, Z 1, 1]]

then I want to prove the following lemma:

lemma det_of_my_matrix (X Y Z : ℝ²) :
  Matrix.det (my_matrix X Y Z) =
   (X 0 * Y 1 + X 1 * Z 0 + Y 0 * Z 1 - Y 1 * Z 0 - X 1 * Y 0 - X 0 * Z 1)

I thought this would be an easy application of first rewriting with Matrix.det_fin_three (last theorem here) and then simp/ring_nf, but these tactics don't seem to reduce things down, and then I get stuck with a lot of junk I don't understand in the proof state like:

Matrix.vecHead (Matrix.vecTail fun i ↦ @OfNat.ofNat ℝ 1)

any help/input would be much appreciated :).

Moritz Firsching (Nov 24 2024 at 15:11):

  rw [Matrix.det_fin_three, my_matrix]
  simp [-Matrix.vecCons_const]
  ring

works for me

Moritz Firsching (Nov 24 2024 at 15:12):

(replacing ℝ² with (Fin 2 → ℝ))

Moritz Firsching (Nov 24 2024 at 15:13):

#mwe

import Mathlib


def my_matrix (X Y Z : (Fin 2 → ℝ)) : Matrix (Fin 3) (Fin 3) ℝ :=
![![X 0, X 1, 1], ![Y 0, Y 1, (1 : ℝ)], ![Z 0, Z 1, (1 : ℝ)]]

lemma det_of_my_matrix (X Y Z : (Fin 2 → ℝ)) :
  Matrix.det (my_matrix X Y Z) =
   (X 0 * Y 1 + X 1 * Z 0 + Y 0 * Z 1 - Y 1 * Z 0 - X 1 * Y 0 - X 0 * Z 1) := by
  rw [Matrix.det_fin_three, my_matrix]
  simp [-Matrix.vecCons_const]
  ring

Dhyan Aranha (Nov 24 2024 at 15:49):

Moritz Firsching said:

  rw [Matrix.det_fin_three, my_matrix]
  simp [-Matrix.vecCons_const]
  ring

works for me

thanks Moritz! I didn't know about Matrix.vecCons_const; everything works now :)

Dhyan Aranha (Nov 24 2024 at 16:01):

I'd like to understand a little bit about what's going on: it seems if I use

simp [Matrix.vecCons_const]

then I get the same problem I had before in the proof state (i.e. Matrix.vecHead (Matrix.vecTail fun i ↦ @OfNat.ofNat ℝ 1) etc.). But somehow instead, if we write

simp [-Matrix.vecCons_const]

the problem goes away. So what is actually going on here?

Moritz Firsching (Nov 24 2024 at 16:06):

ah, the minus tells simp not to use Matrix.vecCons_const

Moritz Firsching (Nov 24 2024 at 16:06):

I haven't looked in detail what is going on here, perhaps this is actually something that should be fixed

Dhyan Aranha (Nov 24 2024 at 16:10):

ok, yeah, I guess my question should have been: why does removing Matrix.vecCons_const from the simplifier make it work?

Johan Commelin (Nov 25 2024 at 04:26):

You can ask which lemmas simp uses by appending a question mark: simp?

Moritz Firsching (Nov 25 2024 at 07:25):

yes, probably I should have explained how I came up with excluding Matrix.vecCons_const. As Johan said, I did simp? to get

lemma det_of_my_matrix (X Y Z : (Fin 2 → ℝ)) :
  Matrix.det (my_matrix X Y Z) =
   (X 0 * Y 1 + X 1 * Z 0 + Y 0 * Z 1 - Y 1 * Z 0 - X 1 * Y 0 - X 0 * Z 1) := by
  rw [Matrix.det_fin_three, my_matrix]
  simp only [Matrix.cons_val', Matrix.cons_val_zero, Matrix.empty_val',
    Matrix.cons_val_fin_one, Matrix.cons_val_one, Matrix.head_cons, Matrix.cons_val_two,
    Nat.succ_eq_add_one, Nat.reduceAdd, Matrix.tail_cons, Matrix.vecCons_const, Matrix.tail_val',
    Matrix.head_val', Matrix.head_fin_const, mul_one, one_mul]

And then I noticed that Matrix.tail_cons cannot properly work when the vecCons is simplified away by Matrix.vecCons_const. Therefore I tried to exclude it, and that was already enough.
I think an argument could be made that Matrix.vecCons_const should not be a simp lemma at all, or should even be a simp lemma in the other direction.

Dhyan Aranha (Nov 25 2024 at 10:06):

yeah, this was roughly the explanation I was looking for... If you do simp?, it wasn't clear to me at all why, out of all of those things, Matrix.vecCons_const was causing problems. Thanks!

(Also, it seems to me a little bit disheartening that the simplifier needs all those things in the first place.)

Moritz Firsching (Nov 25 2024 at 11:14):

Seems like removing the simp attribute from Matrix.vecCons_const doesn't break anything:
https://github.com/leanprover-community/mathlib4/pull/19457

Moritz Firsching (Nov 25 2024 at 11:14):

with this fix, simp should just work for your lemma...

Bhavik Mehta (Nov 25 2024 at 20:09):

(cc @Eric Wieser, this looks like a similar situation to one we'd talked about with simp and explicit matrix indices in the past)

Dhyan Aranha (Nov 28 2024 at 16:23):

A general question about blueprints: Now that we have started to formalize Monsky's result we have introduced a lot of auxiliary definitions and lemmas. Do people typically reflect all these small things in the latex part of the blueprint or do people just tex the main lemmas/theorems and definitions?

Johan Commelin (Nov 28 2024 at 20:25):

"It depends"

Johan Commelin (Nov 28 2024 at 20:26):

Certainly for a first project I would use the blueprint foremost as an aid for the formal counterpart

Maris Ozols (Nov 30 2024 at 14:55):

I was trying to submit a small pull request to our Amsterdam git repository but looks like I accidentally submitted a big one to the main mo271 repository with all our contributions so far. Sorry about this... :sweat_smile:

Maris Ozols (Nov 30 2024 at 15:01):

OK, all is good. I just figured out how to close it, and I made a new one to our local Amsterdam repository.

Moritz Firsching (Nov 30 2024 at 17:03):

no worries, and I'm happy to merge the stuff you are working on in the mo271 repo anytime...

Maris Ozols (Dec 07 2024 at 19:51):

I want to use a lemma (Valuation.map_sum_eq_of_lt) that involves a finite set s, a hypothesis for each element of s, and a sum over s. I am trying to use it for a specific set with 4 elements. But somehow at the very last step of the proof I'm getting a very strange error: the very last argument hf has a "type mismatch": it contains (@instSDiff ℕ instDecidableEqNat) instead of (@instSDiff ℕ fun a b ↦ propDecidable (a = b)). I have no idea what this means or how to fix this. (Also, maybe I'm doing something completely stupid and there is a much simpler way to apply this lemma.) Here is my full code:

import Mathlib.Tactic
import Mathlib.RingTheory.Valuation.Basic

open Finset

variable {Γ₀ : Type*}
variable [LinearOrderedCommGroupWithZero Γ₀] (v : Valuation ℝ Γ₀) {x y : ℝ}

lemma vmax (Γ₀ : Type) (locg : LinearOrderedCommGroupWithZero Γ₀) (v : Valuation ℝ Γ₀)
  (A a₁ a₂ a₃ a₄ : ℝ)
  (h1 : v (A) > v (a₁))
  (h2 : v (A) > v (a₂))
  (h3 : v (A) > v (a₃))
  : v (A + a₁ + a₂ + a₃) = v (A) := by
  -- Create a finite set to represent the terms inside v(...)
  let s : Finset ℕ := {0, 1, 2, 3}
  -- Make 0 correspond to A, 1 to a₁, 2 to a₂, etc.
  let f : ℕ → ℝ := λ i => match i with
    | 0 => A
    | 1 => a₁
    | 2 => a₂
    | 3 => a₃
    | _ => 0
  -- Let j = 0 and show that j ∈ s
  have hj : 0 ∈ s := by
    exact mem_insert_self 0 s
  -- Prove the first necessary condition for map_sum_eq_of_lt
  have h0 : v (f 0) ≠ 0 := by
    exact ne_zero_of_lt h1
  have hf : ∀ i ∈ s \ {0}, v (f i) < v (f 0) := by
    unfold s
    simp only [mem_sdiff, mem_insert, mem_singleton, and_imp, forall_eq_or_imp, not_true_eq_false,
      lt_self_iff_false, imp_self, one_ne_zero, not_false_eq_true, forall_const,
      OfNat.ofNat_ne_zero, forall_eq, true_and]
    exact ⟨h1, h2, h3⟩
  change v (f 0 + f 1 + f 2 + f 3) = v (f 0)
  -- Now we can apply the theorem with the right arguments
  exact Valuation.map_sum_eq_of_lt _ hj h0 hf

Kevin Buzzard (Dec 07 2024 at 20:40):

Finset is "constructive finite sets" and the error you're seeing stems from this. To talk about Finsets in Lean you need an algorithm to decide whether two elements of the set are equal or not. Here's one "algorithm" to decide whether x = y or not: by the law of the excluded middle, either x = y or x ≠ y is true, and you just choose whichever one is true and go on from there. As you can probably guess, what is going on here is not really (classical) mathematics, this is just some foundational nonsense. Unfortunately for the naturals, there is another algorithm, which is this: any natural number is by definition succ (succ (succ (... (succ 0) ...))) and to figure out if two naturals are equal you just cancel the succs and see if both naturals end up at 0 at the same time or not.

You will be overjoyed to know that if you define a finite set of naturals using one of these algorithms and then try to prove something about it using the other one, Lean will barf, with the inscrutable error message you just saw. One possible fix is to use Set.Finite, which is nonconstructive finiteness, so that none of this nonsense is an issue. But usually there are other ways around it; hang on...
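
For reference, the two competing DecidableEq instances can be written down explicitly; a small illustration (the #check lines are only for inspection, and a Finset literal elaborated with one instance is not syntactically the same term as one elaborated with the other, which is exactly where the mismatch comes from):

import Mathlib

-- The structural instance on ℕ: decide equality by peeling off `succ`s.
#check (instDecidableEqNat : DecidableEq ℕ)

-- The classical "algorithm": either a = b or ¬ a = b holds, choose whichever is true.
#check (fun a b => Classical.propDecidable (a = b) : DecidableEq ℕ)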

Kevin Buzzard (Dec 07 2024 at 20:45):

Here's a workaround:

  change v (f 0 + f 1 + f 2 + f 3) = v (f 0)
  -- Now we can apply the theorem with the right arguments
  convert Valuation.map_sum_eq_of_lt _ hj h0 _
  · simp_all [s, f]
    ring
  · convert hf

Kevin Buzzard (Dec 07 2024 at 20:50):

Here's how I'd prove the result: I would avoid finsets completely.

lemma vmax (Γ₀ : Type) (locg : LinearOrderedCommGroupWithZero Γ₀) (v : Valuation ℝ Γ₀)
    (A a₁ a₂ a₃ : ℝ) (h1 : v (A) > v (a₁)) (h2 : v (A) > v (a₂)) (h3 : v (A) > v (a₃)) :
    v (A + a₁ + a₂ + a₃) = v (A) := by
  -- move brackets to the right
  repeat rw [add_assoc]
  -- apply criterion for v(A+B)=v(A)
  apply Valuation.map_add_eq_of_lt_left
  -- ⊢ v (a₁ + (a₂ + a₃)) < v A
  -- apply "v(a+b) < x if v(a) < x and v(b) < x"
  apply Valuation.map_add_lt v h1 ?_
  -- and then apply it again
  apply Valuation.map_add_lt v h2 h3

Kevin Buzzard (Dec 07 2024 at 20:51):

The philosophy is that you should believe that the basic API for valuations is solid, so any random useful fact like "v(A+B)=v(A) if v(B)<v(A)" or "v(A+B)<x if v(A)<x and v(B)<x" will already be proved somewhere in the library. In this particular case I found them with apply?.

Kevin Buzzard (Dec 07 2024 at 20:53):

Here's a golfed version.

open Valuation -- to save on typing

lemma vmax (Γ₀ : Type) (locg : LinearOrderedCommGroupWithZero Γ₀) (v : Valuation ℝ Γ₀)
    (A a₁ a₂ a₃ : ℝ) (h1 : v (A) > v (a₁)) (h2 : v (A) > v (a₂)) (h3 : v (A) > v (a₃)) :
    v (A + a₁ + a₂ + a₃) = v (A) := by
  -- move brackets to the right
  repeat rw [add_assoc]
  -- Now just write the function representing the proof term directly.
  exact map_add_eq_of_lt_left v <| map_add_lt v h1 <| map_add_lt v h2 h3

Dhyan Aranha (Dec 10 2024 at 17:15):

I've really been struggling to work with category theory in Lean. Are there any good notes/guides available for this (something like Buzzard's notes would be wonderful!)? Today I tried as an exercise to formalize the notion of derived completeness and failed miserably...

Johan Commelin (Dec 11 2024 at 08:44):

I don't think we have such a guide. Some of the LftCM workshops have had talks/intros to category theory in mathlib, and some of those talks are on youtube. But I'm afraid they might be a bit outdated.

Johan Commelin (Dec 11 2024 at 08:44):

So asking questions on zulip is probably your best bet at the moment.

Dhyan Aranha (Dec 12 2024 at 06:13):

This morning I did manage to at least find some exercises: https://github.com/adamtopaz/CopenhagenMasterclass2023/tree/master/CategoryTheoryExercises

Johan Commelin (Dec 12 2024 at 06:52):

Aah right! I should have thought of the Copenhagen masterclass!

Moritz Firsching (Dec 12 2024 at 07:06):

Yes, me too, I was also a participant there...

Dhyan Aranha (Dec 13 2024 at 18:01):

Ok, even after doing some of the exercises, I very shamefully still don't know how to do the following: how do I view (ℕ, ≥) as a category in Lean?

Johan Commelin (Dec 13 2024 at 18:18):

Every ordered type is automatically a category in mathlib: with a hom x → y iff x ≤ y.

Johan Commelin (Dec 13 2024 at 18:18):

So it should just work :tm:
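
For (ℕ, ≥) specifically, one option is to work in the opposite category ℕᵒᵖ of (ℕ, ≤); a minimal sketch (the examples below are just illustrative):

import Mathlib

open CategoryTheory

-- ℕ is a category via its preorder: a morphism n ⟶ m is (essentially) a proof of n ≤ m,
-- produced by `homOfLE` and recovered with `.le`.
example (n m : ℕ) (h : n ≤ m) : n ⟶ m := homOfLE h

-- For (ℕ, ≥), use the opposite category ℕᵒᵖ: a morphism n ⟶ m there
-- corresponds to a proof that unop m ≤ unop n.
example (n m : ℕᵒᵖ) (f : n ⟶ m) : Opposite.unop m ≤ Opposite.unop n := f.unop.le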

Johan Commelin (Dec 13 2024 at 18:18):

Do you have a #mwe of what you are trying to do?

Maris Ozols (Dec 15 2024 at 09:22):

I am trying to prove some simple inequalities using existing Mathlib lemmas from ordered monoids. The first inequality goes through without problems but the second says "failed to synthesize MulLeftStrictMono Γ₀". Can somebody suggest how to fix this?

import Mathlib.Tactic

local notation "ℝ²" => EuclideanSpace ℝ (Fin 2)

variable {Γ₀ : Type} [LinearOrderedCommGroupWithZero Γ₀]
variable (v : Valuation ℝ Γ₀)

lemma tmp1 (X Z : ℝ²)
  (hx : v (X 1) ≤ v (X 0)) :
  v (X 1) * v (Z 0) ≤ v (X 0) * v (Z 0) := by
  exact mul_le_mul_right' hx _

lemma tmp2 (X Z : ℝ²)
  (hz : v (Z 0) < 1) :
  v (X 1) * v (Z 0) < v (X 1) * 1 := by
  exact mul_lt_mul_left' hz _

Maris Ozols (Dec 15 2024 at 11:01):

I managed to do it in a different way (though I'm still curious how the above can be fixed).

Kevin Buzzard (Dec 15 2024 at 11:48):

mul_lt_mul_left' needs MulLeftStrictMono Γ₀ as a hypothesis, which is the axiom b₁ < b₂ → a * b₁ < a * b₂. But your Γ₀ is a GroupWithZero, which doesn't satisfy this axiom because a can be 0 (and in particular it looks to me like tmp2 is false because X 1 can be 0)

Dhyan Aranha (Dec 15 2024 at 15:00):

Johan Commelin said:

Do you have a #mwe of what you are trying to do?

Sorry for the late reply, the in-laws were over this weekend! Given a commutative ring R, an ideal I and a module M over R, I wanted to try and write down a functor from ℕ to ModuleCat R which sends n to M/I^(n+1)M, and for m ≥ n I want the obvious map from M/I^m M to M/I^n M.

So I started with the following (I probably imported way too many things):

import Mathlib.Tactic
import Mathlib.Algebra.Category.ModuleCat.Basic
import Mathlib.Algebra.Category.ModuleCat.Abelian
import Mathlib.CategoryTheory.Abelian.Basic
import Mathlib.LinearAlgebra.Quotient.Basic
import Mathlib.RingTheory.Ideal.Operations
import Mathlib.RingTheory.Ideal.Basic

variable (R : Type) (N : Type) [CommRing R] [AddCommGroup N] [Module R N] (I : Ideal R)

def higher_order (n : ℕ) := (I ^ n • ⊤ : Submodule R N)

lemma filt (n m : ℕ) (h : n ≥ m) : higher_order R N I n ≤ higher_order R N I m := by sorry

def N_approx (n : ℕ) := N ⧸ higher_order R N I n

def θ (n m : ℕ) (h : n ≥ m) : N_approx R N I n →ₗ[R] N_approx R N I m := by sorry

I'm still having trouble with the definition of θ (I want to use Submodule.mapQ but I can't seem to get it to work for R-linear maps; if I remove the R-linearity from the statement it works). Anyway, let's put that aside for now. I now want to define a functor as:

def my_functor : ℕ ⥤ ModuleCat R where
  obj n := ModuleCat.of R (N_approx R N I n)
  map {n m} (α : n ⟶ m) := ModuleCat.ofHom (θ R N I α)
  map_id := ??

but it seems Lean is okay with what I do on objects, but with morphisms it seems to have a problem. I am pretty sure even the syntax is just wrong there (Copilot suggested many wrong things to me here).

Kevin Buzzard (Dec 15 2024 at 15:46):

Here's some code review:

import Mathlib -- why not? When you're developing. When you've finished, use `#min_imports`

-- make R implicit because you can figure it out from I
variable {R : Type} (N : Type) [CommRing R] [AddCommGroup N] [Module R N] (I : Ideal R)

def higher_order (n : ℕ) := (I ^ n • ⊤ : Submodule R N)

-- add this lemma because every def you make needs API
lemma higher_order_def (n : ℕ) : higher_order N I n = I ^ n • ⊤ := rfl

-- make n, m implicit because they can be read off from h
lemma filt {n m : ℕ} (h : n ≥ m) : higher_order N I n ≤ higher_order N I m := by
  simp only [higher_order_def]
  apply Submodule.smul_mono -- found with `suffices I ^ n ≤ I ^ m by exact?`
  · exact Ideal.pow_le_pow_right h -- found with `exact?`
  · rfl

-- golfed variant
lemma filt' {n m : ℕ} (h : n ≥ m) : higher_order N I n ≤ higher_order N I m :=
  Submodule.smul_mono (Ideal.pow_le_pow_right h) le_rfl

-- don't make it a definition or Lean won't know the first thing about it (e.g. that
-- it's an R-module -- Lean doesn't look through definitions by default)
abbrev N_approx (n : ℕ) := N ⧸ higher_order N I n

-- #loogle ?A ⧸ ?B →ₗ[?C] ?A ⧸ ?D
-- look at results, see that `Submodule.mapQ` is what we need

def θ {n m : ℕ} (h : n ≥ m) : N_approx N I n →ₗ[R] N_approx N I m :=
  Submodule.mapQ _ _ (LinearMap.id) (filt' N I h)

Kevin Buzzard (Dec 15 2024 at 15:57):

As for your functor, it's contravariant, and Lean's functors are covariant so you'll never do what you're trying to do like that. I honestly wouldn't recommend LLMs for Lean at this level, you're much better off learning and asking when you're stuck, that way you will overtake LLMs very quickly. This is more like it:

open CategoryTheory Opposite

def my_functor : ℕᵒᵖ ⥤ ModuleCat R where
  obj n := ModuleCat.of R (N_approx N I (unop n))
  map {n m} (α : n ⟶ m) := ModuleCat.asHom (θ N I α.unop.1.1) -- not sure if that's the idiomatic way to get the inequality
  map_id := sorry
  map_comp := sorry

Dhyan Aranha (Dec 15 2024 at 18:57):

Kevin Buzzard said:

As for your functor, it's contravariant, and Lean's functors are covariant so you'll never do what you're trying to do like that. I honestly wouldn't recommend LLMs for Lean at this level, you're much better off learning and asking when you're stuck, that way you will overtake LLMs very quickly. This is more like it:

open CategoryTheory Opposite

def my_functor : ℕᵒᵖ ⥤ ModuleCat R where
  obj n := ModuleCat.of R (N_approx N I (unop n))
  map {n m} (α : n ⟶ m) := ModuleCat.asHom (θ N I α.unop.1.1) -- not sure if that's the idiomatic way to get the inequality
  map_id := sorry
  map_comp := sorry

thanks this was very helpful!

Johan Commelin (Dec 16 2024 at 08:59):

  map {n m} (α : n ⟶ m) := ModuleCat.ofHom (θ N I α.unop.le)

is slightly more idiomatic, I would say.

Johan Commelin (Dec 16 2024 at 08:59):

Also, docs#Monotone.functor allows you to turn a monotone map into a functor. You may or may not want to use that instead.
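
For example, something like this (a toy sketch, with n ↦ n + 1 standing in for whatever monotone map you actually have):

import Mathlib

open CategoryTheory

-- Any monotone map between preorders induces a functor between the associated
-- preorder categories; here, n ↦ n + 1 as a functor ℕ ⥤ ℕ.
example : ℕ ⥤ ℕ := by
  have h : Monotone (fun n : ℕ => n + 1) := fun _ _ hab => Nat.add_le_add_right hab 1
  exact h.functor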

Dhyan Aranha (Dec 19 2024 at 12:11):

Ok, back on topic with Monsky :). I was trying this morning to prove a version of the pigeonhole lemma but was getting stuck:

lemma finset_infinite_pigeonhole {α β : Type} [Infinite α] {f : α → β} {B : Finset β}
    (hf : ∀ a, f a ∈ B) :
    ∃ b ∈ B, Set.Infinite (f⁻¹' {b}) := by

It's very similar to Finite.exists_infinite_fiber in mathlib, but I couldn't get the proof to work by naively applying Finite.exists_infinite_fiber.

One idea I had was to try and mimic the proof of Finite.exists_infinite_fiber in mathlib, which proceeds by contradiction, but I get stuck trying to make α a Fintype. That is, I think the proof could go something like

lemma finset_infinite_pigeonhole {α β : Type} [Infinite α] {f : α → β} {B : Finset β}
    (hf : ∀ a, f a ∈ B) :
    ∃ b ∈ B, Set.Infinite (f⁻¹' {b}) := by
  by_contra hC
  have hC' : ∀ b ∈ B, Set.Finite (f⁻¹' {b}) := by
    intro b hb
    exact
  let a : Fintype α :=
    { elems := (univ.biUnion fun b ∈ B => Set.Finite (f ⁻¹' {b})),
      complete := by simp [hC', hf] }
but I can't seem to properly write the let statement.

Thijs Maessen (Dec 19 2024 at 12:26):

Hello, I am working on the part of the proof that says that the sum of the areas of the triangles of a dissection should be the area of the square. For this, I would like to show that the line segment starting in (0,0) and ending in (1,0) has (Lebesgue) measure zero. When browsing through mathlib I found that there is a theorem stating that the ℝ² interval [0,0] × [1,0] has measure zero (given by the function Set.Icc). I managed to show this set was equal to the original line segment. However, then I ran into the problem that these sets do not have the same type. The line segment is in the Euclidean space EuclideanSpace ℝ (Fin 2), whereas the latter is in Fin 2 → ℝ. This did not give me an issue when proving these sets were equal, but it does give me a problem when I want to use the equality to show that the line segment has measure zero, as the MeasureTheory.volume map depends on the type.

I found the proposition EuclideanSpace.volume_preserving_measurableEquiv (Fin 2), which in combination with MeasureTheory.MeasurePreserving.map_eq I hoped would do the trick. However, I have not really been able to get it to work.

theorem volume_closed_unit_segment : (MeasureTheory.volume (closed_hull unit_segment)).toReal = 0 := by
  -- This first part is essentially showing 0 = (MeasureTheory.volume (Set.Icc point0 point1)).toReal
  have h0 : ∏ i : (Fin 2), (point1 i - point0 i) = 0
  rw [Fin.prod_univ_two]
  unfold point0 point1
  linarith
  rw [← h0]
  have h1 : point0 ≤ point1
  intro i
  fin_cases i <;> dsimp <;> rw [point0, point1] ; linarith
  rw [← Real.volume_Icc_pi_toReal h1]
  -- Now I try to show (MeasureTheory.volume (closed_hull unit_segment)).toReal = (MeasureTheory.volume (Set.Icc point0 point1)).toReal
  -- But the left-hand side Measuretheory.volume is not the same as the right-hand side
  have h2 : MeasureTheory.Measure.map ((EuclideanSpace.measurableEquiv (Fin 2))) MeasureTheory.volume (Set.Icc point0 point1) = MeasureTheory.volume (Set.Icc point0 point1)
  rw [MeasureTheory.MeasurePreserving.map_eq (EuclideanSpace.volume_preserving_measurableEquiv (Fin 2))]
  rw [← h2]
  rw [closed_unit_segment_is_box] --This is the theorem stating closed_hull unit_segment = Set.Icc point0 point1
  sorry

I could try to find the right mathlib proposition for a Euclidean version of Set.Icc, but I feel like it also should be possible with the above. Does anyone have suggestions?

Sébastien Gouëzel (Dec 19 2024 at 12:35):

You have docs#MeasureTheory.Measure.addHaar_submodule saying that the measure of any strict subspace is zero. Since your line is contained in such a subspace, this should give the result with docs#MeasureTheory.measure_mono_null
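
In abstract form the two-step argument is just this (a sketch with made-up names, not your project's code):

import Mathlib

open MeasureTheory

-- Any set contained in a proper linear subspace of ℝ² has Lebesgue measure zero:
-- the subspace itself has measure zero (addHaar_submodule), and measure_mono_null
-- transfers that to the subset.
example (S : Set (EuclideanSpace ℝ (Fin 2))) (p : Submodule ℝ (EuclideanSpace ℝ (Fin 2)))
    (hp : p ≠ ⊤) (hS : S ⊆ (p : Set (EuclideanSpace ℝ (Fin 2)))) :
    volume S = 0 :=
  measure_mono_null hS (Measure.addHaar_submodule volume p hp)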

Moritz Firsching (Dec 19 2024 at 13:15):

Dhyan Aranha said:

Ok, back on topic with Monsky :). I was trying this morning to prove a version of the pigeonhole lemma but was getting stuck:
[...]
but I can't seem to properly write the let statement.

import Mathlib.Data.Fintype.Card

lemma finset_infinite_pigeonhole {α β : Type} [Infinite α] {B : Finset β} {f : α → β}
    (hf : ∀ a, f a ∈ B) :
    ∃ b ∈ B, Set.Infinite (f⁻¹' {b}) := by
  have : Finite B := by exact Finite.of_fintype { x // x ∈ B }
  let f_B := fun (a : α) => (⟨f a, hf a⟩ : B)
  have ⟨b, hb⟩ := Finite.exists_infinite_fiber f_B
  use b
  constructor
  · exact Finset.coe_mem b
  · convert Set.infinite_coe_iff.mp hb
    aesop

This works, but of course can be golfed...

Moritz Firsching (Dec 19 2024 at 13:31):

Just curious: where in the proof of Monsky is this needed?

Dhyan Aranha (Dec 19 2024 at 13:56):

Moritz Firsching said:

Just curious: where in the proof of Monsky is this needed?

I think @Pjotr Buys is more qualified to explain this than me (my branch has been largely finished and I migrated to his branch, the "Sperner's team" branch, to help fill the easier sorries). But my understanding is that this is used in the proof of the following lemma: given a covering of the square by triangles and a line segment L such that none of the vertices of the triangles lie on its interior, the number of triangles in my covering which contain L on their boundary is at most 2. (My sincerest apologies to Pjotr if I messed that up.)

Pjotr Buys (Dec 20 2024 at 08:35):

No this is essentially right. It has to do with the following sentence of the proof of Lemma 2 in the book:

Since every (red-blue) segment in the interior of the square is counted twice.

This one sentence has cost me about 1000 lines of code (and still counting). Where the pigeonhole is used is the following: Given such a (basic-)segment S on the interior of the square (a (basic-)segment is defined for us as the convex hull of a pair of points taken from the set of vertices of the triangles that does not pass through the interior of any triangle nor through any other vertices), I take a point x ∈ ℝ² on the interior of S and a direction y ∈ ℝ² not parallel with S. For large enough n ∈ ℕ all points of the form x + (1/n) * y lie in the square and thus also inside some triangle. Since there are finitely many triangles covering the square there is a triangle T (by pigeonhole) containing infinitely many points of the form x + (1/n) * y and then by some closure argument the point x must also be in T and then also the whole segment S must lie on the boundary of T (with difficulty). The same argument applies to points of the form x - (1/n) * y and thus this also gives a triangle T' with S on the boundary of T'. Given all this setup it is not so hard to show that T and T' are indeed different and there is no other triangle with S on its boundary.

It is not clear to me if this is the best way (or even a good way) to do this, but at least it seems to work :).

Kevin Buzzard (Dec 20 2024 at 11:05):

This one sentence has cost me about 1000 lines of code (and still counting).

No one ever said formalization was easy!

In 2021, 5 lines of Scholze took us 13 months :-)

Joseph Myers (Dec 20 2024 at 13:11):

Pjotr Buys said:

No this is essentially right. It has to do with the following sentence of the proof of Lemma 2 in the book:

Since every (red-blue) segment in the interior of the square is counted twice.

This one sentence has cost me about 1000 lines of code (and still counting).

That does tend to increase my confidence in my guess (item 13 in the AperiodicMonotilesLean README) that formalizing the link between the discrete and geometric sides of the aperiodic monotile work (Appendix A of "An aperiodic monotile") could be as much work as the entire rest of that project put together. (Basic facts such as "in a locally finite plane tiling by polygons (in fact true for general topological disks, though I don't need that generality), the boundary of each polygon is a finite alternating sequence of edges and vertices, where each edge has exactly one tile on each side of the edge" form only a small part of the argument, and that fact seems similar in spirit to what you have here.)

Dhyan Aranha (Dec 20 2024 at 15:55):

Most of the time when I try to use aesop it never works, or if it does I could have just as easily done without it. Which I suppose means I'm trying to invoke it in the wrong situations. Is there some kind of heuristic/philosophy about what kinds of things aesop is good at proving?

For example in the last line of

import Mathlib.Data.Fintype.Card

lemma finset_infinite_pigeonhole {α β : Type} [Infinite α] {B : Finset β} {f : α → β}
    (hf : ∀ a, f a ∈ B) :
    ∃ b ∈ B, Set.Infinite (f⁻¹' {b}) := by
  have : Finite B := by exact Finite.of_fintype { x // x ∈ B }
  let f_B := fun (a : α) => (⟨f a, hf a⟩ : B)
  have ⟨b, hb⟩ := Finite.exists_infinite_fiber f_B
  use b
  constructor
  · exact Finset.coe_mem b
  · convert Set.infinite_coe_iff.mp hb
    aesop

is there a particular reason for applying aesop here? In this example, if I do aesop? I don't think I would have been able to come up with what aesop did, so here was a good place to apply it. But I don't have any intuition about why.

Thijs Maessen (Dec 20 2024 at 16:32):

Sébastien Gouëzel said:

You have docs#MeasureTheory.Measure.addHaar_submodule saying that the measure of any strict subspace is zero. Since your line is contained in such a subspace, this should give the result with docs#MeasureTheory.measure_mono_null

Thanks!

Kevin Buzzard (Dec 20 2024 at 17:37):

Dhyan Aranha said:

is there a particular reason for applying aesop here?

You can simplify the output of aesop? to

    ext x
    cases b
    simp [f_B]

You're trying to prove two sets are equal, so ext is a good start. There's an ↑b in the goal so maybe take b apart to see what's going on. And now you can see it's just basically obvious, given the definition of f_B, so it's not surprising that the simplifier can do it.

Dhyan Aranha (Dec 20 2024 at 18:01):

Thanks! So it was not really something where aesop was needed after all (lesson to myself: don't get demoralized by what aesop? shows me :))!

Kevin Buzzard (Dec 20 2024 at 19:10):

yeah aesop? proofs are often highly golfable!

Dion Leijnse (Dec 23 2024 at 14:28):

I am a bit stuck on the following. I am given a line segment S in the unit square, and a decomposition of this segment S as a union of smaller segments S_i, with i running from 1 to n. I want to obtain from this an 'ordered decomposition', in the sense that we have a function T : Fin (n + 1) -> \R^2 such that the values T i are the endpoints of the segments S_j and occur increasingly along the segment. This can be reduced to showing that, if I have a finite collection of n points x_i in the interval [0, 1], there exists an order-preserving function Fin n -> [0, 1] having as its image exactly the x_i. It seems like Finset.orderEmbOfFin can do this for me. However, it requires the target to be a LinearOrder, and this requires the relation to be decidable. As far as I understand, this is not the case for something over the real numbers, such as the interval [0, 1]. Is there a way around this, or am I maybe missing something?

Moritz Firsching (Dec 23 2024 at 14:32):

do you have a #mwe of the parts that work?

Dion Leijnse (Dec 23 2024 at 14:47):

Essentially I want to prove the following:

def Interval : Set ℝ := {x : ℝ | 0 ≤ x ∧ x ≤ 1}

lemma exists_ordered_embedding (X : Finset Interval) (h : X ≠ ∅) :
  ∃ n : ℕ, ∃ f : Fin (n + 1) → Interval,
  (∀ i : Fin (n + 1), i ≠ n → f i < f (i + 1)) ∧ (Finset.image f ⊤) = X := by
  sorry

Johan Commelin (Dec 23 2024 at 15:25):

Does this help?

import Mathlib

def I : Set ℝ := {x : ℝ | 0 ≤ x ∧ x ≤ 1}

lemma exists_ordered_embedding' (X : Finset I) :
    ∃ n : ℕ, ∃ f : Fin n → I,
    StrictMono f ∧ Finset.image f ⊤ = X := by
  let e := monoEquivOfFin X rfl
  use Fintype.card { x // x ∈ X }, Subtype.val ∘ e
  constructor
  · apply StrictMono.comp (fun _ _ h ↦ h) e.strictMono
  · ext x
    simp only [Finset.top_eq_univ, Finset.mem_image, Finset.mem_univ, Function.comp_apply, true_and]
    constructor
    · rintro ⟨a, rfl⟩
      simp
    · intro h
      use e.symm ⟨x, h⟩
      simp

Johan Commelin (Dec 23 2024 at 15:26):

It is a slightly different statement, I admit.

Dion Leijnse (Dec 23 2024 at 15:50):

Thanks, that does what I need!
I was wondering, it seems that monoEquivOfFin X uses that X is linearly ordered. But it inherits the ordering of ℝ, which is not linearly ordered as far as I understand due to a linear ordering requiring decidability. Does this mean that Lean automatically inferred that a Finset of ℝ is linearly ordered?

Johan Commelin (Dec 23 2024 at 15:53):

No, it means that we cheated with ℝ (-;

Johan Commelin (Dec 23 2024 at 15:54):

The decidability in the case of ℝ is fake.

Dion Leijnse (Dec 23 2024 at 15:57):

Thanks!

Dhyan Aranha (Jan 20 2025 at 20:29):

There was a proof that was written in our project about a month ago which compiled but with the most recent mathlib update does not work anymore. It seems the simp tactic works slightly differently now? I was wondering if there is a standard way to troubleshoot this?

Kevin Buzzard (Jan 20 2025 at 20:43):

Did you have a non-terminal simp? There are all sorts of reasons why this might have happened, you'll probably be best off posting a #mwe .

Kevin Buzzard (Jan 20 2025 at 21:00):

Did you try squeezing the simp before bumping?

Dhyan Aranha (Jan 21 2025 at 09:54):

Here is the minimal working example; I've sorry'd the proofs of the auxiliary lemmas. The result we are interested in is det_0_triangle_imp_triv. The code actually works when I use the live.lean environment, but when we tried to run it on our machines yesterday in the seminar we still had an unproven goal

import Mathlib
import Mathlib.Tactic

local notation "ℝ²" => EuclideanSpace ℝ (Fin 2)
local notation "Triangle" => Fin 3 → ℝ²
local notation "Segment" => Fin 2 → ℝ²

open Classical
open BigOperators
open Finset

def closed_simplex (n : ℕ) : Set (Fin n → ℝ) := {α | (∀ i, 0 ≤ α i) ∧ ∑ i, α i = 1}
def closed_hull {n : ℕ} (f : Fin n → ℝ²) : Set ℝ² := (fun α ↦ ∑ i, α i • f i) '' closed_simplex n

def det (T : Triangle) : ℝ
  := (T 0 1 - T 1 1) * (T 2 0) + (T 1 0 - T 0 0) * (T 2 1) + ((T 0 0) * (T 1 1) - (T 1 0) * (T 0 1))

lemma linear_combination_det_first {n : ℕ} {y z : ℝ²} {P : Fin n → ℝ²} {α : Fin n → ℝ}
    (hα : ∑ i, α i = 1) :
  det (fun | 0 => (∑ i, α i • P i) | 1 => y | 2 => z) =
  ∑ i, (α i * det (fun | 0 => (P i) | 1 => y | 2 => z)) := by sorry

lemma linear_combination_det_middle {n : ℕ} {x z : ℝ²} {P : Fin n → ℝ²} {α : Fin n → ℝ}
    (hα : ∑ i, α i = 1) :
  det (fun | 0 => x | 1 => (∑ i, α i • P i) | 2 => z) =
  ∑ i, (α i * det (fun | 0 => x | 1 => (P i) | 2 => z)) := by sorry

lemma linear_combination_det_last {n : ℕ} {x y : ℝ²} {P : Fin n → ℝ²} {α : Fin n → ℝ}
    (hα : ∑ i, α i = 1) :
  det (fun | 0 => x | 1 => y | 2 => (∑ i, α i • P i)) =
  ∑ i, (α i * det (fun | 0 => x | 1 => y | 2 => (P i))) := by sorry

lemma det_zero_perm {T : Triangle} (hT : det T = 0) :
    ∀ i j k, det (fun | 0 => T i | 1 => T j | 2 => T k) = 0 := by sorry

lemma det_0_triangle_imp_triv {T : Triangle} (hT : det T = 0) :
    ∀ x y z, x ∈ closed_hull T → y ∈ closed_hull T → z ∈ closed_hull T →
      det (fun | 0 => x | 1 => y | 2 => z) = 0 := by
  intro x y z ⟨_, ⟨_, hαx⟩, hx⟩ ⟨_, ⟨_, hαy⟩, hy⟩ ⟨_, ⟨_, hαz⟩, hz⟩
  simp [← hx, ← hy, ← hz, linear_combination_det_first hαx,
    linear_combination_det_middle hαy, linear_combination_det_last hαz, det_zero_perm hT]

on my machine the infoview says that there is still the following goal remaining

(∑ x : Fin 3, w✝² x * ∑ i : Fin 3, w✝¹ i * det fun x_1 ↦ match x_1 with | 0 => T x | 1 => T i | 2 => ∑ x : Fin 3, w✝ x • T x) = 0

Moritz Firsching (Jan 21 2025 at 10:14):

what mathlib version are you running? The last simp seems to work at current head

Dhyan Aranha (Jan 21 2025 at 16:40):

I'm using: leanprover/lean4:v4.16.0-rc2

Kevin Buzzard (Jan 21 2025 at 17:44):

That looks good but the bottom line is that if you click on "view in lean 4 playground" in the top right hand corner of the code you posted, it works fine, so the solution is probably to update your mathlib.

Dhyan Aranha (Jan 21 2025 at 18:38):

ok, I'm just a bit confused because I did exactly what was written here: https://leanprover-community.github.io/install/project.html about updating Mathlib in a GitHub project. But still it doesn't compile. I'm probably doing something dumb with GitHub, I think.

Malvin Gattinger (Jan 21 2025 at 20:01):

To reproduce the problem in your repository, can you tell us which file and line it is happening there? Does this CI build show the problem or are the errors there unrelated? https://github.com/dhyan-aranha/FormalBook/actions/runs/12893788642/job/35950948983

Edit: at least one of those errors seems to happen because https://github.com/leanprover-community/mathlib4/pull/17651 has been merged into mathlib and now the file FormalBook/Mathlib/WeightedDoubleCounting.lean is redundant / creating duplicate definitions. It seems Moritz has already taken this into account in his repository, as you can see via the "6 commits ahead" page here. I think to avoid duplicate work it would be clean now to rebase your repository on top of the one by Moritz, but that may be hard given the 91 commits ahead in the other direction.

Malvin Gattinger (Jan 22 2025 at 16:23):

The problem is interesting, see #Proofs from the book > Mystery: import breaks simp

Dhyan Aranha (Feb 14 2025 at 14:28):

I am trying to prove the following lemma:

lemma aux_colin_decomp_eq {α q : ℝ} {u w : ℝ²} : u + (α • w + -(α • u) + q • w) + (-(q • u + q • α • w) + - -(q • α • u)) =
  u + (α • w + q • w - (q • α) • w + -(α • u + (q • u - (q • α) • u))) := by

and right now I am just getting lost in applying add_assoc, add_comm, smul_whatever, etc. I have also tried ring and field_simp. Is there an easier way to proceed?

Riccardo Brasca (Feb 14 2025 at 14:37):

Does module work? Anyway, can you write a #mwe?

Riccardo Brasca (Feb 14 2025 at 14:41):

The following

import Mathlib

lemma aux_colin_decomp_eq {V : Type*} [AddCommGroup V] [Module ℝ V] {α q : ℝ} {u w : V} :
    u + (α • w + -(α • u) + q • w) + (-(q • u + q • α • w) + - -(q • α • u)) =
    u + (α • w + q • w - (q • α) • w + -(α • u + (q • u - (q • α) • u))) := by
  rw [smul_eq_mul]
  module

works. It should also work in your case.

Dhyan Aranha (Feb 14 2025 at 14:44):

yep it works, thanks Riccardo! I didn't know about this module tactic :D

Riccardo Brasca (Feb 14 2025 at 14:44):

It's the same as ring, but for modules. If you try it directly it doesn't work, but the error message suggests to use smul_eq_mul.

Dhyan Aranha (Feb 14 2025 at 14:45):

nice, this really helped me :)

Dhyan Aranha (Apr 16 2025 at 19:42):

Hey all! I am just writing to say that today the last sorry of the Monsky project was filled... there is still a ton of refactoring to do. At any rate, I'd like to take the opportunity to thank the Zulip community for patiently answering all of our questions and welcoming us so warmly!

(our working repo is a mess, and I'm not sure that we can merge the last branch into main without a lot of debugging, so here is the whole project collected in one clean repo: Monsky)

Kevin Buzzard (Apr 16 2025 at 21:38):

I will tell the story of how I came to know about this theorem.

One day in the early 90s me and Alex Selby showed up to a Part III (MSc) lecture in Cambridge but the lecturer never showed up. I had another lecture 1 hour later so there was no point going home, so we both had an hour to kill in the lecture room. He said he'd give me a problem to solve so I said I'd give him a problem to solve. He asked me whether you could chop up a square into an odd number of triangles of equal area. I asked him whether you could chop up a square into triangles all of whose angles were strictly less than 90 degrees. We both worked on these problems to pass the time; Alex solved mine (I won't tell you the answer), but I couldn't solve his (which I later learnt was Monsky's theorem). When the next lecture was about to start, Alex told me that it was impossible to do but that the proof used the axiom of choice. I half suspected that he was trolling me but I was busy doing my MSc so had no more time to think about it. I then forgot about the problem for a while but it came up again in conversation with some other mathematicians when I was a post-doc in Berkeley (at that time I still didn't know how to prove it, or that it was Monsky's theorem, and of course this was in the very early days of web before good search engines and before stuff was well-documented so it wasn't so easy to search for information in those days). I mentioned that I'd heard that it was impossible and that the proof used the axiom of choice and Hendrik Lenstra said "Oh come on! All you really need is the existence of the 2-adic norm". By this point I had become convinced that I was a victim of some kind of cruel joke where excellent mathematicians just tell me that some stupid question about triangles followed from some arbitrary random thing. It was only some years later that I stumbled upon Monsky's paper and all became clear.

Congratulations on the formalization!


Last updated: May 02 2025 at 03:31 UTC