Zulip Chat Archive

Stream: mathlib4

Topic: TU block matrix #19607


Martin Dvořák (Dec 09 2024 at 10:08):

I've been struggling with proving that "a block matrix that has two totally unimodular matrices along the diagonal and zeros elsewhere" is totally unimodular:

import Mathlib

lemma Matrix.fromBlocks_isTotallyUnimodular
    {m m' n n' R : Type*} [LinearOrderedCommRing R]
    [Fintype m] [DecidableEq m] [Fintype m'] [DecidableEq m']
    [Fintype n] [DecidableEq n] [Fintype n'] [DecidableEq n']
    {A₁ : Matrix m n R} {A₂ : Matrix m' n' R}
    (hA₁ : A₁.IsTotallyUnimodular) (hA₂ : A₂.IsTotallyUnimodular) :
    (fromBlocks A₁ 0 0 A₂).IsTotallyUnimodular := by
  sorry

We start the proof by considering a square submatrix of (fromBlocks A₁ 0 0 A₂) and asking whether it takes the same number of rows and columns from A₁ (equivalently, from A₂). If so, we call it the "square case" because, on a conceptual level, we work with a square submatrix of A₁ and a square submatrix of A₂ as well.

Square case

We would like to say that a submatrix of a block matrix is a block matrix made of submatrices. Unfortunately, this does not hold in the literal sense. Therefore, I came up with a workaround. We decompose a function f : α → β₁ ⊕ β₂ into a bijection α ≃ α₁ ⊕ α₂ followed by a function α₁ ⊕ α₂ → β₁ ⊕ β₂, as implemented in #19323 (the types α₁ and α₂ are the preimages of β₁ and β₂ respectively). More precisely, the API is as follows:

def Function.decomposeSum (f : α → β₁ ⊕ β₂) :
    α ≃ { x₁ : α × β₁ // f x₁.fst = Sum.inl x₁.snd }
      ⊕ { x₂ : α × β₂ // f x₂.fst = Sum.inr x₂.snd }
(...)

lemma Function.eq_comp_decomposeSum (f : α → β₁ ⊕ β₂) :
    f = Sum.elim (Sum.inl ∘ (·.val.snd)) (Sum.inr ∘ (·.val.snd)) ∘ f.decomposeSum := by
  aesop
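
For intuition, the two subtypes are the fibers of f over the left and right summands, so every element of α lands in one of them. A minimal illustration of that dichotomy (plain case analysis, not using the PR's API):

import Mathlib

-- Every element of the domain is mapped into the left or the right summand, so it belongs to
-- one of the two subtypes that the decomposition splits the domain into.
example {α β₁ β₂ : Type*} (f : α → β₁ ⊕ β₂) (a : α) :
    (∃ b₁, f a = Sum.inl b₁) ∨ (∃ b₂, f a = Sum.inr b₂) := by
  rcases hfa : f a with b₁ | b₂
  · exact Or.inl ⟨b₁, hfa⟩
  · exact Or.inr ⟨b₂, hfa⟩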

Now we express the submatrix of our block matrix as the following block matrix:

lemma Matrix.fromBlocks_submatrix
    (A₁ : Matrix m n R) (A₂ : Matrix m' n' R)
    (f : α → m ⊕ m') (g : α → n ⊕ n') :
    (fromBlocks A₁ 0 0 A₂).submatrix f g =
    (Matrix.fromBlocks
      (A₁.submatrix
        ((·.val.snd) : { x₁ : α × m // f x₁.fst = Sum.inl x₁.snd } → m)
        ((·.val.snd) : { y₁ : α × n // g y₁.fst = Sum.inl y₁.snd } → n)
      ) 0 0
      (A₂.submatrix
        ((·.val.snd) : { x₂ : α × m' // f x₂.fst = Sum.inr x₂.snd } → m')
        ((·.val.snd) : { y₂ : α × n' // g y₂.fst = Sum.inr y₂.snd } → n')
      )
     ).submatrix f.decomposeSum g.decomposeSum :=
  (...)

We continue by constructing bijections from the knowledge that the submatrix takes the same number of rows and columns from each block. We massage these bijections into the blocks so that we obtain (roughly) something like:

(Matrix.fromBlocks
  A₁.submatrix 0 0 A₂.submatrix
).submatrix
  bijection_based_on_f
  bijection_based_on_g

Since the absolute value of the determinant is preserved by the "outer submatrix" (a pair of bijections), we express it as the product of the determinants of the respective blocks. By the total unimodularity of A₁ and A₂, each of these determinants is 0 or ±1. We are done with the square case in #19607.
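
For reference, the determinant identity this step relies on is already in Mathlib as Matrix.det_fromBlocks_zero₂₁; a minimal check:

import Mathlib

-- The determinant of a block matrix whose off-diagonal blocks are zero is the product of the
-- determinants of its diagonal blocks.
example {m n R : Type*} [Fintype m] [DecidableEq m] [Fintype n] [DecidableEq n] [CommRing R]
    (B : Matrix m m R) (C : Matrix n n R) :
    (Matrix.fromBlocks B 0 0 C).det = B.det * C.det := by
  rw [Matrix.det_fromBlocks_zero₂₁]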

Non-square case

Here we want to prove that the determinant is always zero. After "regrouping" the rows and columns in a similar manner as we did above, we have a block matrix made of [rectangle, zeros, zeros, rectangle] where neither rectangle is square; in particular, one rectangle has width > height and the other has width < height. Since the rank of each rectangle is bounded above by its shorter dimension, we get:
rank [rectangle, zeros, zeros, rectangle] ≤ rank [rectangle, zeros] + rank [zeros, rectangle] < size of original submatrix
As #Is there code for X? > Zero det of rank lt establishes, the determinant of a singular matrix (rank < size) is zero.
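
For the rank bound, a small sketch (assuming the Mathlib lemmas Matrix.rank_le_card_height and Matrix.rank_le_card_width apply here; stated over ℚ to sidestep rank-condition typeclass assumptions):

import Mathlib

-- A rectangular matrix has rank at most its shorter dimension; this is the bound used for each
-- rectangle above.
example {m n : Type*} [Fintype m] [Fintype n] (B : Matrix m n ℚ) :
    B.rank ≤ min (Fintype.card m) (Fintype.card n) :=
  le_min B.rank_le_card_height B.rank_le_card_width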

Unfortunately, I got so lost in the proof of the non-square case that I don't have code worth sharing. Hence I am asking for your help with completing the proof. Please feel free to push directly into the #19607 branch.

Martin Dvořák (Dec 09 2024 at 16:30):

In case you want to see an informal proof, @Ivan S. wrote this sketch:
image.png

Johan Commelin (Dec 13 2024 at 07:12):

I finally had some time to think about this. We have docs#Matrix.BlockTriangular (which maybe should be IsBlockTriangular). I think it would be good to also have Matrix.IsBlockDiagonal, with a similar definition. That could be used to state a generalization of your result, and I'm hopeful using a predicate instead of a construction will also simplify the proof.

Martin Dvořák (Dec 17 2024 at 09:05):

Do you suggest the following approach?

import Mathlib

variable {m₁ m₂ n₁ n₂ : Type*}

section zero

variable {Z : Type*} [Zero Z]

def Matrix.IsBlockDiagonal (A : Matrix (m₁ ⊕ m₂) (n₁ ⊕ n₂) Z) : Prop :=
  (∀ i₁ : m₁, ∀ j₂ : n₂, A (Sum.inl i₁) (Sum.inr j₂) = 0) ∧
  (∀ i₂ : m₂, ∀ j₁ : n₁, A (Sum.inr i₂) (Sum.inl j₁) = 0)

lemma Matrix.fromBlocks_isBlockDiagonal (A₁ : Matrix m₁ n₁ Z) (A₂ : Matrix m₂ n₂ Z) :
    (fromBlocks A₁ 0 0 A₂).IsBlockDiagonal := by
  simp [Matrix.IsBlockDiagonal]

end zero

section commring

variable {R : Type*} [CommRing R]

lemma Matrix.isTotallyUnimodular_of_isBlockDiagonal {A : Matrix (m₁ ⊕ m₂) (n₁ ⊕ n₂) R}
    (hA₁ : (A.submatrix Sum.inl Sum.inl).IsTotallyUnimodular)
    (hA₂ : (A.submatrix Sum.inr Sum.inr).IsTotallyUnimodular)
    (hA : A.IsBlockDiagonal) :
    A.IsTotallyUnimodular := by
  sorry -- heavy lifting here

theorem Matrix.fromBlocks_isTotallyUnimodular {A₁ : Matrix m₁ n₁ R} {A₂ : Matrix m₂ n₂ R}
    (hA₁ : A₁.IsTotallyUnimodular) (hA₂ : A₂.IsTotallyUnimodular) :
    (fromBlocks A₁ 0 0 A₂).IsTotallyUnimodular := by
  apply Matrix.isTotallyUnimodular_of_isBlockDiagonal
  · convert hA₁
  · convert hA₂
  · apply Matrix.fromBlocks_isBlockDiagonal

end commring

Johan Commelin (Dec 17 2024 at 09:48):

I think you can generalize from 2x2 blocks to kxk blocks.

Johan Commelin (Dec 17 2024 at 09:48):

See how it's done for block triangular matrices

Martin Dvořák (Dec 17 2024 at 09:53):

In order to simplify things or to make Matrix.IsBlockDiagonal more broadly applicable?

Johan Commelin (Dec 17 2024 at 09:59):

Both. I think working with generic indexing types will be easier than the Sum that you are currently forced to work with.

Johan Commelin (Dec 17 2024 at 09:59):

But that's my instinct speaking. I might be wrong

Martin Dvořák (Dec 17 2024 at 16:35):

Would you rather do something like

import Mathlib

section zero

variable {Z : Type*} [Zero Z]

def Matrix.IsBlockDiagonal {m n : Type*} (A : Matrix m n Z) {α : Type*} (ι : m → α) (γ : n → α) : Prop :=
  ∀ i : m, ∀ j : n, ι i ≠ γ j → A i j = 0

lemma Matrix.fromBlocks_isBlockDiagonal {m₁ m₂ n₁ n₂ : Type*}
    (A₁ : Matrix m₁ n₁ Z) (A₂ : Matrix m₂ n₂ Z) :
    (fromBlocks A₁ 0 0 A₂).IsBlockDiagonal (α := Fin 2) (Sum.casesOn · 0 1) (Sum.casesOn · 0 1) := by
  simp [Matrix.IsBlockDiagonal]

end zero

section commring

variable {R : Type*} [CommRing R]

theorem Matrix.isTotallyUnimodular_of_isBlockDiagonal {m n : Type*}
    {A : Matrix m n R} {α : Type*} {ι : m → α} {γ : n → α}
    (hA : A.IsBlockDiagonal ι γ)
    (hAa : ∀ a : α, (A.submatrix
        (fun (I : { i : m // ι i = a }) => I.val)
        (fun (J : { j : n // γ j = a }) => J.val)
      ).IsTotallyUnimodular) :
    A.IsTotallyUnimodular := by
  sorry

theorem Matrix.fromBlocks_isTotallyUnimodular {m₁ m₂ n₁ n₂ : Type*}
    [Fintype m₁] [DecidableEq m₁] [Fintype n₁] [DecidableEq n₁]
    [Fintype m₂] [DecidableEq m₂] [Fintype n₂] [DecidableEq n₂]
    {A₁ : Matrix m₁ n₁ R} (hA₁ : A₁.IsTotallyUnimodular)
    {A₂ : Matrix m₂ n₂ R} (hA₂ : A₂.IsTotallyUnimodular) :
    (fromBlocks A₁ 0 0 A₂).IsTotallyUnimodular := by
  apply Matrix.isTotallyUnimodular_of_isBlockDiagonal (Matrix.fromBlocks_isBlockDiagonal A₁ A₂)
  intro k
  fin_cases k
  · have hA :
      (fromBlocks A₁ 0 0 A₂).submatrix
        (fun (I : { i : m₁ ⊕ m₂ // i.casesOn 0 1 = (0 : Fin 2) }) => I.val)
        (fun (J : { j : n₁ ⊕ n₂ // j.casesOn 0 1 = (0 : Fin 2) }) => J.val)
      = A₁.reindex
          (Equiv.ofBijective (fun i₁ => ⟨Sum.inl i₁, rfl⟩)
            ⟨fun _ _ _ => by aesop, fun _ => by aesop⟩)
          (Equiv.ofBijective (fun j₁ => ⟨Sum.inl j₁, rfl⟩)
            ⟨fun _ _ _ => by aesop, fun _ => by aesop⟩)
    · ext i j
      cases hi : i.val
      · cases hj : j.val
        · aesop
        · simpa [hj] using j.property
      · simpa [hi] using i.property
    sorry
    -- TODO why cannot we `rw [hA]` here?
  · sorry

end commring

instead?

Johan Commelin (Dec 17 2024 at 17:32):

Yeah, that was roughly what I had in mind.

Johan Commelin (Dec 17 2024 at 17:33):

But those final subproofs look daunting. There should be an isTotallyUnimodular_reindex.

Martin Dvořák (Dec 17 2024 at 17:42):

I hoped to apply isTotallyUnimodular_reindex right after the rewriting — which turned out not to work!

Johan Commelin (Dec 17 2024 at 17:50):

Hmm, there should be a lemma that closes that subgoal, given an Equiv of indexing types, and the assumption hA₁...

Martin Dvořák (Dec 17 2024 at 19:08):

What do you mean?

Johan Commelin (Dec 18 2024 at 07:55):

Sorry, I have to grade two exams. So I don't have time to hack on this atm.

Martin Dvořák (Dec 18 2024 at 08:45):

The last theorem does work! It turns out that erw was all I needed!

import Mathlib

section zero

variable {Z : Type*} [Zero Z]

def Matrix.IsBlockDiagonal {m n : Type*} (A : Matrix m n Z) {α : Type*} (ι : m → α) (γ : n → α) : Prop :=
  ∀ i : m, ∀ j : n, ι i ≠ γ j → A i j = 0

lemma Matrix.fromBlocks_isBlockDiagonal {m₁ m₂ n₁ n₂ : Type*}
    (A₁ : Matrix m₁ n₁ Z) (A₂ : Matrix m₂ n₂ Z) :
    (fromBlocks A₁ 0 0 A₂).IsBlockDiagonal (α := Fin 2) (Sum.casesOn · 0 1) (Sum.casesOn · 0 1) := by
  simp [Matrix.IsBlockDiagonal]

end zero

section commring

variable {R : Type*} [CommRing R]

theorem Matrix.isTotallyUnimodular_of_isBlockDiagonal {m n : Type*}
    {A : Matrix m n R} {α : Type*} {ι : m → α} {γ : n → α}
    (hA : A.IsBlockDiagonal ι γ)
    (hAa : ∀ a : α, (A.submatrix
        (fun (I : { i : m // ι i = a }) => I.val)
        (fun (J : { j : n // γ j = a }) => J.val)
      ).IsTotallyUnimodular) :
    A.IsTotallyUnimodular := by
  sorry

theorem Matrix.fromBlocks_isTotallyUnimodular {m₁ m₂ n₁ n₂ : Type*}
    [Fintype m₁] [DecidableEq m₁] [Fintype n₁] [DecidableEq n₁]
    [Fintype m₂] [DecidableEq m₂] [Fintype n₂] [DecidableEq n₂]
    {A₁ : Matrix m₁ n₁ R} (hA₁ : A₁.IsTotallyUnimodular)
    {A₂ : Matrix m₂ n₂ R} (hA₂ : A₂.IsTotallyUnimodular) :
    (fromBlocks A₁ 0 0 A₂).IsTotallyUnimodular := by
  apply Matrix.isTotallyUnimodular_of_isBlockDiagonal (Matrix.fromBlocks_isBlockDiagonal A₁ A₂)
  intro k
  fin_cases k
  · have hA :
      (fromBlocks A₁ 0 0 A₂).submatrix
        (fun (I : { i : m₁ ⊕ m₂ // i.casesOn 0 1 = (0 : Fin 2) }) => I.val)
        (fun (J : { j : n₁ ⊕ n₂ // j.casesOn 0 1 = (0 : Fin 2) }) => J.val)
      = A₁.reindex
          (Equiv.ofBijective (fun i₁ => ⟨Sum.inl i₁, rfl⟩)
            ⟨fun _ _ _ => by aesop, fun _ => by aesop⟩)
          (Equiv.ofBijective (fun j₁ => ⟨Sum.inl j₁, rfl⟩)
            ⟨fun _ _ _ => by aesop, fun _ => by aesop⟩)
    · ext i j
      cases hi : i.val
      · cases hj : j.val
        · aesop
        · simpa [hj] using j.property
      · simpa [hi] using i.property
    erw [hA, Matrix.reindex_isTotallyUnimodular]
    exact hA₁
  · have hA :
      (fromBlocks A₁ 0 0 A₂).submatrix
        (fun (I : { i : m₁ ⊕ m₂ // i.casesOn 0 1 = (1 : Fin 2) }) => I.val)
        (fun (J : { j : n₁ ⊕ n₂ // j.casesOn 0 1 = (1 : Fin 2) }) => J.val)
      = A₂.reindex
          (Equiv.ofBijective (fun i₂ => ⟨Sum.inr i₂, rfl⟩)
            ⟨fun _ _ _ => by aesop, fun _ => by aesop⟩)
          (Equiv.ofBijective (fun j₂ => ⟨Sum.inr j₂, rfl⟩)
            ⟨fun _ _ _ => by aesop, fun _ => by aesop⟩)
    · ext i j
      cases hi : i.val
      · simpa [hi] using i.property
      · cases hj : j.val
        · simpa [hj] using j.property
        · aesop
    erw [hA, Matrix.reindex_isTotallyUnimodular]
    exact hA₂

end commring

Martin Dvořák (Dec 23 2024 at 11:58):

To follow up on @Johan Commelin's recommendation, the natural thing to ask is what the determinant of a block-diagonal matrix equals. I tried to state the theorem, but it ended up having weird decidability assumptions and being awkward overall:

import Mathlib

def Matrix.IsBlockDiagonal {m n Z : Type*} [Zero Z]
    (A : Matrix m n Z) {α : Type*} (ι : m → α) (γ : n → α) : Prop :=
  ∀ i : m, ∀ j : n, ι i ≠ γ j → A i j = 0

theorem Matrix.IsBlockDiagonal.det {m α R : Type*} [CommRing R]
    [DecidableEq m] [Fintype m] [DecidableEq α] [Fintype α]
    {A : Matrix m m R} {ι γ : m → α}
    [∀ a : α, Decidable (∃ e : (ι ⁻¹' {a}) ≃ (γ ⁻¹' {a}), True)]
    [∀ a : α, DecidablePred (Membership.mem (ι ⁻¹' {a}))]
    (hA : A.IsBlockDiagonal ι γ) :
    A.det =
    if hιγ : ∀ a : α, ∃ e : (ι ⁻¹' {a}) ≃ (γ ⁻¹' {a}), True then
      ∏ a : α, (A.submatrix (Subtype.val : (ι ⁻¹' {a} → m)) ((Subtype.val : (γ ⁻¹' {a} → m)) ∘ (hιγ a).choose)).det
    else
      0 := by
  sorry

What would be a better way to state it?

Edward van de Meent (Dec 23 2024 at 12:54):

i feel like it might be useful to go via something like this:

def Matrix.IsSquareDiagonal {m : Type*} [Zero Z]
    (A : Matrix m m Z) {α : Type*} (ι : m → α) : Prop :=
  ∀ i j : m, ι i ≠ ι j → A i j = 0

theorem Matrix.IsSquareDiagonal.det {m R : Type*} [CommRing R]
    [DecidableEq m] [Fintype m] [DecidableEq α] [Fintype α]
    {A : Matrix m m R} {ι : m → α}
    [∀ a : α, DecidablePred (· ∈ (ι ⁻¹' {a}))]
    (hA : A.IsSquareDiagonal ι) : A.det =
      ∏ a : α, (A.submatrix (Subtype.val : ι ⁻¹' {a} → m) (Subtype.val : ι ⁻¹' {a} → m)).det :=
  sorry

Edward van de Meent (Dec 23 2024 at 12:54):

maybe reordering some arguments, but i hope the idea is clear?

Johan Commelin (Dec 23 2024 at 15:06):

Note that you don't need a new definition. You can use the general definition, but state the theorem about the m = n case.

Martin Dvořák (Dec 23 2024 at 15:13):

All right! For now, I will focus on the following lemma:

import Mathlib
set_option autoImplicit false

def Matrix.IsBlockDiagonal {m n Z : Type*} [Zero Z]
    (A : Matrix m n Z) {α : Type*} (ι : m → α) (γ : n → α) :
    Prop :=
  ∀ i : m, ∀ j : n, ι i ≠ γ j → A i j = 0

lemma Matrix.det_of_isSquareDiagonal_2 {m R : Type*} [CommRing R]
    [DecidableEq m] [Fintype m]
    {A : Matrix m m R} {ι : m → Fin 2}
    [∀ k : Fin 2, DecidablePred (· ∈ (ι ⁻¹' {k}))]
    (hA : A.IsBlockDiagonal ι ι) :
    A.det =
      (A.submatrix (Subtype.val : ι ⁻¹' {0} → m) (Subtype.val : ι ⁻¹' {0} → m)).det *
      (A.submatrix (Subtype.val : ι ⁻¹' {1} → m) (Subtype.val : ι ⁻¹' {1} → m)).det := by
  sorry

Eric Wieser (Dec 24 2024 at 01:27):

If we really want IsBlockDiagonal, then it might be worth taking a step back and writing the tens of API lemmas about it first

Eric Wieser (Dec 24 2024 at 01:31):

You might find it easier to prove for docs#Matrix.blockDiagonal', then you can add a theorem that transforms every IsBlockDiagonal matrix into that form
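
For reference, on the diagonal blocks Matrix.blockDiagonal' agrees with the given family of matrices (and is zero whenever the block indices differ); a minimal statement, which simp closes (via Matrix.blockDiagonal'_apply_eq, if I recall the lemma name correctly):

import Mathlib

-- blockDiagonal' is indexed by sigma types; on block k it agrees with M k.
example {o : Type*} [DecidableEq o] {m n : o → Type*} {R : Type*} [Zero R]
    (M : ∀ k, Matrix (m k) (n k) R) (k : o) (i : m k) (j : n k) :
    Matrix.blockDiagonal' M ⟨k, i⟩ ⟨k, j⟩ = M k i j := by
  simp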

Martin Dvořák (Dec 24 2024 at 09:21):

Perhaps I could go directly from Matrix.blockDiagonal' to Matrix.fromBlocks roughly as follows?

import Mathlib
set_option autoImplicit false

theorem Matrix.blockDiagonal'_isTotallyUnimodular (m n : Fin 2 → Type*)
    (R : Type) [CommRing R]
    [∀ k : Fin 2, DecidableEq (m k)] [∀ k : Fin 2, Fintype (m k)]
    [∀ k : Fin 2, DecidableEq (n k)] [∀ k : Fin 2, Fintype (n k)]
    (M : Π k : Fin 2, Matrix (m k) (n k) R)
    (hM : ∀ k : Fin 2, (M k).IsTotallyUnimodular) :
    (Matrix.blockDiagonal' M).IsTotallyUnimodular := by
  sorry -- TODO heavy lifting here

instance {ι : Fin 2 → Type*} [hι0 : DecidableEq (ι 0)] [hι1 : DecidableEq (ι 1)] :
    ∀ k : Fin 2, DecidableEq (ι k) := by
  intro k
  if hk : k = 0 then
    simpa [hk] using hι0
  else
    simpa [k.eq_one_of_neq_zero hk] using hι1

theorem Matrix.fromBlocks_isTotallyUnimodular {m₁ m₂ n₁ n₂ R : Type} [CommRing R]
    [Fintype m₁] [DecidableEq m₁] [Fintype n₁] [DecidableEq n₁]
    [Fintype m₂] [DecidableEq m₂] [Fintype n₂] [DecidableEq n₂]
    {A₁ : Matrix m₁ n₁ R} (hA₁ : A₁.IsTotallyUnimodular)
    {A₂ : Matrix m₂ n₂ R} (hA₂ : A₂.IsTotallyUnimodular) :
    (fromBlocks A₁ 0 0 A₂).IsTotallyUnimodular := by
  convert
    Matrix.IsTotallyUnimodular.reindex
      ⟨fun ⟨i, m⟩ => match i with | 0 => Sum.inl m | 1 => Sum.inr m,
        (Sum.casesOn · (⟨0, ·⟩) (⟨1, ·⟩)),
        fun _ => by aesop, fun m => by cases m <;> aesop⟩
      ⟨fun ⟨i, n⟩ => match i with | 0 => Sum.inl n | 1 => Sum.inr n,
        (Sum.casesOn · (⟨0, ·⟩) (⟨1, ·⟩)),
        fun _ => by aesop, fun n => by cases n <;> aesop⟩
      (Matrix.blockDiagonal'_isTotallyUnimodular ![m₁, m₂] ![n₁, n₂] R
        (match · with | 0 => A₁ | 1 => A₂)
        (match · with | 0 => hA₁ | 1 => hA₂))
  swap -- Why cannot I do the same with `Fintype` as I did with `DecidableEq` ?
  · intro k
    if hk : k = 0 then
      simp [hk]
      assumption
    else
      have hk' := k.eq_one_of_neq_zero hk
      subst hk'
      assumption
  swap -- Why cannot I do the same with `Fintype` as I did with `DecidableEq` ?
  · intro k
    if hk : k = 0 then
      simp [hk]
      assumption
    else
      have hk' := k.eq_one_of_neq_zero hk
      subst hk'
      assumption
  ext i j
  cases i <;> cases j <;> simp [Matrix.blockDiagonal']

Do you think Matrix.blockDiagonal'_isTotallyUnimodular will be easier to prove than the previous attempts?

Martin Dvořák (Dec 31 2024 at 08:46):

Is there a reason to believe that proving Matrix.blockDiagonal'_isTotallyUnimodular will be easier than proving Matrix.fromBlocks_isTotallyUnimodular directly?

Edward van de Meent (Dec 31 2024 at 11:51):

as a reminder to me, the argument for 'blockdiagonal matrices of TU blocks are TU' is:
(0. it suffices to prove this for blockdiagonal matrices consisting of 2 blocks on the diagonal)

  1. it suffices to prove all finite square submatrices have determinant in {0,-1,1}
  2. such a submatrix is itself blockdiagonal (with the constituent blocks formed by submatrices of the original blocks). (annoying to prove, since casting)
  3. compute the determinant of that blockdiagonal submatrix by cases:
    3.1. if its 4 blocks are square, the determinant is the product of the determinants of its constituent blocks, which are submatrices of the original blocks (again, annoying), and therefore have determinant in {0,-1,1}
    3.2. if its 4 blocks aren't square, it has determinant 0
      (as then you can reindex to find a 4-block matrix with either the upper right or lower left quadrant all zeroes, meaning the det will be the product of the dets of the blocks on the diagonal, but one of those will have either a column or row of zeroes, giving it determinant zero.)

  • doing step 0 can be a bit annoying, since you have some casting to do.
  • doing step 1 is by definition of TU, not much to do.
  • doing step 2 is annoying, since you (again) have to reindex to prove the "submatrix-ness". it can be done. make use of docs#Equiv.sigmaFiberEquiv (see the snippet below) if you go the Matrix.blockDiagonal' route.
  • for step 3, you really would like to be working with Matrix.blockDiagonal or Matrix.fromBlocks, since blockDiagonal' has (afaict) no good api with det.
  • Particularly for 3.2, you get to use Matrix.det_fromBlocks_zero₂₁ (after again reindexing to get the right square matrix)

final verdict: having to cast all the time with these kinds of matrices is annoying, and makes me appreciate all the API around Subgroup and the like. as for blockDiagonal' vs. fromBlocks, i guess if you use fromBlocks you get the "two blocks"-ness automatically, but i think you will still have to cast around to get the blocks you want...
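
For reference, docs#Equiv.sigmaFiberEquiv mentioned in the step-2 bullet is the statement that any type is equivalent to the sigma type of the fibers of a map out of it, which is exactly the regrouping of rows/columns by block index:

import Mathlib

-- A type is equivalent to the sigma type of the fibers of any function out of it.
example {α β : Type*} (f : α → β) : (Σ b : β, { a // f a = b }) ≃ α :=
  Equiv.sigmaFiberEquiv f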

Martin Dvořák (Dec 31 2024 at 15:22):

Thinking again about the fromBlocks approach, I think I will need (for the part 3.2) a new indexing along the following lines:

import Mathlib

example {α β : Type} [Fintype α] [Fintype β] {n : ℕ} (alfa : Fintype.card α < n) (beta : n ≤ Fintype.card α + Fintype.card β) :
    ∃ b : Finset β, Fintype.card (α ⊕ b) = n := by
  sorry

Any idea how to construct it?

Edward van de Meent (Dec 31 2024 at 15:39):

zorn?

Edward van de Meent (Dec 31 2024 at 15:40):

(overpowered, but still)

Edward van de Meent (Dec 31 2024 at 15:43):

isn't it much easier to say "wlog, we have the indexing type for rows and columns equal?"

Edward van de Meent (Dec 31 2024 at 15:43):

because then i think we don't have issues with this?

Martin Dvořák (Dec 31 2024 at 15:46):

Edward van de Meent said:

isn't it much easier to say "wlog, we have the indexing type for rows and columns equal?"

I don't think I can.

Martin Dvořák (Dec 31 2024 at 15:47):

MWE here: #Is there code for X? > Finset of given cardinality exists

Martin Dvořák (Dec 31 2024 at 16:06):

import Mathlib.Tactic

example {α β : Type} [Fintype α] [Fintype β] {n : ℕ} (alfa : Fintype.card α < n) (beta : n ≤ Fintype.card α + Fintype.card β) :
    ∃ b : Finset β, Fintype.card (α ⊕ b) = n := by
  have beta' : n - Fintype.card α ≤ Fintype.card β
  · omega
  obtain ⟨s, hs⟩ : ∃ s : Finset β, s.card = n - Fintype.card α :=
    (Finset.exists_subset_card_eq beta').imp (by simp)
  use s
  rw [Fintype.card_sum, Fintype.card_coe, hs]
  omega

Martin Dvořák (Jan 01 2025 at 16:31):

Alright, I figured out all the hard parts of the proof. Tomorrow I'll finish it, and in a few days it will be mathlib-ready.

Martin Dvořák (Jan 03 2025 at 14:49):

The finished proof is here:
https://github.com/Ivan-Sergeyev/seymour/blob/a70acc25b670504bd5af9f4c5944a8ad4cba1884/Seymour/ForMathlib/MatrixTU.lean#L250

Before we get it into Mathlib, I need these prerequisites:
#19323
#20428
#20433

Yaël Dillies (Jan 03 2025 at 14:52):

If you write one PR number per message, the bot will add status emoji

Kevin Buzzard (Jan 03 2025 at 14:56):

For #19323 all the { x₁ : α × β₁ // f x₁.fst = Sum.inl x₁.snd }s look very weird to me. Do we have the term of type Set (β₁ ⊕ β₂) corresponding to β₁? (I guess it's just range inl?). Then these types (which just seem to be auxiliary constructions) could just be the preimage of those sets without having to take the product with beta_1.

Kevin Buzzard (Jan 03 2025 at 15:00):

def Function.decomposeSum' (f : α → β₁ ⊕ β₂) :
    α ≃ f ⁻¹' (Set.range Sum.inl) ⊕ f ⁻¹' (Set.range Sum.inr) := sorry

just looks a lot less convoluted (but I didn't think about applications).
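
For comparison with the subtypes used so far: membership in f ⁻¹' (Set.range Sum.inl) carries the same information as having a left component. A small sanity check:

import Mathlib

-- The preimage-of-range formulation is equivalent to the existence of a left component.
example {α β₁ β₂ : Type*} (f : α → β₁ ⊕ β₂) (a : α) :
    a ∈ f ⁻¹' (Set.range Sum.inl) ↔ ∃ b₁, f a = Sum.inl b₁ := by
  simp only [Set.mem_preimage, Set.mem_range]
  exact ⟨fun ⟨b₁, hb₁⟩ => ⟨b₁, hb₁.symm⟩, fun ⟨b₁, hb₁⟩ => ⟨b₁, hb₁.symm⟩⟩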

Martin Dvořák (Jan 13 2025 at 13:47):

I refactored it as @Kevin Buzzard suggested.
The entire proof in a single file is here:
https://github.com/madvorak/matrix-tu-experimental/blob/main/MatrixTuExperimental.lean


Last updated: May 02 2025 at 03:31 UTC