Zulip Chat Archive

Stream: Is there code for X?

Topic: Gaussian Elimination for Matrices


Vivek Rajesh Joshi (May 10 2024 at 03:22):

Has the Gaussian elimination algorithm been formalised in Mathlib4? Moogle isn't working for me, and I've gone through parts of Mathlib.LinearAlgebra.Matrix and haven't found any mention of it.

Vivek Rajesh Joshi (May 10 2024 at 04:04):

@Sébastien Gouëzel

Kim Morrison (May 10 2024 at 04:49):

Something a bit strange is going on here.

Somehow, Gaussian elimination doesn't seem to be particularly important, contrary to the impression that we give undergraduates. I've wondered for a long time why we haven't needed/wanted it.

There are a lot of different ways one might set up Gaussian elimination, in order to prove things about it. I would want to be pretty confident that whatever way someone chose was actually going to be useful in applications, rather than for its own sake. However, I still don't know what those applications would be. :-)

Kim Morrison (May 10 2024 at 04:49):

A long time ago I was trying to set up the theory of semisimple objects in excessive generality (I got derailed) but I did get as far as writing src#CategoryTheory.Biprod.gaussian, which is somewhat related.

Siddhartha Gadgil (May 10 2024 at 05:05):

Is there some proved algorithm for solving linear equations (or determining they have no solution)?

Kim Morrison (May 10 2024 at 05:24):

No, that's not really the sort of thing Mathlib goes for, historically!

Kim Morrison (May 10 2024 at 05:31):

(but also Gaussian elimination is not necessarily the algorithm of choice once you're interested in algorithms per se!)

Sébastien Gouëzel (May 10 2024 at 05:33):

Still, see docs#Matrix.Pivot.exists_list_transvec_mul_diagonal_mul_list_transvec, which proves that any matrix is a product of elementary matrices and diagonal matrices. This is a version of Gaussian elimination, although it is formulated as an existence statement, not an algorithm. But the proof is algorithmic, and done via Gaussian elimination.
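For reference, the statement has roughly the following shape (a paraphrase, so the exact hypotheses may differ slightly; the docs link above has the precise form):

import Mathlib

-- Paraphrase of docs#Matrix.Pivot.exists_list_transvec_mul_diagonal_mul_list_transvec:
-- every square matrix over a field factors as
-- (product of transvections) * (diagonal matrix) * (product of transvections).
example {n 𝕜 : Type} [Fintype n] [DecidableEq n] [Field 𝕜] (M : Matrix n n 𝕜) :
    ∃ (L L' : List (Matrix.TransvectionStruct n 𝕜)) (D : n → 𝕜),
      M = (L.map Matrix.TransvectionStruct.toMatrix).prod * Matrix.diagonal D *
        (L'.map Matrix.TransvectionStruct.toMatrix).prod :=
  Matrix.Pivot.exists_list_transvec_mul_diagonal_mul_list_transvec M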

Scott Carnahan (May 10 2024 at 06:12):

Do we have Bruhat decomposition for GLn?

Kim Morrison (May 10 2024 at 06:39):

Scott Carnahan said:

Do we have Bruhat decomposition for GLn?

Pretty sure we don't.

Ruben Van de Velde (May 10 2024 at 08:21):

Speaking of Gaussian elimination, we got it in #12014 (Mathlib/Tactic/Linarith/SimplexAlgorithm/Gauss.lean)

Shreyas Srinivas (May 10 2024 at 08:43):

The pivot step of Gaussian elimination is useful beyond just solving linear inequalities.

Jz Pan (May 10 2024 at 09:25):

I think Gaussian elimination is also used in the Hermite normal form and the Smith normal form. They are useful, at least in designing number theory algorithms. For example, for number fields there is a certain generalization of the Smith normal form by Henri Cohen.
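(To recall the shape: the Smith normal form writes a matrix as A = U * D * V with U and V invertible and D diagonal with d₁ ∣ d₂ ∣ ⋯; over a Euclidean domain such as ℤ it can be computed by Gaussian-style row and column operations.)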

Antoine Chambert-Loir (May 10 2024 at 22:53):

Kim Morrison said:

Scott Carnahan said:

Do we have Bruhat decomposition for GLn?

Pretty sure we don't.

Do we have Jordan-Hölder? If it is formulated well enough (given two finite chains of modules with simple successive subquotients, one gets a bijection between the subquotients, with explicit isomorphisms), then it implies the Bruhat decomposition for GL_n. (See my (Mostly) Commutative Algebra, remark 6.2.13 b), page 248.)

Yaël Dillies (May 11 2024 at 06:01):

Yes we have it. Grep for "Jordan-Hölder"

Antoine Chambert-Loir (May 11 2024 at 11:07):

Oh, that's great! Specifically, it is docs#CompositionSeries.jordan_holder.
Regarding the existence of composition series, is there anything like modules of finite length? Presumably, they could be accessed using the docs#JordanHolderLattice of submodules. (We need a notion of height/coheight in such a thing.)

Yaël Dillies (May 11 2024 at 11:41):

Is docs#height applicable here?

Yves Jäckle (May 12 2024 at 08:16):

Vivek Rajesh Joshi said:

Has the Gaussian elimination algorithm been formalised in Mathlib4? Moogle isn't working for me, and I've gone through parts of Mathlib.LinearAlgebra.Matrix and haven't found any mention of it.

I came across this repo once: https://github.com/jjcrawford/lean-gaussian-elimination
It's in Lean 3, and I can't judge whether it's complete.

Dean Young (May 22 2024 at 01:31):

Kim Morrison said:

Something a bit strange is going on here.

Somehow, Gaussian elimination doesn't seem to be particularly important, contrary to the impression that we give undergraduates. I've wondered for a long time why we haven't needed/wanted it.

There are a lot of different ways one might set up Gaussian elimination, in order to prove things about it. I would want to be pretty confident that whatever way someone chose was actually going to be useful in applications, rather than for its own sake. However, I still don't know what those applications would be. :-)

The factorization of a matrix into elementary matrices Eᵢ such that ||Eᵢ - Id|| < 1 is potentially the simplest way to prove that GLₙ(ℂ) is path-connected, from which we can define the logarithm given a path, up to homotopy.
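(Sketch: if ||E - Id|| < 1, then γ(t) = Id + t(E - Id) satisfies ||γ(t) - Id|| < 1 for all t ∈ [0, 1], so each γ(t) is invertible by the Neumann series, and γ is a path in GLₙ(ℂ) from Id to E; multiplying these paths factor by factor connects the whole product to Id.)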

I'd like to ask about SVD as well, since it is used a lot in applied math, where we can save memory by keeping only the singular vectors corresponding to the largest singular values and discarding the rest. Maybe people haven't been interested, since these benefits would only manifest in situations that use concrete numbers.
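(That truncation is A ≈ U_k Σ_k V_k†, keeping only the k largest singular values; by the Eckart-Young theorem this is the best rank-k approximation of A in both the operator and Frobenius norms.)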

Yaël Dillies (May 22 2024 at 01:55):

Dean Young said:

I'd like to ask about SVD

Are you aware of #6042?

Dean Young (May 22 2024 at 16:09):

@Yaël Dillies thanks for this, I hadn't seen it.

It also motivates the Gaussian elimination algorithm in how it uses the eigenvectors of A * A† and A† * A.
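(The connection: the right singular vectors of A are eigenvectors of A† * A, the left singular vectors are eigenvectors of A * A†, and the nonzero eigenvalues of both are the squares of the singular values.)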

Yaël Dillies (May 22 2024 at 18:12):

If you want to help, #6042 needs a rewrite to deemphasize matrices and emphasize linear maps instead

Dean Young (May 28 2024 at 07:50):

Antoine Chambert-Loir said:

Oh, that's great! Specifically, it is docs#CompositionSeries.jordan_holder.
Regarding the existence of composition series, is there anything like modules of finite length? Presumably, they could be accessed using the docs#JordanHolderLattice of submodules. (We need a notion of height/coheight in such a thing.)

@Brendan Seamas Murphy also has an interesting Jordan-Hölder formalization to check out:

https://github.com/Shamrock-Frost/jordan-holder

Dean Young (May 28 2024 at 07:55):

Yaël Dillies said:

If you want to help, #6042 needs a rewrite to deemphasize matrices and emphasize linear maps instead

I think I don't fully understand how the computational tasks relate to the proofs here yet. Does Lean simply never do the sorts of tasks that Matlab does?

Yaël Dillies (May 28 2024 at 08:00):

No, not really, although that could eventually change

Kim Morrison (May 28 2024 at 08:56):

could should!

Martin Dvořák (Feb 14 2025 at 13:23):

Kim Morrison said:

Something a bit strange is going on here.

Somehow, Gaussian elimination doesn't seem to be particularly important, contrary to the impression that we give undergraduates. I've wondered for a long time why we haven't needed/wanted it.

There are a lot of different ways one might set up Gaussian elimination, in order to prove things about it. I would want to be pretty confident that whatever way someone chose was actually going to be useful in applications, rather than for its own sake. However, I still don't know what those applications would be. :-)

Here is something I don't know how to prove without Gaussian elimination:

import Mathlib

lemma Matrix.linearIndependent_iff_exists_submatrix_unit {X Y F : Type}
    [DecidableEq X] [Fintype X] [Field F]
    (A : Matrix X Y F) :
    LinearIndependent F A ↔ ∃ f : X → Y, IsUnit (A.submatrix id f) := by
  sorry

It surely can be done. I just don't know how.

Junyan Xu (Feb 14 2025 at 14:02):

Mathlib knows that the row rank is equal to the column rank (docs#Matrix.rank_transpose), and you probably have to use that for the forward direction. For the backward direction, docs#LinearIndependent.of_comp should be enough (take the linear map to be LinearMap.funLeft F F f).
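A rough sketch of the backward direction along these lines (the remaining sorry is the standard fact that the rows of an invertible square matrix are linearly independent):

import Mathlib

-- Sketch only: `A.submatrix id f` keeps the columns of `A` picked out by `f`,
-- and composing with `LinearMap.funLeft F F f` realizes exactly this
-- restriction on each row.
example {X Y F : Type} [DecidableEq X] [Fintype X] [Field F]
    (A : Matrix X Y F) (f : X → Y) (hf : IsUnit (A.submatrix id f)) :
    LinearIndependent F A := by
  have h : LinearIndependent F (⇑(LinearMap.funLeft F F f) ∘ A) := by
    sorry -- rows of the invertible square matrix `A.submatrix id f`
  exact h.of_comp (LinearMap.funLeft F F f)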

Martin Dvořák (Feb 25 2025 at 09:28):

That was really good advice, thanks!
The proof is finished here
https://github.com/Ivan-Sergeyev/seymour/blob/2a05beaf5d044fb19e7473bb82ff347a8639e65e/Seymour/Matrix/LinearIndependence.lean#L42
modulo one lemma I don't know how to prove.

import Mathlib.Data.Matrix.Rank

variable {X Y F : Type} [DecidableEq X] [Fintype X] [DecidableEq Y] [Fintype Y] [Field F]

lemma Matrix.exists_submatrix_rank (A : Matrix X Y F) :
    ∃ r : Fin A.rank → X, (A.submatrix r id).rank = A.rank := by
  simp only [Matrix.rank_eq_finrank_span_row]
  sorry

Can anybody help please?

Eric Wieser (Feb 25 2025 at 09:52):

Ideally the answer would be "choose r as the obvious equivalence" and then "use a lemma about rank (Matrix.reindex _ _ _)".

Eric Wieser (Feb 25 2025 at 09:53):

If that second lemma doesn't exist, please make a PR with it!

Eric Wieser (Feb 25 2025 at 09:53):

@loogle Matrix.reindex, Matrix.rank

loogle (Feb 25 2025 at 09:53):

:search: Matrix.rank_reindex

Martin Dvořák (Feb 25 2025 at 10:19):

How do you construct the obvious equivalence?

Eric Wieser (Feb 25 2025 at 10:25):

I misread A.rank as Fintype.card X

Eric Wieser (Feb 25 2025 at 10:26):

Either way, I created #22277 which might help

Martin Dvořák (Feb 25 2025 at 10:30):

Thanks for the PR, but I already have that a few lines higher in the same file.

Rida Hamadani (Mar 12 2025 at 03:26):

Martin Dvořák said:

That was really good advice, thanks!
The proof is finished here
https://github.com/Ivan-Sergeyev/seymour/blob/2a05beaf5d044fb19e7473bb82ff347a8639e65e/Seymour/Matrix/LinearIndependence.lean#L42
modulo one lemma I don't know how to prove.

import Mathlib.Data.Matrix.Rank

variable {X Y F : Type} [DecidableEq X] [Fintype X] [DecidableEq Y] [Fintype Y] [Field F]

lemma Matrix.exists_submatrix_rank (A : Matrix X Y F) :
    ∃ r : Fin A.rank → X, (A.submatrix r id).rank = A.rank := by
  simp only [Matrix.rank_eq_finrank_span_row]
  sorry

Can anybody help please?

This follows from the fact that the rank of a matrix equals the maximal number of linearly independent rows. I really can't think of an informal proof that is not too annoying to formalize. The only proof I found depends on writing the matrix in row echelon form and using the facts that (1) the first rank A rows of a row echelon form are linearly independent, and (2) the number of linearly independent rows of A is preserved during Gaussian elimination.

Have any of these prerequisites been formalized already? Does anybody have an idea that makes this more amenable to formalization?

Peter Nelson (Mar 12 2025 at 13:15):

From a smaller group chat: the material in this file defines row and column bases (as sets), and proves the API that you need for this kind of thing. It shows that row bases exist, that they are equicardinal with size equal to the rank of A, and that the sets they index are minimal sets spanning the row space. Taking an arbitrary row basis and putting it into bijection with Fin A.rank will prove the above lemma.

The proofs don't use Gaussian elimination, or anything fiddly, really. #22226 (which I think is ready to go but has been sitting on the queue for a while) is the first step towards PRing them.


Last updated: May 02 2025 at 03:31 UTC