# Zulip Chat Archive

## Stream: Is there code for X?

### Topic: Gram-Schmidt orthogonalization

#### Apurva Nakade (Aug 15 2020 at 14:55):

Would anyone be interested in writing this together? Or is anyone working on this already?

#### Apurva Nakade (Aug 15 2020 at 14:56):

The process itself is algorithmic so I'm hoping the Lean code wouldn't be that hard.

#### Apurva Nakade (Aug 15 2020 at 14:57):

@Jalex Stark @Kenji Nakagawa @Michael Hahn ?

#### Kevin Buzzard (Aug 15 2020 at 15:03):

You should work in a branch of mathlib rather than in a separate repo

#### Kevin Shu (Aug 15 2020 at 16:10):

Hi! I would be very interested in working on this. I'm fairly new though, so I would probably ask a lot of silly questions.

#### Kevin Buzzard (Aug 15 2020 at 16:20):

Silly questions are welcome in the new members stream

#### Patrick Massot (Aug 15 2020 at 16:21):

But wait first for Alex to PR his work.

#### Michael Hahn (Aug 15 2020 at 16:36):

Yeah I think I would like to work on this as well :smile:

#### Apurva Nakade (Aug 15 2020 at 20:05):

Kevin Buzzard said:

> You should work in a branch of mathlib rather than in a separate repo

Will do

Patrick Massot said:

> But wait first for Alex to PR his work.

Yes, will wait and see how that one goes

#### Kenji Nakagawa (Aug 15 2020 at 22:31):

It should be fun to look at :smile:

#### Reid Barton (Aug 16 2020 at 13:49):

What statement exactly did you have in mind to formalize?

#### Apurva Nakade (Aug 16 2020 at 14:26):

just that given a basis of a finite dimensional vector space with an inner product, we can explicitly construct an orthonormal basis

#### Reid Barton (Aug 16 2020 at 14:31):

And what does "explicitly" mean?

#### Apurva Nakade (Aug 16 2020 at 14:49):

use this algorithm: https://en.wikipedia.org/wiki/Gram%E2%80%93Schmidt_process#The_Gram%E2%80%93Schmidt_process
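
For reference, the process at that link can be sketched in a few lines of plain Python (an illustration of the algorithm only, not mathlib code):

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent vectors."""
    basis = []
    for v in vectors:
        w = list(v)
        for e in basis:
            # subtract the projection of w onto the unit vector e
            c = dot(w, e)
            w = [wi - c * ei for wi, ei in zip(w, e)]
        norm = math.sqrt(dot(w, w))
        basis.append([wi / norm for wi in w])
    return basis

# Example: turn a basis of R^2 into an orthonormal one.
e1, e2 = gram_schmidt([[3.0, 1.0], [2.0, 2.0]])
```
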

#### Reid Barton (Aug 16 2020 at 15:05):

I guess what I mean to say is:

Giving an "explicit definition" is not practically useful if that definition is (hypothetically) 200 lines of tactic mode. At that point you might as well be proving a mere existence result.

So, there should be at least one application that you have in mind to verify that your formalization is indeed useful.

#### Apurva Nakade (Aug 16 2020 at 15:13):

Ah, I see. The original motivation was to go from a spectral theorem for linear operators to a spectral theorem for matrices. This requires you to orthogonalize the eigenbasis to get an orthogonal/unitary matrix.

#### Kevin Buzzard (Aug 16 2020 at 15:22):

@Apurva Nakade the point is that an algorithm is a definition and each definition comes with a cost, in some sense. It seems that thus far we have managed to do ok without needing this definition. Of course one can prove the theorem that if there exists a basis then there exists an ON basis, but this follows immediately from the statement that there exists an ON basis, which is a theorem (proved using Gram Schmidt). The question is why one needs the algorithm as a definition and not just the theorem. There might be a good answer to this.

#### Kevin Buzzard (Aug 16 2020 at 15:23):

But I don't really know exactly what these spectral theorems you're talking about are (the devil is in the detail)

#### Reid Barton (Aug 16 2020 at 15:27):

One application I know is that the multiplication map from O(n) x {upper triangular matrices with positive diagonal entries} to GL(n) is a homeomorphism (also a diffeomorphism, but I don't think we can even state that yet) and so in particular the inclusion O(n) -> GL(n) is a homotopy equivalence.
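
For concreteness: running Gram-Schmidt on the columns of an invertible matrix is one way to produce exactly such a factorization (the QR decomposition). A plain-Python sketch, illustration only, not mathlib code:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def qr_via_gram_schmidt(cols):
    """Gram-Schmidt on the columns of an invertible matrix: returns the
    orthonormal columns of Q and the upper-triangular R (with positive
    diagonal) such that the original matrix equals Q * R."""
    q, n = [], len(cols)
    r = [[0.0] * n for _ in range(n)]
    for j, v in enumerate(cols):
        w = list(v)
        for i, e in enumerate(q):
            r[i][j] = dot(w, e)            # projection coefficient
            w = [wk - r[i][j] * ek for wk, ek in zip(w, e)]
        r[j][j] = math.sqrt(dot(w, w))     # > 0 since columns are independent
        q.append([wk / r[j][j] for wk in w])
    return q, r

# Columns of the invertible matrix [[2, 1], [0, 3]]:
cols = [[2.0, 0.0], [1.0, 3.0]]
q, r = qr_via_gram_schmidt(cols)
```
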

#### Apurva Nakade (Aug 16 2020 at 15:38):

I'm sorry for not being clear. I just saw this on the list of undergrad_todos and thought that this does not exist because it is too easy.

The theorem I have in mind is that if `M` is a normal (symmetric) matrix then there exists a unitary (orthogonal) matrix `U` such that `U^* M U` is diagonal. If we have a spectral theorem for linear operators, i.e. the linear operator for `M` has an eigenbasis, then we can get `U` from the eigenbasis by doing GS orthogonalization.
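
A concrete instance of that statement, with a hand-picked symmetric matrix (plain-Python illustration, not mathlib code): here the eigenvectors are already orthogonal, so Gram-Schmidt reduces to normalization, and `U^T M U` comes out diagonal.

```python
import math

M = [[2.0, 1.0], [1.0, 2.0]]          # symmetric; eigenvalues 1 and 3
v1, v2 = [1.0, -1.0], [1.0, 1.0]      # eigenvectors for 1 and 3

s = math.sqrt(2.0)                     # normalize the (orthogonal) eigenbasis
U = [[v1[0] / s, v2[0] / s],
     [v1[1] / s, v2[1] / s]]           # columns form an orthonormal basis

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

Ut = [[U[j][i] for j in range(2)] for i in range(2)]
D = matmul(matmul(Ut, M), U)           # should be diag(1, 3)
```
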

#### Apurva Nakade (Aug 16 2020 at 15:38):

Reid Barton said:

> One application I know is that the multiplication map from O(n) x {upper triangular matrices with positive diagonal entries} to GL(n) is a homeomorphism (also a diffeomorphism, but I don't think we can even state that yet) and so in particular the inclusion O(n) -> GL(n) is a homotopy equivalence.

This is also great! A direct application of GS.

#### Apurva Nakade (Aug 16 2020 at 15:44):

I will dig more and come up with a list of useful math theorems that rely on GS.

I use linear algebra so much that I never question as to where its theorems are used :P I was also going to ask about SVD but I think I'll hold on to that for now :)

#### Patrick Massot (Aug 16 2020 at 15:57):

This homeomorphism doesn't need GS. I guess people interested in algorithms can simply program GS and prove its properties, but this is independent from the theoretical applications.

#### Mario Carneiro (Aug 16 2020 at 16:20):

Seconded. Gram-Schmidt as understood by graphics programmers bears only a passing resemblance to Gram-Schmidt as understood in linear algebra (especially once it goes infinite dimensional)

#### Mario Carneiro (Aug 16 2020 at 16:22):

Even Wikipedia acknowledges that the mathematician's G-S is numerically unstable and proposes an alternative algorithm that is more useful for algorithmic implementation
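
The instability can be seen in a few lines: on a classic nearly-dependent example, classical Gram-Schmidt (coefficients taken against the original vector) loses orthogonality almost completely, while the modified variant (coefficients taken against the running remainder) does not. Plain-Python sketch, illustration only:

```python
import math

EPS = 1e-8  # chosen so that 1 + EPS**2 rounds to 1.0 in double precision

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def normalize(v):
    n = math.sqrt(dot(v, v))
    return [x / n for x in v]

def classical_gs(vs):
    q = []
    for v in vs:
        # all coefficients computed against the ORIGINAL vector v
        w = [x - sum(dot(v, e) * e[i] for e in q) for i, x in enumerate(v)]
        q.append(normalize(w))
    return q

def modified_gs(vs):
    q = []
    for v in vs:
        w = list(v)
        for e in q:
            # coefficient computed against the RUNNING remainder w
            c = dot(w, e)
            w = [wi - c * ei for wi, ei in zip(w, e)]
        q.append(normalize(w))
    return q

# Three nearly parallel vectors:
vs = [[1.0, EPS, 0.0, 0.0], [1.0, 0.0, EPS, 0.0], [1.0, 0.0, 0.0, EPS]]
c = classical_gs(vs)   # q2 and q3 end up far from orthogonal
m = modified_gs(vs)    # q2 and q3 stay orthogonal
```
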

#### Kevin Buzzard (Aug 16 2020 at 16:45):

Do you want an algorithm which produces U or just a theorem saying that U exists? I'd be happy with the latter

#### Apurva Nakade (Aug 16 2020 at 17:20):

Just a theorem that says U exists. In fact, as Reid put it, all I need is a continuous map from GL(n) to O(n), and in the complex case to U(n). After thinking about this for a while, I am now realizing that for me this is what GS is: its utility is only in going from an arbitrary basis to an orthogonal basis in a continuous way.

#### Jalex Stark (Aug 16 2020 at 19:14):

my model of why apurva started talking about algorithms is just to say "numerical linear algebra folks have implemented algorithmic corollaries of this theorem a bunch of times, and the proof is mathematically easy, so it should be relatively easy to prove in Lean". Then the consensus response to that idea was something like "yes, it should be easy, and the first part of your argument is not especially relevant".

Last updated: May 16 2021 at 05:21 UTC