Zulip Chat Archive
Stream: new members
Topic: Linear algebra problem
Radu Zevri (Apr 26 2022 at 14:51):
Hello! My name is Radu Zevri and I am an undergraduate student working on my 4th-year project. I started learning Lean a few weeks ago and I am struggling to understand it. I am currently trying to prove that positive definite Hermitian matrices have positive eigenvalues, and I am stuck. Apologies if it's a silly question, but how can I multiply both sides of vmul by the conjugate transpose of v? (I have vmul : M.mul_vec v = e • v and want to multiply by vec_conj vᵀ.)
theorem pos_def_pos_eigenvalues (M : matrix n n ℂ) (e : ℂ) (h : M.is_pos_def) (ev : has_eigenvalue M e) : e > 0 :=
begin
rw is_pos_def at h,
rw has_eigenvalue at ev,
cases ev with v ep,
rw has_eigenpair at ep,
cases h with herm mul,
rw is_Hermitian at herm,
cases ep with vnz vmul,
have h : (0 < dot_product v (M.mul_vec v)):=
begin
apply mul,
exact vnz,
end,
rw vmul at h,
rw dot_product_smul at h,
specialize mul v,
specialize mul vnz,
rw vmul at mul,
sorry,
end
Thank you very much for helping me out. I am looking forward to hearing back from you! Please let me know if you have any suggestions.
Kevin Buzzard (Apr 26 2022 at 14:54):
Can you edit your post to make it a #mwe (note: this is a clickable link)? Adding all the imports, variables etc you used will make it easier for others to answer your question.
Radu Zevri (Apr 26 2022 at 14:56):
Yes, I will do so shortly. Thank you very much for your advice.
Kevin Buzzard (Apr 26 2022 at 14:57):
Thanks! It's much easier to answer questions if I can easily run your code locally.
Radu Zevri (Apr 26 2022 at 14:59):
import linear_algebra.matrix.symmetric
import linear_algebra.matrix
import linear_algebra.matrix.trace
import linear_algebra.matrix.to_lin
import linear_algebra.matrix.nonsingular_inverse
import data.complex.basic
import linear_algebra.eigenspace
variables {α n m R : Type*} [fintype n] [fintype m]
namespace matrix
open_locale matrix
def is_Hermitian [has_star α] (A : matrix n n α) : Prop := Aᴴ = A
section definite
open_locale complex_order
def is_pos_def (M : matrix n n ℂ):= M.is_Hermitian ∧ ∀ v : n → ℂ, v ≠ 0 → 0 < dot_product v (M.mul_vec v)
def has_eigenpair (M : matrix n n ℂ) (e : ℂ) (x : n → ℂ ) := x ≠ 0 ∧ mul_vec M x = e • x
def has_eigenvector (M : matrix n n ℂ) (x : n → ℂ ) := ∃ e : ℂ , has_eigenpair M e x
def has_eigenvalue (M : matrix n n ℂ) (e : ℂ) := ∃ x : n → ℂ , has_eigenpair M e x
theorem pos_def_pos_eigenvalues (M : matrix n n ℂ) (e : ℂ) (h : M.is_pos_def) (ev : has_eigenvalue M e) : e > 0 :=
begin
rw is_pos_def at h,
rw has_eigenvalue at ev,
cases ev with v ep,
rw has_eigenpair at ep,
cases h with herm mul,
rw is_Hermitian at herm,
cases ep with vnz vmul,
have h : (0 < dot_product v (M.mul_vec v)):=
begin
apply mul,
exact vnz,
end,
rw vmul at h,
rw dot_product_smul at h,
specialize mul v,
specialize mul vnz,
rw vmul at mul,
sorry,
-- vmul : M.mul_vec v = e • v
end
end definite
end matrix
I believe this should work as a mwe
Kevin Buzzard (Apr 26 2022 at 15:08):
I don't know my way around this part of the library but searching for "dot product" in the docs (because you just want to take the dot product, right?) gives me matrix.dot_product. I don't know whether this is the idiomatic way to take the complex conjugate of a vector, but apply_fun matrix.dot_product (conj ∘ v) at vmul does what you want mathematically, although you have to open_locale complex_conjugate to make conj work.
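For reference, here is a minimal sketch of how that suggestion plugs into the proof state from the original post (untested, and assuming the imports and variables from the #mwe above):
open_locale complex_conjugate
example (M : matrix n n ℂ) (v : n → ℂ) (e : ℂ) (vmul : M.mul_vec v = e • v) :
  matrix.dot_product (conj ∘ v) (M.mul_vec v) = matrix.dot_product (conj ∘ v) (e • v) :=
begin
  -- `apply_fun f at h` turns `h : a = b` into `h : f a = f b`
  apply_fun matrix.dot_product (conj ∘ v) at vmul,
  exact vmul,
end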
Radu Zevri (Apr 26 2022 at 15:12):
Thank you very much! I will try to work my way from here. Is it ok if I ask further questions if I get stuck?
Kevin Buzzard (Apr 26 2022 at 15:16):
sure -- that's exactly the point of the #new members stream. If it's about the same code you can keep asking questions in this thread. Hopefully someone will show up who actually knows this part of the library.
Eric Wieser (Apr 26 2022 at 15:29):
The idiomatic spelling for the complex conjugate of a vector is star v, I think.
Radu Zevri (Apr 26 2022 at 15:47):
Yes, that's right! Thank you!
Radu Zevri (Apr 26 2022 at 16:12):
Is there any code for the norm/magnitude of a vector? I've tried searching for vec norm and vec mag, but I haven't found anything yet. If there is nothing written yet, I could start.
Eric Wieser (Apr 26 2022 at 16:18):
Yes, you can use docs#norm on (pi_Lp.equiv _ 2).symm v to get the L2-norm (aka magnitude) of a vector.
Eric Wieser (Apr 26 2022 at 16:19):
Note that if you just use ∥v∥ you'll get the largest absolute value of the coefficients, which probably isn't what you want.
Eric Wieser (Apr 26 2022 at 16:20):
(although until #13569 gets merged we don't have any lemmas about docs#pi_Lp.equiv)
Eric Wieser (Apr 26 2022 at 16:21):
I don't think we have a name for ∥v∥⁻¹ • v anywhere.
Radu Zevri (Apr 26 2022 at 16:40):
Apologies, but I can't make it work.
import linear_algebra.matrix.symmetric
import linear_algebra.matrix
import linear_algebra.matrix.trace
import linear_algebra.matrix.to_lin
import linear_algebra.matrix.nonsingular_inverse
import data.complex.basic
import linear_algebra.eigenspace
import tactic.find
import analysis.normed.group.basic
import analysis.inner_product_space.pi_L2
run_cmd tactic.skip
--#find has_norm,
variables {α n m R : Type*} [fintype n] [fintype m]
namespace matrix
open_locale matrix
def is_Hermitian [has_star α] (A : matrix n n α) : Prop := Aᴴ = A
section definite
open_locale complex_order complex_conjugate
def is_pos_def (M : matrix n n ℂ):= M.is_Hermitian ∧ ∀ v : n → ℂ, v ≠ 0 → 0 < dot_product v (M.mul_vec v)
def has_eigenpair (M : matrix n n ℂ) (e : ℂ) (x : n → ℂ ) := x ≠ 0 ∧ mul_vec M x = e • x
def has_eigenvector (M : matrix n n ℂ) (x : n → ℂ ) := ∃ e : ℂ , has_eigenpair M e x
def has_eigenvalue (M : matrix n n ℂ) (e : ℂ) := ∃ x : n → ℂ , has_eigenpair M e x
def vector_conj (x : n → ℂ) : n → ℂ := λ i : n, conj (x i)
def vector_magnitude (v : n → ℂ) := has_norm((pi_Lp.equiv _ 2).symm v)
theorem pos_def_pos_eigenvalues (M : matrix n n ℂ) (e : ℂ) (h : M.is_pos_def) (ev : has_eigenvalue M e) : e > 0 :=
begin
rw is_pos_def at h,
rw has_eigenvalue at ev,
cases ev with v ep,
rw has_eigenpair at ep,
cases h with herm mul,
rw is_Hermitian at herm,
cases ep with vnz vmul,
have h : (0 < dot_product v (M.mul_vec v)):=
begin
apply mul,
exact vnz,
end,
rw vmul at h,
rw dot_product_smul at h,
specialize mul v,
specialize mul vnz,
rw vmul at mul,
have h : dot_product (star v) (e • v) = e * (dot_product (star v) v) :=
begin
simp,
end,
have h_1 : dot_product (star v) (e • v) = e *
-- vmul : M.mul_vec v = e • v
end
end definite
end matrix
Would it be acceptable to define the magnitude of a vector directly, and use that to reason about it?
Eric Wieser (Apr 26 2022 at 16:41):
You wrote has_norm where I said has_norm.norm.
Eric Wieser (Apr 26 2022 at 16:42):
def vector_magnitude (v : n → ℂ) := ∥(pi_Lp.equiv _ 2).symm v∥
should work (∥x∥ is notation for has_norm.norm x).
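To make the notation remark concrete, a tiny sanity check (any type with a has_norm instance works; ℝ is used here only as an example):
example (x : ℝ) : ∥x∥ = has_norm.norm x := rfl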
Radu Zevri (Apr 26 2022 at 16:50):
My bad, I've replaced it now. There still seems to be something missing, and I can't figure out what that is.
import linear_algebra.matrix.symmetric
import linear_algebra.matrix
import linear_algebra.matrix.trace
import linear_algebra.matrix.to_lin
import linear_algebra.matrix.nonsingular_inverse
import data.complex.basic
import linear_algebra.eigenspace
import tactic.find
import analysis.normed.group.basic
import analysis.inner_product_space.pi_L2
run_cmd tactic.skip
--#find has_norm,
variables {α n m R : Type*} [fintype n] [fintype m]
namespace matrix
open_locale matrix
def is_Hermitian [has_star α] (A : matrix n n α) : Prop := Aᴴ = A
section definite
open_locale complex_order complex_conjugate
def is_pos_def (M : matrix n n ℂ):= M.is_Hermitian ∧ ∀ v : n → ℂ, v ≠ 0 → 0 < dot_product v (M.mul_vec v)
def has_eigenpair (M : matrix n n ℂ) (e : ℂ) (x : n → ℂ ) := x ≠ 0 ∧ mul_vec M x = e • x
def has_eigenvector (M : matrix n n ℂ) (x : n → ℂ ) := ∃ e : ℂ , has_eigenpair M e x
def has_eigenvalue (M : matrix n n ℂ) (e : ℂ) := ∃ x : n → ℂ , has_eigenpair M e x
def vector_conj (x : n → ℂ) : n → ℂ := λ i : n, conj (x i)
def vector_magnitude (v : n → ℂ) := ∥(pi_Lp.equiv _ 2).symm v∥
theorem pos_def_pos_eigenvalues (M : matrix n n ℂ) (e : ℂ) (h : M.is_pos_def) (ev : has_eigenvalue M e) : e > 0 :=
begin
rw is_pos_def at h,
rw has_eigenvalue at ev,
cases ev with v ep,
rw has_eigenpair at ep,
cases h with herm mul,
rw is_Hermitian at herm,
cases ep with vnz vmul,
have h : (0 < dot_product v (M.mul_vec v)):=
begin
apply mul,
exact vnz,
end,
rw vmul at h,
rw dot_product_smul at h,
specialize mul v,
specialize mul vnz,
rw vmul at mul,
have h : dot_product (star v) (e • v) = e * (dot_product (star v) v) :=
begin
simp,
end,
have h_1 : dot_product (star v) (e • v) = e *
-- vmul : M.mul_vec v = e • v
end
end definite
end matrix
Eric Wieser (Apr 26 2022 at 16:55):
Whoops, I got the _ 2 arguments backwards.
Eric Wieser (Apr 26 2022 at 16:55):
noncomputable def vector_magnitude (v : n → ℂ) := ∥(pi_Lp.equiv 2 _).symm v∥
Radu Zevri (Apr 26 2022 at 21:24):
Thank you, it works now!
Radu Zevri (Apr 27 2022 at 13:56):
import algebra.big_operators.pi
import algebra.module.pi
import algebra.module.linear_map
import algebra.big_operators.ring
import algebra.star.basic
--import data.equiv.ring
import data.fintype.card
import data.matrix.dmatrix
import linear_algebra.matrix.symmetric
import linear_algebra.matrix
import linear_algebra.matrix.trace
import linear_algebra.matrix.to_lin
import linear_algebra.matrix.nonsingular_inverse
import data.complex.basic
import linear_algebra.eigenspace
import tactic.find
import analysis.normed.group.basic
import analysis.inner_product_space.pi_L2
run_cmd tactic.skip
--#find conj,
set_option trace.simplify true
set_option trace.simplify.rewrite_failure false
variables {α n m R : Type*} [fintype n] [fintype m]
namespace matrix
open_locale matrix
def is_Hermitian [has_star α] (A : matrix n n α) : Prop := Aᴴ = A
section definite
open_locale complex_order complex_conjugate
def is_pos_def (M : matrix n n ℂ):= M.is_Hermitian ∧ ∀ v : n → ℂ, v ≠ 0 → 0 < dot_product v (M.mul_vec v)
def has_eigenpair (M : matrix n n ℂ) (e : ℂ) (x : n → ℂ ) := x ≠ 0 ∧ mul_vec M x = e • x
def has_eigenvector (M : matrix n n ℂ) (x : n → ℂ ) := ∃ e : ℂ , has_eigenpair M e x
def has_eigenvalue (M : matrix n n ℂ) (e : ℂ) := ∃ x : n → ℂ , has_eigenpair M e x
def vector_conj (x : n → ℂ) : n → ℂ := λ i : n, conj (x i)
noncomputable def vector_magnitude (v : n → ℂ) : ℂ := --has_norm.norm((pi_Lp.equiv 2 _).symm v)
dot_product (star v) v
theorem vector_magnitude_geq_zero (v : n → ℂ) : vector_magnitude v ≥ 0 :=
begin
rw vector_magnitude,
simp,
rw dot_product,
simp,
have h : ∀ (x : n), conj (v x) * (v x) ≥ 0 :=
begin
intro x,
have h_im : (conj (v x) * (v x)).im = 0 :=
begin
rw complex.mul_im,
simp,
nlinarith,
end,
have h_re : (conj (v x) * (v x)).re ≥ 0 :=
begin
rw complex.mul_re,
simp,
nlinarith,
end,
sorry,
end,
apply finset.sum_nonneg,
intro x,
intro _,
specialize h x,
simp at h,
exact h,
end
end definite
end matrix
I am currently trying to prove that the magnitude of a complex vector is greater than or equal to 0. I have proven that its real part is geq 0, and that its imaginary part is 0, but I am unsure how to proceed.
Eric Wieser (Apr 27 2022 at 14:25):
Note that's probably not the definition of vector_magnitude you want (aside from really being the squared magnitude), since it has values in ℂ, not ℝ.
Eric Wieser (Apr 27 2022 at 14:25):
I would imagine (dot_product (star v) v).re would be a better choice.
Eric Wieser (Apr 27 2022 at 14:26):
(avoiding pi_Lp is fine for simplicity / pedagogy)
Eric Wieser (Apr 27 2022 at 14:27):
But to answer your sorry, rw [ge, complex.le_def] makes good progress.
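For orientation, a stripped-down sketch of the step that rewrite enables (untested; complex.le_def is assumed to unfold ≤ into a real-part inequality plus an imaginary-part equality, which is exactly what the h_re / h_im facts above provide):
example (z : ℂ) (h_re : (0 : ℂ).re ≤ z.re) (h_im : (0 : ℂ).im = z.im) : z ≥ 0 :=
begin
  rw [ge, complex.le_def],
  exact ⟨h_re, h_im⟩,
end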
Eric Wieser (Apr 27 2022 at 14:28):
Note that mathlib has almost no lemmas about ≥ (docs#ge), because it expects you to use ≤ instead and flip your statements around.
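A tiny illustration of that flipping, using ge_iff_le (the lemma that mediates between the two spellings):
example (a b : ℝ) (h : a ≤ b) : b ≥ a :=
by rwa ge_iff_le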
Radu Zevri (Apr 27 2022 at 14:48):
Thank you! I've now proven that the magnitude of a vector is geq 0. (I've left it as a complex number for now, as turning it into a real caused issues elsewhere in the code. I might be able to do something with coercions eventually.) I know it's not much, but it's a start :)
import algebra.big_operators.pi
import algebra.module.pi
import algebra.module.linear_map
import algebra.big_operators.ring
import algebra.star.basic
--import data.equiv.ring
import data.fintype.card
import data.matrix.dmatrix
import linear_algebra.matrix.symmetric
import linear_algebra.matrix
import linear_algebra.matrix.trace
import linear_algebra.matrix.to_lin
import linear_algebra.matrix.nonsingular_inverse
import data.complex.basic
import linear_algebra.eigenspace
import tactic.find
import analysis.normed.group.basic
import analysis.inner_product_space.pi_L2
run_cmd tactic.skip
--#find conj,
set_option trace.simplify true
set_option trace.simplify.rewrite_failure false
variables {α n m R : Type*} [fintype n] [fintype m]
namespace matrix
open_locale matrix
def is_Hermitian [has_star α] (A : matrix n n α) : Prop := Aᴴ = A
section definite
open_locale complex_order complex_conjugate
def is_pos_def (M : matrix n n ℂ):= M.is_Hermitian ∧ ∀ v : n → ℂ, v ≠ 0 → 0 < dot_product v (M.mul_vec v)
def has_eigenpair (M : matrix n n ℂ) (e : ℂ) (x : n → ℂ ) := x ≠ 0 ∧ mul_vec M x = e • x
def has_eigenvector (M : matrix n n ℂ) (x : n → ℂ ) := ∃ e : ℂ , has_eigenpair M e x
def has_eigenvalue (M : matrix n n ℂ) (e : ℂ) := ∃ x : n → ℂ , has_eigenpair M e x
def vector_conj (x : n → ℂ) : n → ℂ := λ i : n, conj (x i)
noncomputable def vector_magnitude (v : n → ℂ) : ℂ := --has_norm.norm((pi_Lp.equiv 2 _).symm v)
(dot_product (vector_conj v) v)
theorem vector_magnitude_geq_zero (v : n → ℂ) : vector_magnitude v ≥ 0 :=
begin
rw vector_magnitude,
simp,
rw dot_product,
--simp,
have h : ∀ (x : n), conj (v x) * (v x) ≥ 0 :=
begin
intro x,
have h_im : (conj (v x) * (v x)).im = 0 :=
begin
rw complex.mul_im,
simp,
nlinarith,
end,
have h_re : (conj (v x) * (v x)).re ≥ 0 :=
begin
rw complex.mul_re,
simp,
nlinarith,
end,
rw [ge,complex.le_def],
split,
simp,
nlinarith,
simp,
nlinarith,
end,
apply finset.sum_nonneg,
intro x,
intro _,
specialize h x,
simp at h,
exact h,
end
end definite
end matrix
Kevin Buzzard (Apr 27 2022 at 17:03):
Nice! Experimenting with simple-looking questions like this is a great way to learn Lean.
Kevin Buzzard (Apr 27 2022 at 17:04):
If you're interested in what the "black box" simp tactic is doing, you can change it to squeeze_simp and see the lemmas it's actually applying. Sometimes you can shorten your proof this way.
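A minimal illustration of that workflow (the exact lemma list reported can vary between mathlib versions):
example (a : ℕ) : a + 0 = a :=
begin
  -- prints something like "Try this: simp only [add_zero]",
  -- which can then be pasted in to replace the opaque `simp` call
  squeeze_simp,
end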
Radu Zevri (Apr 29 2022 at 13:07):
Thank you for this advice! I was looking for a way to get rid of some simps; I'm sure this will help!
I've been trying to find some equivalence of the form
(λ x, f x) = f
so that I can use it in a goal of the form
... (λ x, f x) i ... = ...
So far, something that sometimes works is change, but that requires me to copy the whole goal, which sometimes leads to errors. Is there something more applicable that could be used?
Kevin Buzzard (Apr 29 2022 at 13:12):
dsimp only should do it.
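A minimal example of that suggestion (the beta-reduction happens even with an empty lemma list):
example (f : ℕ → ℕ) (i : ℕ) (h : f i = 0) : (λ x, f x) i = 0 :=
begin
  dsimp only,  -- beta-reduces the goal to `f i = 0`
  exact h,
end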
Radu Zevri (Apr 29 2022 at 15:11):
Greetings! Is there any theorem that says I can take the real part in and out of a finset sum? I've assumed that (and the same thing for the imaginary part) to show that one can take the conjugate of a sum or the conjugate of each element of a sum, but I'm not sure how to start here. I plan to use that result to show some equalities related to the conjugate of products of matrices and vectors. Thank you in advance!
import algebra.big_operators.pi
import algebra.module.pi
import algebra.module.linear_map
import algebra.big_operators.ring
import algebra.star.basic
--import data.equiv.ring
import data.fintype.card
import data.matrix.dmatrix
import linear_algebra.matrix.symmetric
import linear_algebra.matrix
import linear_algebra.matrix.trace
import linear_algebra.matrix.to_lin
import linear_algebra.matrix.nonsingular_inverse
import data.complex.basic
import linear_algebra.eigenspace
import tactic.find
import analysis.normed.group.basic
import analysis.inner_product_space.pi_L2
run_cmd tactic.skip
--#find conj,
set_option trace.simplify true
set_option trace.simplify.rewrite_failure false
universe u
variables {α n m R : Type*} [fintype n] [fintype m]
namespace matrix
open_locale matrix
section definite
open_locale complex_order complex_conjugate
lemma sum_re {v : n → ℂ} {f : ℂ → ℂ }: finset.univ.sum (λ i, (f (v i)).re) = (finset.univ.sum (λ i, f(v i))).re:=
begin
sorry,
end
lemma sum_im {v : n → ℂ} {f : ℂ → ℂ }: finset.univ.sum (λ i, (f (v i)).im) = (finset.univ.sum (λ i, f(v i))).im:=
begin
sorry,
end
lemma sum_conj {v : n → ℂ } {f : ℂ → ℂ }: conj (finset.univ.sum (λ i, f (v i))) = finset.univ.sum (λ i, conj (f (v i))) :=
begin
ext,
rw complex.conj_re,
rw ← sum_re,
change finset.univ.sum (λ (i : n), (conj (f (v i))).re) = (finset.univ.sum (λ (i : n), conj (f (v i)))).re,
rw sum_re,
rw ← sum_im,
simp only [complex.conj_im, finset.sum_neg_distrib, neg_inj],
rw sum_im,
end
end definite
end matrix
Ruben Van de Velde (Apr 29 2022 at 15:14):
lemma sum_re {v : n → ℂ} {f : ℂ → ℂ }: finset.univ.sum (λ i, (f (v i)).re) = (finset.univ.sum (λ i, f(v i))).re:=
begin
rw ←complex.re_lm_coe,
rw linear_map.map_sum,
end
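For completeness, the analogous imaginary-part lemma follows the same two rewrites (this is the form that ends up in the code later in this thread):
lemma sum_im {v : n → ℂ} {f : ℂ → ℂ} : finset.univ.sum (λ i, (f (v i)).im) = (finset.univ.sum (λ i, f (v i))).im :=
begin
  rw ← complex.im_lm_coe,
  rw linear_map.map_sum,
end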
Radu Zevri (Apr 29 2022 at 15:19):
Thank you very much!
Eric Wieser (Apr 29 2022 at 15:19):
I feel like that should probably be added as a lemma in mathlib, although probably as complex.re_sum stated in reverse.
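A sketch of that reversed form, stated over a general finset (the name is speculative, not an existing mathlib declaration; the proof is the same two rewrites as above):
lemma re_sum {s : finset n} (f : n → ℂ) : (s.sum f).re = s.sum (λ i, (f i).re) :=
begin
  rw ← complex.re_lm_coe,
  rw linear_map.map_sum,
end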
Eric Wieser (Apr 29 2022 at 15:21):
It came up before in this thread
Radu Zevri (Apr 30 2022 at 17:15):
I've finally proven a "real" theorem! Namely, I've proven that the eigenvalues of a Hermitian matrix are all real. The next thing on my list would be to prove that a positive definite matrix has positive eigenvalues. Thank you for helping me out, I feel like I'm starting to understand Lean.
import algebra.big_operators.pi
import algebra.module.pi
import algebra.module.linear_map
import algebra.big_operators.ring
import algebra.star.basic
--import data.equiv.ring
import data.fintype.card
import data.matrix.dmatrix
import linear_algebra.matrix.symmetric
import linear_algebra.matrix
import linear_algebra.matrix.trace
import linear_algebra.matrix.to_lin
import linear_algebra.matrix.nonsingular_inverse
import data.complex.basic
import linear_algebra.eigenspace
import tactic.find
import analysis.normed.group.basic
import analysis.inner_product_space.pi_L2
run_cmd tactic.skip
--#find conj,
set_option trace.simplify true
set_option trace.simplify.rewrite_failure false
universe u
variables {α n m R : Type*} [fintype n] [fintype m]
namespace matrix
open_locale matrix
section definite
open_locale complex_order complex_conjugate
def vector_conj (v : n → ℂ) : n → ℂ := λ i : n, conj (v i)
def matrix_conj (M : matrix n n ℂ) : matrix n n ℂ := λ i : n, vector_conj (M i)
def is_Hermitian (M : matrix n n ℂ) : Prop := matrix_conj Mᵀ = M
def is_pos_def (M : matrix n n ℂ):= M.is_Hermitian ∧ ∀ v : n → ℂ, v ≠ 0 → 0 < dot_product v (M.mul_vec v)
def has_eigenpair (M : matrix n n ℂ) (e : ℂ) (x : n → ℂ ) := x ≠ 0 ∧ mul_vec M x = e • x
def has_eigenvector (M : matrix n n ℂ) (x : n → ℂ ) := ∃ e : ℂ , has_eigenpair M e x
def has_eigenvalue (M : matrix n n ℂ) (e : ℂ) := ∃ x : n → ℂ , has_eigenpair M e x
noncomputable def vector_magnitude (v : n → ℂ) : ℂ := --has_norm.norm((pi_Lp.equiv 2 _).symm v)
(dot_product (vector_conj v) v)
theorem vector_magnitude_geq_zero (v : n → ℂ) : vector_magnitude v ≥ 0 :=
begin
rw vector_magnitude,
rw dot_product,
have h : ∀ (x : n), conj (v x) * (v x) ≥ 0 :=
begin
intro x,
have h_im : (conj (v x) * (v x)).im = (0:ℂ).im :=
begin
rw complex.mul_im,
rw complex.conj_re,
rw complex.conj_im,
rw complex.zero_im,
nlinarith,
end,
have h_re : (conj (v x) * (v x)).re ≥ (0:ℂ).re :=
begin
rwa complex.mul_re,
rwa complex.conj_re,
rwa complex.conj_im,
rw neg_mul,
rw sub_neg_eq_add,
rw complex.zero_re,
nlinarith,
end,
rw [ge,complex.le_def],
split,
assumption,
rw eq_comm,
assumption,
end,
apply finset.sum_nonneg,
intro x,
intro _,
specialize h x,
rw ← ge,
exact h,
end
lemma vector_conj_element (v: n → ℂ) (i : n) : (vector_conj v) i = conj (v i) :=
begin
rw vector_conj,
end
lemma sum_re {v : n → ℂ} {f : ℂ → ℂ }: finset.univ.sum (λ i, (f (v i)).re) = (finset.univ.sum (λ i, f(v i))).re:=
begin
rw ← complex.re_lm_coe,
rw linear_map.map_sum,
end
lemma sum_im {v : n → ℂ} {f : ℂ → ℂ }: finset.univ.sum (λ i, (f (v i)).im) = (finset.univ.sum (λ i, f(v i))).im:=
begin
rw ← complex.im_lm_coe,
rw linear_map.map_sum,
end
lemma sum_conj {v : n → ℂ } {f : ℂ → ℂ }: conj (finset.univ.sum (λ i, f (v i))) = finset.univ.sum (λ i, conj (f (v i))) :=
begin
ext,
rw complex.conj_re,
rw ← sum_re,
change finset.univ.sum (λ (i : n), (conj (f (v i))).re) = (finset.univ.sum (λ (i : n), conj (f (v i)))).re,
rw sum_re,
rw ← sum_im,
simp only [complex.conj_im, finset.sum_neg_distrib, neg_inj],
rw sum_im,
end
lemma conj_sum {v : n → ℂ} : conj (finset.univ.sum (λ i, (v i))) = finset.univ.sum (λ i, conj ((v i))) :=
begin
have h : conj (finset.univ.sum (λ i, id (v i))) = finset.univ.sum (λ i, conj (id (v i))),
exact sum_conj,
dsimp at h,
exact h,
end
lemma mul_conj (a : ℂ ) (b : ℂ ) : conj (a * b) = (conj a) * (conj b) :=
begin
ext;
simp only [complex.mul_re, complex.conj_re, complex.conj_im, mul_neg, neg_mul, neg_neg],
simp only [complex.mul_im, neg_add_rev, complex.conj_re, complex.conj_im, mul_neg, neg_mul],
rw add_comm,
end
lemma dot_product_conj (v : n → ℂ) (w : n → ℂ) : conj (dot_product v w) = dot_product (vector_conj v) (vector_conj w) :=
begin
repeat {rw dot_product},
repeat {rw vector_conj},
dsimp only,
rw conj_sum,
apply finset.sum_congr,
refl,
intros _ _,
rw mul_conj,
end
lemma vec_mul_conj (v : n → ℂ) (M : matrix n n ℂ) : vector_conj (vec_mul v M) = vec_mul (vector_conj v) (matrix_conj M):=
begin
have vm : vec_mul (vector_conj v) M.matrix_conj = λ i , dot_product (vector_conj v) (λ j, conj (M j i)),
ext;
rw vec_mul;
rw dot_product;
rw dot_product;
rw matrix_conj;
simp only;
rw vector_conj;
simp only;
simp_rw vector_conj,
rw vm,
ext;
rw vector_conj;
rw vector_conj;
dsimp only;
rw vec_mul;
rw dot_product_conj;
repeat {rw vector_conj},
end
lemma mul_vec_conj (v : n → ℂ) (M : matrix n n ℂ) : vector_conj (mul_vec M v) = mul_vec (matrix_conj M) (vector_conj v) :=
begin
have mv : mul_vec (matrix_conj M) (vector_conj v) = λ i , dot_product (vector_conj v) (λ j, conj (M i j)),
ext;
rw mul_vec;
rw dot_product;
rw dot_product;
rw matrix_conj;
simp only;
rw vector_conj;
simp only;
simp_rw mul_comm,
rw mv,
ext;
rw vector_conj;
rw vector_conj;
dsimp only;
rw mul_vec;
rw dot_product_conj;
repeat {rw vector_conj};
simp_rw dot_product;
simp_rw mul_comm,
end
lemma vector_conj_conj (v : n → ℂ) : vector_conj (vector_conj v) = v :=
begin
apply funext,
rw vector_conj,
intro x,
rw vector_conj,
change conj (conj (v x)) = v x,
rw complex.conj_conj,
end
lemma matrix_conj_conj (M : matrix n n ℂ) : matrix_conj (matrix_conj M) = M :=
begin
apply funext,
rw matrix_conj,
rw matrix_conj,
intro x,
change vector_conj (vector_conj (M x)) = M x,
rw vector_conj_conj,
end
lemma vec_smul_conj (v : n → ℂ) (x : ℂ) : vector_conj (x • v) = (conj x) • (vector_conj v) :=
begin
repeat {rw vector_conj},
simp only [pi.smul_apply, algebra.id.smul_eq_mul, map_mul],
simp_rw mul_conj,
refl,
end
lemma mul_vec_trans (M : matrix n n ℂ ) (v : n → ℂ ) : (mul_vec Mᵀ v) = (vec_mul v M) :=
begin
ext;
rw mul_vec;
rw vec_mul;
rw dot_product;
rw dot_product;
simp_rw matrix.transpose;
rw finset.sum_congr,
refl,
intros _ _,
rw mul_comm,
refl,
intros _ _,
rw mul_comm,
end
lemma vec_mul_trans (M : matrix n n ℂ) (v : n → ℂ ) : (vec_mul v Mᵀ ) = mul_vec M v :=
begin
have h : M.mul_vec v = Mᵀᵀ.mul_vec v,
rw transpose_transpose,
rw h,
rw mul_vec_trans,
end
lemma dot_product_mul_vec_vec_mul (M : matrix n n ℂ) (v w : n → ℂ) :
dot_product v (mul_vec M w) = dot_product (vec_mul v M) w :=
begin
have mv : M.mul_vec w = λ j, dot_product (λ i, M j i) w,
ext ;
rw mul_vec,
have vm : vec_mul v M = λ j, dot_product v (λ i, M i j),
ext ;
rw vec_mul,
rw mv,
change (v ⬝ᵥ λ (j : n), M j ⬝ᵥ w) = vec_mul v M ⬝ᵥ w,
rw ← dot_product_assoc,
repeat {rw [dot_product]},
apply finset.sum_congr,
refl,
intro x,
intro _,
rw vec_mul,
end
lemma vector_zero_iff_magnitude_zero (v : n → ℂ) : dot_product v (vector_conj v) = 0 ↔ v=0 :=
begin
split,
intro prod,
rw dot_product at prod,
have gez : ∀ i : n, v i * vector_conj v i ≥ 0 :=
begin
intro i,
rw vector_conj,
dsimp,
rw ge_iff_le,
rw complex.le_def,
split,
rw complex.mul_re,
simp only [complex.zero_re, complex.conj_re, complex.conj_im, mul_neg, sub_neg_eq_add],
nlinarith,
rw complex.mul_im,
rw complex.conj_im,
rw complex.conj_re,
simp only [complex.zero_im, mul_neg],
rw mul_comm,
rw add_left_neg,
end,
have important : finset.univ.sum(λ i , v i * vector_conj v i ) = 0 ↔ ∀ i ∈ finset.univ, v i * vector_conj v i = 0,
apply finset.sum_eq_zero_iff_of_nonneg,
intros i _,
specialize gez i,
rw ge_iff_le at gez,
exact gez,
rw prod at important,
simp only [eq_self_iff_true, finset.mem_univ, mul_eq_zero, forall_true_left, true_iff] at important,
funext i,
specialize important i,
cases important with vz vcz,
rw vz,
refl,
rw vector_conj at vcz,
dsimp at vcz,
have vczc := congr_arg conj vcz,
rw [complex.conj_conj] at vczc,
simp only [map_zero] at vczc,
rw vczc,
refl,
intro ez,
rw ez,
rw zero_dot_product,
end
theorem hermitian_real_eigenvalues (M : matrix n n ℂ) (h : is_Hermitian M) (e : ℂ) (ev : has_eigenvalue M e) : e.im = 0 :=
begin
rw has_eigenvalue at ev,
cases ev with v ep,
rw has_eigenpair at ep,
cases ep with nz mul,
rw is_Hermitian at h,
have mul_0 : dot_product (vector_conj v) (M.mul_vec v) = dot_product (vector_conj v) (e • v) :=
begin
rw mul,
end,
have mul_1 : dot_product (vector_conj v) (e • v) = e * (vector_magnitude v) :=
begin
rw vector_magnitude,
simp only [dot_product_smul, algebra.id.smul_eq_mul],
end,
have mul_2 : dot_product v (e • (vector_conj v)) = e * vector_magnitude v :=
begin
rw dot_product_comm,
rw vector_magnitude,
rw dot_product,
rw dot_product,
rw finset.mul_sum,
rw finset.sum_congr,
refl,
intros _ _,
simp only [pi.smul_apply, algebra.id.smul_eq_mul,mul_assoc],
end,
have mul_3 : dot_product (vector_conj v) (mul_vec (Mᵀ.matrix_conj) v) = (conj e) * vector_magnitude v :=
begin
rw dot_product_mul_vec_vec_mul,
rw matrix_trans_conj,
rw vec_mul_trans,
rw ← mul_vec_conj,
rw mul,
rw vec_smul_conj,
rw vector_magnitude,
simp only [smul_dot_product, algebra.id.smul_eq_mul],
end,
rw h at mul_3,
rw mul at mul_3,
rw mul_1 at mul_3,
rw ne at nz,
rw ← vector_zero_iff_magnitude_zero at nz,
rw dot_product_comm at nz,
rw ← vector_magnitude at nz,
rw mul_eq_mul_right_iff at mul_3,
simp only [nz] at mul_3,
rw or_false at mul_3,
have e_im := congr_arg complex.im mul_3,
rw complex.conj_im at e_im,
rw eq_neg_iff_add_eq_zero at e_im,
rw add_self_eq_zero at e_im,
exact e_im,
end
end definite
end matrix
Radu Zevri (Apr 30 2022 at 17:16):
Apologies that my message is so long, but there are a lot of lemmas that this theorem depends on, so an mwe would still be quite large.
Eric Wieser (Apr 30 2022 at 17:48):
(I just created #13837 to get rid of that noncomputable.)
Eric Wieser (Apr 30 2022 at 17:52):
You ought to be able to replace all your vector_conjs with star (and replace rw star with rw pi.star_apply); then for instance lemmas like dot_product_conj are already in the library, as docs#matrix.star_dot_product_star.
Eric Wieser (Apr 30 2022 at 17:54):
Of course, it's already a great learning experience to build everything up from scratch - the next thing to learn is how to find the lemmas and definitions that already exist!
Radu Zevri (May 01 2022 at 12:56):
Thank you for your advice! I will try to do that eventually, but right now I am trying to first prove some theorems, as I've sadly not done as much as I wish I had yet.
I have a problem that might sound a bit silly. I have (complex) numbers a and b, and I have that a > 0 and a * b > 0. I am struggling to prove that b > 0 without writing an unreasonably large amount of code, and would like to know if there is some theorem I could use. Something like this sounds like it would already be in the library, but I can't find it. Thank you very much!
Ruben Van de Velde (May 01 2022 at 13:10):
Is it true? < is somewhat of a strange operation on C, if I recall correctly
Yaël Dillies (May 01 2022 at 13:14):
Yes it is, because being comparable to 0 implies being real. One way to prove it is 0 < (a * b) * a⁻¹ = b.
Radu Zevri (May 01 2022 at 13:15):
I believe it is. Since we have a > 0, we have that a is real. Then since a*b>0, b is also real. Then we have b > 0. I'll try to convert them to real numbers and go from there
Radu Zevri (May 01 2022 at 13:16):
The way @Yaël Dillies showed that b is real is more elegant
Eric Wieser (May 01 2022 at 13:18):
I would recommend avoiding docs#complex.partial_order for what you're doing, and learning how to make vector_magnitude real instead.
Yaël Dillies (May 01 2022 at 13:18):
Ruben Van de Velde said:
< is somewhat of a strange operation on C, if I recall correctly
The order on ℂ is unusual, but it is plenty behaved.
Ruben Van de Velde (May 01 2022 at 13:22):
That's good - I have zero intuition for it :)
Radu Zevri (May 01 2022 at 13:57):
I've shown that positive definite matrices have positive eigenvalues. I will now write some related results.
Radu Zevri (May 02 2022 at 23:56):
Hello! I am having some trouble understanding the definition of matrix.nondegenerate. The definition says w ⬝ M ⬝ v ≠ 0, but the code says v ⬝ᵥ M.mul_vec w. I was looking to use this definition to prove that e is an eigenvalue of M iff det(M - eI) = 0, but I'm not sure if this form can be used. I apologise if this sounds like a silly question, but I would like some assistance, if possible. Thank you very much!
Eric Wieser (May 03 2022 at 00:10):
The "definition" in the docstring is using math notation not lean notation
Eric Wieser (May 03 2022 at 00:11):
It does look like the v and w got swapped though... ah, the binders are just stated confusingly
Eric Wieser (May 03 2022 at 00:14):
docs#matrix.nondegenerate.exists_not_ortho_of_ne_zero is the statement from the docstring
Radu Zevri (May 03 2022 at 00:43):
Apologies, but I'm not sure I understand. The math notation does not seem to correspond to the actual code, unless I'm understanding it incorrectly. I was hoping there would be a version matching the math notation, as having the M.mul_vec v (where v is in the first forall, allowing me to pick it) would let me pick an eigenvector and replace M.mul_vec v with e • v. Do you know if there is anything that I could do about this? Thank you for helping me.
Kevin Buzzard (May 03 2022 at 12:51):
In documentation my impression is that we allow people to make claims in docstrings such as "P is defined to be X" when the truth is that P is actually defined to be Y, and it's a theorem that X = Y. @Radu Zevri the best way to ask questions in this forum is not to try and explain your question mathematically but to post a short #mwe containing a sorry and ask for help filling it in. I think that you have a very precise question here so you may as well post it as a Lean "puzzle"; that way, somebody passing who knows the matrix part of the library can easily post exactly the proof you want.
Radu Zevri (May 03 2022 at 14:08):
Thank you for letting me know! I've managed to work my way around this, and have proven that e is an eigenvalue of M iff det(M - eI) = 0. I believe this would be an important first step in establishing the relation between eigenvalues and the characteristic polynomial of a matrix. I have also proven some other useful properties about eigenvalues. Please have a look, and let me know what you think: https://github.com/oxford-1032451/4th-year-project/blob/main/mwe.lean. Thank you!
Kevin Buzzard (May 03 2022 at 14:39):
Here's some thoughts about the first lemma:
theorem vector_magnitude_geq_zero (v : n → ℂ) : vector_magnitude v ≥ 0 :=
begin
-- two indents not 3
rw [vector_magnitude, dot_product], -- can do multiple rewrites in one line
have h : ∀ (x : n), 0 ≤ conj (v x) * (v x) := -- don't ever use ≥, only ≤
begin
intro x,
-- we have the result already
rw star_ordered_ring.nonneg_iff,
-- Remember: if the result looks natural, it will be in the library already.
exact ⟨v x, rfl⟩,
end,
apply finset.sum_nonneg,
intros x _, -- can do more than one intro at once
specialize h x,
-- rw ← ge, -- no longer needed because we used ≤
exact h,
end
-- now do it without the `have`:
theorem vector_magnitude_geq_zero' (v : n → ℂ) : vector_magnitude v ≥ 0 :=
begin
rw [vector_magnitude, dot_product],
refine finset.sum_nonneg _, -- `apply` is a special case of `refine`
intros i _,
rw star_ordered_ring.nonneg_iff,
exact ⟨v i, rfl⟩,
end
-- The rewrites are definitional (do you know about definitional equality?) so
-- we can skip them:
theorem vector_magnitude_geq_zero'' (v : n → ℂ) : vector_magnitude v ≥ 0 :=
begin
refine finset.sum_nonneg (λ i _, _), -- do the intros on this line
rw star_ordered_ring.nonneg_iff,
exact ⟨v i, rfl⟩,
end
Kevin Buzzard (May 03 2022 at 14:41):
What do you think about this proof?
lemma vector_conj_element (v: n → ℂ) (i : n) : (vector_conj v) i = conj (v i) :=
begin
refl, -- true by definition
end
Radu Zevri (May 03 2022 at 20:15):
Kevin Buzzard said:
Here's some thoughts about the first lemma:
Thank you for your input! I will use your advice in the future.
Radu Zevri (May 03 2022 at 20:17):
Kevin Buzzard said:
What do you think about this proof?
I've removed it.
Radu Zevri (May 03 2022 at 20:23):
Apologies for not answering sooner; I was writing some other proof and some odd error popped up: "result contains metavariables". I've tried using recover and suggest to fix the issue, as I've seen you solve similar issues by that method in the past, but that leads to other type errors which I cannot even begin to comprehend. The theorem which has this error begins at line 1532 (link). Would it be possible to receive some assistance? Thank you very much!
Alex J. Best (May 03 2022 at 20:45):
The problem is the tactic nth_rewrite, which can leave side goals as metavariables; the solution in this case is to replace it with explicit rw smul_comm t a-style calls, which don't suffer from this bug:
theorem independent_eigenvectors_linear_combination_not_eigenvector (M : matrix n n ℂ) (u v : n → ℂ) (e r : ℂ) (neq : e ≠ r)
(ep_1 : has_eigenpair M e u) (ep_2 : has_eigenpair M r v) : ∀ (a b : ℂ), a ≠ 0 → b ≠ 0 → ¬(has_eigenvector M (a•u+b•v)) :=
begin
have epeu := ep_1,
have eprv := ep_2,
intros a b anz bnz,
rw ne at anz bnz neq,
by_contra ev,
rw has_eigenvector at ev,
cases ev with t ep,
rw has_eigenpair at ep,
cases ep with lc mul,
rw mul_vec_add at mul,
rw smul_add at mul,
repeat {rw mul_vec_smul at mul},
rw has_eigenpair at ep_1 ep_2,
cases ep_1 with unz mul_1,
cases ep_2 with vnz mul_2,
rw ne at unz vnz,
rw [mul_1,mul_2] at mul,
have helper_lemma : a • (e - t) • u + b • (r - t) • v = 0,
repeat {rw sub_smul},
repeat {rw smul_sub},
rw sub_add_sub_comm,
rw sub_eq_zero,
rw smul_comm a t,
rw smul_comm b t,
exact mul,
apply_instance,
apply_instance,
have lin_ind := eigenvectors_linearly_independent M u v e r epeu eprv neq,
specialize lin_ind (a • (e - t)) (b • (r - t)),
have ent : ¬(e - t)=0 :=
begin
by_contra eeqt,
rw eeqt at helper_lemma,
rw [zero_smul, smul_zero, zero_add] at helper_lemma,
rw smul_eq_zero at helper_lemma,
cases helper_lemma,
apply bnz,
exact helper_lemma,
rw smul_eq_zero at helper_lemma,
cases helper_lemma,
rw sub_eq_zero at eeqt helper_lemma,
apply neq,
rw [helper_lemma,eeqt],
apply vnz,
exact helper_lemma,
end,
have rnt : ¬(r - t)=0 :=
begin
by_contra reqt,
rw reqt at helper_lemma,
rw [zero_smul, smul_zero, add_zero] at helper_lemma,
rw smul_eq_zero at helper_lemma,
cases helper_lemma,
apply anz,
exact helper_lemma,
rw smul_eq_zero at helper_lemma,
cases helper_lemma,
apply ent,
exact helper_lemma,
apply unz,
exact helper_lemma,
end,
have aemtnz : a • (e - t) ≠ 0,
rw smul_ne_zero,
split,
rwa ne,
rwa ne,
have brmtnz : b • (r - t) ≠ 0,
rw smul_ne_zero,
split,
rwa ne,
rwa ne,
have lin_ind_2 := lin_ind aemtnz brmtnz,
repeat {rw smul_assoc at lin_ind_2},
rw ne at lin_ind_2,
apply lin_ind_2,
exact helper_lemma,
end
Radu Zevri (May 03 2022 at 21:03):
Thank you very much! I would have never guessed that. Everything works now.
Alex J. Best (May 03 2022 at 21:12):
In this case I happened to know that nth_rewrite has this problem, but in future maybe moving the recover call around would help you find which tactic was causing the issue; the first place recover adds something would be the problem. If everything is working as intended, recover should never be needed.