Zulip Chat Archive
Stream: new members
Topic: Factor out a vector
Rida Hamadani (Apr 28 2024 at 01:39):
I have a quantity which I want to prove is zero, but in my hypothesis, the terms of the equation are multiplied by a vector, using both smul and mulVec. Is there a way to factor out both types of multiplication at once? More specifically, I wish to transform this:
1 *ᵥ e + l • e + (l ^ 2 • e - ↑d *ᵥ e) = 0
into this:
l ^ 2 + l - (↑d - 1) = 0
Here is an #mwe representing the context of this:
import Mathlib.Combinatorics.SimpleGraph.StronglyRegular
import Mathlib.LinearAlgebra.Matrix.Spectrum
import Mathlib.LinearAlgebra.Eigenspace.Basic

namespace SimpleGraph

variable {α β : Type*} [Fintype α] {G : SimpleGraph α} [DecidableRel G.Adj] [RCLike β] [DecidableEq α]

open SimpleGraph Matrix

abbrev f : α → β := fun _ ↦ 1

example {d : ℕ} {e : α → β} {l : β} (he : e ∈ Module.End.eigenspace (Matrix.toLin' (G.adjMatrix β)) l)
    (ho : f ⬝ᵥ e = 0) : l ^ 2 + l - (d - 1) = 0 := by
  have : 1 + (G.adjMatrix β) + ((G.adjMatrix β) ^ 2 - d • 1) = of fun _ _ ↦ 1 := by sorry
  rw [Module.End.mem_eigenspace_iff, toLin'_apply', mulVecLin_apply] at he
  have hesqr : ((G.adjMatrix β) ^ 2) *ᵥ e = (l ^ 2) • e := by
    repeat rw [pow_succ, pow_one]
    rw [← mulVec_mulVec, he, mulVec_smul_assoc, he, ← smul_assoc, smul_eq_mul]
  apply_fun (fun x ↦ x *ᵥ e) at this
  rw [add_mulVec, add_mulVec, sub_mulVec, he, hesqr] at this
  have rhsZero : (Matrix.of (fun _ _ ↦ 1) : Matrix α α β) *ᵥ e = 0 := by
    ext i
    simp [mulVec, ho]
  rw [rhsZero, nsmul_eq_mul, mul_one] at this
  -- this : 1 *ᵥ e + l • e + (l ^ 2 • e - ↑d *ᵥ e) = 0
  -- goal : l ^ 2 + l - (↑d - 1) = 0
  sorry
Mitchell Lee (Apr 28 2024 at 03:01):
Regrettably, I don't think there is any painless way to do this. In fact, every step of the process seems to be painful in some way. But maybe you could help to fill some of the gaps in the library that make this particular example so difficult. (I would not give an answer like this to a total beginner, but I have seen you make several contributions, so I know you aren't one.)
Here is the outline. It's quite similar to how you would make the argument on paper. Note that you need to assume that e ≠ 0, otherwise your theorem is false.
- Rewrite the ↑d *ᵥ e as d • e. Unfortunately, the theory of matrices in mathlib is rather incomplete, so I don't think there is currently a theorem that lets you do this. But you could write your own.
- Rewrite the 1 *ᵥ e as 1 • e. You could do this by first rewriting the 1 as ((1 : ℕ) : Matrix α α β) using rfl, and then using the theorem you proved in step 1. (See here for the reason that you have to do the first awkward rewriting step.)
- The resulting equation is 1 • e + l • e + (l ^ 2 • e - ↑d • e) = 0. Now rewrite it as (1 + l + (l ^ 2 - ↑d)) • e = 0. The easiest way I know to do this is to use simp only [← add_nsmul, ← sub_nsmul] at this. (I have been waiting for a tactic that does this sort of manipulation automatically, as I mentioned here, but that might not happen in the near future.)
- Now, apply smul_eq_zero to conclude that 1 + l + (l ^ 2 - ↑d) = 0. This requires the extra assumption that e ≠ 0.
- Finally, you just need to observe that by the ring axioms, this equation is equivalent to the one you are trying to prove. To do so, you can use ring_nf at this ⊢; assumption.
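Putting the outline together, here is a rough, untested sketch of those steps, meant to replace the final sorry in the #mwe above, assuming the example also gets the extra hypothesis hnz : e ≠ 0. The local names hd and h1 are only placeholders for the step-1 and step-2 lemmas (left as sorry here), and the β-valued add_smul / sub_smul are used rather than the nsmul versions, since after steps 1 and 2 every scalar lives in β:
  -- continuing from `this : 1 *ᵥ e + l • e + (l ^ 2 • e - ↑d *ᵥ e) = 0`
  -- step 1: the "write your own" lemma relating mulVec by ↑d to smul by ↑d
  have hd : (d : Matrix α α β) *ᵥ e = (d : β) • e := by
    sorry
  -- step 2: the same statement for 1, deducible from step 1 via 1 = ((1 : ℕ) : Matrix α α β)
  have h1 : (1 : Matrix α α β) *ᵥ e = (1 : β) • e := by
    sorry
  rw [h1, hd] at this
  -- step 3: collect everything into a single scalar multiplying e
  simp only [← add_smul, ← sub_smul] at this
  -- step 4: cancel the nonzero vector e
  rw [smul_eq_zero_iff_left hnz] at this
  -- step 5: the remaining equation differs from the goal only up to ring axioms
  ring_nf at this ⊢
  assumption  -- if the normal forms don't match exactly, linear_combination this should close it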
Kyle Miller (Apr 28 2024 at 03:26):
Here's a way to do it with convert:
example {d : ℕ} {e : α → β} {l : β} (he : e ∈ Module.End.eigenspace (Matrix.toLin' (G.adjMatrix β)) l)
    (hnz : e ≠ 0)
    (ho : f ⬝ᵥ e = 0) : l ^ 2 + l - (d - 1) = 0 := by
  have : 1 + (G.adjMatrix β) + ((G.adjMatrix β) ^ 2 - d • 1) = of fun _ _ ↦ 1 := by sorry
  rw [Module.End.mem_eigenspace_iff, toLin'_apply', mulVecLin_apply] at he
  have hesqr : ((G.adjMatrix β) ^ 2) *ᵥ e = (l ^ 2) • e := by
    repeat rw [pow_succ, pow_one]
    rw [← mulVec_mulVec, he, mulVec_smul_assoc, he, ← smul_assoc, smul_eq_mul]
  apply_fun (fun x ↦ x *ᵥ e) at this
  rw [add_mulVec, add_mulVec, sub_mulVec, he, hesqr] at this
  have foo : d *ᵥ e = (d : β) • e := by
    change diagonal d *ᵥ e = _
    ext
    simp [mulVec_diagonal]
  have : (l ^ 2 + l - (↑d - 1)) • e = 0 := by
    convert this using 1
    · simp [add_smul, sub_smul, foo]
      ring
    · ext i
      simp [mulVec, ho]
  rw [smul_eq_zero_iff_left hnz] at this
  exact this
Rida Hamadani (Apr 28 2024 at 12:21):
Thank you very much! Sorry for forgetting the e ≠ 0 hypothesis, I was thinking "e is an eigenvector, and eigenvectors are non-zero", but now I notice why he doesn't imply that.
Mitchell Lee said:
But maybe you could help to fill some of the gaps in the library that make this particular example so difficult.
I would love to, but I think I'm far from a level where I can attempt writing the module_nf tactic. Maybe the first step is to add the d *ᵥ e = d • e lemma. Do you have suggestions on how to fill more of these gaps?
Eric Wieser (Apr 28 2024 at 12:40):
d *ᵥ e = d • e should definitely be a lemma we have; though we should also add all of:
diagonal_mulVec : (diagonal fun x => r) *ᵥ v = r • v
natcast_mulVec : n *ᵥ v = n • v
intcast_mulVec : z *ᵥ v = z • v
ofNat_mulVec : (OfNat.ofNat (no_index n) : R) *ᵥ v = (OfNat.ofNat n : R) • v
Eric Wieser (Apr 28 2024 at 12:41):
(the first one should work as the proof for all the rest)
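For what it's worth, an untested sketch of how the first of those statements might be proved, in the context of the #mwe's imports and open Matrix; the typeclass assumptions are a guess, and the remaining three statements should then follow by rewriting the cast (or OfNat) matrix as such a scalar diagonal matrix:
example {m R : Type*} [Fintype m] [DecidableEq m] [NonUnitalNonAssocSemiring R]
    (r : R) (v : m → R) : (diagonal fun _ ↦ r) *ᵥ v = r • v := by
  ext i
  -- entry i of the left-hand side is r * v i, which is exactly (r • v) i
  simp [mulVec_diagonal]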
Rida Hamadani (Apr 30 2024 at 06:46):
#12538 proves these; please let me know if you have suggestions or more lemmas to add to it.
Eric Wieser (Apr 30 2024 at 06:51):
That looks good so far; I left some comments.
Last updated: May 02 2025 at 03:31 UTC