Zulip Chat Archive
Stream: maths
Topic: jordan-chevalley decomposition
Johan Commelin (Nov 11 2020 at 09:34):
Afaik we don't have the Jordan-Chevalley decomposition of a linear operator. I think we have all the prerequisites. We should have this, so that we can say that "linear algebra" is done. See #4973.
Patrick Massot (Nov 11 2020 at 11:16):
I was a bit perplexed that I needed to go to Wikipedia to find out what is needed in order to "have linear algebra done". It turns out France uses a different name for this. @confused French people: Johan means the Dunford decomposition.
Kevin Buzzard (Nov 11 2020 at 11:27):
It's rare to see the French calling something by a different name which isn't the name of a French person -- apparently Nelson Dunford was an American.
Patrick Massot (Nov 11 2020 at 11:28):
Yes, this is extremely confusing.
Johan Commelin (Nov 11 2020 at 11:56):
It seems like you missed a big chance there...
Johan Commelin (Nov 11 2020 at 11:57):
This decomposition was first proved in 1870 by Camille Jordan, and then in the 1950s by Claude Chevalley in the context of the theory of algebraic groups. In the French-speaking world it is sometimes wrongly attributed to Nelson Dunford, whose work postdates Chevalley's.
Patrick Massot (Nov 11 2020 at 12:04):
Too late.
Oliver Nash (Jan 04 2024 at 14:38):
I now need this decomposition. Is there any chance I can nerdsnipe someone into doing it?
Oliver Nash (Jan 04 2024 at 14:39):
To improve my chances, I have done some of the early boring work:
import Mathlib.FieldTheory.Perfect
import Mathlib.Order.Sublattice
import Mathlib.LinearAlgebra.Eigenspace.Basic
/-!
# Jordan-Chevalley-Dunford decomposition of a linear operator (additive version)
-/
open Set Function
variable {K V : Type*} [Field K] [AddCommGroup V] [Module K V]
namespace LinearMap
variable (f : V →ₗ[K] V)
def invariantSubspace : Sublattice (Submodule K V) where
carrier := {p | MapsTo f p p}
supClosed' := by
intro p hp q hq x hx
obtain ⟨y, hy, z, hz, rfl⟩ := Submodule.mem_sup.mp hx
exact map_add f _ _ ▸ Submodule.add_mem_sup (hp hy) (hq hz)
infClosed' := fun p hp q hq x hx ↦ ⟨hp hx.1, hq hx.2⟩
namespace invariantSubspace
theorem injective_coeSubmodule :
Injective ((↑) : f.invariantSubspace → Submodule K V) := fun x y h ↦ by
cases x; cases y; congr
instance instTop : Top f.invariantSubspace := ⟨⊤, fun _ _ ↦ mem_univ _⟩
instance instBot : Bot f.invariantSubspace := ⟨⊥, fun x hx ↦ by
replace hx : x = 0 := by simpa using hx
simp [hx]⟩
instance instSupSet : SupSet f.invariantSubspace where
sSup S := ⟨sSup {(p : Submodule K V) | p ∈ S}, by
intro x hx
simp only [Subtype.exists, exists_and_right, exists_eq_right, SetLike.mem_coe,
Submodule.mem_sSup, mem_setOf_eq, forall_exists_index] at hx ⊢
intro q hq
refine hx (q.comap f) fun p (hp : p ≤ p.comap f) hp' ↦ ?_
exact le_trans hp <|Submodule.comap_mono <| hq p hp hp'⟩
instance instInfSet : InfSet f.invariantSubspace where
sInf S := ⟨sInf {(p : Submodule K V) | p ∈ S}, by
rintro x hx - ⟨p, rfl⟩
simp only [Subtype.exists, exists_and_right, exists_eq_right, mem_setOf_eq, iInter_exists,
mem_iInter, Submodule.sInf_coe] at hx ⊢
exact fun hp hp' ↦ hp (hx p hp hp')⟩
lemma coe_sup (p q : f.invariantSubspace) : (↑(p ⊔ q) : Submodule K V) = ↑p ⊔ ↑q := rfl
lemma coe_inf (p q : f.invariantSubspace) : (↑(p ⊓ q) : Submodule K V) = ↑p ⊓ ↑q := rfl
@[simp]
theorem sSup_coe_toSubmodule (S : Set f.invariantSubspace) :
(↑(sSup S) : Submodule K V) = sSup {(s : Submodule K V) | s ∈ S} :=
rfl
theorem sSup_coe_toSubmodule' (S : Set f.invariantSubspace) :
(↑(sSup S) : Submodule K V) = ⨆ p ∈ S, (p : Submodule K V) := by
rw [sSup_coe_toSubmodule, ← Set.image, sSup_image]
@[simp]
theorem sInf_coe_toSubmodule (S : Set f.invariantSubspace) :
(↑(sInf S) : Submodule K V) = sInf {(s : Submodule K V) | s ∈ S} :=
rfl
theorem sInf_coe_toSubmodule' (S : Set f.invariantSubspace) :
(↑(sInf S) : Submodule K V) = ⨅ p ∈ S, (p : Submodule K V) := by
rw [sInf_coe_toSubmodule, ← Set.image, sInf_image]
instance instCompleteLattice : CompleteLattice f.invariantSubspace :=
{ (invariantSubspace.injective_coeSubmodule f).completeLattice
(↑) (coe_sup f) (coe_inf f) (sSup_coe_toSubmodule' f) (sInf_coe_toSubmodule' f) rfl rfl with }
end invariantSubspace
protected abbrev IsSemisimple := ComplementedLattice f.invariantSubspace
variable [PerfectField K] [FiniteDimensional K V]
def semisimpleComponent (f : V →ₗ[K] V) : V →ₗ[K] V :=
sorry -- Data
def nilpotentComponent (f : V →ₗ[K] V) : V →ₗ[K] V := f - f.semisimpleComponent
lemma semisimpleComponent.IsSemisimple : f.semisimpleComponent.IsSemisimple :=
sorry -- Prop
lemma nilpotentComponent.IsNilpotent : IsNilpotent f.nilpotentComponent :=
sorry -- Prop
lemma Commute_semisimple_nilpotent : Commute f.semisimpleComponent f.nilpotentComponent :=
sorry -- Prop
@[simp]
lemma semisimple_add_nilpotent_eq_self : f.semisimpleComponent + f.nilpotentComponent = f :=
add_sub_cancel'_right (semisimpleComponent f) f
end LinearMap
Eric Wieser (Jan 04 2024 at 15:24):
Note that the work in the unmerged !3#18289 has some overlap with your invariantSubspace
Oliver Nash (Jan 04 2024 at 15:26):
Thanks! I probably won't be returning to this for a week or so and the first thing to do would be to PR the invariant submodule stuff, so I'll make sure to look at !3#18289 if / when I get to that.
Kevin Buzzard (Jan 05 2024 at 10:59):
Oliver pointed out this paper https://arxiv.org/abs/2205.05432 to me; Theorem 7 seems to give a relatively short proof of the theorem. Note that perfectness of K is assumed (implicitly) -- see the paragraph above the statement of the theorem. The other approach is to first prove it for algebraically closed fields and then descend using Galois theory, but this is probably harder work.
Oliver Nash (Jan 05 2024 at 11:17):
I agree that it is probably easier to avoid the Galois descent argument. Looking now I am slightly surprised to find that Bourbaki do take this route though (and of course prove something slightly stronger):
(two images attached, showing the Bourbaki statement)
Kevin Buzzard (Jan 05 2024 at 12:16):
I suspect that the proof described in Geck's paper will generalise to prove this Theorem 9.1 of Bourbaki. The key fact Geck needs is that the characteristic polynomial of the matrix divides some power of a polynomial which is defined over the ground field and has distinct roots in the algebraic closure. Do we have the function on a UFD which sends a nonzero element to the product of its distinct irreducible factors? (e.g. it sends 9 to 3 and 12 to 6)
Johan Commelin (Jan 05 2024 at 12:18):
If it exists, I hope it is called radical?
Kevin Buzzard (Jan 05 2024 at 12:22):
We have docs#IsRadical (relevant) and then the radical of an ideal docs#Ideal.radical
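A minimal sketch of what such a function could look like, assuming Mathlib's UniqueFactorizationMonoid API; the name radical and the exact typeclass assumptions here are hypothetical, not an existing Mathlib declaration:
import Mathlib.RingTheory.UniqueFactorizationDomain
open UniqueFactorizationMonoid
variable {R : Type*} [CancelCommMonoidWithZero R] [NormalizationMonoid R]
  [UniqueFactorizationMonoid R] [DecidableEq R]
/-- For a nonzero `a`, the product of its distinct normalized irreducible factors,
so that e.g. `9 = 3 * 3` is sent to `3` and `12 = 2 * 2 * 3` is sent to `6`. -/
noncomputable def radical (a : R) : R :=
  (normalizedFactors a).toFinset.prod id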
Patrick Massot (Jan 05 2024 at 13:23):
Geck taught me basic abstract algebra! He is a great teacher and I'm confident his papers are very clear, so let's try this!
Kevin Buzzard (Jan 05 2024 at 13:42):
My impression is that the idea is due to Chevalley, and it was written up by Geck very clearly because Geck wasn't happy with the literature, plus they take the opportunity to discuss its practicality as an algorithm for decomposing a concrete matrix.
Antoine Chambert-Loir (Jan 05 2024 at 16:56):
It is indeed due to Chevalley, in his Theory of Lie groups, and was revived by various people, including Couty et al. But these authors consider the case where the characteristic polynomial splits over the ground field; the general case is treated by Geck (and I had treated it independently in my Algebra lecture notes): one just needs to assume that the irreducible factors of the minimal polynomial are separable (Geck states it less clearly, I believe).
Antoine Chambert-Loir (Jan 05 2024 at 17:09):
I have already implemented the Newton part.
import Mathlib.RingTheory.Nilpotent
import Mathlib.Data.Polynomial.Basic
import Mathlib.Data.Polynomial.Identities
import Mathlib.Data.Polynomial.Derivative
lemma Nat.self_sub_two_pow (n : ℕ) : n ≤ 2 ^ n := by
induction n with
| zero => simp only [Nat.zero_eq, pow_zero, zero_le]
| succ n ih =>
rw [pow_succ', two_mul, Nat.succ_eq_add_one]
apply Nat.add_le_add ih
exact Nat.one_le_two_pow n
open Polynomial
variable {R : Type*} [CommRing R]
variable {P : R[X]} {a b : R}
lemma sub_eval_isNilpotent_of_isNilpotent_sub (h : IsNilpotent (b - a)) :
IsNilpotent (P.eval b - P.eval a) := by
have ⟨cj, hc⟩ := evalSubFactor P b a
rw [hc]
exact Commute.isNilpotent_mul_right (Commute.all _ _) h
lemma eval_isNilpotent_of_isNilpotent_sub (ha : IsNilpotent (P.eval a))
(hb : IsNilpotent (b - a)) : IsNilpotent (P.eval b) := by
rw [← add_sub_cancel'_right (eval a P) (eval b P)]
exact Commute.isNilpotent_add (Commute.all _ _) ha (sub_eval_isNilpotent_of_isNilpotent_sub hb)
lemma eval_isUnit_of_isNilpotent_sub (ha : IsUnit (P.eval a))
(hb : IsNilpotent (b - a)) : IsUnit (P.eval b) := by
rw [← add_sub_cancel'_right (P.eval a) (P.eval b)]
rw [← ha.choose_spec]
apply Commute.IsNilpotent.add_isUnit _ (Commute.all _ _)
rw [ha.choose_spec]
exact sub_eval_isNilpotent_of_isNilpotent_sub hb
variable (P)
/-- The Newton iteration of P -/
noncomputable
def newton_iterate (a : R) : R :=
a - (P.eval a) * Ring.inverse ((P.derivative.eval a))
/-- The condition for the Newton iteration to hold -/
def newton_ih (a : R) := IsNilpotent (P.eval a) ∧ IsUnit (P.derivative.eval a)
variable {P}
noncomputable
def newton_series (ha : IsNilpotent (P.eval a)) : ℕ → {x : R // IsNilpotent (x - a)}
| 0 => ⟨a, by simp only [sub_self, IsNilpotent.zero]⟩
| k + 1 => ⟨newton_iterate P (newton_series ha k), by
rw [newton_iterate, sub_sub, add_comm, ← sub_sub]
apply Commute.isNilpotent_sub (Commute.all _ _) (newton_series ha k).prop
apply Commute.isNilpotent_mul_left (Commute.all _ _)
apply eval_isNilpotent_of_isNilpotent_sub ha (newton_series ha k).prop⟩
lemma eval_newton_div (ha : newton_ih P a) (k : ℕ) :
(P.eval a) ^ (2 ^ k) ∣ (P.eval ↑(newton_series ha.1 k)) := by
induction k with
| zero => simp only [Nat.zero_eq, pow_zero, pow_one, newton_series, dvd_refl]
| succ k hk =>
simp only [newton_series, newton_iterate._eq_1, sub_eq_add_neg, neg_mul_eq_neg_mul]
have ⟨d, hd⟩ := binomExpansion P (newton_series ha.1 k) (-eval (↑(newton_series ha.1 k)) P * Ring.inverse (eval (↑(newton_series ha.1 k)) (derivative P)))
rw [hd]
apply dvd_add
convert dvd_zero _
rw [mul_comm, mul_assoc, Ring.inverse_mul_cancel _, mul_one, add_right_neg]
exact eval_isUnit_of_isNilpotent_sub ha.2 (newton_series ha.1 k).prop
apply Dvd.dvd.mul_left
rw [mul_pow, neg_pow]
simp only [even_two, Even.neg_pow, one_pow, one_mul]
apply Dvd.dvd.mul_right
rw [pow_succ', pow_mul]
exact pow_dvd_pow_of_dvd hk 2
theorem newton_exists_unique (ha : newton_ih P a) :
∃! (b : R), IsNilpotent (b - a) ∧ eval b P = 0 := by
apply exists_unique_of_exists_of_unique
· -- Existence
suffices : ∃ k, P.eval ↑(newton_series ha.1 k) = 0
obtain ⟨k, hk⟩ := this
exact ⟨newton_series ha.1 k, (newton_series ha.1 k).prop, hk⟩
obtain ⟨n, hn⟩ := ha.1
use n
rw [← zero_dvd_iff]
simp_rw [← pow_eq_zero_of_le (Nat.self_sub_two_pow n) hn]
exact eval_newton_div ha n
· -- Uniqueness
intro b c hb hc
have ⟨u, hu⟩ := binomExpansion P b (c - b)
simp only [add_sub_cancel'_right, hc.2, hb.2, zero_add, pow_two, ← mul_assoc, ← add_mul] at hu
apply symm
rw [← sub_eq_zero]
rw [eq_comm, ← smul_eq_mul, IsUnit.smul_eq_zero ?_] at hu
exact hu
have hb' : IsUnit (eval b (derivative P)) := eval_isUnit_of_isNilpotent_sub ha.2 hb.1
rw [← hb'.choose_spec]
apply Commute.IsNilpotent.add_isUnit _ (Commute.all _ _)
apply Commute.isNilpotent_mul_right (Commute.all _ _)
rw [← sub_sub_sub_cancel_right c b a]
exact Commute.isNilpotent_sub (Commute.all _ _) hc.1 hb.1
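As a toy illustration of the iteration above (a worked example, not from the thread): in $R = \mathbb{Z}/16$ take $P = X^2 + X$ and $a = 3$. Then $P(3) = 12$ is nilpotent and $P'(3) = 7$ is a unit, so newton_ih holds, and a single step of newton_iterate already reaches the root:
$$b = a - P(a)\,P'(a)^{-1} = 3 - 12 \cdot 7^{-1} = 3 - 12 \cdot 7 \equiv 15 \pmod{16},$$
with $P(15) = 240 \equiv 0 \pmod{16}$ and $b - a = 12$ nilpotent; the other root of $P$, namely $0$, differs from $a$ by a non-nilpotent element, as the uniqueness part of newton_exists_unique predicts.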
Oliver Nash (Feb 06 2024 at 15:29):
Using Antoine's code above, I now have a PR which establishes the Jordan-Chevalley decomposition: #10295
Oliver Nash (Feb 06 2024 at 15:30):
A little more work is required in order to establish uniqueness (the only missing piece is that a sum of commuting semisimple endomorphisms is semisimple).
Riccardo Brasca (Feb 06 2024 at 15:46):
I've delegated #10284
Antoine Chambert-Loir (Feb 07 2024 at 08:49):
Oliver Nash said:
A little more work is required in order to establish uniqueness (the only missing piece is that a sum of commuting semisimple endomorphisms is semisimple).
How do you wish to prove this? The proof I teach is not so nice, it uses the relation of semisimplicity with diagonalizability over the algebraic closure and uses a bit of Galois theory… It probably doesn't hold in general. (Take multiplications by $x$ and $y$ in $k[x,y]/(x^p - a, y^p - b)$, where $a$ and $b$ are elements of $k$ which are not $p$-th powers ($k$ a field of characteristic $p$); I think both of them are semisimple, but their sum is not if $a + b$ is a $p$-th power…)
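A sketch of why such an example works, assuming the ring is $A = k[x,y]/(x^p - a, y^p - b)$ as in the parenthetical above: multiplication by $x$ has minimal polynomial $X^p - a$, which is irreducible over $k$ when $a$ is not a $p$-th power, so $k[x]$ is a field and $A$ is a semisimple $k[x]$-module; similarly for $y$. On the other hand $(x + y)^p = a + b$, so if $a + b = c^p$ with $c \in k$ then $(x + y - c)^p = 0$ while $x + y - c \neq 0$: the minimal polynomial of $x + y$ is a proper power of $X - c$, hence not squarefree, and multiplication by $x + y$ is not semisimple. A concrete instance is $k = \mathbb{F}_p(t)$, $a = t$, $b = t^p - t$.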
Oliver Nash (Feb 07 2024 at 08:59):
I don't have a strategy in mind. Once I finish some review, I'm going to go back to my application and double-check that I really need uniqueness (hard to imagine I don't, but it's just possible), and then think about proof strategies. Thanks for the outline above.
Oliver Nash (Feb 07 2024 at 09:00):
Further suggestions welcome!
Riccardo Brasca (Feb 07 2024 at 09:01):
Are there cases where it is really simpler? Characteristic 0? Algebraically closed?
Oliver Nash (Feb 07 2024 at 09:04):
I don't know but skimming https://mathoverflow.net/questions/115273/sum-of-commuting-semisimple-operators suggests perfect field might be a reasonable assumption.
Oliver Nash (Feb 07 2024 at 09:04):
(Back in 2012 this sort of question was allowed on MO.)
Eric Rodriguez (Feb 07 2024 at 10:17):
What's your application?
Oliver Nash (Feb 07 2024 at 10:20):
I want to show that elements of a Cartan subalgebra are semisimple (subject to appropriate hypotheses).
Kim Morrison (Feb 07 2024 at 12:18):
Oliver Nash said:
(Back in 2012 this sort of question was allowed on MO.)
/me feels nostalgic for MO in 2012.
Antoine Chambert-Loir (Feb 07 2024 at 22:22):
Oliver Nash said:
I don't know but skimming https://mathoverflow.net/questions/115273/sum-of-commuting-semisimple-operators suggests perfect field might be a reasonable assumption.
Yes, as I said, I think it is false in general (the example I gave should work), and if the field is perfect, then the semisimple elements are diagonalizable over its separable closure. There, you have commuting diagonalizable operators, so their sum is diagonalizable, and this implies semisimplicity over the ground field. This is how Chevalley does it in his Theory of Lie groups.
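Schematically, for two commuting semisimple operators $u$ and $v$ over a perfect field $k$ (a compressed restatement of the argument above, writing $k^{\mathrm{sep}}$ for a separable closure):
$$u, v \text{ semisimple over } k \;\Longrightarrow\; u, v \text{ diagonalizable over } k^{\mathrm{sep}} \;\Longrightarrow\; u + v \text{ diagonalizable over } k^{\mathrm{sep}} \;\Longrightarrow\; u + v \text{ semisimple over } k,$$
where the middle implication uses simultaneous diagonalizability of commuting diagonalizable operators and the last one is the descent step.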
Antoine Chambert-Loir (Feb 07 2024 at 22:23):
(And uniqueness can be checked over the separable closure.)
Kevin Buzzard (Feb 07 2024 at 22:37):
Antoine Chambert-Loir said:
It probably doesn't hold in general. (Take multiplications by $x$ and $y$ in $k[x,y]/(x^p - a, y^p - b)$, where $a$ and $b$ are elements of $k$ which are not $p$-th powers ($k$ a field of characteristic $p$); I think both of them are semisimple, but their sum is not if $a + b$ is a $p$-th power…)
So semistability is not stable under field extensions?! I had never realised this (I really never think about nonperfect fields...)
Damiano Testa (Feb 07 2024 at 22:39):
Well, it is semistable...
Antoine Chambert-Loir (Feb 07 2024 at 23:03):
No, semisimplicity is not stable under field extensions!
Antoine Chambert-Loir (Feb 07 2024 at 23:03):
(That's why there is the notion of “geometrically semisimple”.)
Junyan Xu (Feb 08 2024 at 08:47):
This comment inspired me to produce the following conceptual proof that doesn't require passing to the algebraic closure. Suppose that we have commuting $k$-linear operators $f$ and $g$ acting on $V$, where $k$ is a perfect field, $V$ is semisimple as a $k[f]$-module and also as a $k[g]$-module, and $V$ is finite-dimensional over $k$. The minimal polynomial $p$ of $f$ on $V$ is a separable polynomial, so $k[X]/(p)$ is a product of separable extensions, and the $k[X]$-action on $V$ (through $f$) factors through it.
Junyan Xu (Feb 08 2024 at 08:47):
Similarly the $k[Y]$-action on $V$ (through $g$) factors through a product of fields, so the action of $k[X,Y]$ (and therefore of $f+g$, $fg$, etc.) on $V$ factors through the tensor product $k[X]/(p) \otimes_k k[Y]/(q)$, which is a finite-dimensional commutative reduced $k$-algebra, so it is a finite product of fields (may require some work of @Jujian Zhang on artinian rings) and therefore a semisimple ring, so every module over it is semisimple, in particular $V$.
If $V$ is infinite-dimensional I think we can decompose it into a direct sum of finite-dimensional $k[X,Y]$-submodules, so we should still be able to show that $f+g$, $fg$, etc. act semisimply.
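Schematically, writing $k[X]/(p) \cong \prod_i L_i$ and $k[Y]/(q) \cong \prod_j M_j$ for the product-of-fields decompositions (each factor a finite separable extension of $k$), the chain above reads
$$k[X]/(p) \otimes_k k[Y]/(q) \;\cong\; \prod_{i,j} L_i \otimes_k M_j,$$
and each $L_i \otimes_k M_j$ is a finite-dimensional reduced commutative $k$-algebra (reduced because $M_j/k$ is separable), hence a finite product of fields; so the whole tensor product is a semisimple ring and $V$ is a semisimple module over it.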
Antoine Chambert-Loir (Feb 08 2024 at 09:26):
Of course, one does not need the full separable closure, the extension generated by the roots is sufficient. But still, one needs Galois theory to go down.
Junyan Xu (Feb 11 2024 at 06:31):
Here are some basic missing pieces in the direction of my approach (two sorries left); haven't got to the main argument yet.
import Mathlib.RingTheory.SimpleModule
import Mathlib.Algebra.Algebra.Basic
import Mathlib.RingTheory.Artinian
section Ring
variable {R S M N} [Ring R] [Ring S] [AddCommGroup M] [Module R M] [AddCommGroup N] [Module R N]
namespace IsSimpleModule
variable [IsSimpleModule R M] {m : M} (hm : m ≠ 0)
theorem span_singleton_eq_top : Submodule.span R {m} = ⊤ :=
(eq_bot_or_eq_top _).resolve_left fun h ↦ hm (h.le <| Submodule.mem_span_singleton_self m)
open LinearMap
theorem toSpanSingleton_surjective : Function.Surjective (toSpanSingleton R M m) := by
rw [← range_eq_top, ← span_singleton_eq_range, span_singleton_eq_top hm]
theorem ker_toSpanSingleton_isMaximal : Ideal.IsMaximal (ker (toSpanSingleton R M m)) := by
rw [Ideal.isMaximal_def, ← isSimpleModule_iff_isCoatom]
exact congr (quotKerEquivOfSurjective _ <| toSpanSingleton_surjective hm)
instance : (⊤ : Submodule R M).IsPrincipal :=
have := IsSimpleModule.nontrivial R M
have ⟨m, hm⟩ := exists_ne (0 : M)
⟨⟨m, (span_singleton_eq_top hm).symm⟩⟩
theorem isSimpleModule_iff_quot_maximal :
IsSimpleModule R M ↔ ∃ I : Ideal R, I.IsMaximal ∧ Nonempty (M ≃ₗ[R] R ⧸ I) := by
refine ⟨fun h ↦ ?_, fun ⟨I, ⟨coatom⟩, ⟨equiv⟩⟩ ↦ ?_⟩
· have := IsSimpleModule.nontrivial R M
have ⟨m, hm⟩ := exists_ne (0 : M)
exact ⟨_, ker_toSpanSingleton_isMaximal hm,
⟨(quotKerEquivOfSurjective _ <| toSpanSingleton_surjective hm).symm⟩⟩
· convert congr equiv; rwa [isSimpleModule_iff_isCoatom]
end IsSimpleModule
@[reducible] def IsSemisimpleRing (R) [Ring R] := IsSemisimpleModule R R
-- not needed
-- instance [IsSemisimpleRing R] [IsSemisimpleRing S] : IsSemisimpleRing (R × S) := sorry
instance {ι} [Finite ι] (R : ι → Type*) [∀ i, Ring (R i)] [∀ i, IsSemisimpleRing (R i)] :
IsSemisimpleRing (∀ i, R i) := sorry
theorem RingEquiv.isSemisimpleRing (e : R ≃+* S) [IsSemisimpleRing R] : IsSemisimpleRing S :=
haveI := RingHomInvPair.of_ringEquiv e
haveI := RingHomInvPair.of_ringEquiv e.symm
(Submodule.orderIsoMapComap e.toSemilinearEquiv).complementedLattice
theorem IsSemisimpleModule.congr [IsSemisimpleModule R N] (e : M ≃ₗ[R] N) : IsSemisimpleModule R M :=
(Submodule.orderIsoMapComap e.symm).complementedLattice
instance [IsSemisimpleModule R M] (N : Submodule R M) : IsSemisimpleModule R (M ⧸ N) :=
have ⟨P, compl⟩ := exists_isCompl N
.congr (N.quotientEquivOfIsCompl P compl)
private instance {ι} [IsSemisimpleRing R] : IsSemisimpleModule R (ι →₀ R) := sorry
-- use isSemisimpleModule_of_isSemisimpleModule_submodule'
instance [IsSemisimpleRing R] : IsSemisimpleModule R M :=
.congr (LinearMap.quotKerEquivOfSurjective _ <| Finsupp.total_id_surjective R M).symm
end Ring
section CommRing
variable (R S M) [CommRing R] [CommRing S] [Algebra R S]
[AddCommGroup M] [Module R M] [Module S M] [IsScalarTower R S M]
theorem IsArtinianRing.of_finite [IsArtinianRing R] [Module.Finite R S] : IsArtinianRing S :=
isArtinian_of_tower R isArtinian_of_fg_of_artinian'
abbrev annihilator (R M) [CommSemiring R] [AddCommMonoid M] [Module R M] : Ideal R :=
(⊤ : Submodule R M).annihilator
-- TODO: generalize to `I ≤ annihilator R M`
open DistribMulAction in
instance : Module (R ⧸ annihilator R M) M where
smul := Quotient.lift (toAddMonoidHom M ·) fun r r' h ↦ by
ext m; change r • m = r' • m; rw [← sub_eq_zero, ← sub_smul]
exact Submodule.mem_annihilator.mp ((Submodule.quotientRel_r_def _).mp h) _ trivial
zero_smul := zero_smul _
one_smul := one_smul _
add_smul := by rintro ⟨⟩ ⟨⟩; apply add_smul
mul_smul := by rintro ⟨⟩ ⟨⟩; apply mul_smul
smul_zero := by rintro ⟨r⟩; exact map_zero (toAddMonoidHom M r)
smul_add := by rintro ⟨r⟩; exact map_add (toAddMonoidHom M r)
instance : IsScalarTower R (R ⧸ annihilator R M) M where
smul_assoc := by rintro _ ⟨⟩ _; apply mul_smul
section surj
variable (h : Function.Surjective (algebraMap R S)) {R S M} -- can't switch order
def submoduleEquiv : Submodule R M ≃o Submodule S M where
toFun N := { N with
smul_mem' := fun s m ↦ by obtain ⟨r, rfl⟩ := h s; rw [algebraMap_smul]; apply N.smul_mem }
invFun := Submodule.restrictScalars R
left_inv _ := rfl
right_inv _ := rfl
map_rel_iff' := Iff.rfl
theorem IsSemisimpleModule.iff_of_isScalarTower :
IsSemisimpleModule R M ↔ IsSemisimpleModule S M :=
(submoduleEquiv h).complementedLattice_iff
end surj
theorem IsSemisimpleModule.quot_annihilator_iff :
IsSemisimpleModule R M ↔ IsSemisimpleModule (R ⧸ annihilator R M) M :=
IsSemisimpleModule.iff_of_isScalarTower Quotient.surjective_Quotient_mk''
end CommRing
Antoine Chambert-Loir (Feb 11 2024 at 11:38):
I usually like overkill proofs, but here are the two useful results that mathlib should know, for an endomorphism $u$ of a finite dimensional $K$-vector space $V$ (writing $u_L$ for the endomorphism of $V \otimes_K L$ obtained by extension of scalars):
- If $L$ is an extension of $K$ such that $u_L$ is semi-simple, then $u$ is semi-simple. The converse holds if $L$ is separable over $K$.
- The minimal polynomial of $u$ is separable (has distinct roots in an algebraic closure) if and only if there exists a separable extension $L$ of $K$ such that $u_L$ is diagonalizable.
(This is essentially how Chevalley proves the result requested by @Oliver Nash in his book, Théorie des groupes de Lie, II, Hermann, 1968, chapter I, §8, https://archive.org/details/theoriedesgroupe0000chev)
Antoine Chambert-Loir (Feb 11 2024 at 11:43):
Another approach would be to formalize Bourbaki's chapter on Semisimple modules and rings (it has just been translated into English…) and study the tensor product of semisimple modules. That would certainly be an excellent thing to do, but I'm afraid one can't do that in mathlib until one has a fairly flexible way to talk about left and right modules.
Junyan Xu (Feb 11 2024 at 17:43):
Yeah those two facts would be nice to have. But we would also need the simultaneous diagonalizability result (if both operators are diagonalizable) to use the approach. What's the best way to formalize diagonalizability? Each simple submodule of Module.AEval' f is of dimension 1 (where f is the operator)? You might need to define isotypic components to get the result.
Systematically formalizing Bourbaki might be nice, but we're in the commutative setting here. I think the basic facts about semisimple rings/modules above pretty much suffice for the current purpose. BTW, the book I learned noncommutative algebra from in graduate school is Lorenz's Algebra II and I think it's a very nice reference.
A fact I'll need once or twice is that a finite-dimensional commutative reduced algebra over a field is a product of fields and hence semisimple, which already follows from the Chinese remainder theorem docs#Ideal.quotientInfRingEquivPiQuotient together with the fact that the intersection of maximal ideals is the nilradical due to docs#IsArtinianRing.isMaximal_of_isPrime (which is Jujian's prior work; it doesn't depend on anything not yet in mathlib).
BTW I already killed the second sorry above, and for the first sorry I want to use docs#Submodule.orderIsoMapComap but annoyingly LinearEquiv(Class) requires an InvPair of RingHoms, when one surjective RingHom already suffices. (I want to show that the action of ∀ i, R i on R i factors through R i, so R i being a semisimple R i-module implies it is also semisimple as a ∀ i, R i-module.)
I also figured out an easy way to show $R[Y]/(q)$ is reduced, where $R = k[X]/(p)$ (no tensor products anymore!). We just need to show q is docs#IsRadical in R[Y], and we already know q is separable in k[Y] (squarefree + perfect), so its image in any extension is separable, and hence squarefree; we know that squarefree implies IsRadical in a DecompositionMonoid (#10327), and R[Y] is indeed a decomposition monoid, R being reduced and hence a finite product of fields, since p is squarefree.
The last missing piece is that the minimal polynomial of a semisimple module is squarefree, but that should be easy. (In general the minimal polynomial of a finite sup of submodules is the docs#Finset.lcm of the individual minimal polynomials, and we should prove squarefree_lcm.)
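A possible shape for that last missing lemma (a statement sketch only; the name squarefree_finsetLcm, the typeclass assumptions, and the sorry are all mine, not something already in Mathlib):
import Mathlib.Algebra.Squarefree.Basic
import Mathlib.Algebra.GCDMonoid.Finset
import Mathlib.RingTheory.UniqueFactorizationDomain
/-- A finite lcm of squarefree elements is squarefree: each prime occurs in the lcm
with multiplicity at most the maximum of its multiplicities in the inputs, hence at
most one.  Stated over a UFD with a normalized gcd structure so that `Finset.lcm`
is available. -/
theorem squarefree_finsetLcm {ι α : Type*} [CancelCommMonoidWithZero α]
    [NormalizedGCDMonoid α] [UniqueFactorizationMonoid α]
    (s : Finset ι) (f : ι → α) (hf : ∀ i ∈ s, Squarefree (f i)) :
    Squarefree (s.lcm f) :=
  sorry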
Antoine Chambert-Loir (Feb 11 2024 at 23:41):
I had proved this last missing piece when @Oliver Nash asked for a proof of Jordan Chevalley.
Antoine Chambert-Loir (Feb 11 2024 at 23:43):
Antoine Chambert-Loir (Feb 11 2024 at 23:47):
My proof, however, is not the one you suggest, which is better.
Antoine Chambert-Loir (Feb 11 2024 at 23:54):
To state diagonalizability: one has eigenspaces, and it suffices to say that the space is the direct sum of its eigenspaces; this is equivalent to the minimal polynomial being split and separable.
The simultaneous diagonalizability of two commuting diagonalizable endomorphisms follows from the fact that the eigenspaces of the first one are stable under the second one, whose restriction to each of them is diagonalizable (by the criterion on minimal polynomials).
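One possible Lean rendering of "the space is spanned by the eigenspaces" (a sketch only; IsDiagonalizable is a hypothetical name, and the supremum formulation is equivalent to the internal direct sum since eigenspaces for distinct eigenvalues are independent):
import Mathlib.LinearAlgebra.Eigenspace.Basic
variable {K V : Type*} [Field K] [AddCommGroup V] [Module K V]
/-- An endomorphism is diagonalizable when its eigenspaces together span the whole
space.  (Hypothetical definition, not an existing Mathlib declaration.) -/
def Module.End.IsDiagonalizable (f : Module.End K V) : Prop :=
  (⨆ μ : K, f.eigenspace μ) = ⊤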
Junyan Xu (Feb 28 2024 at 07:32):
My approach is now successful, PR'd as #10808. I've been able to deduce minpoly_squarefree_of_isSemisimple easily from the general fact that the annihilator of a semisimple module must be a radical ideal. The PR can be split into multiple parts but I refrain from doing so without an explicit request.
Johan Commelin (Feb 28 2024 at 07:38):
That PR description is very nice! I'll review asap
Antoine Chambert-Loir (Feb 28 2024 at 08:06):
It is true that it could have been split, but having just read it, the many additions make sense and the bulk of the work is in two parts only.
Last updated: May 02 2025 at 03:31 UTC