Zulip Chat Archive

Stream: Is there code for X?

Topic: Linearly disjoint


Jz Pan (Dec 21 2023 at 17:05):

I think we don't have linearly disjoint https://en.wikipedia.org/wiki/Linearly_disjoint in mathlib yet, and I think I've come to a point where I have to use this concept.

Jz Pan (Dec 21 2023 at 17:07):

algebras A, B over a field k inside some field extension Ω of k are said to be linearly disjoint over k if the following equivalent conditions are met:

  • The map $A\otimes_k B\to AB$ induced by (x, y) ↦ xy is injective.
  • Any k-basis of A remains linearly independent over B.
  • If u_i , v_j are k-bases for A, B, then the products u_i v_j are linearly independent over k.
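
For concreteness, here is how condition 1 can fail (a standard example): take k = ℚ and A = B = ℚ(2^{1/3}) inside Ω = ℂ. Then AB = A, so

$$\dim_k(A\otimes_k B) = 9 > 3 = \dim_k(AB),$$

and the multiplication map cannot be injective.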

Jz Pan (Dec 21 2023 at 17:13):

But I only know the following definition: for any k-embeddings of A and B into Ω, the intersection of their images is k. Is this definition correct?

[EDIT] I don't think so, unless Ω is sufficiently large such that the above-mentioned embeddings exist, and such that Ω / k is normal.

More references: https://stacks.math.columbia.edu/tag/09IC
https://encyclopediaofmath.org/wiki/Linearly-disjoint_extensions
https://mathoverflow.net/questions/8324/what-does-linearly-disjoint-mean-for-abstract-field-extensions <-- it can be defined without mentioning Ω, namely, use the definition that $A\otimes_k B$ is a field
https://math.stackexchange.com/questions/381775/when-are-nonintersecting-finite-degree-field-extensions-linearly-disjoint

Jz Pan (Dec 21 2023 at 17:13):

import Mathlib.FieldTheory.IntermediateField

noncomputable section

variable (k Ω A B : Type*) [CommSemiring k] [Semiring Ω] [Algebra k Ω]
  [Semiring A] [Algebra k A] [Semiring B] [Algebra k B]

def LinearlyDisjoint : Prop := ∀ (i : A →ₐ[k] Ω) (j : B →ₐ[k] Ω), i.range ⊓ j.range = ⊥

Jz Pan (Dec 21 2023 at 17:15):

The particular result I need to use is: if A and B are linearly disjoint intermediate fields of Ω / k, then the degrees of A ⊔ B / A and B / k are equal.

Jz Pan (Dec 21 2023 at 17:17):

Jz Pan said:

But I only know the following definition: for any k-embeddings of A and B into Ω, the intersection of their images is k. Is this definition correct?

Using this definition, it's clear that if A / k is separable and B / k is purely inseparable, then they are linearly disjoint.

Junyan Xu (Dec 27 2023 at 05:03):

But I only know the following definition: for any k-embeddings of A and B into Ω, the intersection of their images is k. Is this definition correct?
[EDIT] I don't think so, unless Ω is sufficiently large such that the above-mentioned embeddings exist, and such that Ω / k is normal.

I doubt the definition is correct, because this answer is supposed to be a counterexample. It says there's an inseparable extension disjoint from the perfect closure, and since the perfect closure is normal, it remains disjoint from the inseparable extension no matter how you change their embeddings. The thing that I fail to see is why the two extensions are not linearly disjoint, but I can prove it if I assume a statement that I can't find anywhere: if L and K are both linearly disjoint from E over F, then the compositum LK is also linearly disjoint from E over F.

Now assume this, and assume for contradiction that the inseparable extension is linearly disjoint from the perfect closure. Since the normal closure of the inseparable extension is the compositum of its conjugates, which are all disjoint from the perfect closure, the normal closure is also linearly disjoint from the perfect closure. But the normal closure can be split into a purely inseparable extension at the bottom followed by a separable extension at the top, and the purely inseparable part is nontrivial and therefore isn't disjoint from the perfect closure, let alone linearly disjoint.

Kevin Buzzard (Dec 27 2023 at 09:52):

I don't know what linearly disjoint means but is F=Q and E,K,L the three cubic subfields of the splitting field of X^3-2 a counterexample?

Junyan Xu (Dec 27 2023 at 15:21):

The cubic subfields are the classical example of disjoint subfields that are not linearly disjoint (so they don't give a counterexample to my hypothesis), but they are not disjoint under every embedding (in fact they're identical under suitable embeddings), so they don't give a counterexample to Jz's claim that we can test linear disjointness using disjointness by varying the embeddings.

Junyan Xu (Dec 27 2023 at 22:57):

I realized my hypothesis is very far from true: consider the (abelian) Galois extensions Q(√2), Q(√3) and Q(√6) of Q. Since they are Galois you can check linear disjointness by checking the intersections are Q, so they're pairwise linearly disjoint. However, the compositum of Q(√2) and Q(√3) contains Q(√6) so is definitely not linearly disjoint from it ...
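
Explicitly: $\sqrt6 = \sqrt2\cdot\sqrt3$ lies in the compositum $\mathbb{Q}(\sqrt2,\sqrt3)$, so

$$[\mathbb{Q}(\sqrt2,\sqrt3)\cdot\mathbb{Q}(\sqrt6):\mathbb{Q}] = [\mathbb{Q}(\sqrt2,\sqrt3):\mathbb{Q}] = 4 \ne 8 = [\mathbb{Q}(\sqrt2,\sqrt3):\mathbb{Q}]\,[\mathbb{Q}(\sqrt6):\mathbb{Q}],$$

whereas linear disjointness over $\mathbb{Q}$ would force the degrees to multiply.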

Kevin Buzzard (Dec 27 2023 at 23:22):

It's rare that "X and Y miss A implies the thing generated by X and Y misses A" is true in general

Jz Pan (Dec 27 2023 at 23:40):

Junyan Xu said:

this answer is supposed to be a counterexample. It says there's an inseparable extension disjoint from the perfect closure, and since the perfect closure is normal, it remains disjoint from the inseparable extension no matter how you change their embeddings. The thing that I fail to see is why the two extensions are not linearly disjoint

In that example, K/F is a simple extension (namely, generated by a root of f(T) := T^(p^2) + x*T^p + y). To check whether K and F^perf are linearly disjoint over F, we only need to check whether K\otimes_F F^perf = F^perf[T]/(f(T)) is a field, namely, whether f(T) is still irreducible over F^perf. Clearly f(T) = (T^p + x^(1/p)*T + y^(1/p))^p, which is NOT irreducible over F^perf.

Jz Pan (Dec 27 2023 at 23:52):

This in fact proves that K and E:=F(x^(1/p),y^(1/p)) are not linearly disjoint over F. It's easy to see that [KE : F] = p^3 which is not equal to [K : F] [E : F] = p^4.
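
Spelled out: [K : F] = deg f = p^2 and [E : F] = p^2, while over E the generator of K is a root of T^p + x^(1/p)*T + y^(1/p), so

$$[KE : F] = [KE : E]\,[E : F] = p\cdot p^2 = p^3 < p^4 = [K : F]\,[E : F].$$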

Junyan Xu (Dec 28 2023 at 00:42):

It's rare that "X and Y miss A implies the thing generated by X and Y misses A" is true in general

Indeed! That was also roughly how I realized it's wrong :sweat_smile:

Jz Pan said:

This in fact proves that K and E:=F(x^(1/p),y^(1/p)) are not linearly disjoint over F. It's easy to see that [KE : F] = p^3 which is not equal to [K : F] [E : F] = p^4.

Thanks! It's actually a standard fact that an extension is linearly disjoint from the perfect closure iff it's separable (e.g. Theorem 12.15(ii) in Clark's notes). I think I was partially misguided by this example when I made my "hypothesis": the compositum of two extensions linearly disjoint from the perfect closure is also linearly disjoint from the perfect closure, because the compositum of two separable extensions is separable. It's also true that an extension is (linearly) disjoint from the (absolute) separable closure (which is Galois) iff it's purely inseparable (almost by definition).

It's still unclear whether my weaker hypothesis, that K being linearly disjoint from E over F implies the normal closure of K/F is also linearly disjoint from E over F, is true.

Kevin Buzzard (Dec 28 2023 at 10:26):

What is the definition of "linearly disjoint" here?

Jz Pan (Dec 28 2023 at 11:45):

Kevin Buzzard said:

What is the definition of "linearly disjoint" here?

Here I use the three equivalent definitions in https://en.wikipedia.org/wiki/Linearly_disjoint. According to https://mathoverflow.net/questions/8324/what-does-linearly-disjoint-mean-for-abstract-field-extensions, if one of the fields is algebraic over the base field, then it is equivalent to the tensor product being a field.

Kevin Buzzard (Dec 28 2023 at 11:55):

So what about $F=\mathbb{Q}$, $K=\mathbb{Q}(2^{1/3})$ and $E=\mathbb{Q}(\sqrt{-3})$?

Jz Pan (Dec 28 2023 at 12:06):

Kevin Buzzard said:

So what about $F=\mathbb{Q}$, $K=\mathbb{Q}(2^{1/3})$ and $E=\mathbb{Q}(\sqrt{-3})$?

That is linearly disjoint; you can check it via, for example, the fact that x^2+3 is still irreducible over K. In fact I believe that any two finite extensions with coprime degrees are linearly disjoint.
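
A sketch for the coprime claim: both [K : F] and [E : F] divide [KE : F], so if they are coprime their product divides it; combined with the general bound $[KE:F] \le [K:F]\,[E:F]$ this forces

$$[KE : F] = [K : F]\,[E : F],$$

which for finite extensions is equivalent to linear disjointness.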

Kevin Buzzard (Dec 28 2023 at 13:25):

Right, and the normal closure of K/F is not linearly disjoint from E so it's a counterexample to your hypothesis if I've understood things correctly

Junyan Xu (Dec 28 2023 at 17:17):

Indeed this example works. Thanks! It's so simple that I feel bad about missing it.

for any k-embeddings of A and B into Ω, the intersection of their images is k. Is this definition correct?

I now feel we can find separable counterexamples to this as well.

Besides: I thought about the proofs of multiplicativity of the (in)separable degree, and it seems we need the fact that [KS:K] = [S:F] and [KS:S] = [K:F] for purely inseparable K/F and separable S/F at some point (either equality is equivalent to linear disjointness in the finite case). The easiest way to show these seems to be to show that an F-linearly independent family in S remains linearly independent over K using Cohn's argument in the other thread; then we immediately get the first equality and that KS is isomorphic to the tensor product of K and S over F (linear disjointness), and then we can deduce that any F-linearly independent family in K remains linearly independent over S and get the second equality. I don't know what proof @Jz Pan has in mind but it seems to me that going through linear disjointness can't be avoided.

Jz Pan (Dec 28 2023 at 19:08):

Junyan Xu said:

... but it seems to me that going through linear disjointness can't be avoided.

I think you are right. Although the intermediate result

For K / E / F with E / F algebraic, separableClosure E K = adjoin E (separableClosure F K)

can be proved without using linear disjointness, the proof of multiplicativity of the (infinite) separable degree (under the assumption that E / F is algebraic, which is required, since otherwise there are counterexamples) needs it.

Here is my sketch. Let's denote S := separableClosure F E.

  • It's easy to prove that separableClosure F K = separableClosure S K.
  • Applying the above intermediate result to K / E / S, we know that separableClosure E K = adjoin E (separableClosure S K).
  • Since E / S is purely inseparable and separableClosure S K / S is separable, they are linearly disjoint, hence [separableClosure E K : E] = [separableClosure S K : S], therefore [K : F]_s = [separableClosure F K : F] = [separableClosure S K : F] = [separableClosure S K : S] [S : F] = [separableClosure E K : E] [S : F] = [K : E]_s [E : F]_s.

It looks like the key point is to prove [separableClosure E K : E] = [separableClosure S K : S]. Let me see if it can be proved without explicitly defining LinearlyDisjoint (but of course we need the property that it preserves linearly independent subsets).

Antoine Chambert-Loir (Dec 28 2023 at 19:13):

Allow me to remind you that Bourbaki (Algebra, chap 4, §5, n°2) does this in a reasonable generality, the same as the one in Wikipedia, so it seems, namely for subalgebras of a given field extension. Browsing the subsequent pages, this notion is used quite extensively for their development of the (in)separable closures.

Junyan Xu (Dec 29 2023 at 05:55):

Here's an example of two extensions K and L of Q of degree p (a prime) that are disjoint from all conjugates of each other but are not linearly disjoint. Write L=Q(x) and it suffices that the minimal polynomial of x splits into two irreducible polynomials of degree > 1 over K: firstly, KL = K(x) would then have degree <= p(p-2) over Q so K and L can't be linearly disjoint over Q, and secondly, K doesn't contain any root of the minimal polynomial because it doesn't split out a degree 1 factor over K, so K is disjoint from all conjugates of L=Q(x) (since [L:Q] is prime, if K is not disjoint from a conjugate of L it must be equal to it). Such an example is given on MO, where p=7 and the minimal polynomial factors into irreducibles of degree 3 and 4 over K.
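
With the numbers from that MO example: p = 7 and the minimal polynomial of x factors over K into irreducibles of degrees 3 and 4, so

$$[KL:\mathbb{Q}] = [K(x):K]\,[K:\mathbb{Q}] \le 4\cdot 7 = 28 < 49 = [K:\mathbb{Q}]\,[L:\mathbb{Q}].$$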

Junyan Xu (Dec 29 2023 at 07:44):

I think the multiplicativity proof I have in mind is slightly less involved, since it doesn't require considering separableClosure S K. Let S = separableClosure F E, S_E = separableClosure E K and S_F = separableClosure F K. We have the intermediate result S_E = E S_F, and our goal is [S_E : E] [S : F] = [S_F : F]. Since [S_F : F] = [S_F : S] [S : F], we just need to show [S_E : E] = [E S_F : E] = [S_F : S], and this follows from the linear disjointness of E and S_F over S. But Cohn actually proves this directly, so no need to introduce linear disjointness here; however if you want to prove the multiplicativity of the inseparable degree [K : S_F] = [K : S_E] [E : S], then it boils down to the other equality [E S_F : S_F] = [E : S], which does seem to require the isomorphism with the tensor product as an intermediary.
I haven't thought through all formalization details though, e.g. S is an IntermediateField F E and you'd need to map it to IntermediateField F K and use inclusion to get an S-algebra structure on S_F.

Jz Pan (Dec 29 2023 at 13:11):

Junyan Xu said:

Here's an example of two extensions K and L of Q of degree p (a prime) that are disjoint from all conjugates of each other but are not linearly disjoint.

I see. Abstractly, PSL_2(F_7) has two non-isomorphic non-conjugate subgroups of index 7, hence K and L are disjoint from all conjugates of each other. But it doesn't have subgroups of index 49, so they cannot be linearly disjoint.

Junyan Xu (Dec 29 2023 at 16:51):

There are two conjugacy classes of subgroups of index 7, but subgroups in both classes are still isomorphic (to $S_4$). See e.g. page 6 of this paper which apparently contains a proof, or page 7 of this. (On the other hand this page seems to wrongly suggest there is only one conjugacy class.)

Jz Pan (Jan 01 2024 at 21:02):

Junyan Xu said:

... Cohn actually proves this directly, so no need to introduce linear disjointness here; however if you want to prove the multiplicativity of the inseparable degree [K : S_F] = [K : S_E] [E : S], then it boils down to the other equality [E S_F : S_F] = [E : S], which does seem to require the isomorphism with the tensor product as an intermediary.

Probably we can use the equivalence of definitions (2) and (3) of linearly disjoint? I think Cohn's book actually proves that (2) holds. Condition (2) by its definition is not symmetric in A and B, but condition (3) is. So if we can prove the equivalence of (2) and (3), then we may deduce (2) with A and B exchanged, hence prove the multiplicativity of the inseparable degree. Is this correct?

Junyan Xu (Jan 01 2024 at 23:28):

The three conditions are:

- The map $$ A\otimes _{k}B\to AB $$ induced by `(x, y) ↦ x y` is injective.
- Any `k`-basis of `A` remains linearly independent over `B`.
- If `u_i , v_j` are `k`-bases for `A`, `B`, then the products `u_i v_j` are linearly independent over `k`.

We can certainly connect (2) and (3) directly, but it's not really more difficult than connecting with (1). (We already have docs#Basis.tensorProduct.) So we might as well just introduce the notion of linear disjointness.
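
For reference, here is a minimal sketch of the docs#Basis.tensorProduct statement just mentioned (the instance assumptions below are one plausible instantiation of its signature, not taken from the thread):

import Mathlib.LinearAlgebra.TensorProductBasis

open scoped TensorProduct

-- Pure tensors of basis vectors form a basis of the tensor product,
-- indexed by pairs: this is condition (3) phrased at the level of `M ⊗[R] N`.
example {R M N ι κ : Type*} [CommRing R] [AddCommGroup M] [AddCommGroup N]
    [Module R M] [Module R N] (b : Basis ι R M) (c : Basis κ R N) :
    Basis (ι × κ) R (M ⊗[R] N) :=
  b.tensorProduct c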

Jz Pan (Jan 02 2024 at 00:13):

This is my first attempt:

import Mathlib.FieldTheory.Adjoin

open scoped Classical Polynomial

open FiniteDimensional Polynomial IntermediateField Field

noncomputable section

universe u v w

variable (F : Type u) (E : Type v) [Field F] [Field E] [Algebra F E]

variable (K : Type w) [Field K] [Algebra F K]

namespace IntermediateField

variable {F E}

variable (A B : IntermediateField F E)

/-- Two intermediate fields `A` and `B` of `E / F` are called linearly disjoint, if any `F`-linearly
independent subset of `A` remains linearly independent over `B`. -/
def LinearlyDisjoint := ∀ a : Set A, LinearIndependent F (fun x : a ↦ x.1) →
    LinearIndependent B (fun x : a ↦ x.1.1)

variable {A B}

theorem linearlyDisjoint_iff_linearIndependent_mul :
    LinearlyDisjoint A B ↔ ∀ (a : Set A) (b : Set B), LinearIndependent F (fun x : a ↦ x.1) →
      LinearIndependent F (fun y : b ↦ y.1) →
      LinearIndependent F (fun x : a × b ↦ x.1.1.1 * x.2.1.1) := by
  sorry

theorem LinearlyDisjoint.comm (h : LinearlyDisjoint A B) : LinearlyDisjoint B A := by
  rw [linearlyDisjoint_iff_linearIndependent_mul] at h ⊢
  intro a b ha hb
  rw [← linearIndependent_equiv (Equiv.prodComm b a)]
  convert h b a hb ha
  exact mul_comm _ _

end IntermediateField

Jz Pan (Jan 02 2024 at 00:13):

Currently I restrict myself to IntermediateFields. Later we'll see if they can be generalized.

Jz Pan (Jan 02 2024 at 00:14):

Also, my writing LinearIndependent F (fun x : a ↦ x.1) is slightly different from what appears in the definition of Module.rank. Does this matter?

Junyan Xu (Jan 02 2024 at 00:34):

Looks good! What's slightly different? To me they look syntactically equal. You used both linear independence in A and in E in the definition, but it should be easy to convert between them using docs#LinearMap.linearIndependent_iff.

Jz Pan (Jan 02 2024 at 00:43):

Junyan Xu said:

Looks good! What's slightly different? To me they look syntactically equal.

In Module.rank it is LinearIndependent F ((↑) : a → A), where a becomes a Subtype, but in LinearIndependent F (fun x : a ↦ x.1) the a is a Set.Elem. It's not clear to me what the difference between them is, and for LinearIndependent F (fun x : a × b ↦ x.1.1.1 * x.2.1.1) it's a product of two Set.Elems, which I don't know how to write as a Subtype.

You used both linear independence in A and in E in the definition, but it should be easy to convert between them using docs#LinearMap.linearIndependent_iff.

I'm not sure if it's better to write LinearIndependent F (fun x : a ↦ x.1) or LinearIndependent F (fun x : a ↦ x.1.1); the first one is in A, the second one is in E. They are equivalent since A → E is injective, but you need some existing theorems to transfer between them.

Junyan Xu (Jan 02 2024 at 03:03):

Isn't docs#Set.Elem also a subtype? Its definition is literally ↑s = { x : α // x ∈ s }.
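
For instance, this sanity check is closed by rfl:

import Mathlib.Data.Set.Basic

-- `Set.Elem` is by definition the subtype, so `rfl` closes the goal.
example {α : Type*} (s : Set α) : s.Elem = {x : α // x ∈ s} := rfl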

Jz Pan (Jan 03 2024 at 19:42):

OK, I have managed to prove the equivalence of (2) and (3):

import Mathlib.FieldTheory.Adjoin
import Mathlib.LinearAlgebra.TensorProductBasis

open scoped Classical Polynomial

open FiniteDimensional Polynomial IntermediateField Field

noncomputable section

universe u v w

variable (F : Type u) (E : Type v) [Field F] [Field E] [Algebra F E]

variable (K : Type w) [Field K] [Algebra F K]

namespace IntermediateField

variable {F E}

variable (A B : IntermediateField F E)

/-- Two intermediate fields `A` and `B` of `E / F` are called linearly disjoint, if any `F`-linearly
independent subset of `A` remains linearly independent over `B`. Marked as `protected` because later
we will define linearly disjoint for two abstract fields over a base field. -/
protected def LinearlyDisjoint := ∀ {a : Set A}, LinearIndependent F (fun x : a ↦ x.1) →
    LinearIndependent B (fun x : a ↦ x.1.1)

variable {A B}

private lemma key_lemma {a : Set A} {b : Set B} (l : a × b →₀ F) (L : a →₀ B)
    {h0 : Finsupp.total b B F (fun y ↦ y.1) 0 = 0}
    (h : L = l.curry.mapRange (Finsupp.total b B F (fun y ↦ y.1)) h0) :
    Finsupp.total (a × b) E F (fun x : a × b ↦ x.1.1.1 * x.2.1.1) l =
      Finsupp.total a E B (fun x : a ↦ x.1.1) L := by
  let g (x : a) (y : b) (z : F) := z • (x.1.1 * y.1.1)
  rw [Finsupp.total_apply, Finsupp.total_apply, h,
    Finsupp.sum_mapRange_index (by exact fun _ ↦ zero_smul B _),
    ← l.sum_curry_index g (fun _ _ ↦ zero_smul F _) (fun _ _ _ _ ↦ add_smul _ _ _),
    Finsupp.sum]
  congr
  ext x
  simp_rw [Finsupp.total_apply, Finsupp.sum, Finset.sum_smul]
  congr
  ext y
  simp_rw [Algebra.smul_def, map_mul, mul_comm x.1.1 y.1.1, ← mul_assoc]
  rfl

theorem LinearlyDisjoint.linearIndependent_mul (H : A.LinearlyDisjoint B)
    {a : Set A} {b : Set B}
    (ha : LinearIndependent F (fun x : a ↦ x.1))
    (hb : LinearIndependent F (fun y : b ↦ y.1)) :
    LinearIndependent F (fun x : a × b ↦ x.1.1.1 * x.2.1.1) := by
  replace H := H ha
  rw [linearIndependent_iff] at H hb ⊢
  intro l hl
  let L := l.curry.mapRange (Finsupp.total b B F (fun y ↦ y.1)) (map_zero _)
  ext ⟨x, y⟩
  rw [Finsupp.zero_apply, ← Finsupp.curry_apply, hb (l.curry x) <| by
    simpa only [Finsupp.onFinset_apply, Finsupp.zero_apply] using
      congr($(H _ (key_lemma l L rfl ▸ hl)) x), Finsupp.zero_apply]

theorem LinearlyDisjoint.of_linearIndependent_mul
    (H : ∀ {a : Set A} {b : Set B}, LinearIndependent F (fun x : a ↦ x.1) →
      LinearIndependent F (fun y : b ↦ y.1) →
      LinearIndependent F (fun x : a × b ↦ x.1.1.1 * x.2.1.1)) :
    A.LinearlyDisjoint B := fun {a} ha ↦ by
  let b := Basis.ofVectorSpaceIndex F B
  let basis : Basis b F B := Basis.ofVectorSpace F B
  have hb : LinearIndependent F (fun y : b ↦ y.1) := by
    convert basis.linearIndependent
    ext x
    congr
    exact (Basis.extend_apply_self _ _).symm
  replace H := H ha hb
  rw [linearIndependent_iff] at H ⊢
  intro l hl
  let L := Finsupp.finsuppProdEquiv.symm (l.mapRange basis.repr (map_zero _))
  have : l = (Finsupp.finsuppProdEquiv L).mapRange
      (Finsupp.total b B F (fun y ↦ y.1)) (map_zero _) := by
    rw [Finsupp.finsuppProdEquiv.apply_symm_apply,
      ← Finsupp.mapRange_comp _ rfl _ (map_zero _) (by rw [Function.comp_apply, map_zero]; rfl)]
    convert l.mapRange_id.symm
    ext y
    rw [id, Function.comp_apply]
    congr
    convert basis.total_repr y
    ext x
    congr
    exact (Basis.ofVectorSpace_apply_self F B x).symm
  have := H L (key_lemma L l this ▸ hl)
  rw [Finsupp.finsuppProdEquiv.symm_apply_eq] at this
  change _ = 0 at this
  apply_fun Finsupp.mapRange basis.repr.symm (map_zero _) at this
  rw [Finsupp.mapRange_zero, ← Finsupp.mapRange_comp basis.repr.symm (map_zero _) basis.repr
    (map_zero _) (by simp_rw [Function.comp_apply, map_zero])] at this
  have : l.mapRange id rfl = 0 := by
    convert this
    exact basis.repr.toEquiv.symm_comp_self.symm
  rwa [Finsupp.mapRange_id] at this

theorem linearlyDisjoint_iff_linearIndependent_mul :
    A.LinearlyDisjoint B ↔ ∀ {a : Set A} {b : Set B}, LinearIndependent F (fun x : a ↦ x.1) →
      LinearIndependent F (fun y : b ↦ y.1) →
      LinearIndependent F (fun x : a × b ↦ x.1.1.1 * x.2.1.1) :=
  ⟨fun H _ _ ha hb ↦ H.linearIndependent_mul ha hb, LinearlyDisjoint.of_linearIndependent_mul⟩

theorem LinearlyDisjoint.comm (H : A.LinearlyDisjoint B) : B.LinearlyDisjoint A := by
  rw [linearlyDisjoint_iff_linearIndependent_mul] at H ⊢
  intro a b ha hb
  rw [← linearIndependent_equiv (Equiv.prodComm b a)]
  convert H hb ha
  exact mul_comm _ _

theorem linearlyDisjoint_comm : A.LinearlyDisjoint B ↔ B.LinearlyDisjoint A :=
  ⟨LinearlyDisjoint.comm, LinearlyDisjoint.comm⟩

end IntermediateField

/-- Two abstract fields `E` and `K` over `F` are called linearly disjoint, if their tensor product
over `F` is a field. -/
def LinearlyDisjoint := IsField (TensorProduct F E K)

variable {F E K} in
theorem LinearlyDisjoint.isAlgebraic (H : LinearlyDisjoint F E K) :
    Algebra.IsAlgebraic F E ∨ Algebra.IsAlgebraic F K := by
  sorry

Jz Pan (Jan 03 2024 at 19:44):

But it requires set_option maxHeartbeats 400000. By splitting it into three parts, the option can be removed, but it's still very slow. :sweat_smile: It was a struggle to write down such a proof.

Perhaps someone could golf the proof a little bit?

[EDIT] I found that the two rw [← Finsupp.mapRange_comp] in the proof of LinearlyDisjoint.of_linearIndependent_mul are the same thing: it's basis.repr.symm (which should be proved to be equal to Finsupp.total b B F (fun y ↦ y.1)) and basis.repr. I don't know how to optimize it, though.

Junyan Xu (Jan 03 2024 at 19:50):

I'll take a look later! I forgot to mention this, but it would be handy to prove that the existence of a basis of A/F that is linearly independent over B is sufficient to guarantee linear disjointness.

Jz Pan (Jan 03 2024 at 20:05):

Junyan Xu said:

it would be handy to prove that the existence of a basis of A/F that is linearly independent over B is sufficient to guarantee linear disjointness.

OK, noted it.

Jz Pan (Jan 04 2024 at 00:27):

I plan to open a draft PR. Is it better to put it inside FieldTheory, or LinearAlgebra?

Jz Pan (Jan 04 2024 at 00:51):

This is slightly better:

theorem LinearlyDisjoint.of_linearIndependent_mul
    (H : ∀ {a : Set A} {b : Set B}, LinearIndependent F (fun x : a ↦ x.1) →
      LinearIndependent F (fun y : b ↦ y.1) →
      LinearIndependent F (fun x : a × b ↦ x.1.1.1 * x.2.1.1)) :
    A.LinearlyDisjoint B := fun {a} ha ↦ by
  let b := Basis.ofVectorSpaceIndex F B
  let basis : Basis b F B := Basis.ofVectorSpace F B
  have hb : LinearIndependent F (fun y : b ↦ y.1) := by
    convert basis.linearIndependent
    ext x
    congr
    exact (Basis.extend_apply_self _ _).symm
  replace H := H ha hb
  rw [linearIndependent_iff] at H ⊢
  intro l hl
  let L := Finsupp.finsuppProdEquiv.symm (l.mapRange basis.repr (map_zero _))
  have : l = (Finsupp.finsuppProdEquiv L).mapRange
      (Finsupp.total b B F (fun y ↦ y.1)) (map_zero _) := by
    rw [Finsupp.finsuppProdEquiv.apply_symm_apply,
      ← Finsupp.mapRange_comp _ rfl _ (map_zero _) (by rw [Function.comp_apply, map_zero]; rfl)]
    convert l.mapRange_id.symm
    ext y
    rw [id, Function.comp_apply]
    congr
    convert basis.total_repr y
    ext x
    congr
    exact (Basis.ofVectorSpace_apply_self F B x).symm
  rwa [H L (key_lemma L l this ▸ hl), show Finsupp.finsuppProdEquiv 0 = 0 from rfl,
    Finsupp.mapRange_zero] at this

It only requires one rw [← Finsupp.mapRange_comp].

Jz Pan (Jan 04 2024 at 18:24):

:dizzy::dizzy::dizzy: Do we have this?

import Mathlib.FieldTheory.Adjoin

variable (F E : Type*) [Field F] [Field E] [Algebra F E]

private lemma test2 {a b : Type*} {v : a → b →₀ F} (H : LinearIndependent F v) :
    LinearIndependent E (fun x : a ↦ (v x).mapRange (algebraMap F E) (map_zero _)) := by
  sorry

Namely, if a matrix over F has linearly independent columns, then its base change over E also has linearly independent columns. The only proof I know for it uses matrix determinants.

Jz Pan (Jan 04 2024 at 18:29):

the existence of a basis of A/F that is linearly independent over B is sufficient to guarantee linear disjointness.

It seems that I need to use the above result to prove this.

Junyan Xu (Jan 04 2024 at 21:25):

Here's an abstract way to prove this:
docs#Algebra.TensorProduct.basis gives you an isomorphism between E ⊗[F] (a →₀ F) and a →₀ E (same with a replaced by b), and recall that a family of vectors in an F-module M is linearly independent iff the induced map (Finsupp.total) from a →₀ F to M (= b →₀ F in our case) is injective. What you want to prove is equivalent to the injectivity of the base-changed map from a →₀ E to E ⊗[F] M ≅ b →₀ E, so all you need is that E/F is flat. Mathlib knows docs#Module.Flat.of_free, but our definition of flatness applies to ideals in F, not arbitrary F-modules. Either we wait for Jujian's PRs to land in mathlib (I should probably go back to review them ...), or you could extract lemmas from the proofs of docs#Module.Flat.directSum, docs#Module.Flat.self etc. that apply to an arbitrary module M rather than just ideals.

Jz Pan (Jan 04 2024 at 22:25):

You mean more generally we should prove that if M is an F-module and a subset of M is F-linearly independent (namely, the induced linear map (a →₀ F) → M is injective), then the induced subset in M ⊗[F] E is E-linearly independent. So we need to prove that - ⊗[F] E preserves injections (when E / F is flat).

Jz Pan (Jan 04 2024 at 22:26):

But that proof must involve a determinant argument, as I said before, if I remember correctly (from Atiyah–Macdonald or Matsumura).

Junyan Xu (Jan 04 2024 at 23:29):

I'm not sure what's the determinant argument? We're not necessarily dealing with a square matrix, right?

So we need to prove that - ⊗[F] E preserves injections (when E / F is flat).

Yes, currently the flatness argument for a free F-module E goes like this:

(1) - ⊗[F] F preserves injectivity, because we have natural isomorphisms M ⊗[F] F ≅ M (docs#Module.Flat.self);
(2) if - ⊗[F] N i preserves injectivity for a family of modules N i, then - ⊗[F] N also preserves injectivity, where N is the direct sum of the N i (docs#Module.Flat.directSum);
(3) if - ⊗[F] M preserves injectivity and M and N are isomorphic F-modules, then - ⊗[F] N also preserves injectivity (docs#Module.Flat.of_linearEquiv);
(4) since a →₀ F is isomorphic to a direct sum of copies of F, - ⊗[F] (a →₀ F) preserves injectivity by (1), (2) and (3) (docs#Module.Flat.finsupp);
(5) since a free module M is isomorphic to some a →₀ F, - ⊗[F] M preserves injectivity by (3) and (4) (docs#Module.Flat.of_free).

However, currently all these proofs quantify over ideals of R rather than arbitrary modules, and you could just lift the quantifier one level up to get the general versions.

Jz Pan (Jan 04 2024 at 23:53):

Junyan Xu said:

I'm not sure what's the determinant argument? We're not necessarily dealing with a square matrix, right?

Let's prove that if a matrix M over F has linearly independent columns, then its base change over E also has linearly independent columns. We may assume that M has only finitely many columns, as the original result holds once it holds for all M[all, J] as J runs over all finite subsets of column indices.

Say M has n columns. Then its columns being linearly independent is equivalent to the existence of n rows such that the resulting n × n submatrix has non-zero determinant (namely, the column rank is equal to the rank). Now consider the base change of M to E. The above-mentioned n × n submatrix, base changed to E, still has non-zero determinant. This means that the base change of M to E still has linearly independent columns.

Jz Pan (Jan 04 2024 at 23:58):

I don't know any direct proof of this: namely, supposing there is an E-linear combination of the columns of M which sums to zero, it's not obvious how to produce an F-linear combination of them which still sums to zero, hence I don't know how to apply the condition that these columns are F-linearly independent.

Jz Pan (Jan 05 2024 at 00:03):

@[simp]
theorem Matrix.rank_transpose {m : Type u_2} {n : Type u_3} {R : Type u_5} [Fintype n] [Fintype m]
    [LinearOrderedField R] (A : Matrix m n R) :
    Matrix.rank (Matrix.transpose A) = Matrix.rank A
-- TODO: prove this in greater generality.

LinearOrderedField :joy::rolling_on_the_floor_laughing::sweat_smile:

Jz Pan (Jan 05 2024 at 00:08):

The rank of a matrix should be equal to the largest natural number n such that there exists an n × n minor with non-zero determinant. I think we don't have this in mathlib yet. Otherwise Matrix.rank_transpose would be trivial.

Jz Pan (Jan 05 2024 at 00:12):

Abstractly, the rank of a map M → N between two finite-dimensional F-vector spaces should be equal to the largest natural number n such that there exist F^n → M and N → F^n making the composite map F^n → M → N → F^n bijective. Using this it's also easy to prove Matrix.rank_transpose, since the rank of the transpose is the rank of N^* → M^*, and by taking duals of the above we get F^n ← M^* ← N^* ← F^n, and vice versa.
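
In formulas: with

$$\operatorname{rank}(f) = \max\{\, n \mid \exists\, u\colon F^n\to M,\ v\colon N\to F^n,\ v\circ f\circ u \text{ bijective} \,\},$$

dualizing a witness $(u, v)$ for $f$ yields a witness $(v^*, u^*)$ for $f^*$, since $u^*\circ f^*\circ v^* = (v\circ f\circ u)^*$ is again bijective; this gives $\operatorname{rank}(f^*) \ge \operatorname{rank}(f)$, and the reverse inequality follows by applying the same to $f^*$.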

Jz Pan (Jan 05 2024 at 00:22):

By using the above abstract argument, Gaussian elimination is not needed (I think).

Jz Pan (Jan 05 2024 at 02:29):

So we need to prove that - ⊗[F] E preserves injections (when E / F is flat).

I think we can cheat over fields. An F-linear map is injective if and only if it has a left inverse (only true for modules over fields), and tensoring preserves this property (true as long as E is an algebra over a field F), so tensoring preserves injections.
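
In symbols, the whole "cheat" is: if $g\circ f = \mathrm{id}_M$, then

$$(g\otimes\mathrm{id}_E)\circ(f\otimes\mathrm{id}_E) = (g\circ f)\otimes\mathrm{id}_E = \mathrm{id}_{M\otimes_F E},$$

so $f\otimes\mathrm{id}_E$ has a left inverse and is in particular injective.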

Jz Pan (Jan 05 2024 at 20:15):

Got this:

import Mathlib.FieldTheory.Adjoin
import Mathlib.LinearAlgebra.TensorProductBasis

open scoped Classical Polynomial

open FiniteDimensional Polynomial IntermediateField Field

noncomputable section

namespace LinearMap

variable {F M N : Type*} [DivisionRing F] [AddCommGroup M] [AddCommGroup N]
  [Module F M] [Module F N] (f : M →ₗ[F] N)

theorem injective_iff_exists_leftInverse_of_divisionRing :
    Function.Injective f   g : N →ₗ[F] M, Function.LeftInverse g f := by
  refine ⟨fun H ↦ ?_, fun ⟨_, h⟩ ↦ h.injective⟩
  obtain ⟨m, ⟨bm⟩⟩ := Basis.exists_basis F M
  let m' := Set.range (f ∘ bm)
  let e : m ≃ m' := Equiv.ofInjective (f ∘ bm) (H.comp bm.injective)
  have h := (linearIndependent_equiv e (f := Subtype.val)).1 <|
    bm.linearIndependent.map' f (LinearMap.ker_eq_bot_of_injective H)
  let n := h.extend m'.subset_univ
  let bn : Basis n F N := Basis.extend h
  let g' (y : n) : M := if h : y.1 ∈ m' then (bm <| e.symm ⟨y.1, h⟩) else 0
  let g := Finsupp.total n M F g' |>.comp bn.repr.toLinearMap
  have : g.comp f = LinearMap.id := bm.ext fun x ↦ by
    let y' := e x
    change Finsupp.total n M F g' (bn.repr y'.1) = bm x
    let y : n := ⟨y'.1, (Basis.subset_extend h) y'.2⟩
    rw [← show bn y = y'.1 from Basis.extend_apply_self h y,
      bn.repr_self, Finsupp.total_single, one_smul]
    simp_rw [y'.2, dite_true]
    exact congr(bm $(Equiv.ofInjective_symm_apply _ x))
  exact ⟨g, fun x ↦ congr($this x)⟩

end LinearMap

Jz Pan (Jan 05 2024 at 20:19):

There should also be a surjective_iff_exists_rightInverse_of_divisionRing, but I haven't written down the proof yet. I plan to finish the basic properties of linearly disjoint first.

Jz Pan (Jan 05 2024 at 20:25):

And we don't need to repeat the argument for the surjective one, since the injective one already implies that all short exact sequences split, so we can make use of this to prove the surjective one.
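
Concretely, the plan is: apply the injective case to the inclusion $\iota\colon\ker f\hookrightarrow M$ to get $k$ with $k\circ\iota = \mathrm{id}$, take a set-theoretic section $g$ of $f$, and check that

$$g'(y) := g(y) - \iota\bigl(k(g(y))\bigr)$$

is $F$-linear with $f\circ g' = \mathrm{id}_N$; this is exactly what the proof in the next message does.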

Jz Pan (Jan 06 2024 at 00:19):

This proof uses the splitting argument instead of the basis argument. It's not much shorter than the basis argument, though:

theorem surjective_iff_exists_rightInverse_of_divisionRing :
    Function.Surjective f   g : N →ₗ[F] M, Function.RightInverse g f := by
  refine ⟨fun H ↦ ?_, fun ⟨_, h⟩ ↦ h.surjective⟩
  let K := LinearMap.ker f
  obtain ⟨k, hk⟩ := (injective_iff_exists_leftInverse_of_divisionRing _).1 K.injective_subtype
  let g := f.toFun.invFun
  have hg : Function.RightInverse g f := Function.rightInverse_invFun H
  let g' (y : N) : M := g y - K.subtype (k (g y))
  have hg' (x : M) (y : N) (h : f x = y) :
      g' y = (LinearMap.id (R := F) (M := M) - K.subtype.comp k) x := show _ = x - K.subtype _ by
    rw [← sub_eq_zero, ← sub_add, sub_right_comm, sub_add, sub_eq_zero, ← map_sub, ← map_sub]
    have : g y - x ∈ K := by rw [LinearMap.mem_ker, map_sub, h, hg, sub_self]
    rw [show g y - x = K.subtype ⟨_, this⟩ from rfl, hk]
  exact ⟨⟨⟨g', fun x y ↦ by simp_rw [hg' (g x + g y) (x + y) (by rw [map_add, hg, hg]),
    fun z ↦ hg' (g z) z (hg z), map_add]⟩, fun x y ↦ by simp_rw [RingHom.id_apply,
      hg' (x • g y) (x • y) (by rw [map_smul, hg]), hg' (g y) y (hg y), map_smul]⟩, fun y ↦ by
        erw [map_sub, hg, map_coe_ker, sub_zero]⟩

Jz Pan (Jan 06 2024 at 00:21):

But it saves brain. :joy:

Junyan Xu (Jan 06 2024 at 00:30):

You missed docs#LinearMap.exists_rightInverse_of_surjective and docs#LinearMap.exists_leftInverse_of_injective ...

Jz Pan (Jan 06 2024 at 00:34):

Junyan Xu said:

You missed docs#LinearMap.exists_rightInverse_of_surjective and docs#LinearMap.exists_leftInverse_of_injective ...

Oops. I wasted a whole day writing down these proofs. :rolling_on_the_floor_laughing: No wonder I couldn't find them using Loogle: they don't mention Function.Injective or Function.Surjective or Function.LeftInverse or Function.RightInverse.

Kevin Buzzard (Jan 06 2024 at 00:46):

yeah maybe injective should be changed to ker_eq_bot?

Jz Pan (Jan 06 2024 at 01:19):

Kevin Buzzard said:

yeah maybe injective should be changed to ker_eq_bot?

For Ring they are equivalent (but not for Semiring).

Kevin Buzzard (Jan 06 2024 at 01:50):

right but that doesn't justify the naming!

Jz Pan (Jan 06 2024 at 02:12):

Do we have LinearMap.baseChange_id? It can be proved using TensorProduct.AlgebraTensorModule.ext and LinearMap.baseChange_tmul.
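
Something along these lines should work (a sketch using exactly the two lemmas just mentioned; not checked against mathlib):

import Mathlib.RingTheory.TensorProduct

-- Sketch: the base change of the identity map is the identity, checked on
-- pure tensors via `AlgebraTensorModule.ext` and `baseChange_tmul`.
example {R A M : Type*} [CommSemiring R] [Semiring A] [Algebra R A]
    [AddCommMonoid M] [Module R M] :
    (LinearMap.id : M →ₗ[R] M).baseChange A = LinearMap.id := by
  refine TensorProduct.AlgebraTensorModule.ext fun a m ↦ ?_
  simp [LinearMap.baseChange_tmul]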

Junyan Xu (Jan 08 2024 at 08:37):

Jz Pan said:

:dizzy::dizzy::dizzy: Do we have this?

import Mathlib.FieldTheory.Adjoin

variable (F E : Type*) [Field F] [Field E] [Algebra F E]

private lemma test2 {a b : Type*} {v : a → b →₀ F} (H : LinearIndependent F v) :
    LinearIndependent E (fun x : a ↦ (v x).mapRange (algebraMap F E) (map_zero _)) := by
  sorry

Namely, if a matrix over F has linearly independent columns, then its base change over E also has linearly independent columns. The only proof I know for it uses matrix determinants.

I've solved this goal using the tensor product approach, and it takes 90 lines; tensorFinsupp is a more general version of docs#Algebra.TensorProduct.basis that is only R-linear rather than S-linear, but it seems to be missing from mathlib.

import Mathlib.RingTheory.TensorProduct

open scoped TensorProduct

noncomputable section

variable (R M N S ι) [CommSemiring R] [AddCommMonoid M] [Module R M]

@[simps!] def finsuppTensor : (ι →₀ R) ⊗[R] M ≃ₗ[R] ι →₀ M :=
  .ofLinear (TensorProduct.lift <| Finsupp.total _ _ _ Finsupp.lsingle)
    (Finsupp.lsum R (TensorProduct.mk _ _ _ <| Finsupp.single · 1))
    (by ext; simp) (by ext; simp)

@[simps!] def tensorFinsupp : M ⊗[R] (ι →₀ R) ≃ₗ[R] ι →₀ M :=
  (TensorProduct.comm _ _ _).trans (finsuppTensor R M ι)

open Function (Injective)

variable [AddCommMonoid N] [Module R N] {f : M →ₗ[R] N} (hf : Injective f)

theorem tensorFinsupp_rTensor (x : M ⊗[R] (ι →₀ R)) :
    tensorFinsupp R N ι (f.rTensor _ x) = Finsupp.mapRange.linearMap f (tensorFinsupp R M ι x) := by
  refine x.induction_on (by simp) (fun x y ↦ ?_) fun x y hx hy ↦ by simp_rw [map_add, hx, hy]
  apply y.induction_linear (by simp) (fun y z hy hz ↦ ?_) fun i y ↦ ?_
  · simp_rw [TensorProduct.tmul_add, map_add, hy, hz]
  · simp [TensorProduct.liftAux]

variable {R M N ι}

theorem finsupp_flat : Injective (f.rTensor (ι →₀ R)) :=
  fun x y h ↦ (tensorFinsupp R M ι).injective <| by
    apply_fun tensorFinsupp R N ι at h
    simp_rw [tensorFinsupp_rTensor] at h
    ext i; exact hf (FunLike.congr_fun h i)

variable [AddCommMonoid S] [Module R S]

open LinearMap LinearEquiv

nonrec def LinearEquiv.lTensor (f : M ≃ₗ[R] N) : S ⊗[R] M ≃ₗ[R] S ⊗[R] N :=
  .ofLinear (f.lTensor S) (f.symm.lTensor S) (by simp [← lTensor_comp]) (by simp [← lTensor_comp])

variable [Module.Free R S]

theorem flat_of_free : Injective (f.rTensor S) := fun x y h ↦ by
  have b := Module.Free.chooseBasis R S
  apply (b.repr.lTensor M).injective
  apply finsupp_flat hf
  simp_rw [LinearEquiv.lTensor, ofLinear_apply, ← comp_apply,
    rTensor_comp_lTensor, ← lTensor_comp_rTensor]
  exact congr_arg (b.repr.lTensor N) h

theorem flat_of_free' : Injective (f.lTensor S) :=
  fun x y h ↦ (TensorProduct.comm R M S).symm.injective <| flat_of_free S hf <| by
    simpa [← comm_comp_lTensor_comp_comm_eq] using h

end

variable {R S M ι} [CommRing R] [Ring S] [Algebra R S] [Module.Free R S]
  [AddCommGroup M] [Module R M]

theorem LinearIndependent.algebraTensor {v : ι → M} (h : LinearIndependent R v) :
    LinearIndependent S (TensorProduct.mk R S M 1 ∘ v) := by
  rw [linearIndependent_iff_injective_total] at h ⊢
  rw [← (tensorFinsupp R S ι).toEquiv.injective_comp]
  convert flat_of_free' S h
  ext x; refine x.induction_on (by simp) (fun x y ↦ ?_) fun x y hx hy ↦ ?_
  · apply y.induction_linear (by simp) (fun y z hy hz ↦ ?_) fun i y ↦ ?_
    · rw [Function.comp_apply, LinearEquiv.coe_toEquiv] at hy hz ⊢
      simp_rw [TensorProduct.tmul_add, map_add, hy, hz]
    · simp [TensorProduct.liftAux, TensorProduct.smul_tmul']
  rw [Function.comp_apply, LinearEquiv.coe_toEquiv] at hx hy ⊢
  simp_rw [map_add, hx, hy]

variable {a b : Type*} {v : a → b →₀ R} (H : LinearIndependent R v)

theorem LinearIndependent.mapRange_linearMap :
    LinearIndependent S (Finsupp.mapRange.linearMap (Algebra.ofId R S).toLinearMap ∘ v) :=
  .of_comp (Algebra.TensorProduct.basis S Finsupp.basisSingleOne).repr.symm.toLinearMap <| by
    convert H.algebraTensor (S := S)
    rw [← Function.comp.assoc]
    apply congr_arg (· ∘ v)
    ext x; apply x.induction_linear (by simp) (fun x y hx hy ↦ ?_) fun i x ↦ ?_
    · rw [Function.comp_apply] at hx hy ⊢; simp_rw [map_add, hx, hy]
    · simp [Algebra.ofId_apply, TensorProduct.smul_tmul', TensorProduct.smul_tmul]

theorem LinearIndependent.mapRange :
    LinearIndependent S fun x : a ↦ (v x).mapRange (algebraMap R S) (map_zero _) :=
  H.mapRange_linearMap

Junyan Xu (Jan 08 2024 at 08:39):

@Richard Copley may be interested because the first part of the above code proves that a free module is flat in 50 lines. (cross ref)

Richard Copley (Jan 08 2024 at 13:05):

Glorious!

Jz Pan (Jan 08 2024 at 13:23):

Junyan Xu said:

I've solved this goal using the tensor product approach and it takes 90 lines

Thank you very much! I'm stuck with the last sorry:

import Mathlib.FieldTheory.Adjoin
import Mathlib.LinearAlgebra.TensorProductBasis

open scoped Classical Polynomial

open FiniteDimensional Polynomial IntermediateField Field

noncomputable section

universe u v w

variable (F : Type u) (E : Type v) [Field F] [Field E] [Algebra F E]

private lemma test2 {a b : Type*} {v : a → b →₀ F} (H : LinearIndependent F v) :
    LinearIndependent E fun x : a ↦ (v x).mapRange (algebraMap F E) (map_zero _) := by
  let f := Finsupp.total a _ F v
  obtain ⟨g, hg : g.comp f = _⟩ := LinearMap.exists_leftInverse_of_injective _ H
  let f' := f.baseChange E
  let g' := g.baseChange E
  have hg' : g'.comp f' = LinearMap.id := by
    convert (LinearMap.baseChange_comp (A := E) f g).symm
    rw [hg]
    refine TensorProduct.AlgebraTensorModule.ext fun x y ↦ ?_
    rw [LinearMap.baseChange_tmul]
    rfl
  let ba := Algebra.TensorProduct.basis E (Basis.ofRepr (LinearEquiv.refl (R := F) (M := a →₀ F)))
  let bb := Algebra.TensorProduct.basis E (Basis.ofRepr (LinearEquiv.refl (R := F) (M := b →₀ F)))
  let f'' := (bb.repr.toLinearMap.comp f').comp ba.repr.symm.toLinearMap
  let g'' := (ba.repr.toLinearMap.comp g').comp bb.repr.symm.toLinearMap
  have hg'' : g''.comp f'' = LinearMap.id := by
    sorry
  rw [LinearIndependent]
  convert LinearMap.ker_eq_bot_of_inverse hg''
  refine (Basis.ofRepr (LinearEquiv.refl (R := E) (M := a →₀ E))).ext fun x ↦ ?_
  simp only [Basis.coe_ofRepr, LinearEquiv.refl_symm, LinearEquiv.refl_apply, Finsupp.total_single,
    one_smul, Basis.coe_repr_symm, LinearMap.coe_comp, LinearEquiv.coe_coe, Function.comp_apply,
    Algebra.TensorProduct.basis_apply, LinearMap.baseChange_tmul,
    Algebra.TensorProduct.basis_repr_tmul]

Richard Copley (Jan 08 2024 at 13:42):

  have hg'' : g''.comp f'' = LinearMap.id := by
    change ba.repr ∘ₗ (g' ∘ₗ (bb.repr.symm.toLinearMap ∘ₗ bb.repr.toLinearMap) ∘ₗ f') ∘ₗ
        ba.repr.symm.toLinearMap = LinearMap.id
    rw [← LinearEquiv.coe_trans, LinearEquiv.self_trans_symm, LinearEquiv.refl_toLinearMap,
      LinearMap.id_comp, hg', LinearMap.id_comp, ← LinearEquiv.coe_trans,
      LinearEquiv.symm_trans_self, LinearEquiv.refl_toLinearMap]

Jz Pan (Jan 08 2024 at 13:51):

Thank you very much. Now my proof (only for this particular result) is shorter than Junyan's proof. :joy:

Jz Pan (Jan 11 2024 at 13:11):

I uploaded my code as a draft PR #9651.

Jz Pan (Jan 11 2024 at 14:31):

Junyan Xu said:

I've solved this goal using the tensor product approach, and it takes 90 lines; tensorFinsupp is a more general version of docs#Algebra.TensorProduct.basis that is only R-linear rather than S-linear, but it seems to be missing from mathlib.

Would you submit your code as a PR?

Jz Pan (Feb 20 2024 at 22:45):

@Junyan Xu I think your finsuppTensor and tensorFinsupp are special cases of docs#finsuppTensorFinsupp. Maybe your results can be put into that file?

Richard Copley (Feb 20 2024 at 22:50):

Oh, @Antoine Chambert-Loir was also looking for a result similar to finsuppTensor and tensorFinsupp earlier today.

Antoine Chambert-Loir (Feb 21 2024 at 10:24):

@Junyan Xu, how do your finsuppTensor and tensorFinsupp compare with what I indicated here:
https://leanprover.zulipchat.com/#narrow/stream/217875-Is-there-code-for-X.3F/topic/Tensor.20Product/near/422584588

Junyan Xu (Feb 21 2024 at 16:49):

If your construction is easy (update: I see your PR and it's indeed easy), I think my finsuppTensor should be defined using your tensorProductLeft together with docs#Finsupp.mapRange.equiv, even though the other way around is also possible (requires associativity of tensor product and two applications of finsuppTensor).

Junyan Xu (Mar 02 2024 at 22:48):

Here's a puzzle from @Jz Pan's last question here. What's an example of two non-torsion elements m : M and n : N in two R-modules M and N such that m ⊗ n is torsion in M ⊗ N? (Here I'm interested in the notion of non-torsion in the sense that r : R ↦ r • m is injective, or that {m} is R-linearly independent.) As Jz observed, neither M nor N could be flat. If we could produce such an example we could probably massage it into a counterexample to item 3 in the link.

Kevin Buzzard (Mar 02 2024 at 22:50):

Is RR commutative or any ring (or do you care about semirings or whatever?)

Junyan Xu (Mar 02 2024 at 22:53):

Yeah you may assume R is a commutative ring.

Junyan Xu (Mar 02 2024 at 22:54):

Here is an example of a tensor product of torsion-free modules that is not torsion-free, but the given torsion element is not a pure tensor.

Kevin Buzzard (Mar 02 2024 at 22:56):

ha ha I was just working through exactly that example confident that it would be a counterexample!

Junyan Xu (Mar 06 2024 at 07:07):

SE suggested this question to me when I was about to ask mine (I had only typed the title), with an answer saying such an example doesn't exist. Now I need to verify the proof; maybe it even extends to linearly independent subsets of cardinality > 1?

Antoine Chambert-Loir (Mar 06 2024 at 09:31):

The same argument works for a reduced ring. For the general case, I am not sure (the proof there is written in a fairly obscure way…)

Jz Pan (Mar 23 2024 at 01:40):

@Junyan Xu I'd like to add your definition of finsuppTensor to #11598; currently it is only a draft, some others were added but not your finsuppTensor yet. Comments are welcome.

Jz Pan (Mar 23 2024 at 01:42):

(only basic definitions; not including the following results on flat and free)

Junyan Xu (Mar 23 2024 at 02:45):

Should we wait for #10824 or is this something independent?

Junyan Xu (Mar 23 2024 at 03:53):

I see you're refactoring docs#finsuppTensorFinsupp' through the new finsuppTensorFinsuppLid in #11598. I agree this can be done independently of #10824.

Jz Pan (Mar 23 2024 at 07:21):

Junyan Xu said:

Should we wait for #10824 or is this something independent?

Thanks for the information. Currently my change in #11598 is independent of it, but what I also planned are the finsupp[Scalar](Left|Right) in #10824.

Jz Pan (Mar 23 2024 at 17:08):

By the way, do we have this?

import Mathlib.LinearAlgebra.TensorProduct.Basic

example (R : Type*) [CommSemiring R] : TensorProduct.lid R R = TensorProduct.rid R R := by
  apply LinearEquiv.toLinearMap_injective
  apply TensorProduct.ext'
  simp [mul_comm]

Jz Pan (Mar 23 2024 at 17:09):

@loogle TensorProduct.lid, TensorProduct.rid

Jz Pan (Mar 23 2024 at 17:10):

@loogle TensorProduct.lid, TensorProduct.rid

loogle (Mar 23 2024 at 17:10):

:search: contractLeft_assoc_coevaluation', contractLeft_assoc_coevaluation

Jz Pan (Mar 26 2024 at 19:32):

ping Could someone look at PRs #11598 and #11635?

Jz Pan (Mar 26 2024 at 19:35):

My suggestion to #11635 is that maybe we can mark some apply lemmas as simp lemmas (https://github.com/leanprover-community/mathlib4/pull/10824#discussion_r1536662221).

Jz Pan (Mar 27 2024 at 20:09):

I also need this result

-- TODO: move to suitable file ?
theorem _root_.Module.Flat.preserves_injective_linearMap'
    {R : Type u} {M : Type v} [CommRing R] [AddCommGroup M] [Module R M]
    {N : Type w} [AddCommGroup N] [Module R N] {M' : Type*} [AddCommGroup M'] [Module R M']
    [Module.Flat R N] (L : M →ₗ[R] M') (hL : Function.Injective L) :
    Function.Injective (LinearMap.lTensor N L) := by
  -- rw [Module.Flat.lTensor_inj_iff_rTensor_inj]
  simp_rw [← LinearMap.comm_comp_rTensor_comp_comm_eq, LinearMap.coe_comp, LinearEquiv.coe_coe,
    EmbeddingLike.comp_injective, EquivLike.injective_comp]
  exact Module.Flat.preserves_injective_linearMap L hL

but I don't know if it's worth adding to mathlib. The whole simp_rw can be replaced by the commented-out rw, but that is a deprecated lemma with no replacement. This confuses me about the current design of Module.Flat. Could anyone answer this?

Junyan Xu (Mar 27 2024 at 21:14):

This works:

import Mathlib.RingTheory.Flat.Basic
theorem Module.Flat.preserves_injective_linearMap'
    {R : Type u} {M : Type v} [CommRing R] [AddCommGroup M] [Module R M]
    {N : Type w} [AddCommGroup N] [Module R N] {M' : Type*} [AddCommGroup M'] [Module R M']
    [Module.Flat R N] (L : M →ₗ[R] M') (hL : Function.Injective L) :
    Function.Injective (LinearMap.lTensor N L) := by
  simpa [← LinearMap.comm_comp_rTensor_comp_comm_eq] using preserves_injective_linearMap L hL

The same method is used in several proofs in the file to replace lTensor_inj_iff_rTensor_inj. I'd be fine if you want to restore lTensor_inj_iff_rTensor_inj near comm_comp_rTensor_comp_comm_eq, but it shouldn't be namespaced Module.Flat (and should we have Surjective and Bijective versions too?)

Jz Pan (Mar 28 2024 at 00:07):

Thank you very much.

Junyan Xu said:

I'd be fine if you want to restore lTensor_inj_iff_rTensor_inj near comm_comp_rTensor_comp_comm_eq, but it shouldn't be namespaced Module.Flat (and should we have Surjective and Bijective versions too?)

I have no preference for restoring lTensor_inj_iff_rTensor_inj or not; but perhaps you're right, it should be under the namespace LinearMap instead.

What bothers me is: should I add Module.Flat.preserves_injective_linearMap' to mathlib?

Junyan Xu (Mar 28 2024 at 10:48):

Jz Pan said:

What bothers me is: should I add Module.Flat.preserves_injective_linearMap' to mathlib?

I'd prefer combining Module.Flat.preserves_injective_linearMap and lTensor_inj_iff_rTensor_inj for this. So maybe we should indeed restore the latter ...

Christian Merten (Mar 28 2024 at 10:50):

I'd prefer having both versions, tensoring on the left and tensoring on the right. This will be used so frequently that I think it's worth it to have both.

Jz Pan (Mar 28 2024 at 11:23):

Another point is: should the original Module.Flat.preserves_injective_linearMap be renamed to Module.Flat.rTensor_preserves_injective_linearMap? Then the new Module.Flat.preserves_injective_linearMap' would be called Module.Flat.lTensor_preserves_injective_linearMap.

If you think the names are too long, maybe they can also be called Module.Flat.(l|r)Tensor_preserves_injective, omitting linearMap since (l|r)Tensor can only take a LinearMap as input, so this will not cause ambiguity (well, not quite: injective can also mean injective module (?)).

Jz Pan (Mar 28 2024 at 20:07):

#11748

Jz Pan (Mar 30 2024 at 23:33):

Are these results useful? I need to use them to find finitely generated submodules of M and N once an element of M ⊗[R] N is given.

import Mathlib.LinearAlgebra.TensorProduct.Basic

open scoped TensorProduct

private theorem test1 (R : Type*) [CommSemiring R] (M N : Type*)
    [AddCommMonoid M] [AddCommMonoid N] [Module R M] [Module R N] (x : M ⊗[R] N) :
    ∃ S : Multiset (M × N), x = (S.map fun i ↦ i.1 ⊗ₜ[R] i.2).sum := by
  induction x using TensorProduct.induction_on with
  | zero => exact ⟨0, by simp⟩
  | tmul x y => exact ⟨{(x, y)}, by simp⟩
  | add x y hx hy =>
    obtain ⟨Sx, hx⟩ := hx
    obtain ⟨Sy, hy⟩ := hy
    exact ⟨Sx + Sy, by rw [Multiset.map_add, Multiset.sum_add, hx, hy]⟩

private theorem test2 (R : Type*) [CommSemiring R] (M N : Type*)
    [AddCommMonoid M] [AddCommMonoid N] [Module R M] [Module R N] (x : M ⊗[R] N) :
    ∃ S : M →₀ N, x = S.sum fun m n ↦ m ⊗ₜ[R] n := by
  induction x using TensorProduct.induction_on with
  | zero => exact ⟨0, by simp⟩
  | tmul x y => exact ⟨Finsupp.single x y, by simp⟩
  | add x y hx hy =>
    obtain ⟨Sx, hx⟩ := hx
    obtain ⟨Sy, hy⟩ := hy
    use Sx + Sy
    rw [hx, hy]
    exact (Finsupp.sum_add_index' (by simp) TensorProduct.tmul_add).symm

It seems that they should be an immediate consequence of TensorProduct.span_tmul_eq_top, but I don't know how to write a one-line proof.

Antoine Chambert-Loir (Mar 31 2024 at 07:11):

I used similar results in our work with @María Inés de Frutos Fernández, but finally resorted to the file about direct limits and tensor products.

Jz Pan (Apr 02 2024 at 09:20):

Finally I used this version:

import Mathlib.LinearAlgebra.TensorProduct.Basic
import Mathlib.LinearAlgebra.Dimension.Finite

open scoped Classical TensorProduct

open Submodule

variable {R S : Type*} [CommSemiring R] [Semiring S] [Algebra R S] (M N : Submodule R S)

-- TODO: move to suitable file and give it a better name
variable (R) in
private theorem test1 (M N : Type*)
    [AddCommMonoid M] [AddCommMonoid N] [Module R M] [Module R N] (x : M ⊗[R] N) :
    ∃ S : Multiset (M × N), x = (S.map fun i ↦ i.1 ⊗ₜ[R] i.2).sum := by
  induction x using TensorProduct.induction_on with
  | zero => exact ⟨0, by simp⟩
  | tmul x y => exact ⟨{(x, y)}, by simp⟩
  | add x y hx hy =>
    obtain ⟨Sx, hx⟩ := hx
    obtain ⟨Sy, hy⟩ := hy
    exact ⟨Sx + Sy, by rw [Multiset.map_add, Multiset.sum_add, hx, hy]⟩

-- TODO: move to suitable file and give it a better name
private theorem test2L {ι : Type*} [Fintype ι] (x : ι → M ⊗[R] N) :
    ∃ (M' : Submodule R S) (x' : ι → M' ⊗[R] N) (hM : M' ≤ M), Module.Finite R M' ∧
      ∀ (i : ι), (inclusion hM).rTensor N (x' i) = x i := by
  choose Sx hSx using fun i ↦ test1 R M N (x i)
  let T := ((Finset.sum Finset.univ Sx).map fun i ↦ i.1.1).toFinset
  let M' := span R (T : Set S)
  have hM : M' ≤ M := span_le.2 fun x hx ↦ by
    simp only [Finset.mem_coe, Multiset.mem_toFinset, Multiset.mem_map, Prod.exists,
      exists_and_right, Subtype.exists, exists_eq_right, T] at hx
    obtain ⟨hx, _⟩ := hx
    exact hx
  let f (i : M × N) : M' ⊗[R] N :=
    if h : i.1.1 ∈ T then ⟨i.1.1, subset_span h⟩ ⊗ₜ[R] i.2 else 0
  let x' (i : ι) := ((Sx i).map f).sum
  refine ⟨M', x', hM, inferInstance, fun j ↦ ?_⟩
  simp_rw [x', hSx, map_multiset_sum, Multiset.map_map]
  congr 1
  refine Multiset.map_congr rfl fun i hi ↦ ?_
  replace hi : i.1.1 ∈ T := by
    simp only [Multiset.mem_toFinset, Multiset.mem_map, SetLike.coe_eq_coe, Prod.exists,
      exists_and_right, Subtype.exists, exists_eq_right, T]
    exact ⟨i.2.1, i.2.2, Multiset.mem_of_le (Finset.single_le_sum (by simp) (by simp)) hi⟩
  simp_rw [Function.comp_apply, f, dif_pos hi]; rfl

-- TODO: move to suitable file and give it a better name
private theorem test2Lpair (x y : M ⊗[R] N) :
    ∃ (M' : Submodule R S) (x' y' : M' ⊗[R] N) (hM : M' ≤ M), Module.Finite R M' ∧
      (inclusion hM).rTensor N x' = x ∧ (inclusion hM).rTensor N y' = y := by
  obtain ⟨M', x', hM, hfin, hx⟩ := test2L M N ![x, y]
  exact ⟨M', x' 0, x' 1, hM, hfin, hx 0, hx 1⟩

-- TODO: move to suitable file and give it a better name
private theorem test2R {ι : Type*} [Fintype ι] (x : ι → M ⊗[R] N) :
    ∃ (N' : Submodule R S) (x' : ι → M ⊗[R] N') (hN : N' ≤ N), Module.Finite R N' ∧
      ∀ (i : ι), (inclusion hN).lTensor M (x' i) = x i := by
  obtain ⟨N', x', hN, hfin, hx⟩ := test2L N M fun i ↦ TensorProduct.comm R M N (x i)
  have key : (inclusion hN).lTensor M ∘ₗ TensorProduct.comm R N' M =
      (TensorProduct.comm R M N).symm ∘ₗ (inclusion hN).rTensor M :=
    TensorProduct.ext' fun _ _ ↦ rfl
  refine ⟨N', fun i ↦ TensorProduct.comm R N' M (x' i), hN, hfin, fun i ↦ ?_⟩
  replace hx := congr((TensorProduct.comm R M N).symm $(hx i))
  rw [LinearEquiv.symm_apply_apply] at hx
  simpa only [← hx] using congr($key (x' i))

-- TODO: move to suitable file and give it a better name
private theorem test2Rpair (x y : M ⊗[R] N) :
    ∃ (N' : Submodule R S) (x' y' : M ⊗[R] N') (hN : N' ≤ N), Module.Finite R N' ∧
      (inclusion hN).lTensor M x' = x ∧ (inclusion hN).lTensor M y' = y := by
  obtain ⟨M', x', hM, hfin, hx⟩ := test2R M N ![x, y]
  exact ⟨M', x' 0, x' 1, hM, hfin, hx 0, hx 1⟩

Jz Pan (Apr 02 2024 at 09:23):

By the way, the file LinearAlgebra/TensorProduct/Basic is getting too long (in #11731). What should I do now?

Yaël Dillies (Apr 02 2024 at 09:24):

You can just ignore the error by adding the corresponding line (i.e. Mathlib/LinearAlgebra/TensorProduct/Basic.lean#L1: ERR_NUM_LIN: 1700 file contains 1554 lines, try to split it up) to style-exceptions.txt

Jz Pan (Apr 02 2024 at 09:30):

I see. When the number of lines in Mathlib/LinearAlgebra/TensorProduct/Basic.lean changes, should the corresponding line in style-exceptions.txt also be changed?

Yaël Dillies (Apr 02 2024 at 09:31):

No, only the 1700 matters. That's the number of lines you're allowed

Yaël Dillies (Apr 02 2024 at 09:32):

I complained before that this was very unclear and that the error message didn't tell you how to ignore it. cc @Mario Carneiro who got this right with shake.

Mario Carneiro (Apr 02 2024 at 11:37):

speaking of that error message: although the 1700 has a particular meaning, the formatting is really not good for that; it looks like it's talking about 1700 files, with slightly wrong grammar

Jz Pan (Apr 15 2024 at 23:09):

Now I'm stuck (mathematically) at this:

import Mathlib.LinearAlgebra.Dimension.Free

theorem test9
    {F E : Type*} [CommRing F] [StrongRankCondition F] [Ring E] [Algebra F E]
    (b : F ≃ₗ[F] E) : Function.Surjective (algebraMap F E) := by
  sorry

It is the n = 1 case of <https://mathoverflow.net/questions/89040>, where it is said that it's not hard (TM) to show this result. But I cannot think of a proof of it. I guess that a preimage of y : E could be b.symm y * b.symm (b 1 ^ 2), but I cannot prove that this is correct (unless I assume E is a domain).

Jz Pan (Apr 15 2024 at 23:17):

This should lead to a generalization of docs#Subalgebra.eq_bot_of_finrank_one

Joël Riou (Apr 15 2024 at 23:57):

Let x : E be defined as x := b 1. The multiplication map (y : E) => x * y by x is an endomorphism of E as an F-module, but as E is free of rank one, it must be equal to scalar multiplication by some element a : F (this is like a 1×1 matrix); then for all y : E, we have x * y = a • y. In particular, for y = 1, we get that x is the image of a under algebraMap F E, and this easily implies the result.
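In symbols, the argument runs as follows (a sketch of the steps above):

$$x := b(1), \qquad \mu_x := (y \mapsto x y) \in \operatorname{End}_F(E), \qquad \operatorname{End}_F(E) \cong \operatorname{End}_F(F) \cong F,$$

so $\mu_x = a \cdot \mathrm{id}$ for some $a \in F$; taking $y = 1$ gives $x = a \cdot 1 = \operatorname{algebraMap}\,F\,E\;a$. Then any $y \in E$ can be written as $y = b(c)$ with $c := b^{-1}(y) \in F$, and $y = c \cdot b(1) = c \cdot x = \operatorname{algebraMap}\,F\,E\;(c a)$, so the map is surjective.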

Jz Pan (Apr 16 2024 at 20:46):

Thank you very much, this code works:

import Mathlib.LinearAlgebra.Dimension.Free

theorem bijective_algebraMap_of_linearEquiv
    {F E : Type*} [CommRing F] [Ring E] [Algebra F E]
    (b : F ≃ₗ[F] E) : Function.Bijective (algebraMap F E) := by
  have h : b.symm (b 1 * b (b.symm 1)) = b.symm 1 * b.symm (b 1 * b 1) := by
    let j := b.symm.toLinearMap ∘ₗ .mulLeft F (b 1) ∘ₗ b.toLinearMap
    change j (b.symm 1) = (b.symm 1) • j 1
    rw [← map_smul, smul_eq_mul, mul_one]
  replace h : b 1 = (algebraMap F E) (b.symm (b 1 * b 1)) := by
    apply_fun b at h
    rwa [b.apply_symm_apply, b.apply_symm_apply, mul_one, mul_comm (b.symm 1),
      ← smul_eq_mul, map_smul, b.apply_symm_apply, Algebra.smul_def, mul_one] at h
  refine Function.bijective_iff_has_inverse.2 ⟨fun y ↦ b.symm y * b.symm (b 1 * b 1),
    fun x ↦ b.injective ?_, fun y ↦ ?_⟩
  · dsimp only; rw [mul_comm, ← smul_eq_mul, map_smul, Algebra.smul_def, ← h, b.apply_symm_apply,
      ← Algebra.commutes x (b 1), ← Algebra.smul_def, ← map_smul, smul_eq_mul, mul_one]
  · dsimp only; rw [map_mul, ← h, ← Algebra.smul_def, ← map_smul, smul_eq_mul, mul_one,
      b.apply_symm_apply]

It's a little bit longer, though.

Jz Pan (Apr 21 2024 at 19:14):

Please have a look at this code:

-- unused
-- TODO: move to suitable place
theorem _root_.Subalgebra.finite_iSup_of_finite
    {K L : Type*} [CommSemiring K] [CommSemiring L] [Algebra K L]
    {ι : Type*} {t : ι → Subalgebra K L} [h : Finite ι]
    [∀ i, Module.Finite K (t i)] : Module.Finite K (⨆ i, t i : Subalgebra K L) := by
  rw [← iSup_univ]
  exact Set.Finite.induction_on Set.finite_univ (by rw [iSup_emptyset]; exact finite_bot)
    (fun _ _ _ ↦ by rw [iSup_insert]; exact finite_sup _ _)

-- unused
-- TODO: move to suitable place
theorem _root_.Subalgebra.gc {K L : Type*} [CommSemiring K] [Semiring L] [Algebra K L] :
    GaloisConnection (Algebra.adjoin K) (fun x ↦ (x : Set L)) := fun _ _ ↦ Algebra.adjoin_le_iff

-- unused
-- TODO: move to suitable place
theorem _root_.Subalgebra.biSup_adjoin_simple
    {K L : Type*} [CommSemiring K] [Semiring L] [Algebra K L] (S : Set L) :
    ⨆ x ∈ S, Algebra.adjoin K {x} = Algebra.adjoin K S := by
  rw [← iSup_subtype'', ← gc.l_iSup, iSup_subtype'']; congr; exact S.biUnion_of_singleton

-- unused
-- XXX
theorem _root_.Algebra.adjoin.finite {K L : Type*} [CommRing K] [Ring L] [Algebra K L] {x : L}
    (hx : IsIntegral K x) : Module.Finite K (Algebra.adjoin K {x}) := by
  have := hx.fg_adjoin_singleton
  exact Module.Finite.iff_fg.2 this

-- unused
-- XXX
theorem _root_.Subalgebra.finite_adjoin {K L : Type*} [CommRing K] [CommRing L] [Algebra K L]
    {S : Set L} [Finite S] (hS : ∀ x ∈ S, IsIntegral K x) :
    Module.Finite K (Algebra.adjoin K S) := by
  change Module.Finite K (toSubmodule (Algebra.adjoin K S))
  rw [Module.Finite.iff_fg]
  refine fg_adjoin_of_finite ?_ hS
  rwa [← Set.finite_coe_iff]

The first three are API analogues of results for intermediate fields, which are not in mathlib yet. Originally, I wanted to use them to prove the last two results, since I couldn't find those using loogle. However, I later found them by searching with Submodule.FG instead of Module.Finite. So I think these two don't need to be added to mathlib. But what about the first three?
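For comparison, the existing intermediate field analogue of the gc statement is docs#IntermediateField.gc; here is a minimal check that the statement above mirrors it (a sketch):

import Mathlib

example {K L : Type*} [Field K] [Field L] [Algebra K L] :
    GaloisConnection (IntermediateField.adjoin K : Set L → IntermediateField K L)
      ((↑) : IntermediateField K L → Set L) :=
  IntermediateField.gc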

Jz Pan (Apr 25 2024 at 11:11):

ping: what's the progress on #11731?

Kevin Buzzard (Apr 25 2024 at 13:11):

I'm really sorry, this looks great to me but I don't have access to a computer with internet until Saturday. You might be better off bringing this up in the PR review stream.

Andrew Yang (Apr 25 2024 at 14:01):

Oops, I think this slipped through the maintainer merge queue. Just bumped it there.

Jz Pan (Apr 25 2024 at 19:49):

Kevin Buzzard said:

... You might be better off bringing this up in the PR review stream.

Thank you for the information. But I have trouble subscribing to or viewing this stream; subscribing always fails, and I can only see messages from two years ago in it.

Jz Pan (Apr 25 2024 at 19:50):

I split off the first part of this PR, concerning linear disjointness of submodules, as #12434, and it is ready for review. Comments welcome.

Kevin Buzzard (Apr 25 2024 at 19:52):

#PR reviews is the stream I'm talking about. Can you confirm that you can't see recent messages in this stream? I don't think it's private so this should probably be a bug report unless I've made a mistake

Jz Pan (Apr 25 2024 at 20:12):

I can only see the messages in that stream from around 2022. It's not loading any newer messages. Maybe I should try refreshing the web browser. The Zulip web UI works weirdly and never runs properly in my browser; for example, it doesn't load the newest messages first, but always insists on loading messages from around 2022 first.

Kevin Buzzard (Apr 25 2024 at 20:22):

Try the app?

Jz Pan (Apr 25 2024 at 22:57):

I have just downloaded the Windows app; it is 200MB+ and seems to be just a browser-based app... Looks like nothing has changed yet.

After restarting my computer, there are some recent messages in the PR reviews channel, in both the web UI and the Windows app.

I tried closing and reopening the Windows app, and the whole list of recently participated conversations disappeared and needed to be downloaded again (which takes a few minutes). Huh! It's not even cached locally by the app. This makes the app the same as the web UI, with no advantages. I decided to uninstall it to save a few hundred megabytes on my hard disk.

Kevin Buzzard (Apr 26 2024 at 00:16):

I have never had to wait a few minutes for anything to happen when using the Linux app, so I don't really know what's going on at your end

Jz Pan (Jun 24 2024 at 18:51):

ping could someone look at #12498? https://leanprover.zulipchat.com/#narrow/stream/144837-PR-reviews/topic/.2312498.20some.20linear.20maps.20induced.20by.20mul.2E.20for.20submodules/near/440929818

Kim Morrison (Jun 25 2024 at 01:06):

@Jz Pan, three generic comments:

  1. There are open reviewer comments on the PR. If there's nothing further to do with these, mark them as resolved. Otherwise these will steer new reviewers away from looking at the PR.
  2. There's a merge conflict, which will prevent the PR appearing on the #queue.
  3. Requests to look at PRs are better done in the PR reviews channel.

Jz Pan (Jun 25 2024 at 10:15):

Sorry for the noise, but I posted the thread in the PR reviews channel almost one month ago, and got no response.

Kim Morrison (Jun 25 2024 at 10:22):

I would suggest for a comment like https://github.com/leanprover-community/mathlib4/pull/12498#discussion_r1652446945 that you don't click resolve, but instead mention the reviewer by @name and explain why you disagree with their suggestion. Notice that their suggestion had been up, without a response from you, for the last month.

Jz Pan (Jul 30 2024 at 23:05):

Part 2 of linearly disjoint (the subalgebra part) is submitted as PR #15346 and is ready for review.

Jz Pan (Oct 15 2024 at 16:29):

ping again: any more comments on #15346?

Jz Pan (Oct 28 2024 at 12:17):

OK, #15346 (now: some more properties of linearly disjoint subalgebras, mainly on their ranks) is ready for review. A smaller PR was split off from it and has already been merged.

Jz Pan (Nov 01 2024 at 12:42):

OK, now the main PR #9651 (definition and basic properties of linear disjointness of fields) is ready for review. Some small results on intermediate fields are split off as PR #18515. Comments welcome.

Jz Pan (Nov 08 2024 at 13:55):

ping if someone could look at these two PRs? Thank you very much.

Jz Pan (Nov 28 2024 at 13:55):

@Junyan Xu I'm considering importing AlgebraicIndependent into Mathlib.FieldTheory.LinearDisjoint to drop the algebraic assumption in docs#IntermediateField.LinearDisjoint.rank_sup_of_isAlgebraic. Do you think it's a good idea? It may add a significant number of transitive imports to that file, though.

Junyan Xu (Nov 28 2024 at 15:08):

Sounds good to me! Only +4 trans imports.

Jz Pan (Nov 28 2024 at 15:17):

+4 is reasonable. Besides, AlgebraicIndependent is also needed in order to prove results on Field.LinearDisjoint.

Jz Pan (Dec 01 2024 at 14:25):

PR #19614, concerning <https://mathoverflow.net/questions/8324>, has been created, but it's still WIP. Some results are already added.

It seems that using IsDomain/IsField XXX directly is already convenient, so maybe there's no need to introduce abbrev Field.XXXLinearDisjoint := IsDomain/IsField XXX?
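For the record, such abbreviations would look something like this (a sketch; the names and exact typeclass assumptions are hypothetical):

import Mathlib

open scoped TensorProduct

-- hypothetical names, not in Mathlib; `IsDomain`/`IsField` of `A ⊗[F] B`
-- are the conditions from the MathOverflow question above
abbrev Field.IsDomainLinearDisjoint (F A B : Type*) [Field F]
    [CommRing A] [CommRing B] [Algebra F A] [Algebra F B] : Prop :=
  IsDomain (A ⊗[F] B)

abbrev Field.IsFieldLinearDisjoint (F A B : Type*) [Field F]
    [CommRing A] [CommRing B] [Algebra F A] [Algebra F B] : Prop :=
  IsField (A ⊗[F] B)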

Jz Pan (Dec 01 2024 at 14:27):

Junyan Xu said:

Sounds good to me! Only +4 trans imports.

It's at least 16-7=9 :joy:

Junyan Xu (Dec 01 2024 at 15:57):

Indeed it appears to add 16 trans imports now

import Mathlib.FieldTheory.LinearDisjoint -- 1617
import Mathlib.RingTheory.AlgebraicIndependent -- 1633

I'd bet it's due to the Algebraic split.

Jz Pan (Dec 06 2024 at 09:01):

Now I'm stuck mathematically at the following problem:

If $A,B$ are flat $R$-algebras, both of them transcendental, then $A\otimes_R B$ is not a field.

Suppose otherwise. Then $A,B$ and $R$ are not trivial. Since $A$ and $B$ are transcendental, the maps $R \to A$ and $R \to B$ are injective, and by flatness, $A\to A\otimes_R B$ and $B\to A\otimes_R B$ are also injective, and the images of these two maps are linearly disjoint (Submodule.LinearDisjoint.include_range_of_injective in PR #19614), so the intersection of these two images must have $R$-rank at most 1.

Let $a,b$ be transcendental elements of $A,B$, respectively; then $R[X]$ injects into both $A$ and $B$, and they become $R[X]$-algebras. So we have the map $A\otimes_R B \to A\otimes_{R[X]} B$ (TensorProduct.mapOfCompatibleSMul), which maps $a\otimes 1$ and $1\otimes b$ to the same element.

  • Suppose $A\otimes_{R[X]}B\neq 0$. Then the above map is injective (RingHom.injective, as its domain is assumed to be a field), so $a\otimes 1=1\otimes b$ in $A\otimes_R B$, which means that $R[X]$ injects into the intersection of the two images, contradicting the rank result.

  • Suppose $A\otimes_{R[X]}B=0$. Then I'm stuck...

Jz Pan (Dec 06 2024 at 09:03):

Note that if $A,B,R$ are fields, then we can consider $A\otimes_{R(X)}B$ instead, which must be nontrivial by Module.FaithfullyFlat.lTensor_nontrivial (a nonzero vector space over the field $R(X)$ is faithfully flat). This is the case in https://mathoverflow.net/questions/8324/.

Jz Pan (Dec 06 2024 at 09:08):

Oh, if one of $A,B$ is a flat $R[X]$-module, then $A\otimes_{R[X]} B\neq 0$, since either $A$ or $B$ injects into it. But I think this assumption is too strong...

Junyan Xu (Dec 06 2024 at 18:57):

To show $A\otimes_R B$ is not a field, it suffices to show that the ring hom $A\otimes_R B\to\mathrm{Frac}(A)\otimes_{\mathrm{Frac}(R[X])}\mathrm{Frac}(B)$ is not injective and the codomain is nontrivial. (Notice that $R,A,B$ are all domains, because they inject into $A\otimes_R B$, which you assume is a field (for contradiction).) It's also possible (but unnecessary) to show $A\otimes_{R[X]} B$ is a domain; it should be true that $\mathrm{Frac}(A)\otimes_{R[X]}\mathrm{Frac}(B)$ is the localization of $A\otimes_R B$ w.r.t. the submonoid generated by the $a\otimes b$ with $a,b\ne0$ (you can probably do this in two stages, each stage being a base change = localized module), and we know $\mathrm{Frac}(A)\otimes_{R[X]}\mathrm{Frac}(B)\to\mathrm{Frac}(A)\otimes_{\mathrm{Frac}(R[X])}\mathrm{Frac}(B)$ is surjective. It's unclear whether $A\otimes_{R[X]}B\to\mathrm{Frac}(A)\otimes_{R[X]}\mathrm{Frac}(B)$ is always injective, since we don't know that $A$ or $B$ is flat over $R[X]$; if one of them were, that would yield another proof.
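In diagram form, the chain of natural maps under discussion is (a sketch):

$$A\otimes_R B \;\twoheadrightarrow\; A\otimes_{R[X]} B \;\longrightarrow\; \mathrm{Frac}(A)\otimes_{R[X]}\mathrm{Frac}(B) \;\twoheadrightarrow\; \mathrm{Frac}(A)\otimes_{\mathrm{Frac}(R[X])}\mathrm{Frac}(B),$$

where the first map identifies $a\otimes 1$ with $1\otimes b$ (and $a\otimes 1\neq 1\otimes b$ in $A\otimes_R B$ by the rank argument above), so the composite ring hom has a nontrivial kernel; once the last term is known to be nontrivial, $A\otimes_R B$ cannot be a field, since a field admits no non-injective ring hom to a nontrivial ring.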

Jz Pan (Dec 06 2024 at 19:18):

Junyan Xu said:

Notice that $R,A,B$ are all domains, because they inject into $A\otimes_R B$, which you assume is a field (for contradiction).

Great observation! I think this should make it work: just replace $A\otimes_{R[X]}B$ in my arguments by $\mathrm{Frac}(A)\otimes_{\mathrm{Frac}(R[X])}\mathrm{Frac}(B)$; let me think about it tomorrow.

Jz Pan (Dec 07 2024 at 12:23):

Oops, TensorProduct.mapOfCompatibleSMul does not have an AlgHom version...

Jz Pan (Dec 07 2024 at 12:24):

@loogle |- AlgHom _ (TensorProduct ..) (TensorProduct ..)

loogle (Dec 07 2024 at 12:24):

:search: Algebra.TensorProduct.map

Jz Pan (Dec 07 2024 at 12:35):

def _root_.Algebra.TensorProduct.testest123
    (R : Type*) [CommSemiring R] (S T : Type*) [CommSemiring S] [CommSemiring T]
    [Algebra R S] [Algebra R T] (A : Type*) [CommSemiring A] [Algebra A S] [Algebra A T]
    [SMulCommClass R A S] [TensorProduct.CompatibleSMul R A S T] : S ⊗[A] T →ₐ[A] S ⊗[R] T where
  __ := TensorProduct.mapOfCompatibleSMul R S T A
  map_one' := rfl
  map_mul' := by
    simp
    sorry
  map_zero' := rfl
  commutes' _ := rfl

I don't understand the conditions [SMulCommClass R A S] [TensorProduct.CompatibleSMul R A S T]; I just copied them from TensorProduct.mapOfCompatibleSMul. Maybe these conditions are not enough to define the AlgHom. (In the application we should have IsScalarTower A R S and IsScalarTower A R T.) Any ideas?

I don't know how to prove map_mul'; it seems that I should use some induction principle for tensor products (to reduce to the tmul case), but I have no idea how to use it in this case.

Jz Pan (Dec 07 2024 at 12:47):

OK, just using induction x with works:

def _root_.Algebra.TensorProduct.algHomOfCompatibleSMul
    (R : Type*) [CommSemiring R] (S T : Type*) [CommSemiring S] [CommSemiring T]
    [Algebra R S] [Algebra R T] (A : Type*) [CommSemiring A] [Algebra A S] [Algebra A T]
    [SMulCommClass R A S] [TensorProduct.CompatibleSMul R A S T] : S ⊗[A] T →ₐ[A] S ⊗[R] T where
  __ := TensorProduct.mapOfCompatibleSMul R S T A
  map_one' := rfl
  map_mul' x y := by
    dsimp
    induction x with
    | zero => simp
    | tmul u v =>
      induction y with
      | zero => simp
      | tmul w z => simp
      | add w z hw hz => rw [left_distrib, map_add, hw, hz, map_add, left_distrib]
    | add u v hu hv => rw [right_distrib, map_add, hu, hv, map_add, right_distrib]
  map_zero' := rfl
  commutes' _ := rfl

Junyan Xu (Dec 07 2024 at 14:26):

Are you looking for #19670?

Junyan Xu (Dec 07 2024 at 14:29):

I should add a surjectivity lemma for the AlgHom version too ...

Jz Pan (Dec 07 2024 at 14:58):

Junyan Xu said:

Are you looking for #19670?

Thanks! That's exactly what I was looking for. It seems that you changed the order of the arguments of both versions of mapOfCompatibleSMul.

Junyan Xu said:

I should add a surjective lemma for AlgHom version too ...

Thank you! Fortunately, I don't need the surjectivity result in the proof.

Jz Pan (Dec 07 2024 at 19:41):

Why is Polynomial.basisMonomials in Mathlib.RingTheory.MvPolynomial.Basic??? :man_facepalming:

Jz Pan (Dec 07 2024 at 19:41):

And we don't have Polynomial.rank_eq
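For reference, the missing statement would read something like this (a sketch with the proof omitted; the name is the one proposed above, and it should follow from docs#Polynomial.basisMonomials, a basis indexed by ℕ):

import Mathlib

-- hypothetical statement, not in Mathlib at the time of writing
theorem Polynomial.rank_eq {R : Type*} [CommRing R] [Nontrivial R] :
    Module.rank R (Polynomial R) = Cardinal.aleph0 := by
  sorry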

Junyan Xu (Dec 07 2024 at 19:50):

We can make docs#Polynomial.toFinsupp a LinearEquiv. There's the RingEquiv Polynomial.toFinsuppIso but its codomain is actually AddMonoidAlgebra ... (defeq to Finsupp though)

Jz Pan (Dec 07 2024 at 19:53):

A RingEquiv is not enough; we need an AlgEquiv, and then we can use toLinearEquiv. Anyway, for now I import Mathlib.RingTheory.MvPolynomial.Basic as an ad-hoc workaround.

Jz Pan (Dec 08 2024 at 08:06):

Algebra.TensorProduct.not_isField_of_transcendental is added in https://github.com/leanprover-community/mathlib4/pull/19614/commits/1f1b2c73e5a5c03ad1fdecbb48fc5c03892b1172

Jz Pan (Dec 08 2024 at 08:11):

It seems that the newest version of Lean has trouble with typeclass inference...

Jz Pan (Dec 08 2024 at 12:52):

PR #19614, which addresses the equivalent definitions in <https://mathoverflow.net/questions/8324>, is ready for review (modulo some theorem-moving TODOs).

Jz Pan (Dec 15 2024 at 15:37):

@Junyan Xu has some ideas about generalizing linear disjointness in https://github.com/leanprover-community/mathlib4/pull/19614#discussion_r1884848986, which should be noted here:

import Mathlib

variable {R A B C} [CommSemiring R] [AddCommMonoid A] [AddCommMonoid B]
  [Module R A] [Module R B]

section

variable [NonUnitalNonAssocSemiring C] [Module R C]
  [SMulCommClass R C C] [IsScalarTower R C C]

def LinearMap.LinearDisjoint (f : A →ₗ[R] C) (g : B →ₗ[R] C) : Prop :=
  Function.Injective ((LinearMap.mul R C).compl₁₂ f g)

def Submodule.LinearDisjoint' (M N : Submodule R C) : Prop :=
  M.subtype.LinearDisjoint N.subtype

end

def Subalgebra.LinearDisjoint' [Semiring C] [Algebra R C] (M N : Subalgebra R C) :
  Prop := M.val.toLinearMap.LinearDisjoint N.val.toLinearMap

def IntermediateField.LinearDisjoint' {R C} [Field R] [Field C] [Algebra R C]
    (M N : IntermediateField R C) : Prop :=
  M.val.toLinearMap.LinearDisjoint N.val.toLinearMap
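As a quick sanity check that this really generalizes the multiplication-map formulation: on pure inputs, the bilinear map underlying LinearMap.LinearDisjoint is just the product of the images (a minimal self-contained sketch, restating the variables above):

import Mathlib

example {R A B C : Type*} [CommSemiring R] [AddCommMonoid A] [AddCommMonoid B]
    [Module R A] [Module R B] [NonUnitalNonAssocSemiring C] [Module R C]
    [SMulCommClass R C C] [IsScalarTower R C C]
    (f : A →ₗ[R] C) (g : B →ₗ[R] C) (a : A) (b : B) :
    (LinearMap.mul R C).compl₁₂ f g a b = f a * g b := by
  simp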
