Zulip Chat Archive

Stream: new members

Topic: Type casting proof


Anirudh Suresh (Apr 11 2025 at 20:36):

import Mathlib
import Lean.Elab.Tactic
open scoped Matrix
open Lean Elab Tactic

def tensor_product_2 {m n o p : ℕ} (A : Matrix (Fin m) (Fin n) ℂ)
  (B : Matrix (Fin o) (Fin p) ℂ) : Matrix (Fin (m * o)) (Fin (n * p)) ℂ :=
(Matrix.kroneckerMap (fun (a : ℂ) (b : ℂ) => a * b) A B).reindex finProdFinEquiv finProdFinEquiv

infixl:80 " ⊗ " => tensor_product_2

def I1 := !![(1 : ℂ)]
lemma h : ∀ m : ℕ, m = 1 * m := by {
  simp
}
lemma I1_tensor {m n : ℕ} (A : Matrix (Fin m) (Fin n) ℂ) : tensor_product_2 (I1) A = (h m) ▸ (h n) ▸ A :=
by {
  ext (i : Fin (1*m)) (j : Fin (1*n))
  revert i j
  simp_rw [finProdFinEquiv.surjective.forall]
  rintro ⟨i1, i2⟩ ⟨j1, j2⟩
  have hj1 : j1 = 0 := Subsingleton.elim _ _
  have hj2 : i1 = 0 := Subsingleton.elim _ _
  rw [hj1, hj2]
  have h32 : (⟨((3 : Fin 4) : ℕ) / 2, by decide⟩ : Fin 2) = 1 := rfl
  have h32_2 : (⟨((3 : Fin 4) : ℕ) % 2, by decide⟩ : Fin 2) = 1 := rfl
  simp [tensor_product_2, Matrix.mul_apply, Matrix.kroneckerMap,Fin.sum_univ_four, finProdFinEquiv, Fin.divNat, Fin.modNat, h32, h32_2, Matrix.vecCons_const, I1]
  norm_cast
  simp only [Fin.val_mk, Nat.mod_eq_of_lt i2.isLt]
  simp only [Fin.val_mk, Nat.mod_eq_of_lt j2.isLt]
  simp

}
lemma tensor_assoc {m n o p q r : ℕ} (A : Matrix (Fin m) (Fin n) ℂ) (B : Matrix (Fin o) (Fin p) ℂ) (C : Matrix (Fin q) (Fin r) ℂ) :
tensor_product_2 (tensor_product_2 A B) C = (Nat.mul_assoc m o q) ▸ (Nat.mul_assoc n p r) ▸ tensor_product_2 A (tensor_product_2 B C) :=
by {
  unfold tensor_product_2
  simp [Matrix.submatrix,Matrix.kronecker_assoc,Fin.divNat,Fin.modNat, Matrix.of]
  sorry
}

In this code, I am trying to prove a lemma about tensor products. However, the lemma involves type casting because of how I defined the tensor product. Is there a way to easily simplify proofs that involve this kind of type casting, as in the two lemmas shown, or is it possible to make Lean ignore the type mismatch and avoid the casting altogether?

Anirudh Suresh (Apr 13 2025 at 17:58):

import Mathlib
import Lean.Elab.Tactic
open scoped Matrix
open Lean Elab Tactic

def tensor_product_2 {m n o p : ℕ} (A : Matrix (Fin m) (Fin n) ℂ)
  (B : Matrix (Fin o) (Fin p) ℂ) : Matrix (Fin (m * o)) (Fin (n * p)) ℂ :=
(Matrix.kroneckerMap (fun (a : ℂ) (b : ℂ) => a * b) A B).reindex finProdFinEquiv finProdFinEquiv

infixl:80 " ⊗ " => tensor_product_2

def I1 := !![(1 : ℂ)]
lemma h : ∀ m : ℕ, m = 1 * m := by {
  simp
}
lemma I1_tensor {m n : ℕ} (A : Matrix (Fin m) (Fin n) ℂ) : tensor_product_2 (I1) A = (h m) ▸ (h n) ▸ A :=
by {
  ext (i : Fin (1*m)) (j : Fin (1*n))
  revert i j
  simp_rw [finProdFinEquiv.surjective.forall]
  rintro ⟨i1, i2⟩ ⟨j1, j2⟩
  have hj1 : j1 = 0 := Subsingleton.elim _ _
  have hj2 : i1 = 0 := Subsingleton.elim _ _
  rw [hj1, hj2]
  have h32 : (⟨((3 : Fin 4) : ℕ) / 2, by decide⟩ : Fin 2) = 1 := rfl
  have h32_2 : (⟨((3 : Fin 4) : ℕ) % 2, by decide⟩ : Fin 2) = 1 := rfl
  simp [tensor_product_2, Matrix.mul_apply, Matrix.kroneckerMap,Fin.sum_univ_four, finProdFinEquiv, Fin.divNat, Fin.modNat, h32, h32_2, Matrix.vecCons_const, I1]
  norm_cast
  simp only [Fin.val_mk, Nat.mod_eq_of_lt i2.isLt]
  simp only [Fin.val_mk, Nat.mod_eq_of_lt j2.isLt]
  simp

}
lemma tensor_assoc {m n o p q r : ℕ} (A : Matrix (Fin m) (Fin n) ℂ) (B : Matrix (Fin o) (Fin p) ℂ) (C : Matrix (Fin q) (Fin r) ℂ) :
tensor_product_2 (tensor_product_2 A B) C = (Nat.mul_assoc m o q) ▸ (Nat.mul_assoc n p r) ▸ tensor_product_2 A (tensor_product_2 B C) :=
by {
  unfold tensor_product_2
  simp [Matrix.submatrix,Matrix.kronecker_assoc,Fin.divNat,Fin.modNat, Matrix.of]
  sorry
}

In this code, I am trying to prove a lemma about tensor products. However, the lemma involves type casting because of how I defined the tensor product. Is there a way to easily simplify proofs that involve this kind of type casting, as in the two lemmas shown, or is it possible to make Lean ignore the type mismatch and avoid the casting altogether?

Eric Wieser (Apr 13 2025 at 18:55):

My general tip would be to use Matrix.submatrix rather than ▸; and in general, use a "nice" casting function in place of the most general one
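
A minimal sketch of that tip (my own illustration, assuming current Mathlib names such as Fin.cast, not code from the thread): entries of a matrix recast via Matrix.submatrix reduce definitionally, whereas a ▸-cast term typically does not reduce until the equality proof itself does.

import Mathlib

-- hedged sketch: recasting the index types with `Matrix.submatrix` along `Fin.cast`;
-- the entry formula then holds by `rfl`.
example {m m' n n' : ℕ} (hm : m' = m) (hn : n' = n)
    (A : Matrix (Fin m) (Fin n) ℂ) (i : Fin m') (j : Fin n') :
    A.submatrix (Fin.cast hm) (Fin.cast hn) i j = A (Fin.cast hm i) (Fin.cast hn j) := rfl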

Eric Wieser (Apr 13 2025 at 18:59):

Maybe this lemma is useful to you?

import Mathlib

variable {m m' n n' R R'}

theorem Matrix.cast_eq (hm : m = m') (hn : n = n') (hR : R = R') (A : Matrix m n R) :
    cast (congr (congrArg₂ Matrix hm hn) hR) A = (A.submatrix (cast hm.symm) (cast hn.symm)).map (cast hR) := by
  cases hm; cases hn; cases hR
  rfl

Eric Wieser (Apr 13 2025 at 19:01):

For your theorem, it applies as

  simp_rw [eqRec_eq_cast]
  rw [Matrix.cast_eq _ _ rfl]

Anirudh Suresh (Apr 13 2025 at 19:05):

Eric Wieser said:

My general tip would be to use Matrix.submatrix rather than ▸; and in general, use a "nice" casting function in place of the most general one

How can I use submatrix?

Anirudh Suresh (Apr 13 2025 at 19:08):

Oh I see you use it in the theorem

Anirudh Suresh (Apr 13 2025 at 19:09):

I'm not sure it helps solve the goal, though. The casting still exists.

Eric Wieser (Apr 13 2025 at 19:09):

This might also be helpful:

-- some string diagram statement here
theorem finProdFinEquiv_assoc {m n p : Nat} :
    ((@finProdFinEquiv m n).prodCongr (.refl _)).trans (@finProdFinEquiv _ p) =
      (Equiv.prodAssoc _ _ _).trans (Equiv.prodCongr (.refl _) (@finProdFinEquiv n p) |>.trans (@finProdFinEquiv m _) |>.trans (Fin.castOrderIso (mul_assoc _ _ _).symm).toEquiv) := by
  ext ⟨⟨i, j⟩, k⟩
  simp
  ring

Eric Wieser (Apr 13 2025 at 19:10):

Maybe @Adam Topaz knows what the above corresponds to categorically, since this looks somewhat adjacent to the prod_assoc% elaborator

Adam Topaz (Apr 13 2025 at 19:18):

It would be helpful if you could explain what you want to do with this. Mathematically speaking, working with matrices, as opposed to, say, linear maps, just complicates things. Also, reindexing from (Fin a) × (Fin b) to Fin (a * b) adds another layer of complication.

Anirudh Suresh (Apr 13 2025 at 19:25):

I want to use matrices to prove theorems that involve lots of tensor products. However, Lean's Kronecker map creates a tuple index, which causes issues when it comes to managing the dimensions of the matrix. Doing more than one tensor product would result in nested tuple indices, which are hard for me to work with.

Adam Topaz (Apr 13 2025 at 19:26):

You can use prod_assoc% to manage the indices.
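
A tiny illustration of prod_assoc% (my own example; the elaborator builds the equivalence it reads off from the expected type):

import Mathlib

-- hedged sketch: `prod_assoc%` elaborates to the associativity `Equiv` between
-- iterated products, determined by the expected type.
example : (Fin 2 × Fin 3) × Fin 4 ≃ Fin 2 × (Fin 3 × Fin 4) := prod_assoc%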

Adam Topaz (Apr 13 2025 at 19:26):

Even if you do reindex as you have done, you will still have to cast because Fin ((a * b) * c) and Fin (a * (b * c)) are different.
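
For example (my sketch, using Mathlib's finCongr), the cast that cannot be avoided on the Fin side looks like this:

import Mathlib

-- hedged sketch: the two `Fin` types are only propositionally equal, so an explicit
-- cast such as `finCongr` (or `Fin.cast`) is needed to move between them.
example (a b c : ℕ) : Fin (a * b * c) ≃ Fin (a * (b * c)) := finCongr (Nat.mul_assoc a b c)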

Anirudh Suresh (Apr 13 2025 at 19:30):

Would it be possible to, say, multiply matrices after their tensor product is computed? Is there any reason why using kronecker without reindexing is more helpful?

Eric Wieser (Apr 13 2025 at 19:30):

Do you have some informal / LaTeX mathematics that you are trying to translate?

Anirudh Suresh (Apr 13 2025 at 19:31):

No not really

Adam Topaz (Apr 13 2025 at 19:32):

import Mathlib
import Lean.Elab.Tactic
open scoped Matrix
open Lean Elab Tactic

def T
    {α β γ δ : Type}
    [Fintype α]
    [Fintype β]
    [Fintype γ]
    [Fintype δ]
    (A : Matrix α β ℂ)
    (B : Matrix γ δ ℂ) : Matrix (α × γ) (β × δ) ℂ :=
  Matrix.kroneckerMap (fun (a : ℂ) (b : ℂ) => a * b) A B

example {a b c d e f}
    [Fintype a]
    [Fintype b]
    [Fintype c]
    [Fintype d]
    [Fintype e]
    [Fintype f]
    (A : Matrix a b ℂ)
    (B : Matrix c d ℂ)
    (C : Matrix e f ℂ) :
    T (T A B) C = (T A <| T B C).reindex prod_assoc% prod_assoc% := by
  ext ⟨⟨x,y⟩,z⟩ ⟨⟨a,b⟩,c⟩
  simp [T, mul_assoc]

Adam Topaz (Apr 13 2025 at 19:33):

Well, we do have Matrix.kronecker_assoc as well.
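
(If useful, the exact statement can be inspected directly; the precise reindexing in it may differ between Mathlib versions.)

import Mathlib

-- look up the statement of the existing associativity lemma for the Kronecker product
#check @Matrix.kronecker_assoc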

Eric Wieser (Apr 13 2025 at 19:36):

Note that open scoped Kronecker gives you ⊗ₖ as infix notation for your T
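
A minimal sketch of that notation (my example, over ℂ to match the code above):

import Mathlib

-- hedged sketch: Mathlib's scoped `⊗ₖ` notation stands for `Matrix.kroneckerMap (· * ·)`,
-- which agrees with the `T` defined above.
open scoped Kronecker

example {a b c d : Type} (A : Matrix a b ℂ) (B : Matrix c d ℂ) :
    Matrix (a × c) (b × d) ℂ := A ⊗ₖ B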

Eric Wieser (Apr 13 2025 at 19:38):

Anirudh Suresh said:

Is there any reason why using kronecker without reindexing is more helpful?

Either you reindex along Fin ((a * b) * c) ≃ Fin (a * (b * c)) or along (Fin a × Fin b) × Fin c ≃ Fin a × (Fin b × Fin c), and the latter is structurally more relevant
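
The second route, spelled out (my sketch), needs no arithmetic on Fin at all:

import Mathlib

-- hedged sketch: associativity at the level of index types is just `Equiv.prodAssoc`,
-- with no propositional equality of naturals involved.
example (a b c : ℕ) : (Fin a × Fin b) × Fin c ≃ Fin a × (Fin b × Fin c) :=
  Equiv.prodAssoc _ _ _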

Edison Xie (Apr 14 2025 at 07:51):

what's wrong with the current TensorProduct?

Eric Wieser (Apr 14 2025 at 08:27):

This is a duplicate of a thread in #lean4

Yaël Dillies (Apr 14 2025 at 08:29):

Can someone please move this to #maths > Matrix casting proof ?

Yaël Dillies (Apr 14 2025 at 08:29):

(or both to #new members)

Notification Bot (Apr 14 2025 at 11:06):

This topic was moved here from #lean4 > Type casting proof by Kevin Buzzard.

Notification Bot (Apr 14 2025 at 11:07):

3 messages were moved here from #maths > Matrix casting proof by Eric Wieser.

Kevin Buzzard (Apr 14 2025 at 11:13):

@Anirudh Suresh (1) please don't double post. (2) The #lean4 channel is for questions relating to the core Lean 4 program and the #maths channel is usually used for questions not directly related to Lean at all. You can see descriptions of all channels on the Zulip e.g. at the top of the app if you click on the channel. Right now you might want to stick to posting in #new members which is safe for pretty much any questions.


Last updated: May 02 2025 at 03:31 UTC