Zulip Chat Archive

Stream: new members

Topic: Sums over product types


view this post on Zulip Yakov Pechersky (Jun 23 2020 at 21:42):

Am I missing something that I can use from Pi lemmas? Or are there lemmas missing about pair types? I'm trying to prove lemmas about higher-dim dot products, and am hitting issues proving the following generalized lemma:

import data.finset
import data.fintype.card

universe u
variables {α : Type u} [add_comm_monoid α]

open_locale big_operators

example {u : Type*} {v : Type*} [fintype u] [fintype v] (f : u × v -> α) :
  ∑ (i : u), ∑ (j : v), f (i, j) = ∑ (p : u × v), f p :=
begin
  rw <-finset.sum_product,
  repeat { rw finset.univ },
  sorry,
end

It's getting into the morass of fintype.elems and finsets.

view this post on Zulip Chris Hughes (Jun 23 2020 at 21:46):

This'll probably be refl after rw <- finset.sum_product. I don't think it's in the library already and it should be added.

view this post on Zulip Chris Hughes (Jun 23 2020 at 21:46):

It won't be refl actually.

view this post on Zulip Chris Hughes (Jun 23 2020 at 21:48):

It is refl
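
For concreteness, a minimal sketch of that finished proof (same statement as the MWE above, but importing data.fintype.card here so the fintype instances and the big-operators notation are both available):

import data.fintype.card

open_locale big_operators

variables {α : Type*} [add_comm_monoid α]

-- after rewriting backwards along `finset.sum_product`, both sides are
-- definitionally a sum over `finset.univ.product finset.univ`
example {u : Type*} {v : Type*} [fintype u] [fintype v] (f : u × v → α) :
  ∑ (i : u), ∑ (j : v), f (i, j) = ∑ (p : u × v), f p :=
by { rw ←finset.sum_product, refl }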

view this post on Zulip Yakov Pechersky (Jun 23 2020 at 22:01):

Do you think it should be in mathlib in the forward direction, or in the backward direction?

view this post on Zulip Yakov Pechersky (Jun 23 2020 at 22:02):

It's been my recent experience that there aren't a lot of lemmas that work directly on pair types, so perhaps expanding out the pair into the double sum is the better direction.

view this post on Zulip Chris Hughes (Jun 23 2020 at 22:03):

It should be the same direction as finset.sum_product

view this post on Zulip Yakov Pechersky (Jun 23 2020 at 22:24):

And where would it go? algebra.big_operators has no lemmas that utilize [fintype α]. Would it be alright to change the imports to introduce fintype?

view this post on Zulip Reid Barton (Jun 23 2020 at 23:11):

Those lemmas are in data.fintype I think

view this post on Zulip Yakov Pechersky (Jun 24 2020 at 03:29):

Thanks for that guidance. Using that now-defined lemma, I'm trying to prove the equivalent of fin.sum_univ_succ for pair types, and am running into issues, the last of which is getting the fin-to-nat coercion to work. My computer has been crashing on memory with Lean and VSCode, so I've had to limit its available memory. This prevents me from using library_search, so I'm relying a lot on refine _ suggested by suggest. I'm sure this proof can be cleaned up a lot. But the current issue is the final coercion from nat to fin back to nat. It could also be that this has been proved in a more general way for some Π u type; I just don't know how to find it, or how to specialize it for pairs. I'd be happy to develop the more general Π version if I better understood how pi_finsets work.

import data.matrix.notation

universe u
variables {α : Type u} [add_comm_monoid α] [has_mul α]

open_locale big_operators

@[simp] lemma sum_prod {u : Type*} {v : Type*} [fintype u] [fintype v] (f : u × v -> α) :
  ∑ (i : u), ∑ (j : v), f (i, j) = ∑ (p : u × v), f p :=
begin
  rw <-finset.sum_product,
  refl
end

example {u : Type*} {n : ℕ} [fintype u] [decidable_eq u] (v : fin n.succ × u -> α) :
  ∑ (p : fin n.succ × u), v p =
  ∑ (i : u), v (0, i) + ∑ (ki : fin n × u), v (nat.succ ki.fst, ki.snd) :=
begin
  rw <-@sum_prod α _ (by apply_instance) (fin n) u _,
  rw finset.sum_comm,
  rw <-finset.sum_add_distrib,
  rw <-sum_prod,
  rw finset.sum_comm,
  simp_rw fin.sum_univ_succ,
  refine congr_arg finset.univ.sum _,
  ext i,
  refine congr_arg ((+) (v (0, i))) _,
  refine congr_arg finset.univ.sum _,
  ext j,
  refine congr_arg v _,
  rw prod.eq_iff_fst_eq_snd_eq,
  split,
  { rw fin.coe_eq_val,
    rw fin.eq_iff_veq,
    rw fin.succ_val,
    cases j,
    rw nat.succ_eq_add_one,
    rw <-fin.coe_eq_val,
    sorry }, -- j_val + 1 = ↑↑(j_val + 1)
  { refl }
end

view this post on Zulip Yakov Pechersky (Jun 24 2020 at 03:31):

There might be some cleaner way to do this with finset.prod_image, finset.prod_insert, but I think it would have to be on the pi_finsets

view this post on Zulip Johan Commelin (Jun 24 2020 at 06:14):

Currently there are a bunch of lemmas that need both fintype and big_operators in data.fintype.card. I think that's a reasonably bad name. Probably that file should be renamed to data.fintype.big_operators

view this post on Zulip Johan Commelin (Jun 24 2020 at 06:15):

Maybe even better: all those lemmas don't belong in data.*

view this post on Zulip Johan Commelin (Jun 24 2020 at 06:16):

I'm not sure if they belong in algebra.* either. I think initially the goal was to have algebra.* mean "algebraic hierarchy" and it should only contain the definitions (of groups, rings, what have you) and the most basic lemmas.
So maybe the bigops stuff should all move to group_theory.*

view this post on Zulip Yakov Pechersky (Jun 24 2020 at 06:37):

Would it make more sense to PR in a couple of small lemmas still into data.fintype.card, and then rearrange later?

view this post on Zulip Yakov Pechersky (Jun 24 2020 at 06:37):

It might be easier to do the rearrangement when a critical mass has formed

view this post on Zulip Johan Commelin (Jun 24 2020 at 06:38):

Whatever you want. But a "rename a file PR" could also be done right now

view this post on Zulip Yakov Pechersky (Jun 24 2020 at 06:38):

In any case, I'm still struggling on how to prove:

import data.finset
import tactic

example {n : ℕ} (x : fin n) : x.succ = ((x : fin (n + 1)).succ) :=
begin
  unfold_coes,
  rw fin.eq_iff_veq,
  sorry, -- x.succ.val = x.val.cast.succ.val.cast.val
end

view this post on Zulip Yakov Pechersky (Jun 24 2020 at 06:39):

There's some weird diamond going on with val, cast, and succ where I can't eliminate any of them

view this post on Zulip Johan Commelin (Jun 24 2020 at 06:44):

In which type is that equality taking place?

view this post on Zulip Yakov Pechersky (Jun 24 2020 at 06:46):

As far as I understand, it should be taking place in fin (n + 1)

view this post on Zulip Yakov Pechersky (Jun 24 2020 at 06:46):

but since there is no arrow on the left, it's probably in fin n

view this post on Zulip Johan Commelin (Jun 24 2020 at 06:50):

So on the rhs you are casting x to fin (n+1), then taking succ, which moves it to fin (n+2) and then casting back?

view this post on Zulip Bryan Gin-ge Chen (Jun 24 2020 at 06:51):

(deleted)
Whoops, had my cursor in the wrong place.

view this post on Zulip Yakov Pechersky (Jun 24 2020 at 06:52):

I guess it is in nat after the rw fin.eq_iff_veq, but the statement at the top is in fin (n + 1)

view this post on Zulip Yakov Pechersky (Jun 24 2020 at 06:55):

I think the right hand side is going to fin n --val--> nat --cast--> fin (n + 1) --succ--> fin (n + 1) --val--> nat --cast--> fin (n + 1)

view this post on Zulip Johan Commelin (Jun 24 2020 at 07:00):

Right... if possible, I would try to avoid that :stuck_out_tongue_wink:

view this post on Zulip Yakov Pechersky (Jun 24 2020 at 07:00):

Or is it fin n --val--> nat --cast--> fin (n + 1) --succ--> fin (n + 2) --val--> nat --cast--> fin (n + 2)

view this post on Zulip Johan Commelin (Jun 24 2020 at 07:01):

It is

fin n --val--> nat --cast--> fin (n + 1) --succ--> fin (n + 2) --val--> nat --cast--> fin (n + 1)

view this post on Zulip Johan Commelin (Jun 24 2020 at 07:01):

And the cast from nat to fin (? + 1) is taking i to i % (?+1)
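
As a toy check (a sketch assuming data.nat.cast is imported, so that the ℕ → fin (n + 1) coercion in play is nat.cast):

import data.nat.cast

-- the coercion ℕ → fin (n + 1) wraps around modulo n + 1,
-- so 4 lands on 1 in fin 3
example : ((4 : ℕ) : fin 3) = 1 := dec_trivial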

view this post on Zulip Johan Commelin (Jun 24 2020 at 07:02):

This proof will be a bit messy, because of all those casts

view this post on Zulip Yakov Pechersky (Jun 24 2020 at 07:02):

Well, it goes back to the proof I'm trying to show above, that sums broken up over pair types work out. The current sum lemmas are on plain fin types, and I don't understand how to employ the pi_finset to prove things about pairs.

view this post on Zulip Johan Commelin (Jun 24 2020 at 07:02):

hence, maybe it's easier to take a small step back, and see if you can avoid it altogether

view this post on Zulip Johan Commelin (Jun 24 2020 at 07:03):

Can you instead use sums over finset.range n?

view this post on Zulip Johan Commelin (Jun 24 2020 at 07:04):

There is also finset.product to take the cartesian product of two finsets
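
A small sketch of how that combines with finset.sum_product (a hypothetical example over explicit finset.range finsets rather than univ):

import data.fintype.card

open_locale big_operators

-- sum over an explicit product of two finsets, split into an iterated sum
example (f : ℕ × ℕ → ℕ) :
  ∑ p in (finset.range 2).product (finset.range 3), f p =
  ∑ i in finset.range 2, ∑ j in finset.range 3, f (i, j) :=
by rw finset.sum_product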

view this post on Zulip Kevin Buzzard (Jun 24 2020 at 07:49):

I would imagine that any proof which involves some funny map from fin(n+2) to fin(n+1) which is "the right thing for the input you have, but crazy in general" is going to have difficulties, for the same reason that theorems involving natural number subtraction are hard in general, even when the subtraction is "the right subtraction in this situation"

view this post on Zulip Kevin Buzzard (Jun 24 2020 at 07:50):

Usually I try to refactor subtractions so that instead of a - b, which is "ok in this situation", I lose the a variable completely and instead use a new variable c such that a = c + b
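
A toy illustration of that shape (hypothetical, just to show the idea): once a is eliminated via a = c + b, the truncated subtraction is harmless.

import tactic

-- with the hypothesis `a = c + b` available, linear arithmetic closes the goal
example (a b c : ℕ) (h : a = c + b) : a - b = c :=
by omega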

view this post on Zulip Kevin Buzzard (Jun 24 2020 at 07:59):

Looking at what you did, it seems very natural; I am surprised this has happened! Your original statement looks sensible, how did we end up with the casts?

view this post on Zulip Kevin Buzzard (Jun 24 2020 at 08:11):

Aah, I think the statement is bad. It contains example (m : ℕ) (i : fin m) : fin (m+1) := nat.succ i. Maybe i.succ is better?

view this post on Zulip Kevin Buzzard (Jun 24 2020 at 08:14):

example {u : Type*} {n : ℕ} [fintype u] [decidable_eq u] (v : fin n.succ × u -> α) :
  ∑ (p : fin n.succ × u), v p =
  ∑ (i : u), v (0, i) + ∑ (ki : fin n × u), v (fin.succ ki.fst, ki.snd) :=
begin
  rw <-@sum_prod α _ (by apply_instance) (fin n) u _,
  rw finset.sum_comm,
  rw <-finset.sum_add_distrib,
  rw <-sum_prod,
  rw finset.sum_comm,
  refine congr_arg finset.univ.sum _,
  ext i,
  simp_rw fin.sum_univ_succ,
  -- it's all over
end

Will this do for you? It's an initial segment of your proof, applied to a different statement.

view this post on Zulip Kevin Buzzard (Jun 24 2020 at 08:24):

example (n : ℕ) (i : fin n) : nat.succ i = fin.succ i :=
begin
  unfold_coes,
  cases i,
  simp,
end

This is the glue which relates the proof above to your original statement.

view this post on Zulip Kevin Buzzard (Jun 24 2020 at 08:28):

Here's a proof of your original statement. I am still a bit confused by all this. Note that it contains a non-terminal simp.

lemma helper (n : ℕ) (i : fin n) : nat.succ i = fin.succ i :=
begin
  unfold_coes,
  cases i,
  simp,
end


example {u : Type*} {n : ℕ} [fintype u] [decidable_eq u] (v : fin n.succ × u -> α) :
  ∑ (p : fin n.succ × u), v p =
  ∑ (i : u), v (0, i) + ∑ (ki : fin n × u), v (nat.succ ki.fst, ki.snd) :=
begin
  simp_rw helper,
  simp,
  rw <-@sum_prod α _ (by apply_instance) (fin n) u _,
  rw finset.sum_comm,
  rw <-finset.sum_add_distrib,
  rw <-sum_prod,
  rw finset.sum_comm,
  simp_rw fin.sum_univ_succ,
end

view this post on Zulip Kevin Buzzard (Jun 24 2020 at 08:40):

There's a coercion from nat to fin (nat.succ m) and this is the poorly-behaved map which involves % in general. Avoiding this is the key.

view this post on Zulip Yakov Pechersky (Jun 24 2020 at 09:21):

My actual problem is at the bottom of the MWE below. Thanks for all the insights. I'd love to hear more comments about how to incorporate this general sum result into the more specific dot_product proof. I think what is blocking that incorporation is the simp_rw, which can't be moved further up.

import data.fintype.card
import data.matrix.basic

open_locale big_operators

namespace matrix

universe u
variables {α : Type u}

open_locale matrix

section matrix_notation
variables {ι : Type} [fintype ι] [decidable_eq ι]

/-- `vec_cons h t` prepends an entry `h` to a vector `t`.

The inverse functions are `vec_head` and `vec_tail`.
The notation `![a, b, ...]` expands to `vec_cons a (vec_cons b ...)`.
-/
def vec_cons {n : ℕ} (h : ι → α) (t : fin n × ι → α) : fin n.succ × ι → α :=
λ p, match p with
| ⟨⟨0, _⟩, i⟩ := h i
| ⟨⟨nat.succ k, hk⟩, i⟩ := t (⟨k, nat.lt_of_succ_lt_succ hk⟩, i)
end

/-- `vec_head v` gives the first entry of the vector `v` -/
def vec_head {n : ℕ} (v : fin n.succ × ι → α) : ι -> α :=
λ i, v (⟨0, nat.succ_pos'⟩, i)

/-- `vec_tail v` gives a vector consisting of all entries of `v` except the first -/
def vec_tail {n : ℕ} (v : fin n.succ × ι → α) : fin n × ι → α :=
λ p, v ⟨⟨nat.succ p.1, nat.succ_lt_succ p.1.2⟩, p.2⟩

end matrix_notation

variables {n : ℕ} {n' : Type} [fintype n'] [decidable_eq n']
variables {ι : Type} [fintype ι] [decidable_eq ι]

section val

@[simp] lemma cons_val_succ (x : ι -> α) (u : fin n × ι → α) (i : fin n) (ix : ι) :
  (vec_cons x u) (i.succ, ix) = u (i, ix) :=
by { simp [vec_cons], cases i, refl }

end val

/-- A product over the `univ` of a pair type equals the
double product over the `univ`s of the `fst` and `snd` types. -/
@[to_additive "A sum over the `univ` of a pair type equals the
double sum over the `univ`s of the `fst` and `snd` types."]
lemma fintype.prod_product_fintype {a : Type*} {β : Type*} {M : Type*} [fintype α] [comm_monoid M] [fintype β] {f : α × β → M} :
  ∏ (x : α × β), f x = ∏ x : α, ∏ y : β, f (x, y) :=
by { rw <-finset.prod_product, refl }

section dot_product

variables [add_comm_monoid α] [has_mul α]

@[simp] lemma cons_dot_product [has_mul α] (x : ι -> α) (v : fin n × ι → α) (w : fin n.succ × ι → α) :
  dot_product (vec_cons x v) w = dot_product x (vec_head w) + dot_product v (vec_tail w) :=
begin
  dsimp only [dot_product, vec_head, vec_tail],
  rw [fintype.sum_product_fintype, @finset.sum_comm ι _ _ _ _ _ _,
      fintype.sum_product_fintype, @finset.sum_comm ι _ _ _ _ _ _,
      <-finset.sum_add_distrib],
  refine congr_arg finset.univ.sum _,
  ext i,
  rw fin.sum_univ_succ,
  simp_rw [cons_val_succ],  -- I wish I could move this further up, but simp_rw doesn't work higher
  refine congr_arg ((+) _) _,
  refine congr_arg finset.univ.sum _,
  ext j,
  refine congr_arg ((*) (v (j, i))) _,
  refine congr_arg w _,
  unfold_coes,
  cases j,
  simp [fin.eq_iff_veq, fin.succ_val],
  assumption, assumption, -- this line isn't needed in my actual file because sum_product_fintype is properly imported
end

end dot_product

end matrix

view this post on Zulip Kevin Buzzard (Jun 24 2020 at 09:55):

I don't really understand this question -- the code compiles. You don't need all those _s in the rewrite in the last proof; rw will put them in for you.

view this post on Zulip Kevin Buzzard (Jun 24 2020 at 09:56):

I don't know anything about big operators, I was just intrigued by the puzzle about succ :-)

view this post on Zulip Kevin Buzzard (Jun 24 2020 at 10:03):

Why is fintype.prod_product_fintype not a valid simp lemma? This looks like the sort of proof simp would be really good at.

view this post on Zulip Kevin Buzzard (Jun 24 2020 at 10:09):

def vec_cons {n : ℕ} (h : ι → α) (t : fin n × ι → α) : fin n.succ × ι → α
| ⟨⟨0, _⟩, i⟩ := h i
| ⟨⟨nat.succ k, hk⟩, i⟩ := t (⟨k, nat.lt_of_succ_lt_succ hk⟩, i)

view this post on Zulip Kevin Buzzard (Jun 24 2020 at 10:09):

I think it's better -- better definitional equalities maybe?

view this post on Zulip Kevin Buzzard (Jun 24 2020 at 10:09):

e.g.

@[simp] lemma cons_val_succ (x : ι -> α) (u : fin n × ι → α) (i : fin n) (ix : ι) :
  (vec_cons x u) (i.succ, ix) = u (i, ix) :=
by { cases i, refl }

view this post on Zulip Kevin Buzzard (Jun 24 2020 at 10:10):

Does this help you?

view this post on Zulip Kevin Buzzard (Jun 24 2020 at 10:11):

@Mario Carneiro why isn't fintype.prod_product_fintype a valid simp lemma? I want simp to be doing the donkey work here, don't I?

view this post on Zulip Mario Carneiro (Jun 24 2020 at 10:11):

what's the statement?

view this post on Zulip Kevin Buzzard (Jun 24 2020 at 10:12):

@Yakov Pechersky that change to the definition of vec_cons lets me move the simp_rw up. Have I solved your problem?

view this post on Zulip Kevin Buzzard (Jun 24 2020 at 10:12):

/-- A product over the `univ` of a pair type equals the
double product over the `univ`s of the `fst` and `snd` types. -/
@[to_additive "A sum over the `univ` of a pair type equals the
double sum over the `univ`s of the `fst` and `snd` types."]
lemma fintype.prod_product_fintype {a : Type*} {β : Type*} {M : Type*} [fintype α] [comm_monoid M] [fintype β] {f : α × β → M} :
  ∏ (x : α × β), f x = ∏ x : α, ∏ y : β, f (x, y) :=
by { rw <-finset.prod_product, refl }

view this post on Zulip Kevin Buzzard (Jun 24 2020 at 10:12):

Those pis are products not Pi types

view this post on Zulip Mario Carneiro (Jun 24 2020 at 10:13):

I don't know about this always being a simp lemma, but you should be able to simp with it

view this post on Zulip Kevin Buzzard (Jun 24 2020 at 10:13):

simp [fintype.sum_product_fintype], -> invalid simplification lemma 'matrix.fintype.sum_product_fintype' (use command 'set_option trace.simp_lemmas true' for more details)

view this post on Zulip Kevin Buzzard (Jun 24 2020 at 10:14):

[simp_lemmas.invalid] rule derived from 'matrix.fintype.sum_product_fintype' contains argument that is (a) not a Prop, (b) not an instance, and (c) not in the LHS of the rule

view this post on Zulip Kevin Buzzard (Jun 24 2020 at 10:15):

fintype.sum_product_fintype :
  ∀ {α : Type u_1} {a : Type u_2} {β : Type u_3} {M : Type u_4} [_inst_5 : fintype α] [_inst_6 : add_comm_monoid M]
  [_inst_7 : fintype β] {f : α × β → M}, ∑ (x : α × β), f x = ∑ (x : α) (y : β), f (x, y)

view this post on Zulip Mario Carneiro (Jun 24 2020 at 10:15):

oh!

view this post on Zulip Mario Carneiro (Jun 24 2020 at 10:15):

what's a doing there

view this post on Zulip Mario Carneiro (Jun 24 2020 at 10:16):

I'm surprised the unused arguments linter didn't catch this

view this post on Zulip Kevin Buzzard (Jun 24 2020 at 10:19):

This is not in mathlib

view this post on Zulip Kevin Buzzard (Jun 24 2020 at 10:19):

Thanks! I should have linted!

view this post on Zulip Kevin Buzzard (Jun 24 2020 at 11:02):

@Yakov Pechersky there's a typo in your definition of fintype.prod_product_fintype -- an {a : Type*} should be {α : Type*}. This explains the random Type goals you get after the rewrite (the assumption, assumption thing). It can now be made into a simp lemma.

view this post on Zulip Kevin Buzzard (Jun 24 2020 at 11:12):

The linter also points out that you have two has_mul instances for cons_dot_product. Put #lint at the bottom of your file and take a look at the output.

view this post on Zulip Yakov Pechersky (Jun 24 2020 at 18:39):

Thanks for all the insight! Your suggestions made me think of writing another simp lemma, which now makes these higher-dim dot product proofs pretty nice:

import data.fintype.card
import data.matrix.basic

open_locale big_operators

namespace matrix

universe u
variables {α : Type u}

open_locale matrix

section matrix_notation
variables {ι : Type} [fintype ι] [decidable_eq ι]

/-- `![]` is the vector with no entries. -/
def vec_empty : fin 0 × ι → α
| ⟨k, _⟩ := @fin_zero_elim (λ _, α) k

/-- `vec_cons h t` prepends an entry `h` to a vector `t`.

The inverse functions are `vec_head` and `vec_tail`.
The notation `![a, b, ...]` expands to `vec_cons a (vec_cons b ...)`.
-/
def vec_cons {n : ℕ} (h : ι → α) (t : fin n × ι → α) : fin n.succ × ι → α
| ⟨⟨0, _⟩, i⟩ := h i
| ⟨⟨nat.succ k, hk⟩, i⟩ := t (⟨k, nat.lt_of_succ_lt_succ hk⟩, i)

notation `![` l:(foldr `, ` (h t, vec_cons h t) vec_empty `]`) := l

/-- `vec_head v` gives the first entry of the vector `v` -/
def vec_head {n : ℕ} (v : fin n.succ × ι → α) : ι -> α
| i := v (⟨0, nat.succ_pos'⟩, i)

/-- `vec_tail v` gives a vector consisting of all entries of `v` except the first -/
def vec_tail {n : ℕ} (v : fin n.succ × ι → α) : fin n × ι → α
| ⟨⟨k, hk⟩, i⟩ := v ⟨⟨nat.succ k, nat.succ_lt_succ hk⟩, i⟩

end matrix_notation

variables {n : ℕ} {n' : Type} [fintype n'] [decidable_eq n']
variables {ι : Type} [fintype ι] [decidable_eq ι]

section val

@[simp] lemma cons_val_succ (x : ι -> α) (u : fin n × ι → α) (i : fin n) (ix : ι) :
  (vec_cons x u) (i.succ, ix) = u (i, ix) :=
by { cases i, refl }

@[simp] lemma tail_cons' (u : fin (n + 1) × ι → α) (k : fin n) (ix : ι) :
  vec_tail u (k, ix) = u (k.succ, ix) :=
-- by { ext i, simp [vec_tail], cases i, cases i_fst, { simp [fin.cases], }, rw cons_val_succ'',  }
by { cases k, refl }

end val

section dot_product

/-- A product over the `univ` of a pair type equals the
double product over the `univ`s of the `fst` and `snd` types. -/
@[to_additive "A sum over the `univ` of a pair type equals the
double sum over the `univ`s of the `fst` and `snd` types."]
lemma fintype.prod_product_fintype {α : Type*} {β : Type*} {M : Type*} [fintype α] [comm_monoid M] [fintype β] {f : α × β → M} :
  ∏ (x : α × β), f x = ∏ x : α, ∏ y : β, f (x, y) :=
by { rw <-finset.prod_product, refl }

variables [add_comm_monoid α] [has_mul α]

@[simp] lemma cons_dot_product (x : ι -> α) (v : fin n × ι → α) (w : fin n.succ × ι → α) :
  dot_product (vec_cons x v) w = dot_product x (vec_head w) + dot_product v (vec_tail w) :=
begin
  dsimp only [dot_product, vec_head, vec_tail],
  rw [fintype.sum_product_fintype, @finset.sum_comm ι _ _ _ _ _ _,
      fintype.sum_product_fintype, @finset.sum_comm ι _ _ _ _ _ _,
      <-finset.sum_add_distrib],
  refine congr_arg finset.univ.sum _,
  ext i,
  simpa [fin.sum_univ_succ],
end

@[simp] lemma dot_product_cons (v : fin n.succ × ι → α) (x : ι -> α) (w : fin n × ι → α) :
  dot_product v (vec_cons x w) = dot_product (vec_head v) x + dot_product (vec_tail v) w :=
begin
  dsimp only [dot_product, vec_head, vec_tail],
  rw [fintype.sum_product_fintype, @finset.sum_comm ι _ _ _ _ _ _,
      fintype.sum_product_fintype, @finset.sum_comm ι _ _ _ _ _ _,
      <-finset.sum_add_distrib],
  refine congr_arg finset.univ.sum _,
  ext i,
  simpa [fin.sum_univ_succ],
end

end dot_product

end matrix

view this post on Zulip Yakov Pechersky (Jun 24 2020 at 18:49):

Even further golfed:

@[simp] lemma cons_dot_product (x : ι -> α) (v : fin n × ι → α) (w : fin n.succ × ι → α) :
  dot_product (vec_cons x v) w = dot_product x (vec_head w) + dot_product v (vec_tail w) :=
begin
  dsimp only [dot_product, vec_head, vec_tail],
  rw [fintype.sum_product_fintype, @finset.sum_comm ι _ _ _ _ _ _,
      fintype.sum_product_fintype, @finset.sum_comm ι _ _ _ _ _ _,
      <-finset.sum_add_distrib],
  simpa only [fin.sum_univ_succ, tail_cons', cons_val_succ]
end

@[simp] lemma dot_product_cons (v : fin n.succ × ι → α) (x : ι -> α) (w : fin n × ι → α) :
  dot_product v (vec_cons x w) = dot_product (vec_head v) x + dot_product (vec_tail v) w :=
begin
  dsimp only [dot_product, vec_head, vec_tail],
  rw [fintype.sum_product_fintype, @finset.sum_comm ι _ _ _ _ _ _,
      fintype.sum_product_fintype, @finset.sum_comm ι _ _ _ _ _ _,
      <-finset.sum_add_distrib],
  simpa only [fin.sum_univ_succ, tail_cons', cons_val_succ]
end

view this post on Zulip Yakov Pechersky (Jun 24 2020 at 18:52):

I'd been banging my head against the wall for like 4-5 days on these lemmas. And it turns out that just defining fintype.prod_product_fintype and tail_cons' was all I needed. Is that a normal proof workflow? I feel like I went down too many wrong paths.

view this post on Zulip Patrick Massot (Jun 24 2020 at 18:54):

I haven't been following the discussion but the last message sounds like you're learning.

view this post on Zulip Patrick Massot (Jun 24 2020 at 18:55):

Without having any idea about the topic of the discussion, I can see uncurried functions and lots of underscores. This doesn't smell good.

view this post on Zulip Yakov Pechersky (Jun 24 2020 at 18:57):

The uncurried functions are due to my trying to make lemmas about higher-dim "matrices" without having to rewrite them for b -> a, c -> b -> a, d -> c -> b -> a.

view this post on Zulip Yakov Pechersky (Jun 24 2020 at 18:58):

where the head of those arguments is always a fin n. So by proving over the product type, I can stuff the rest of the arrows all into the ι index type.

view this post on Zulip Patrick Massot (Jun 24 2020 at 18:59):

This is fishy.

view this post on Zulip Patrick Massot (Jun 24 2020 at 19:00):

What do you call higher-dim matrices? Tensors?

view this post on Zulip Kevin Buzzard (Jun 24 2020 at 19:02):

Yakov yes, this is exactly the workflow for learning Lean. You get interested in something, you work on it, you come up with some concrete Lean code which works, or doesn't quite work, you post it, others look at it and suggest fixes or changes, and proofs just get neater and neater. It doesn't matter how much you understand the material "informally", there is some kind of a knack for getting it into Lean which you learn by doing (writing code) and getting others to look at it. The vast majority of us learnt this way. The power of this chat is that there are a whole bunch of experts on it -- your situation was sufficiently basic that an intermediate user like me could help, but as you get better and maybe more specialised then you ask more challenging questions with your code and experts who know the system inside out give you more subtle answers (e.g. there are very subtle questions which can come up regarding e.g. the type class inference system).

view this post on Zulip Yakov Pechersky (Jun 24 2020 at 19:12):

I'm a fan of that sort of learning style. The hope is that everyone remains eager to teach each other

view this post on Zulip Yakov Pechersky (Jun 24 2020 at 19:12):

What do you find fishy about that sort of construction, Patrick?

view this post on Zulip Yakov Pechersky (Jun 24 2020 at 19:13):

One could call them tensors, although in mathlib that seems inappropriate. "Higher-dim matrices" is a numpy-style moniker.

view this post on Zulip Kevin Buzzard (Jun 24 2020 at 19:14):

I thought they were called holors :-)

view this post on Zulip Yakov Pechersky (Jun 24 2020 at 19:15):

Holors, from what I looked at in the lib, don't have good support for cons; they seemed to me to be more about construct-once at the desired shape

view this post on Zulip Yakov Pechersky (Jun 24 2020 at 19:17):

I'm editing the matrix notation file, and didn't want to write repetitive lemmas about matrices for the ones that were already written about vectors. Seemed to me that they were general statements about cons, head, tail, etc that did not rely on 1- or 2- dimensionality

view this post on Zulip Yakov Pechersky (Jun 24 2020 at 19:19):

And also a good way to flex/train my lean programming skills


Last updated: May 14 2021 at 06:16 UTC