Zulip Chat Archive

Stream: new members

Topic: induction on fin n


Yakov Pechersky (Aug 18 2020 at 15:06):

Currently, when I want to prove a lemma that deals with indices in fin (n + 2), I'm often relying on induction. However, it looks like:

import data.fin
import data.matrix.basic

variables {R : Type*} [field R]
variables {n m : ℕ}

variables (A : matrix (fin (n + 2)) (fin (n + 2)) R)
variables (B : matrix (fin (n + 2)) (fin (m + 2)) R)

open_locale big_operators matrix

def swap_row (A : matrix (fin n) (fin m) R) (i j) : matrix (fin n) (fin m) R :=
A ∘ equiv.swap i j

def drop (i : fin (n + 2)) (j : fin (m + 2)) : matrix (fin (n + 1)) (fin (m + 1)) R := B.minor (i.succ_above) (j.succ_above)

def det' {R : Type*} [field R] :
    Π {n : ℕ}, matrix (fin n) (fin n) R -> fin n -> R
| 0 _ _ := 1
| 1 M i := M i i
| (n + 2) M i := dite (i = 0)
  (λ h, ∑ j, (M 0 j * (-1 : R) ^ (j.val) * det' (drop M 0 j) 0))
  (λ h, ∑ j, (M i j * (-1 : R) ^ (i.val + j.val) * det' (drop M i j) (fin.pred i h)))

lemma det_swap_match_eq_neg_det'' (A : matrix (fin (n + 2)) (fin (n + 2)) R) (i j) (h : i ≠ j) : det' (swap_row A i j) i = - det' A 0 :=
begin
  induction n with n hn,
    { sorry },
    { obtain ⟨i, hi⟩ := i,
      obtain ⟨j, hj⟩ := j,
      wlog hl : j < i,
      { rcases lt_trichotomy j i with H|rfl|H,
        { exact or.inl H },
        { contradiction },
        { exact or.inr H } },
      { induction i with i HI,
        { cases j,
          { contradiction },
          { exact absurd (zero_le j.succ) (not_le_of_lt hl) } },
        { induction j with j HJ,
          { sorry },
          { sorry } } },
      { sorry } }
end

Yakov Pechersky (Aug 18 2020 at 15:07):

I have to deconstruct the fin n into obtain ⟨i, hi⟩ := i and then induct on that. That makes dealing with existing lemmas about fin n much harder because of constantly having to convert back and forth between the ⟨i, hi⟩ syntax and the term itself.
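
For concreteness, a minimal sketch of that destructure-then-induct pattern (the 0 ≤ i goal is just a placeholder; fin.zero_le is the mathlib lemma that also appears later in this thread):

example {n : ℕ} (i : fin (n + 2)) : 0 ≤ i :=
begin
  obtain ⟨i, hi⟩ := i,        -- the goal now talks about ⟨i, hi⟩, not the original term
  induction i with i IH,
  { exact fin.zero_le _ },    -- the ⟨0, hi⟩ case
  { exact fin.zero_le _ },    -- the ⟨i.succ, hi⟩ case
end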

Yakov Pechersky (Aug 18 2020 at 15:07):

Is there some induction i using ... that I don't know about? Or should I write one?

Yakov Pechersky (Aug 18 2020 at 15:08):

I've had some instances where I was able to invoke induction i.val but it didn't reduce the i to 0 and _. Not sure what it would have reduced it to.

Reid Barton (Aug 18 2020 at 15:11):

Can you either give a simpler example or explanation of what you want to do or translate your Lean code into English?

Reid Barton (Aug 18 2020 at 15:11):

You want to prove something for all i : fin n by induction on i I guess? What's the form of the argument?

Yakov Pechersky (Aug 18 2020 at 15:12):

I am trying to prove that the determinant of a matrix with two rows swapped is the negative of the determinant of the original matrix.

Reid Barton (Aug 18 2020 at 15:12):

What's det'?

Yakov Pechersky (Aug 18 2020 at 15:12):

det' is the Laplace expansion of the determinant

Yakov Pechersky (Aug 18 2020 at 15:12):

That expansion requires a row index to evaluate the determinant.

Yakov Pechersky (Aug 18 2020 at 15:13):

The argument is that the swapped rows are either (0, greater than 0) or (1 or greater, greater than 1).

Reid Barton (Aug 18 2020 at 15:14):

off-topic, but don't we already basically have this theorem in the form of det_permute?

Yakov Pechersky (Aug 18 2020 at 15:15):

That's totally right. But this is a necessary proof on the way to example : ∀ i, det' A i = det A := sorry

Yakov Pechersky (Aug 18 2020 at 15:16):

I want to prove that the Laplace expansion is a valid way of expressing det, that is the big picture.

Yakov Pechersky (Aug 18 2020 at 15:16):

So, I induct on the size of the matrix. The base case (which is dim 2) is just (0, 1) swap. That's easy and I have that.

Reid Barton (Aug 18 2020 at 15:17):

I'm still confused by det', the extra argument is which row to expand along? Why not always use the top row?

Yakov Pechersky (Aug 18 2020 at 15:17):

Then for (n + 2), the cases are (0, 1), (0, j + 1), (i + 1, j + 1). The (i + 1, j + 1) case is dealt with by the induction hypotheses for submatrices (from the n-induction) and the base cases.

Yakov Pechersky (Aug 18 2020 at 15:18):

Sure, one could always use the top row, and there would be a separate proof saying it does not matter which row you choose.

Yakov Pechersky (Aug 18 2020 at 15:18):

But to be able to prove that, det' has to be formulated to allow any row.
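
For instance, a statement-level sketch of that separate "any row works" proof, reusing det' and the variables from the first message (only a sketch of the statement, not a claim about its final form):

example (A : matrix (fin (n + 2)) (fin (n + 2)) R) (i : fin (n + 2)) :
  det' A i = det' A 0 :=
sorry  -- expanding along any row gives the same value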

Yakov Pechersky (Aug 18 2020 at 15:19):

That might be useful if one had an explicit matrix with a very sparse row -- use that row.

Reid Barton (Aug 18 2020 at 15:19):

Wouldn't it be easier to use det in the definition of det'?

Reid Barton (Aug 18 2020 at 15:20):

I guess I don't see the big picture

Reid Barton (Aug 18 2020 at 15:21):

We also have adjugate_mul : adjugate A ⬝ A = A.det • 1 which seems like roughly the statement you want to prove

Reid Barton (Aug 18 2020 at 15:24):

If you calculate the (i, i) entry of adjugate A ⬝ A it's something like your det' A i

Yakov Pechersky (Aug 18 2020 at 15:25):

You're right that they are very related definitions, that's part of what I want to try proving.

Yakov Pechersky (Aug 18 2020 at 15:26):

The Laplace expansion is a slow but sure way of actually calculating the det. The kernel knows:

variables (a b c d : R)

local notation a `+` b := field.add a b
local notation a `*` b := field.mul a b
local notation `-` a := field.neg a
local notation 0 := field.zero
local notation 1 := field.one

#reduce det' ![![a, b], ![c, d]] 0 /- a * 1 * d + (b * (-1 * 1) * c + 0) -/

#reduce matrix.det ![![a, b], ![c, d]] /- (0 + 1) * (a * (d * 1)) + (-(0 + 1) * (c * (b * 1)) + 0) -/

Yakov Pechersky (Aug 18 2020 at 15:28):

For those interested in computation, statements about the speed or bounds of matrix calculation would need different definitions to compare: in this case, the Laplace expansion versus anything more common.

Yakov Pechersky (Aug 18 2020 at 15:29):

In any case, is there a way to do induction on terms of fin n without first destructuring them?

Mario Carneiro (Aug 18 2020 at 15:32):

isn't there a recursor for fin?

Reid Barton (Aug 18 2020 at 15:36):

How about this strategy for proving your overall result:

  • Prove that det satisfies the same recursion as det': you can evaluate det by expanding along any row. You can prove it using adjugate_mul (but check the proof of that first to see if it uses another version of this fact).
  • Conclude that det' = det by induction on n. (A statement-level sketch of both steps follows.)
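
A statement-level sketch of those two steps, reusing det' and drop from the first message (the exact shape, e.g. the sign convention and whether adjugate_mul gives it directly, is not checked here):

-- step 1: det satisfies the same row-expansion recursion as det'
example (A : matrix (fin (n + 2)) (fin (n + 2)) R) (i : fin (n + 2)) :
  A.det = ∑ j, A i j * (-1 : R) ^ (i.val + j.val) * (drop A i j).det :=
sorry

-- step 2: conclude that det' agrees with det, by induction on the dimension
example : ∀ {n : ℕ} (A : matrix (fin n) (fin n) R) (i : fin n), det' A i = A.det :=
sorry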

Mario Carneiro (Aug 18 2020 at 15:37):

you can also skip step 2

Reid Barton (Aug 18 2020 at 15:40):

Yes, but if you specifically want to do this computation with #reduce or #eval for some reason then you can also not skip it

Mario Carneiro (Aug 18 2020 at 15:41):

but you also shouldn't use exponential algorithms for that

Yakov Pechersky (Aug 18 2020 at 15:42):

Let's say someone was writing a Linear Algebra Game without all of mathlib around.

Mario Carneiro (Aug 18 2020 at 15:44):

then you wouldn't have det

Mario Carneiro (Aug 18 2020 at 15:44):

so the theorem couldn't be stated

Yakov Pechersky (Aug 18 2020 at 15:47):

Sorry, I wasn't being clear. If in the game, the definition of det is the Laplace expansion, I'd still like to show the statement about how swapping rows negates the determinant value.

Yakov Pechersky (Aug 18 2020 at 15:47):

Without any reference to det = sum over permutations of products of ...

Yakov Pechersky (Aug 18 2020 at 15:48):

Is recursion/induction over fin n in general not recommended?

Mario Carneiro (Aug 18 2020 at 15:49):

docs#fin.succ_rec
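
For reference, its shape is roughly the following (paraphrased from memory of mathlib's data/fin/basic.lean; check the source for the exact statement):

-- def fin.succ_rec_on {n : ℕ} (i : fin n) {C : Π n, fin n → Sort*}
--   (H0 : Π n, C (n + 1) 0)
--   (Hs : Π n (i : fin n), C n i → C (n + 1) i.succ) :
--   C n i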

Yakov Pechersky (Aug 18 2020 at 15:53):

How is that used with the induction tactic? I've had difficulty understanding how induction ... using ... works.

Yakov Pechersky (Aug 18 2020 at 17:08):

Specifically, here are the errors I get:

example (A : matrix (fin (n + 2)) (fin (n + 2)) R) (i j) (h : i ≠ j) : det' (swap_row A i j) i = - det' A 0 :=
begin
  induction n with n hn,
    { sorry },
    {
      induction i using fin.succ_rec,
    /-
induction tactic failed, argument #1 of major premise type
  fin (nat.succ n + 2)
is not a variable
state:
R : Type u_1,
_inst_1 : field R,
n : ℕ,
hn :
  ∀ (A : matrix (fin (n + 2)) (fin (n + 2)) R) (i j : fin (n + 2)), i ≠ j → det' (swap_row A i j) i = -det' A 0,
A : matrix (fin (n.succ + 2)) (fin (n.succ + 2)) R,
i j : fin (n.succ + 2),
h : i ≠ j
⊢ det' (swap_row A i j) i = -det' A 0
-/

    }
end

example (A : matrix (fin (n + 2)) (fin (n + 2)) R) (i j) (h : i ≠ j) : det' (swap_row A i j) i = - det' A 0 :=
begin
  induction n with n hn,
    { sorry },
    {
      induction i using fin.succ_rec_on,
    /-
invalid user defined recursor, type of the major premise 'i' does not contain the recursor parameter 'i'
state:
R : Type u_1,
_inst_1 : field R,
n : ℕ,
hn :
  ∀ (A : matrix (fin (n + 2)) (fin (n + 2)) R) (i j : fin (n + 2)), i ≠ j → det' (swap_row A i j) i = -det' A 0,
A : matrix (fin (n.succ + 2)) (fin (n.succ + 2)) R,
i j : fin (n.succ + 2),
h : i ≠ j
⊢ det' (swap_row A i j) i = -det' A 0
-/

    }
end

Mario Carneiro (Aug 18 2020 at 17:12):

Oh, I don't use induction using much because it's a little flaky. You can just use refine fin.succ_rec_on i _ _
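
A minimal sketch of what that refine leaves you with (the true goal is just a placeholder):

example {n : ℕ} (i : fin n) : true :=
begin
  refine fin.succ_rec_on i _ _,
  { intros n, trivial },        -- the 0 case, in each dimension n + 1
  { intros n i IH, trivial },   -- the i.succ case, with the hypothesis IH for i
end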

Yakov Pechersky (Aug 18 2020 at 17:17):

For the zero case, it does not provide the hypothesis that i = 0. Is there a way to do that?

Yakov Pechersky (Aug 18 2020 at 18:52):

I'm trying to emulate the hypothesis i = 0 by reverting the hypotheses that mention i. But I get a type mismatch, because the newly introduced N does not match with the existing n if I don't also revert j. Is there a way to show that N.succ = n.succ + 2 and replace occurrences of N with n.succ + 1? Because the introduced hl hypothesis isn't well-typed if that isn't the case.

Yakov Pechersky (Aug 18 2020 at 18:54):

import data.fin
import tactic.wlog

variables {n : ℕ}

example (i j : fin (n + 2)) (h : i ≠ j) : true :=
begin
  induction n with n hn,
    { sorry },
    { wlog hl : j < i using [i j, j i],
      { rcases lt_trichotomy j i with H|rfl|H,
        { exact or.inl H },
        { contradiction },
        { exact or.inr H } },
      { revert hl h,
        refine fin.succ_rec_on i _ _,
        { intros N hl,
          exact absurd (fin.zero_le j) (not_le_of_lt hl) },
        /-
type mismatch at application
  not_le_of_lt hl
term
  hl
has type
  @has_lt.lt (fin N.succ) (@fin.has_lt N.succ) j 0
but is expected to have type
  @has_lt.lt (fin (@has_add.add nat nat.has_add (@has_add.add nat nat.has_add n.succ 1) 1))
    (@preorder.to_has_lt (fin (@has_add.add nat nat.has_add (@has_add.add nat nat.has_add n.succ 1) 1))
       (@partial_order.to_preorder (fin (@has_add.add nat nat.has_add (@has_add.add nat nat.has_add n.succ 1) 1))
          (@semilattice_inf.to_partial_order
             (fin (@has_add.add nat nat.has_add (@has_add.add nat nat.has_add n.succ 1) 1))
             (@lattice.to_semilattice_inf (fin (@has_add.add nat nat.has_add (@has_add.add nat nat.has_add n.succ 1) 1))
                (@lattice_of_decidable_linear_order
                   (fin (@has_add.add nat nat.has_add (@has_add.add nat nat.has_add n.succ 1) 1))
                   (@fin.decidable_linear_order
                      (@has_add.add nat nat.has_add (@has_add.add nat nat.has_add n.succ 1) 1)))))))
    j
    0
state:
n : ℕ,
hn : ∀ (i j : fin (n + 2)), i ≠ j → true,
i j : fin (n.succ + 2),
N : ℕ,
hl : j < 0
⊢ 0 ≠ j → true
-/

        {  },
      }
    }
end

Yakov Pechersky (Aug 18 2020 at 18:54):

Here's how reverting j makes it work.

example (i j : fin (n + 2)) (h : i ≠ j) : true :=
begin
  induction n with n hn,
    { sorry },
    { wlog hl : j < i using [i j, j i],
      { rcases lt_trichotomy j i with H|rfl|H,
        { exact or.inl H },
        { contradiction },
        { exact or.inr H } },
      { revert hl h j,
        refine fin.succ_rec_on i _ _,
        { intros N j hl,
          exact absurd (fin.zero_le j) (not_le_of_lt hl) },
        { sorry  },
      }
    }
end

Yakov Pechersky (Aug 18 2020 at 18:54):

I guess this is what induction ... generalizing ... does?

Mario Carneiro (Aug 18 2020 at 18:57):

wait, you are doing two inductions over n here? is that deliberate?

Yakov Pechersky (Aug 18 2020 at 18:58):

I wanted to induct over i : fin (n + 2)

Mario Carneiro (Aug 18 2020 at 18:58):

yeah but what does that mean

Yakov Pechersky (Aug 18 2020 at 18:58):

And use a hypothesis that is valid for fin (n + 1)

Mario Carneiro (Aug 18 2020 at 18:58):

fin.succ_rec is doing simultaneous induction over i and n in i : fin n

Mario Carneiro (Aug 18 2020 at 18:59):

which you have to do in order to make something type correct

Yakov Pechersky (Aug 18 2020 at 18:59):

Okay, so I will try to rearrange the fin.succ_rec_on to be the outermost layer, and induct on n inside.

Mario Carneiro (Aug 18 2020 at 19:00):

the inner one can probably be a cases

Mario Carneiro (Aug 18 2020 at 19:22):

Oh! Your code is broken because a previous tactic gave you a bad state

Mario Carneiro (Aug 18 2020 at 19:28):

import data.fin
import tactic.wlog

example {n : ℕ} (i j : fin (n + 2)) (h : i ≠ j) : false :=
begin
  wlog hl : j < i using [i j, j i],
  { sorry },
  { revert hl h,
    refine fin.succ_rec_on i sorry sorry }
end

This gives an error on the wlog line only if the refine line is there

Mario Carneiro (Aug 18 2020 at 19:28):

the tactic state coming out of wlog is actually malformed

Yakov Pechersky (Aug 18 2020 at 19:39):

so, avoid wlog?

Mario Carneiro (Aug 18 2020 at 19:40):

I think it might be having trouble with the dependent type?

Yakov Pechersky (Aug 18 2020 at 19:40):

Or do wlog only after all my induction steps?

Mario Carneiro (Aug 18 2020 at 19:41):

it could also be a bug in refine i.e. the elaborator, but that seems less likely given how much use it gets

Mario Carneiro (Aug 18 2020 at 19:44):

actually that's exactly the case:

example {n : ℕ} (i j : fin n) : j < i → false :=
fin.succ_rec_on i sorry sorry

Mario Carneiro (Aug 18 2020 at 19:46):

In fact I've seen this issue before; it is the same thing that makes rw say motive is not type correct. It attempted to generalize the goal and got something type incorrect, although it reports the error in a really weird way here.

Yakov Pechersky (Aug 18 2020 at 19:47):

More examples of weirdness:

import data.fin

variables {n : ℕ}

example (i : fin (n + 2)) (h : (0 : fin (n + 2)) < i) : true :=
/-
type mismatch at application
  0
term
  @fin.has_zero (n + 1)
has type
  has_zero (fin (n + 1).succ)
but is expected to have type
  has_zero (fin _x)
-/

begin
  revert h,
  refine fin.succ_rec_on i _ _,
  { intros N h,
    exact absurd h (lt_irrefl (0 : fin (N.succ))) },
  { sorry }
end

example (i : fin (n + 2)) (h : (0 : fin (n + 2)) < i) : true :=
begin
  revert h,
  refine fin.succ_rec_on i _ _,
  { intros N h,
    exact absurd h (lt_irrefl 0) },
  /-
type mismatch at application
  absurd h (lt_irrefl 0)
term
  lt_irrefl 0
has type
  not (@has_lt.lt ?m_1 (@preorder.to_has_lt ?m_1 ?m_2) 0 0)
but is expected to have type
  not (@has_lt.lt (fin N.succ) (@fin.has_lt N.succ) 0 0)
state:
n : ℕ,
i : fin (n + 2),
N : ℕ,
h : 0 < 0
⊢ true
-/

  { sorry }
end

Yakov Pechersky (Aug 18 2020 at 19:49):

My first example is your example, but the error is deferred if I don't specify the type of 0 on the absurd line.

Yakov Pechersky (Aug 18 2020 at 19:51):

Should this be a separate thread? I guess I'll go back to

      { obtain ⟨i, hi⟩ := i,
        obtain ⟨j, hj⟩ := j,
        induction i with i IH,
          { induction j with j JH,

Mario Carneiro (Aug 18 2020 at 19:51):

You basically should not call fin.succ_rec_on unless the object has type i : fin n where n is a variable. You should generalize e : n + 2 = m beforehand to ensure this if you have to put n+2 in the theorem statement, but it would be even better if you avoided that
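
A minimal sketch of that generalize step (the true goal is a placeholder; in a real statement you would also have to revert anything else whose type mentions n + 2 before generalizing):

example {n : ℕ} (i : fin (n + 2)) : true :=
begin
  revert i,
  generalize e : n + 2 = m,   -- e : n + 2 = m, and the goal now mentions fin m
  intro i,                    -- i : fin m, with m a variable
  refine fin.succ_rec_on i _ _,
  { intros, trivial },
  { intros, trivial },
end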

Yakov Pechersky (Aug 18 2020 at 19:52):

Can't refer to 0 if it's of type fin n, only fin (n + 1) at best.

Mario Carneiro (Aug 18 2020 at 19:52):

Just use a variable instead of 0

Mario Carneiro (Aug 18 2020 at 19:52):

you wanted to generalize that anyway, right?

Yakov Pechersky (Aug 18 2020 at 19:52):

True. Thanks!

Yakov Pechersky (Aug 18 2020 at 19:57):

Doesn't your

example {n : ℕ} (i j : fin n) : j < i → false :=
fin.succ_rec_on i sorry sorry

have n as a variable?

Mario Carneiro (Aug 18 2020 at 20:01):

it does, but it did not revert j

Mario Carneiro (Aug 18 2020 at 20:02):

the correct way to set up the induction is

example {n : ℕ} (i j : fin n) : j < i → false :=
fin.succ_rec_on i (λ n i j, sorry) (λ n i IH j, sorry) j

Yakov Pechersky (Aug 18 2020 at 20:13):

So something like:

example {n' : ℕ} (i' j' : fin n') (A' : matrix (fin n') (fin n') R) : i' < j' → det' (swap_row A' i' j') j' = - det' A' i' :=
fin.succ_rec_on i'
(λ n j A h, by {
  sorry,
})
(λ n i IH j A, by {
  sorry,
}) j' A'

Yakov Pechersky (Aug 18 2020 at 20:19):

But basically, that also means that I can't also induct on j within.

Mario Carneiro (Aug 18 2020 at 20:52):

you can case on j

Mario Carneiro (Aug 18 2020 at 20:55):

One thing that's not clear to me is what the overall structure of your induction is. Suppose the theorem you want to prove is P n i j. What are the relations between the P's at different arguments that you want to prove? Is it i < j -> P n i j -> P (n+1) (i+1) (j+1) and P (n+1) 0 0 or something else?

Yakov Pechersky (Aug 18 2020 at 21:10):

Roughly following the proof here:
https://www.quora.com/What-is-the-proof-for-the-Laplace-expansion-for-calculating-determinants
but I think this proof they outline has a circular argument.

Yakov Pechersky (Aug 20 2020 at 18:34):

Does #3869 help at all for fin comparisons? Relatedly, can one use \bot to refer to 0 and \top for fin.last _?


Last updated: Dec 20 2023 at 11:08 UTC