Zulip Chat Archive

Stream: new members

Topic: polynomial in span of coefficients


Damiano Testa (Aug 25 2020 at 07:19):

Dear All,

I am trying to prove that if f is a polynomial in one variable over a commutative ring, then f is contained in the ideal span of its coefficients. I hope that the code below states, at least correctly, the result that I want! Is there a proof already of this fact? I am having a hard time using the "support" of the polynomial and converting into a "sum representation".

Thank you!

import ring_theory.polynomial
import tactic

open polynomial

universe u

lemma finc {R : Type u} [comm_ring R] (f : polynomial R) : f ∈ ideal.span {tt : polynomial R | ∃ i ∈ f.1, tt = (C (coeff f i))} :=
begin
    sorry,
end

Kevin Buzzard (Aug 25 2020 at 07:30):

oh wow data.polynomial has been broken up into lots of files recently, and I don't know my way around it any more.

Kevin Buzzard (Aug 25 2020 at 07:30):

There will presumably be some lemma in there somewhere which says that if $a_i$ is the coefficient of $X^i$ in $f$ then $f=\sum_i a_i X^i$.

Kevin Buzzard (Aug 25 2020 at 07:32):

It's as_sum on line 196 of data.polynomial.degree.basic
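
(Its statement says, roughly, for p : polynomial R; the exact form and location in mathlib may have changed since:)

p = ∑ i in finset.range (p.nat_degree + 1), C (p.coeff i) * X ^ i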

Damiano Testa (Aug 25 2020 at 07:33):

I have been unable to locate this lemma: thank you very much! I will try to get it to work!

Kevin Buzzard (Aug 25 2020 at 07:36):

lemma finc {R : Type u} [comm_ring R] (f : polynomial R) :
  f ∈ ideal.span {tt : polynomial R | ∃ i ∈ f.1, tt = (C (coeff f i))} :=
begin
  rw as_sum f,
  sorry
end

Kevin Buzzard (Aug 25 2020 at 07:37):

Now the goal becomes $\sum_{i\in S}a_i \in \ldots$ oh oops, I've rewritten too many f's.

Kevin Buzzard (Aug 25 2020 at 07:38):

begin
  conv_lhs {rw as_sum f},
  sorry
end

Kevin Buzzard (Aug 25 2020 at 07:39):

(rewrite on LHS only). Now the goal is $\sum_{i\in S}a_i \in span(A)$ and you know that each $a_i$ is in $A$.

Damiano Testa (Aug 25 2020 at 07:41):

Ah, I did not know that you could make Lean rw only on the LHS...
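
(A minimal illustration of conv_lhs, not from the thread and assuming mathlib's import tactic: a bare rw rewrites wherever it can, while conv_lhs confines it to the left-hand side of the goal.)

example (a b c : ℕ) (h : a = c) : a + b = a + b :=
begin
  conv_lhs { rw h },  -- only the `a` on the left becomes `c`; the goal is now `c + b = a + b`
  rw h,               -- an ordinary rw fixes the remaining `a` and closes the goal by refl
end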

Kevin Buzzard (Aug 25 2020 at 07:43):

So now we want to apply a lemma that says that if a finite number of things are in an ideal then so is their sum

Kevin Buzzard (Aug 25 2020 at 07:43):

And that will be called sum_mem probably

Kevin Buzzard (Aug 25 2020 at 07:46):

(sorry so slow, doing ten other things at once; just sat down to think about this now)

Damiano Testa (Aug 25 2020 at 07:47):

No worries! Besides, getting a slower feed, I learn more!

Kevin Buzzard (Aug 25 2020 at 07:47):

If I do #check sum_mem and then hit ctrl-space (and then escape and ctrl-space again) I see a list of possibilities

Damiano Testa (Aug 25 2020 at 07:47):

(and I get some extra time to process)

Kevin Buzzard (Aug 25 2020 at 07:48):

oh here we go, submodule.sum_mem should do it

Kevin Buzzard (Aug 25 2020 at 07:48):

refine submodule.sum_mem _ _,

Kevin Buzzard (Aug 25 2020 at 07:49):

The goal is now

⊢ ∀ (c : ℕ), c ∈ finset.range (f.nat_degree + 1) → ⇑C (f.coeff c) * X ^ c ∈ ideal.span {tt : polynomial R | ∃ (i : ℕ) (H : i ∈ f.support), tt = ⇑C (f.coeff i)}

Damiano Testa (Aug 25 2020 at 07:49):

Yes, I am getting the same! At least I know how to follow steps!

Kevin Buzzard (Aug 25 2020 at 07:50):

This would be a one-liner but for the fact that the ideal on the RHS is only generated by the coefficients in the support

Kevin Buzzard (Aug 25 2020 at 07:50):

whereas the coefficient could be zero, so we're going to have to split into two cases

Kevin Buzzard (Aug 25 2020 at 07:52):

begin
  conv_lhs {rw as_sum f},
  refine submodule.sum_mem _ _,
  intros i hi,
  by_cases h : i ∈ f.support,
  { sorry },
  { sorry }
end

Kevin Buzzard (Aug 25 2020 at 07:52):

The first goal is when i is in the support:

h: i ∈ f.support
⊢ ⇑C (f.coeff i) * X ^ i ∈ ideal.span {tt : polynomial R | ∃ (i : ℕ) (H : i ∈ f.support), tt = ⇑C (f.coeff i)}

Kevin Buzzard (Aug 25 2020 at 07:53):

By the way, do you see that I am writing the argument completely backwards as far as a mathematician is concerned?

Damiano Testa (Aug 25 2020 at 07:53):

I am following!

Kevin Buzzard (Aug 25 2020 at 07:53):

We now want that if a is in I, then a*b is in I

Damiano Testa (Aug 25 2020 at 07:54):

Kevin Buzzard said:

By the way, do you see that I am writing the argument completely backwards as far as a mathematician is concerned?

Yes, every other line that I write in Lean seems to be a have...

Kevin Buzzard (Aug 25 2020 at 07:54):

that's because you're writing arguments forwards

Kevin Buzzard (Aug 25 2020 at 07:54):

Both ways are fine, but there are advantages to going backwards

Damiano Testa (Aug 25 2020 at 07:54):

I can imagine, I am just not proficient enough, yet

Kevin Buzzard (Aug 25 2020 at 07:55):

  { rw mul_comm,
    refine submodule.smul_mem _ _ _,
    sorry },

Kevin Buzzard (Aug 25 2020 at 07:55):

and now the first goal is

⊢ ⇑C (f.coeff i) ∈ ideal.span {tt : polynomial R | ∃ (i : ℕ) (H : i ∈ f.support), tt = ⇑C (f.coeff i)}

Kevin Buzzard (Aug 25 2020 at 07:56):

so we need the lemma that X is in the ideal spanned by X

Damiano Testa (Aug 25 2020 at 07:56):

ok

Kevin Buzzard (Aug 25 2020 at 07:56):

apply submodule.subset_span,

Kevin Buzzard (Aug 25 2020 at 07:57):

and now our first goal has become

⊢ ⇑C (f.coeff i) ∈ {tt : polynomial R | ∃ (i : ℕ) (H : i ∈ f.support), tt = ⇑C (f.coeff i)}

Damiano Testa (Aug 25 2020 at 07:57):

which seems at least true...

Kevin Buzzard (Aug 25 2020 at 07:57):

and the way that sets are set up, this is definitionally equal to the simpler

  ∃ (i_1 : ℕ) (H : i_1 ∈ f.support), ⇑C (f.coeff i) = ⇑C (f.coeff i_1)

and you can change the goal to this with dsimp if you want (but I won't do this)

Kevin Buzzard (Aug 25 2020 at 07:58):

To solve an exists goal, you use use

Kevin Buzzard (Aug 25 2020 at 07:58):

use [i, h] (we have both things to hand)

Damiano Testa (Aug 25 2020 at 07:58):

yes!!

Kevin Buzzard (Aug 25 2020 at 07:58):

and now the goal is of the form X=X so Lean closes it automatically
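
(A self-contained example of the same pattern, assuming mathlib's import tactic: use supplies the witnesses and the leftover x = x goal disappears on its own.)

example (s : set ℕ) (x : ℕ) (hx : x ∈ s) : ∃ y ∈ s, y = x :=
begin
  use [x, hx],  -- give the witness and the membership proof;
                -- the remaining goal `x = x` is closed automatically
end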

Kevin Buzzard (Aug 25 2020 at 07:58):

begin
  conv_lhs {rw as_sum f},
  refine submodule.sum_mem _ _,
  intros i hi,
  by_cases h : i ∈ f.support,
  { rw mul_comm,
    refine submodule.smul_mem _ _ _,
    apply submodule.subset_span,
    use [i, h] },
  { sorry }
end

Kevin Buzzard (Aug 25 2020 at 07:59):

The other case -- here i is not in the support so we're trying to prove that 0 is in the ideal

Damiano Testa (Aug 25 2020 at 07:59):

Kevin Buzzard said:

and the way that sets are set up, this is definitionally equal to the simpler

  ∃ (i_1 : ℕ) (H : i_1 ∈ f.support), ⇑C (f.coeff i) = ⇑C (f.coeff i_1)

and you can change the goal to this with dsimp if you want (but I won't do this)

I did not know that you could deal with sets like this

Kevin Buzzard (Aug 25 2020 at 08:00):

x \in {y | P y} is definitionally equal to P x

Kevin Buzzard (Aug 25 2020 at 08:00):

If it weren't then there would be some simp lemma saying it was and you'd apply that
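
(A minimal example of that definitional unfolding: membership in a set-builder set is, by definition, just the predicate, so the hypothesis is literally a proof of the goal.)

example (P : ℕ → Prop) (x : ℕ) (h : P x) : x ∈ {y : ℕ | P y} := h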

Damiano Testa (Aug 25 2020 at 08:00):

I see. I had probably read this somewhere, but it flowed out of my mind

Damiano Testa (Aug 25 2020 at 08:01):

so, for the second goal, my approach would be to have coeff... = 0

Kevin Buzzard (Aug 25 2020 at 08:01):

I tend to go backwards so I'd start with

convert submodule.zero_mem _,

Kevin Buzzard (Aug 25 2020 at 08:02):

h: i ∉ f.support
⊢ ⇑C (f.coeff i) * X ^ i = 0

Damiano Testa (Aug 25 2020 at 08:02):

I would prefer to follow your strategy

Kevin Buzzard (Aug 25 2020 at 08:02):

submodule.zero_mem J is the assertion that 0 is in $J$

Kevin Buzzard (Aug 25 2020 at 08:03):

Our goal is that something else is in J, but the convert tactic says "honestly, this line should close this goal"

Kevin Buzzard (Aug 25 2020 at 08:03):

and Lean's response is "OK then, you will have to prove that these things are equal before I believe you"
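
(A toy example of convert, not from the thread and assuming import tactic: the lemma doesn't literally match the goal, so Lean hands the mismatch back as a new goal.)

example (x : ℕ) (h : x = 0) : x * 5 = 0 :=
begin
  convert zero_mul 5,  -- `zero_mul 5 : 0 * 5 = 0`; Lean now asks for the mismatch `x = 0`
  exact h,
end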

Damiano Testa (Aug 25 2020 at 08:03):

Ah, so you are pointing out to Lean that 0 is an element of the set and it tries to then prove that your given element is 0?

Kevin Buzzard (Aug 25 2020 at 08:03):

right

Damiano Testa (Aug 25 2020 at 08:04):

(or rather, asks you to prove!)

Kevin Buzzard (Aug 25 2020 at 08:04):

convert zero_mul _,

Kevin Buzzard (Aug 25 2020 at 08:04):

zero_mul _ is the statement that 0 * x = 0 and I'm telling Lean to close the goal with this

Damiano Testa (Aug 25 2020 at 08:04):

Ok, I am following

Kevin Buzzard (Aug 25 2020 at 08:04):

h: i ∉ f.support
⊢ ⇑C (f.coeff i) = 0

Kevin Buzzard (Aug 25 2020 at 08:05):

It's a bit disappointing that simp can't take it from here

Kevin Buzzard (Aug 25 2020 at 08:07):

There are two steps left: first to show that if i isn't in the support then f.coeff i = 0

Kevin Buzzard (Aug 25 2020 at 08:07):

and second to show that C 0 = 0

Damiano Testa (Aug 25 2020 at 08:08):

Ok, and C should be injective, right?

Kevin Buzzard (Aug 25 2020 at 08:08):

right, but injectivity in Lean means C x = C y -> x = y

Damiano Testa (Aug 25 2020 at 08:08):

(I am trying to reason the opposite way in which I would normally, in this baby case!)

Kevin Buzzard (Aug 25 2020 at 08:08):

and there's only one C here

Kevin Buzzard (Aug 25 2020 at 08:09):

I'm trying to find the interface for C

Kevin Buzzard (Aug 25 2020 at 08:09):

found it

Kevin Buzzard (Aug 25 2020 at 08:09):

I found a random file in data/polynomial which mentioned C and then I right clicked on it and I'm now in data.polynomial.monomial

Damiano Testa (Aug 25 2020 at 08:10):

Kevin Buzzard said:

found it

where? how?

Damiano Testa (Aug 25 2020 at 08:10):

Kevin Buzzard said:

I found a random file in data/polynomial which mentioned C and then I right clicked on it and I'm now in data.polynomial.monomial

ok, thanks!

Kevin Buzzard (Aug 25 2020 at 08:10):

/--
`C a` is the constant polynomial `a`.
`C` is provided as a ring homomorphism.
-/
def C : R →+* polynomial R := add_monoid_algebra.algebra_map' (ring_hom.id R)

@[simp] lemma monomial_zero_left (a : R) : monomial 0 a = C a := rfl

lemma C_0 : C (0 : R) = 0 := single_zero

lemma C_1 : C (1 : R) = 1 := rfl

...

Kevin Buzzard (Aug 25 2020 at 08:10):

I was looking for C_zero so now I see my mistake

Kevin Buzzard (Aug 25 2020 at 08:10):

convert C_0,

Kevin Buzzard (Aug 25 2020 at 08:10):

h: i ∉ f.support
⊢ f.coeff i = 0

Kevin Buzzard (Aug 25 2020 at 08:11):

and still simp doesn't do it :-/

Damiano Testa (Aug 25 2020 at 08:11):

the C disappeared!

Kevin Buzzard (Aug 25 2020 at 08:11):

C_0 says C 0 = 0

Damiano Testa (Aug 25 2020 at 08:12):

Yes, I had been able to understand this from what you copy-pasted!

Kevin Buzzard (Aug 25 2020 at 08:12):

so I told Lean that this was equal to the goal and it figured that the input to C had better be zero so asked me to check

Damiano Testa (Aug 25 2020 at 08:12):

(I keep feeling like those people who needed the LMGTFY)

Kevin Buzzard (Aug 25 2020 at 08:13):

So I know how to prove this by looking at the internals of finsupp

Damiano Testa (Aug 25 2020 at 08:14):

I find that finite and its relatives should be avoided at all costs. I do not know if it is an issue with finite being second-order or what, but I always have a hard time with that...

Kevin Buzzard (Aug 25 2020 at 08:14):

The idea is that internally polynomials are represented as some function with finite support, and part of the definition of finsupp is a proof of the assertion that i is in the support iff g(i) is non-zero

Damiano Testa (Aug 25 2020 at 08:15):

Kevin Buzzard said:

The idea is that internally polynomials are represented as some function with finite support, and part of the definition of finsupp is a proof of the assertion that i is in the support iff g(i) is non-zero

I had carved this out from somewhere and I got annoyed by it: wouldn't it make more sense to simply know that it is a finite set and outside the coefficients are zero, without necessarily knowing that inside they are not?

Kevin Buzzard (Aug 25 2020 at 08:16):

so this

    unfold coeff,
    have h2 := finsupp.mem_support_to_fun f i,
    simp * at *,

finishes it in a rather inelegant way but I'm sure we can do better

Damiano Testa (Aug 25 2020 at 08:16):

anyway, you already told me not to worry about definitions...

Kevin Buzzard (Aug 25 2020 at 08:16):

The issue is that finsupp is computable

Scott Morrison (Aug 25 2020 at 08:17):

I think the original statement you said you were after is already in mathlib, perhaps. I remember seeing something very similar recently. Have you looked in ring_theory.polynomial?

Kevin Buzzard (Aug 25 2020 at 08:17):

so internally a polynomial over R is actually defined as (a) the function from {1,X,X^2,...} to R, (b) a finite subset of {1,X,X^2,...} and (c) a proof that the function is non-zero precisely on the finite subset
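
(Roughly the definition being described, as finsupp stood in mathlib 3 at the time; the field names are the ones that appear later in this thread.)

structure finsupp (α : Type*) (β : Type*) [has_zero β] :=
(support            : finset α)
(to_fun             : α → β)
(mem_support_to_fun : ∀ a, a ∈ support ↔ to_fun a ≠ 0)

-- `polynomial R` is then (essentially) a finitely supported function from exponents
-- to coefficients, i.e. built on `finsupp ℕ R`.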

Kevin Buzzard (Aug 25 2020 at 08:17):

@Scott Morrison ring_theory.polynomial is now four files I don't know my way around

Damiano Testa (Aug 25 2020 at 08:18):

Is the empty set finite? I think that I saw that something is finite and non-empty...

Kevin Buzzard (Aug 25 2020 at 08:18):

yes the empty set is finite

Damiano Testa (Aug 25 2020 at 08:18):

ok, at some point I was asked by Lean to show that the support of a polynomial was not finite, or so I thought

Kevin Buzzard (Aug 25 2020 at 08:19):

So currently the second proof looks like

  { convert submodule.zero_mem _,
    convert zero_mul _,
    convert C_0,
    unfold coeff,
    have h2 := finsupp.mem_support_to_fun f i,
    simp * at * },

Kevin Buzzard (Aug 25 2020 at 08:19):

but I'd rather change those last three lines into something nicer where I don't have to break the interface and start going on about finsupp

Damiano Testa (Aug 25 2020 at 08:19):

Anyway, I am so grateful for going through this proof! Honestly, I would not have learned as much from simply finding a reference to a mathlib proof

Kevin Buzzard (Aug 25 2020 at 08:20):

Let's take a look at the interface for coeff -- you can find it by right clicking on the coeff in the lemma we're proving

Kevin Buzzard (Aug 25 2020 at 08:22):

yeah I can't find it. I want i \in f.support \iff f.coeff i \ne 0 and/or \not (i \in f.support) \iff f.coeff i = 0

Kevin Buzzard (Aug 25 2020 at 08:23):

Maybe you're not supposed to use support of a polynomial?

Damiano Testa (Aug 25 2020 at 08:23):

I'd be happy to avoid it! In fact, most of the time, I would be happy with the universal property of polynomial rings!

Devon Tuma (Aug 25 2020 at 08:23):

I think you can get simp to do what you want at the end with simpa using h (I remember also being upset that simp didn't solve this for me even though it felt like it should, but I think that worked.)

Damiano Testa (Aug 25 2020 at 08:24):

Devon Tuma said:

I think you can get simp to do what you want at the end with simpa using h (I remember also being upset that simp didn't solve this for me even though it felt like it should, but I think that worked.)

I tried it and it worked for me!

Kevin Buzzard (Aug 25 2020 at 08:24):

import ring_theory.polynomial
import tactic

open polynomial

universe u

lemma finc {R : Type u} [comm_ring R] (f : polynomial R) :
  f ∈ ideal.span {tt : polynomial R | ∃ i ∈ f.support, tt = (C (coeff f i))} :=
begin
  conv_lhs {rw as_sum f},
  refine submodule.sum_mem _ _,
  intros i hi,
  by_cases h : i ∈ f.support,
  { rw mul_comm,
    refine submodule.smul_mem _ _ _,
    apply submodule.subset_span,
    use [i, h] },
  { convert submodule.zero_mem _,
    convert zero_mul _,
    convert C_0,
    simpa using h }
end

Damiano Testa (Aug 25 2020 at 08:24):

Thanks!

Kevin Buzzard (Aug 25 2020 at 08:25):

I see. I was trying simp [h] which didn't work.

Kevin Buzzard (Aug 25 2020 at 08:25):

But

simp at h,
exact h

works, so simpa using h works
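
(A small illustration of the difference, not from the thread: simpa using h runs the simplifier on h and on the goal, then closes the goal with the simplified h.)

example (n : ℕ) (h : n + 0 = 3) : n = 3 :=
by simpa using h   -- same effect as `simp at h, exact h`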

Damiano Testa (Aug 25 2020 at 08:25):

what does simp do?

Kevin Buzzard (Aug 25 2020 at 08:26):

Damiano -- the issue was that I needed to run the simplifier on h as well as the goal

Kevin Buzzard (Aug 25 2020 at 08:26):

The simplifier is a rewriting system.

Kevin Buzzard (Aug 25 2020 at 08:26):

If the simplifier knows that A = B and f = g it will be able to prove things like f(A) = g(B)
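
(For instance, a sketch of that behaviour; the hypotheses here are made up, nothing from the thread:)

example (f g : ℕ → ℕ) (A B : ℕ) (h1 : A = B) (h2 : f = g) : f A = g B :=
by simp [h1, h2]   -- rewrites A to B and f to g, leaving `g B = g B`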

Damiano Testa (Aug 25 2020 at 08:26):

ok, I guess this explains why often I prove a hypothesis, I rw with it and then simp does more

Kevin Buzzard (Aug 25 2020 at 08:27):

= is not symmetric in Lean!

Kevin Buzzard (Aug 25 2020 at 08:27):

If you look at a simp lemma like zero_mul, it says 0 * x = 0 but it is not a coincidence that the left hand side of the equality is more complex than the right hand side

Damiano Testa (Aug 25 2020 at 08:28):

the written language of Lean is relatively straightforward to parse, but the various implications are very subtle, for me

Patrick Massot (Aug 25 2020 at 08:28):

Kevin Buzzard said:

= is not symmetric in Lean!

= is not definitionally symmetric in Lean!

Kevin Buzzard (Aug 25 2020 at 08:28):

The simplifier will try and replace the left hand side of a simp lemma with the right hand side, so it's important that the more complicated side is the left hand side.
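
(A sketch of the convention; the lemma name here is made up, zero_mul is the real mathlib one.)

-- the more complicated side goes on the left, because simp rewrites left-to-right:
@[simp] lemma my_zero_mul (x : ℕ) : 0 * x = 0 := zero_mul x
-- simp will now replace any occurrence of `0 * _` by `0`, never the other way round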

Kevin Buzzard (Aug 25 2020 at 08:28):

Patrick is explaining what I'm saying in a much better way :-)

Patrick Massot (Aug 25 2020 at 08:28):

Damiano, it's subtle for everyone. Even seasoned users sometimes get caught.

Damiano Testa (Aug 25 2020 at 08:29):

Kevin Buzzard said:

= is not symmetric in Lean!

I have learned the functions symmetry and symmetry' because of this...

Damiano Testa (Aug 25 2020 at 08:29):

Patrick Massot said:

Damiano, it's subtle for everyone. Even seasoned users sometimes get caught.

Strangely, this comment makes me hopeful!

Kevin Buzzard (Aug 25 2020 at 08:33):

That first subproof can be rewritten like this:

  { rw mul_comm,
    exact submodule.smul_mem _ _ (submodule.subset_span ⟨i, h, rfl⟩) },

Kevin Buzzard (Aug 25 2020 at 08:33):

all this refine and apply is just function application
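
(The same point in miniature: each of the tactics below just hands Lean a term, more or less fully specified.)

example (P Q : Prop) (hPQ : P → Q) (hP : P) : Q :=
begin
  refine hPQ _,  -- `apply hPQ` would do the same; the `_` becomes a new goal
  exact hP,
end

-- or, as a single term, with no tactics at all:
example (P Q : Prop) (hPQ : P → Q) (hP : P) : Q := hPQ hP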

Kevin Buzzard (Aug 25 2020 at 08:33):

It's important to write it this way because then new users won't be able to understand what the heck is going on.

Kevin Buzzard (Aug 25 2020 at 08:34):

We need to cover our tracks to make sure that this stuff doesn't get big

Damiano Testa (Aug 25 2020 at 08:35):

I am really hoping that you will not edit your answers in this thread! I am planning on going over the whole reasoning a couple of times, still! Ahahaha

Kevin Buzzard (Aug 25 2020 at 08:36):

I always wanted simp to do the second subgoal, but it kept not doing it (I would try every time I did another convert). Devon pointed out the issue -- simp couldn't deal with h as it stood, I needed to use the simplifier on h as well.

Kevin Buzzard (Aug 25 2020 at 08:36):

But if you do this at the beginning then the simplifier can do the rest:

  { have h2 : f.coeff i = 0, by simpa using h,
    simp [h2]}

Kevin Buzzard (Aug 25 2020 at 08:37):

This proof is much more like a mathematician's proof now -- "the coefficient c of X^i is zero so c*X^i is clearly in this ideal, whatever the ideal is"

Kevin Buzzard (Aug 25 2020 at 08:38):

lemma finc {R : Type u} [comm_ring R] (f : polynomial R) :
  f ∈ ideal.span {tt : polynomial R | ∃ i ∈ f.support, tt = (C (coeff f i))} :=
begin
  conv_lhs {rw as_sum f},
  refine submodule.sum_mem _ _,
  intros i hi,
  by_cases h : i ∈ f.support,
  { rw mul_comm,
    exact submodule.smul_mem _ _ (submodule.subset_span ⟨i, h, rfl⟩) },
  { have h2 : f.coeff i = 0, by simpa using h,
    simp [h2]}
end

This is looking more like a mathlibby proof now

Kevin Buzzard (Aug 25 2020 at 08:39):

@Damiano Testa the thing which I didn't understand for the longest time is what to expect from the library.

Kevin Buzzard (Aug 25 2020 at 08:39):

You might think "my lemma is obvious, so probably it should be in the library". That's not how it works at all.

Kevin Buzzard (Aug 25 2020 at 08:40):

The library mostly contains "single-step" proofs, for example "0 \in J", or "if all the a_i are in J, then their sum is"

Damiano Testa (Aug 25 2020 at 08:40):

Kevin Buzzard said:

You might think "my lemma is obvious, so probably it should be in the library". That's not how it works at all.

Indeed, I do feel that this should be the purpose of the library. I also still have trouble navigating the files and finding what really is in the library and what is not

Kevin Buzzard (Aug 25 2020 at 08:40):

Right. But there's a secret here.

Damiano Testa (Aug 25 2020 at 08:41):

Will you write the secret? Should I take a picture of the screen, before it self-destructs?

Kevin Buzzard (Aug 25 2020 at 08:41):

most of the library is single-step lemmas, and the mathlib philosophy is to make libraries as complete as possible, so the converse should also be true: any single-step lemma should be in the library somewhere

Kevin Buzzard (Aug 25 2020 at 08:41):

Furthermore, you can probably guess its name

Damiano Testa (Aug 25 2020 at 08:41):

(so far, my way of searching the library has been using grep from the command line)

Kevin Buzzard (Aug 25 2020 at 08:42):

The way to search in the library is to learn the naming conventions and to use ctrl-space

Damiano Testa (Aug 25 2020 at 08:42):

ok

Kevin Buzzard (Aug 25 2020 at 08:42):

I didn't know as_sum and I am unfamiliar with the polynomial library now because Jalex and others made a major refactor of it recently :heart:

Kevin Buzzard (Aug 25 2020 at 08:43):

but things like submodule.sum_mem, I found that lemma because I knew that that's what it would be called and I knew it would be there.

Kevin Buzzard (Aug 25 2020 at 08:43):

and similarly submodule.smul_mem and submodule.subset_span

Kevin Buzzard (Aug 25 2020 at 08:44):

$X\subseteq span(X)$ is going to be called subset_span

Damiano Testa (Aug 25 2020 at 08:44):

yes, I definitely need to develop a feeling for what to expect to be in the library

Kevin Buzzard (Aug 25 2020 at 08:44):

Every one-step lemma is there, and it is named appropriately. And if you find one that isn't, you should add it.

Kevin Buzzard (Aug 25 2020 at 08:45):

so then it's just a case of breaking everything down into these one-step lemmas, and then slowly over time you learn shortcuts, because for example there are some much more complex lemmas in there too

Kyle Miller (Aug 25 2020 at 08:45):

(Sometimes something that works is writing the lemma statement that you think should exist as an example then using library_search or library_search!, but you have to know what to expect and to have the correct imports. If it's not there, then I suppose you just need to fill in the proof then submit a PR.)
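
(For example, with mathlib's import tactic; hedging on exactly what it reports:)

example (a b : ℕ) : a + b = b + a :=
by library_search   -- should suggest something like `exact add_comm a b`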

Damiano Testa (Aug 25 2020 at 08:46):

I see

Damiano Testa (Aug 25 2020 at 08:47):

These discussions give names to concepts that my mind had not yet formed

Kyle Miller (Aug 25 2020 at 08:48):

Another trick I use is to try to write the example in a way that things should simplify in the way I want, then I apply simp. If it worked, then replacing simp with squeeze_simp gives you the list of lemmas it used.
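
(A tiny instance of the trick, again assuming import tactic:)

example (n : ℕ) : n + 0 * 3 = n :=
by squeeze_simp   -- reports something like `simp only [zero_mul, add_zero]`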

Kyle Miller (Aug 25 2020 at 08:49):

(And, I don't actually always use an example block. Most of the time I use a have expression in the middle of a tactic proof.)

Damiano Testa (Aug 25 2020 at 08:50):

Ok, I had been using haves quite a bit, but I will now use them with a better awareness!

Kevin Buzzard (Aug 25 2020 at 08:53):

I've been going to CS conferences and I've met a whole bunch of people who are very interested in various proof styles, and sometimes ask me questions about various proof styles and I can never answer them because I don't have a clue about any of the definitions. Maybe there are "ISAR-style proofs" and some other style invented by Lamport and I can't even remember the other ones, but they seem to be just high-level terms for whether you put a lot of haves or not. Oh -- "declarative style", I think that's one.

Kevin Buzzard (Aug 25 2020 at 08:54):

But somehow their questions are all about either which style I prefer, or which style is more readable, or something, and I never have a clue, because I don't try to make my proof code readable, I just add comments instead

Kevin Buzzard (Aug 25 2020 at 08:57):

lemma finc {R : Type u} [comm_ring R] (f : polynomial R) :
  f ∈ ideal.span {tt : polynomial R | ∃ i ∈ f.support, tt = (C (coeff f i))} :=
begin
  -- rewrite f as ∑ a_i X^i, where `a_i = coeff f i`
  conv_lhs {rw as_sum f},
  -- a finite sum of elements is in an ideal if all the elements are
  refine submodule.sum_mem _ _,
  -- so let's check a_i X^i is in the ideal
  intros i hi,
  -- two cases: either a_i = 0 or not
  by_cases h : i ∈ f.support,
  { -- if it is, then a_i is one of the generators of the ideal and it's obvious
    rw mul_comm,
    exact submodule.smul_mem _ _ (submodule.subset_span ⟨i, h, rfl⟩) },
  { -- if it's not then a_i = 0 and it's also obvious
    have h2 : f.coeff i = 0, by simpa using h,
    simp [h2]}
end

Kevin Buzzard (Aug 25 2020 at 08:57):

and then they complain about something to do with semantics

Kevin Buzzard (Aug 25 2020 at 08:57):

and how, if you change the proof, you now have to change the comments independently

Kevin Buzzard (Aug 25 2020 at 08:58):

but I have long stopped caring because all I care about is proving the theorem.

Kyle Miller (Aug 25 2020 at 08:58):

Kevin Buzzard said:

I don't try to make my proof code readable, I just add comments instead

I think this shows that tactic proofs are the assembly language of math. I'm looking forward to whatever high-level languages we might come up with down the road, where your comments are part of the machine-readable proof.

Kyle Miller (Aug 25 2020 at 09:01):

Though, at least with assembly language, you could actually have a hope of reading through it without comments and figuring out what it's doing...

This isn't a completely fair comparison, because the symbolic manipulations that tactic proofs are doing can be well beyond what you'd want to go through by hand.

Mario Carneiro (Aug 25 2020 at 09:01):

never going to happen... you will always have to contort yourself a bit to express your intent to a computer. Comments are for humans, code is for computers. You can do okay at both simultaneously but you will never be optimal for both

Mario Carneiro (Aug 25 2020 at 09:01):

tactic proofs are for sure better than assembly

Mario Carneiro (Aug 25 2020 at 09:02):

they are high level language proofs, just not natural language

Mario Carneiro (Aug 25 2020 at 09:03):

What you are missing is that a tactic proof is one half of a conversation

Kyle Miller (Aug 25 2020 at 09:03):

I don't really accept most tactic proofs as being proofs. Most are merely monadic programs that construct proof terms. (And most proof terms I wouldn't consider to be proofs. I accept the truth of what they say, but they aren't proofs.)

Mario Carneiro (Aug 25 2020 at 09:03):

they are not intended to be understood without context

Kyle Miller (Aug 25 2020 at 09:04):

I'm not missing that; I think this is a weakness, since you cannot read the source code without a mechanical tool on hand.

Mario Carneiro (Aug 25 2020 at 09:04):

you can inline the context

Mario Carneiro (Aug 25 2020 at 09:04):

that's a mere presentation issue

Kyle Miller (Aug 25 2020 at 09:04):

That would be a comment, right?

Mario Carneiro (Aug 25 2020 at 09:05):

no, it can be formally checked if you like

Mario Carneiro (Aug 25 2020 at 09:06):

I agree they aren't proofs but maybe for a different reason. They are proof scripts, programs that build actual proofs (and maybe don't show them to you)

Kyle Miller (Aug 25 2020 at 09:08):

What I'm saying is that these proof scripts should, eventually, become a higher level language where you can figure out what it's doing easier without needing a computer aid

Kyle Miller (Aug 25 2020 at 09:09):

I'm not saying comments will go away completely. The point is that, if you need lots of comments, it probably reflects a failure of the language

Kyle Miller (Aug 25 2020 at 09:09):

that there are higher-order structures still on the table to exploit

Mario Carneiro (Aug 25 2020 at 09:11):

more layers just means more to understand. I don't buy it

Kyle Miller (Aug 25 2020 at 09:11):

This is something I was thinking about with the induction tactic. It really should just work, because it is a signpost that a particular moment in the argument has come. Using refine works, but it requires some boilerplate that obscures the point. (Though, when you use it enough, it becomes a pattern you recognize and it gains meaning.)

Mario Carneiro (Aug 25 2020 at 09:12):

I'm not sure what you mean. induction does a thing and does it pretty well

Kyle Miller (Aug 25 2020 at 09:12):

I'm not proposing more layers. I think we just haven't seen everything we can do with tactic scripts, though I'm not sure what that might be yet.

Mario Carneiro (Aug 25 2020 at 09:12):

I think we have seen a lot of quality of life tactics arise over the years

Mario Carneiro (Aug 25 2020 at 09:13):

once you identify a pattern, you can automate it away

Mario Carneiro (Aug 25 2020 at 09:13):

but you can't automate it until then

Kyle Miller (Aug 25 2020 at 09:13):

Mario Carneiro said:

I'm not sure what you mean. induction does a thing and does it pretty well

Not with user-defined recursors, which define what I'd want out of induction on particular terms.

Mario Carneiro (Aug 25 2020 at 09:13):

that's just an outstanding bug though

Mario Carneiro (Aug 25 2020 at 09:14):

I don't think that says much about the scheme

Mario Carneiro (Aug 25 2020 at 09:15):

induction is in core so it's been frozen with buggy behavior for a long time. People just don't use it

Mario Carneiro (Aug 25 2020 at 09:15):

if you pretend it doesn't exist you will be happier

Kyle Miller (Aug 25 2020 at 09:16):

It's clear tactics are very powerful and extensible, but do you think that 20-30 years from now this will still be the dominant way of writing proofs?

Mario Carneiro (Aug 25 2020 at 09:16):

tactics are programs. Do I think we will be using programs? of course

Mario Carneiro (Aug 25 2020 at 09:17):

Do I think they will go in begin end blocks? not so important

Kyle Miller (Aug 25 2020 at 09:17):

Tactics are a language for programs. We've changed how we program quite a lot since the early days.

Mario Carneiro (Aug 25 2020 at 09:17):

the language will change for sure

Kyle Miller (Aug 25 2020 at 09:17):

Like, I don't mean just syntax. Structured programming is one example of a paradigm that didn't exist early on.

Mario Carneiro (Aug 25 2020 at 09:17):

it adapts to the local needs of the users

Mario Carneiro (Aug 25 2020 at 09:19):

I don't think tactic proofs are the best way to present proofs by a long shot. They are optimized for writing, not reading

Kyle Miller (Aug 25 2020 at 09:20):

So, can you have something that's like a tactic proof, still easy to write, but you can actually read it? I suspect someone will stumble upon some way to make this work, eventually.

Mario Carneiro (Aug 25 2020 at 09:21):

The style I like is similar to what you find in metamath: a series of steps where you have full context at all times and you can drill down as far as you like

Mario Carneiro (Aug 25 2020 at 09:22):

The #explode tactic is a way of showing that style of proof for lean proofs but lean tactics are not optimized for that and so they tend to generate garbage terms

Kyle Miller (Aug 25 2020 at 09:22):

Someone might find other ways of organizing the construction of the proof terms beyond tactic mode's changing contexts in sequence, for example.

Mario Carneiro (Aug 25 2020 at 09:24):

It is possible to present it more like a computer program with scoping to cut down on repeated information, but that's just a proof term, and proof terms are actually quite hard to follow if you are not a typechecker

Mario Carneiro (Aug 25 2020 at 09:24):

I think it's much easier to read if you just cut out all the proof terms and leave only the types

Kyle Miller (Aug 25 2020 at 09:28):

(It looks like #explode is showing each term in the first column, its type in the last, its constructor in the third, and the constructor's arguments in the second? That's a reasonably nice way of showing a memory dump, basically, of a term.)

Mario Carneiro (Aug 25 2020 at 09:30):

it's supposed to look like a fitch style natural deduction proof

Mario Carneiro (Aug 25 2020 at 09:30):

it generally produces best results on basic logic proofs

Mario Carneiro (Aug 25 2020 at 09:31):

when you pick a random mathlib theorem all the simps and linariths are going to produce a heap of garbage

Mario Carneiro (Aug 25 2020 at 09:33):

also now that html views are a thing #explode can make much better output now

Mario Carneiro (Aug 25 2020 at 09:35):

The hard part with proofs in this style is controlling the overload of information. That's Lamport's "structured proof". Ideally this should also be automated, but you somehow need to analyze the proof to determine what is important and skip the boring stuff, while keeping the means to dig down when the reader isn't convinced

Mario Carneiro (Aug 25 2020 at 09:38):

Tactic proofs are at one level of abstraction, which is probably good for the original author but may be too detailed for some (perhaps most) mathematicians, and not detailed enough for introductory students and people like me who want to get to the bottom of things


Last updated: Dec 20 2023 at 11:08 UTC