Zulip Chat Archive
Stream: new members
Topic: need help on simple proof
Lars Ericson (Nov 15 2020 at 17:26):
Hi, I'm having problems figuring out this simple example. My second subgoal seems to bring me back to what I am trying to prove to begin with, so my argument is circular:
theorem simplify_or : p ∨ q → ¬ p → q :=
λ hpq, λ hnp,
by_cases
(assume hp : p, show false, from hnp hp)
(assume hp : ¬p, _)
Kevin Buzzard (Nov 15 2020 at 17:36):
Do you have a paper proof? It seems to me your problems have nothing to do with Lean here
Kevin Buzzard (Nov 15 2020 at 17:39):
And I will again assert that in some sense your main problem is that you're not using tactic mode. Did you play through the natural number game yet? There are a bunch of levels like this. My understanding is that your ultimate goal is to do mathematics in Lean. I teach students how to do mathematics in Lean and I certainly don't teach them all this assume stuff. Theorem Proving in Lean, at least the earlier chapters, is written mainly with computer scientists in mind. Those early chapters teach you that a proof is a function, but you don't actually need to know this.
Lars Ericson (Nov 15 2020 at 17:46):
Well I do happen to have a PhD in Computer Science and a long time ago I worked for Dana Scott and later on spent time in Gerard Huet's lab, so in that sense "computer scientists in mind" works for me. The above is a subgoal of this proof, for a point where I am getting stuck:
theorem or_distributivity2 : (p ∨ q) ∧ (p ∨ r) → p ∨ (q ∧ r) :=
λ h,
have hl: p ∨ q, from h.left,
have hr: p ∨ r, from h.right,
by_cases
(assume hp : p, or.inl hp)
(assume hp : ¬p, _)
where the larger context is
theorem or_distributivity1 : p ∨ (q ∧ r) → (p ∨ q) ∧ (p ∨ r) :=
λ h, ⟨ h.elim (λ h1, or.inl h1)
(λ h2, or.inr h2.left) ,
h.elim (λ h1, or.inl h1)
(λ h2, or.inr h2.right) ⟩
and
theorem or_distributivity : p ∨ (q ∧ r) ↔ (p ∨ q) ∧ (p ∨ r) :=
⟨ _, _ ⟩
Apparently the direction of or_distributivity2 is harder than or_distributivity1, because ProofWiki punts on it with the comment "This needs considerable tedious hard slog to complete it."
Above it was suggested I explore _ mode, which is what I'm doing. Tactic mode is Chapter 5, I'm not there yet!
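For reference, one way the remaining direction can be finished in term mode without by_cases is sketched below (a guess at a solution, not the book's; it assumes p, q and r are in scope as Props):
theorem or_distributivity2 (p q r : Prop) : (p ∨ q) ∧ (p ∨ r) → p ∨ (q ∧ r) :=
λ h, h.left.elim or.inl (λ hq, h.right.elim or.inl (λ hr, or.inr ⟨hq, hr⟩))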
Lars Ericson (Nov 15 2020 at 17:46):
I will go try the natural number game now and see if that gives me more intuition.
Kevin Buzzard (Nov 15 2020 at 17:48):
I know tactic mode (easy mode) is chapter 5 and I know you're not there yet, and what I'm trying to say is that if your ultimate goal is to do probability in lean then it's not clear to me why you are struggling through basic logic proofs in hard mode.
Lars Ericson (Nov 15 2020 at 17:51):
I'm struggling through because I don't know Lean and I am trying to do baby steps before I do something fancy like find a type for probability spaces that lets me generalize the type of probability distribution functions. Since I am getting stuck on not A and A or B implies B, baby steps seem prudent.
Lars Ericson (Nov 15 2020 at 17:51):
I'm on natural number game now, then I will come back to this when done.
Kevin Buzzard (Nov 15 2020 at 17:52):
Try the natural number game. Tactic mode makes all this stuff easy.
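For comparison, a tactic-mode sketch of the first lemma in this thread (a hypothetical example, using only core Lean 3 tactics and the core lemma absurd):
theorem simplify_or (p q : Prop) : p ∨ q → ¬ p → q :=
begin
  intros hpq hnp,
  cases hpq with hp hq,
  { exact absurd hp hnp },  -- the p case contradicts ¬p
  { exact hq }              -- the q case is exactly the goal
end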
Lars Ericson (Nov 15 2020 at 17:52):
I'm there now, thank you for your help!
Kevin Buzzard (Nov 15 2020 at 17:52):
It is also written for beginners but shows you how to do logic puzzles like this in what I believe is a far more intuitive way.
Kevin Buzzard (Nov 15 2020 at 17:53):
Disclaimer: I wrote it ;-)
Lars Ericson (Nov 15 2020 at 23:24):
Hi @Kevin Buzzard , I'm stuck on level 2 of Addition World. I got to:
lemma add_assoc (a b c : mynat) : (a + b) + c = a + (b + c) :=
begin
induction a with d hd,
rw zero_add b,
rw zero_add,
refl,
end
but this puts me at
case mynat.succ
b c d : mynat,
hd : d + b + c = d + (b + c)
⊢ succ d + b + c = succ d + (b + c)
which seems one succ more complicated than what I was trying to prove to begin with.
I looked at ProofWiki and they do something very very fancy which I think would be interesting to do in Lean once I finish the number game tutorial. It looks like they are doing something with set theory and integer equivalence classes:
Let 𝑥=[[𝑎,𝑏]], 𝑦=[[𝑐,𝑑]] and 𝑧=[[𝑒,𝑓]] for some 𝑥,𝑦,𝑧∈ℤ.
Then:
𝑥+(𝑦+𝑧) = [[𝑎,𝑏]]+([[𝑐,𝑑]]+[[𝑒,𝑓]]) Definition of Integers
= [[𝑎,𝑏]]+[[𝑐+𝑒,𝑑+𝑓]] Definition of Integer Addition
= [[𝑎+(𝑐+𝑒),𝑏+(𝑑+𝑓)]] Definition of Integer Addition
= [[(𝑎+𝑐)+𝑒,(𝑏+𝑑)+𝑓]] Natural Number Addition is Associative
= [[𝑎+𝑐,𝑏+𝑑]]+[[𝑒,𝑓]] Definition of Integer Addition
= ([[𝑎,𝑏]]+[[𝑐,𝑑]])+[[𝑒,𝑓]] Definition of Integer Addition
= (𝑥+𝑦)+𝑧 Definition of Integers
Is this representation of integers already in Lean somewhere? They are using this extremely complicated construction.
Yakov Pechersky (Nov 15 2020 at 23:31):
Well, you have the list of things you've proved so far in the frame on the left.
Yakov Pechersky (Nov 15 2020 at 23:31):
Is there some way to get the expression to be of a form such that one of the lemmas on the left can be used?
Lars Ericson (Nov 15 2020 at 23:32):
I see I could left-rewrite add_succ, maybe.
Yakov Pechersky (Nov 15 2020 at 23:33):
Yes, integers are in Lean. But they are constructed by using the naturals, which is exactly what you're proving things about right now. In any case, one shouldn't need integers to solve this world.
Yakov Pechersky (Nov 15 2020 at 23:33):
What do you mean by "left-rewrite add_succ"?
Yakov Pechersky (Nov 15 2020 at 23:34):
I don't see an x + succ y in your goal expression, nor a succ (x + y).
Yakov Pechersky (Nov 15 2020 at 23:35):
In fact, I don't think there _is_ a way to get the expression you have currently into a form where add_succ will help you, given the theorems you have in your toolset.
Yakov Pechersky (Nov 15 2020 at 23:35):
Perhaps there was a step in your proof that led you down the wrong path.
Lars Ericson (Nov 15 2020 at 23:37):
What I mean by left-rewrite is to use this notation rw ← h, but it doesn't really apply because the succ is showing up on the left and not the right.
I will try to start over.
Separately, is the proof wiki complicated construction an interesting exercise to do in Lean? Where is that coming from? Why do they do that as opposed to starting with the Peano axioms?
Lars Ericson (Nov 15 2020 at 23:38):
I.e. the Construction of Inverse Completion thing. It seems rather the long way around, but I'm sure there's some interesting motivation for it.
Kevin Buzzard (Nov 15 2020 at 23:39):
I don't know anything about proof wiki but I do know that the reason you're stuck is that you didn't follow the hint.
Yakov Pechersky (Nov 15 2020 at 23:40):
When they say "Definition of Integer Addition", I'm not sure exactly what they mean -- they are distributing the addition into the pairs. Then they say "Natural Number Addition is Associative". That is exactly what you're trying to prove here.
Yakov Pechersky (Nov 15 2020 at 23:42):
My statement seems tautological, but I guess it is an even stronger hint than the hint in the NNG, since I now checked ProofWiki, and there is a link to the solution. :oh_no:
Reid Barton (Nov 15 2020 at 23:45):
Lars Ericson said:
Separately, is the proof wiki complicated construction an interesting exercise to do in Lean? Where is that coming from? Why do they do that as opposed to starting with the Peano axioms?
You linked to a proof of associativity about the integers but the Peano axioms are about the natural numbers.
Yakov Pechersky (Nov 15 2020 at 23:46):
Or in handwavy CS lingo, a statement about unbounded int vs unbounded uint.
Lars Ericson (Nov 15 2020 at 23:48):
Thanks @Reid Barton you're right! For the Z, as opposed to N, I suppose that Lean does the same construction as ProofWiki, which is also replicated in Wikipedia. Is that correct?
Lars Ericson (Nov 15 2020 at 23:49):
@Kevin Buzzard you're right I followed the hint the wrong way, I applied it to leftmost variable, not rightmost variable. Thanks!
Kevin Buzzard (Nov 15 2020 at 23:52):
At this early stage you can manipulate a+0 and a+succ(b) but are less good at things like succ(b)+a so it's really important that you induct on the variable as far to the right as possible. Later on this becomes less of an issue.
Mario Carneiro (Nov 15 2020 at 23:53):
Lars Ericson said:
Thanks Reid Barton you're right! For the Z, as opposed to N, I suppose that Lean does the same construction as ProofWiki, which is also replicated in Wikipedia. Is that correct?
No. Lean uses a representation of int = nat + nat rather than (nat x nat) / ~ where (a, b) ~ (c, d) <-> a + d = b + c
Mario Carneiro (Nov 15 2020 at 23:54):
That is, in lean an integer is either a number of the form n or -(n+1) where n : nat
Mario Carneiro (Nov 15 2020 at 23:54):
and all the operations are defined by cases
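For concreteness, the definition Mario is describing is essentially this (paraphrasing Lean 3 core's int):
inductive int : Type
| of_nat : nat → int            -- the integer n, for n : nat
| neg_succ_of_nat : nat → int   -- the integer -(n+1), for n : nat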
Mario Carneiro (Nov 15 2020 at 23:58):
The quotient construction is avoided because it is less constructive. Metamath would normally use a construction like this using equivalence classes, but in this case it also avoids the construction for int because it brings in usage of the axiom of infinity earlier than necessary (the equivalence classes are infinite sets, so you don't know they exist without the axiom of infinity, even though you are just proving properties about a discrete countable set).
Mario Carneiro (Nov 15 2020 at 23:59):
I think Metamath uses the subset of nat x nat of elements which are zero in at least one coordinate, and still kind of works with the equivalence relation as a "reduction" to the canonical elements
Lars Ericson (Nov 16 2020 at 00:05):
Thanks @Mario Carneiro for the detail on Z.
@Kevin Buzzard doing the induction on the rightmost variable, I am stuck in a loop again. This:
lemma add_assoc (a b c : mynat) : (a + b) + c = a + (b + c) :=
begin
induction c with d hd,
rw add_zero,
rw ←add_succ,
rw add_zero,
refl,
rw add_succ,
rw add_succ,
rw add_succ,
rw add_succ,
end
leaves me with
case mynat.succ
a b d : mynat,
hd : a + b + d = a + (b + d)
⊢ succ (a + b + d) = succ (a + (b + d))
which is what I was trying to prove, wrapped with a succ.
Mario Carneiro (Nov 16 2020 at 00:06):
yes, but now notice that hd is useful
Lars Ericson (Nov 16 2020 at 00:07):
Oh right, thank you, I should be able to do rw hd.
Mario Carneiro (Nov 16 2020 at 00:08):
by the way are you still trying to solve the first question in this thread? The important step is to use or.elim hpq
Mario Carneiro (Nov 16 2020 at 00:08):
you don't need by_cases
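A minimal sketch of the or.elim approach Mario is pointing at (a guess, not his worked solution; absurd : p → ¬p → q is from core Lean):
theorem simplify_or (p q : Prop) : p ∨ q → ¬ p → q :=
λ hpq hnp, or.elim hpq (λ hp, absurd hp hnp) (λ hq, hq)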
Lars Ericson (Nov 16 2020 at 00:10):
Thanks, I will return to that when I am finished with the Natural Number Game. How do I apply hd as a rewrite rule? rw hd doesn't work.
Mario Carneiro (Nov 16 2020 at 00:10):
it should
Lars Ericson (Nov 16 2020 at 00:10):
You're right, forgot the comma!
Mario Carneiro (Nov 16 2020 at 00:10):
we need an emoji for this
Lars Ericson (Nov 16 2020 at 00:11):
Proof complete!
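For reference, a cleaned-up version of the whole add_assoc script along the lines just discussed might look like this (a sketch, using only NNG's add_zero, add_succ and the inductive hypothesis):
lemma add_assoc (a b c : mynat) : (a + b) + c = a + (b + c) :=
begin
  induction c with d hd,
  { rw add_zero,
    rw add_zero },    -- rw closes the goal once both sides are a + b
  { rw add_succ,
    rw add_succ,
    rw add_succ,
    rw hd }           -- both sides are now succ (a + (b + d))
end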
Lars Ericson (Nov 16 2020 at 13:54):
Hi, I'm getting stuck on this one. It seems like I should be able to use the inductive hypothesis with substitution of terms, but I don't know how to do that. This is level 6 of Multiplication World:
lemma succ_mul (a b : mynat) : succ a * b = a * b + b :=
induction a with d hd,
rw zero_mul,
rw zero_add,
rw ← one_eq_succ_zero,
rw one_mul,
refl,
I am at
tactic failed, there are unsolved goals
state:
case mynat.succ
b d : mynat,
hd : succ d * b = d * b + b
⊢ succ (succ d) * b = succ d * b + b
Intuitively if I replaced the d in hd with succ d I should be done, but I don't know the tactic for that.
Reid Barton (Nov 16 2020 at 13:57):
That isn't how induction works. You need to assume P(d) and prove P(succ d); you can't just say "well, let's replace d by succ d in the inductive hypothesis and we're done".
Kevin Buzzard (Nov 16 2020 at 13:58):
You can't replace d with succ(d) in an induction proof, or you'd be able to prove everything by induction (think about it: you know P(d), you want to prove P(d+1), so if you could just replace d in the hypothesis P(d) with d+1 then you'd be done). Your problem is that you have still not internalised the hint given on addition world level 2.
Reid Barton (Nov 16 2020 at 14:01):
Put differently, the hypothesis hd in the proof state is something you know about the specific value d in the proof state. If hd was about all d then there would be a ∀ in it.
Lars Ericson (Nov 16 2020 at 14:05):
Does this one need to start with an induction? Or do the other tactics suffice? I've been starting proofs with induction as a magic wand. In this case I don't have any tactics or prior theorems available which would rewrite succ (succ d) or succ (d*b), so I'm stuck. I've tried induction on b and a and they both come to this point.
Reid Barton (Nov 16 2020 at 14:06):
Is the addition world level 2 hint "figure out the math proof first"?
Reid Barton (Nov 16 2020 at 14:07):
I guess it's a bit unfair because the key point is the proof structure needs to match the definition of *
Kevin Buzzard (Nov 16 2020 at 14:14):
The addition world level 2 hint is that if in doubt, do induction on the rightmost variable because that's the variable you know the most theorems about
Lars Ericson (Nov 16 2020 at 14:58):
Following @Reid Barton 's suggestion I did the proof by hand and I'll see if I can translate this with the available tactics:
succ(a) * b = (a + 1) * b = a * b + 1 * b = a * b + b  qed
Lars Ericson (Nov 16 2020 at 15:03):
How can I apply one_mul only to the rightmost occurrence of b in (a + 1) * b = a * b + b?
Lars Ericson (Nov 16 2020 at 15:06):
That is, with the tactics available, instead of the above proof, I want to do something like
succ(a) * b = a * b + b =>
(a + 1) * b = a * b + b =>
(a + 1)*b = a * b + 1 * b =>
(a + 1)* b = (a + 1) * b
Lars Ericson (Nov 16 2020 at 15:34):
It doesn't seem that I can restrict application of the tactic to a single term, so if I start with
lemma succ_mul (a b : mynat) : succ a * b = a * b + b :=
rw succ_eq_add_one,
rw ← one_mul b,
this gives
(a + 1) * (1 * b) = a * (1 * b) + 1 * b
I want to "undo" the first two 1*b substitutions. However rw one_mul does 3 applications at once and simultaneously undoes all of them, so I get
(a + 1) * b = a * b + b
which undoes the 1*b I'm trying to get on the rightmost term in the equation.
Reid Barton (Nov 16 2020 at 15:35):
Lars Ericson said:
(a + 1) * b = a * b + 1*b
Is this left distributivity really a fact that you have available at this point? It looks to me like you are proving a lemma that will be needed to prove distributivity later.
Lars Ericson (Nov 16 2020 at 15:47):
@Reid Barton at this level I have available
mul_add (t a b : mynat) : t * (a + b) = t * a + t * b
However now I see this is going in the wrong direction. I am trying to rewrite a*b+1*b to (a+1)*b, so I actually need a right_mul_add:
right_mul_add (t a b : mynat) : (a + b)*t = a*t + b*t
so I guess I should prove that as a subgoal.
Reid Barton (Nov 16 2020 at 15:51):
Well, that's true, except that I don't advise trying to prove it because you will find yourself needing the statement you're currently proving to do so!
Reid Barton (Nov 16 2020 at 15:51):
In fact I expect this statement to show up in a couple levels.
Lars Ericson (Nov 16 2020 at 16:03):
I am confused about how often tactics are applied. Often I need to type in a tactic multiple times to get it applied to more than one place in the equation. In the above example a single application produced 3 substitutions. Also the previous level pointed out repeat {rw mul_succ} to explicitly do multiple applications, which implies that otherwise I should only be seeing one application at a time.
Ruben Van de Velde (Nov 16 2020 at 16:51):
So rw mul_succ is short for rw mul_succ ?m1 ?m2, so Lean tries to find the first occurrence of ?m1 * succ (?m2); when it finds such an occurrence, it fills in actual values for ?m1 and ?m2, for example with x and y. Then it replaces all cases of x * succ (y), but not any cases with other values for ?m1 and ?m2
Ruben Van de Velde (Nov 16 2020 at 16:52):
Did that make sense?
Lars Ericson (Nov 16 2020 at 17:10):
Yes @Ruben Van de Velde thanks, what you are saying is that if E is the equation, it unifies ?m1 * succ (?m2) with some subterm of E. Then (in Pythonish terms) it replaces E with E.replace('{m1}*succ({m2})', '{m1}*{m2}+{m1}'), where the replace will apply to all concrete matches.
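A small illustration of the behaviour Ruben describes, assuming NNG's mul_succ (a b : mynat) : a * succ b = a * b + a (a hypothetical example, not from the game):
example (a b x y : mynat) (h : a * succ b + x * succ y = 0) :
  a * b + a + x * succ y = 0 :=
begin
  -- the first match fixes ?m1 := a and ?m2 := b, so only a * succ b is rewritten;
  -- x * succ y is a different instantiation and is left alone
  rw mul_succ at h,
  exact h
end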
Lars Ericson (Nov 16 2020 at 18:05):
So finally QED but steep learning curve for me:
rw succ_eq_add_one,
induction b with d hd,
rw mul_zero,
rw mul_zero,
rw add_zero,
refl,
rw mul_succ,
rw mul_succ,
rw succ_eq_add_one,
rw hd,
rw add_assoc,
rw ← add_assoc d,
rw add_assoc (a * d),
rw ← add_assoc a,
rw add_comm a,
refl,
Lars Ericson (Nov 16 2020 at 18:08):
Actually finally I can take the easy way out:
rw succ_eq_add_one,
induction b with d hd,
rw mul_zero,
rw mul_zero,
rw add_zero,
refl,
rw mul_succ,
rw mul_succ,
rw succ_eq_add_one,
rw hd,
simp,
Kevin Buzzard (Nov 16 2020 at 18:23):
I did quite a bad job of making simp do what it is capable of, because when I made the game last year I didn't really know much at all about rewriting systems.
Lars Ericson (Nov 16 2020 at 19:17):
I finished all levels of Multiplication World (check marks on every level), but when I go to Main Page it is still blue, not green.
Kevin Buzzard (Nov 16 2020 at 19:17):
I'm afraid that part of the story was not the bit I did :-) But you can just go to any level, you can just ignore the blue/green thing.
Lars Ericson (Nov 16 2020 at 20:02):
I'm in level 6/8 of Power world and it is telling us to look at an invocation in level 9 of Multiplication World, namely
attribute [simp] mul_assoc mul_comm mul_left_comm
However it doesn't like this to be used inside a proof, and the GUI only allows typing into the proof.
Yakov Pechersky (Nov 16 2020 at 20:09):
The magic incantation has been whispered, you don't need to whisper it yourself.
Yakov Pechersky (Nov 16 2020 at 20:10):
In fact, it has to be whispered outside of the proof itself. What it means is that mul_assoc, mul_comm, mul_left_comm are now part of the simp set.
Yakov Pechersky (Nov 16 2020 at 20:10):
And you can just write simp to take care of any steps for which you'd previously need to rely on those lemmas. That's what the actual end of the MulWorld 9 info indicates:
example (a b c d e : mynat) :
(((a*b)*c)*d)*e=(c*((b*e)*a))*d :=
begin
simp,
end
Mario Carneiro (Nov 16 2020 at 20:10):
if you want to do that inside a proof, you would use simp [mul_assoc, mul_comm, mul_left_comm] instead of simp, but the magic incantation means that the latter means the same as the former
Lars Ericson (Nov 16 2020 at 21:34):
Thanks, I got there for mul_pow; it works because, as @Yakov Pechersky mentioned, it has already been whispered:
induction n with d hd,
repeat {rw pow_zero},
rw mul_one,
refl,
repeat {rw pow_succ},
rw hd,
simp,
Lars Ericson (Nov 16 2020 at 21:40):
Just curious, I am on 7 of Power world. I found myself doing this:
induction n with d hd,
rw pow_zero,
rw mul_zero,
rw pow_zero,
I tried to shortcut the 3 rws with this:
induction n with d hd,
simp [pow_zero, mul_zero],
It didn't work, giving me the result
rewrite tactic failed, did not find instance of the pattern in the target expression
?m_1 ^ 0
So when would simp [x,y,z] work for tactics x,y,z?
Mario Carneiro (Nov 16 2020 at 21:46):
that doesn't look like an error simp would give
Mario Carneiro (Nov 16 2020 at 21:47):
simp doesn't take tactics, it takes lemmas
Lars Ericson (Nov 16 2020 at 21:51):
Sorry @Mario Carneiro you're right, it actually eliminated the top inductive goal. So I can shorten this proof a little:
induction n with d hd,
rw pow_zero,
rw mul_zero,
rw pow_zero,
refl,
rw pow_succ,
rw hd,
rw succ_eq_add_one,
rw mul_add,
rw mul_one,
simp,
rw pow_add,
refl,
to
induction n with d hd,
simp [pow_zero, mul_zero],
rw pow_succ,
rw hd,
rw succ_eq_add_one,
rw mul_add,
rw mul_one,
simp,
rw pow_add,
refl,
Lars Ericson (Nov 17 2020 at 04:22):
I'm stuck on Advanced Proposition Level 8. I have
lemma and_or_distrib_left (P Q R : Prop) : P ∧ (Q ∨ R) ↔ (P ∧ Q) ∨ (P ∧ R) :=
split,
intro h,
left,
split,
have p := h.left,
exact p,
cases h with p qORr,
cases qORr with q r,
exact q,
This gets me to
2 goals
case or.inr
P Q R : Prop,
p : P,
r : R
⊢ Q
P Q R : Prop
⊢ P ∧ Q ∨ P ∧ R → P ∧ (Q ∨ R)
The first goal seems to be a dead end, I have no way of proving Q. I've tried a couple of different paths and I still end at this point.
Bryan Gin-ge Chen (Nov 17 2020 at 04:31):
Try holding off on using left until later.
Lars Ericson (Nov 17 2020 at 04:45):
Thanks @Bryan Gin-ge Chen I got it:
split,
intro h,
have p := h.left,
have hqr := h.right,
cases hqr with q r,
left,
split,
exact p,
exact q,
right,
split,
exact p,
exact r,
intro h,
split,
cases h with pq qr,
exact pq.left,
exact qr.left,
cases h with pq qr,
left,
exact pq.right,
right,
exact qr.right,
However that doesn't seem nearly as compact as the term mode proof here:
⟨λ ⟨ha, hbc⟩, hbc.imp (and.intro ha) (and.intro ha), or.rec (and.imp_right or.inl) (and.imp_right or.inr)⟩
I know term mode is more terse, but still, based on this compactness, I wonder if there is something redundant in my proof.
Yakov Pechersky (Nov 17 2020 at 05:23):
Here's a proof that avoids any mathlib lemmas like imp_right, just relies on a more powerful tactic that combines intro with cases. You can get it to generate the right pattern using rintro? and then rename the auto-generated names.
import tactic.basic
lemma and_or_distrib_left' (P Q R : Prop) : P ∧ (Q ∨ R) ↔ (P ∧ Q) ∨ (P ∧ R) :=
begin
split,
{ rintro ⟨p, q | r⟩,
{ exact or.inl ⟨p, q⟩ },
{ exact or.inr ⟨p, r⟩ } },
{ rintro (⟨p, q⟩ | ⟨p, r⟩),
{ exact ⟨p, or.inl q⟩ },
{ exact ⟨p, or.inr r⟩ } }
end
Yakov Pechersky (Nov 17 2020 at 05:24):
rintro is able to handle conjunctions and disjunctions.
Lars Ericson (Nov 17 2020 at 14:55):
I am starting Advanced Addition world. The context is a little confusing. I first assume that we are trying to prove succ_inj. Then I see in the bottom left that it is an axiom and that we are trying to prove succ_inj', which just seems to be a restatement of succ_inj. A few questions:
- I notice elsewhere in the Lean sources that if there is a foo there may also be a foo'. What does the convention of using ' after a name tell the reader?
- I can do this in one step with cc.
- Without cc I don't know how to get started. I tried this path, but it doesn't leave me in a situation where I can apply succ_inj:
theorem succ_inj' {a b : mynat} (hs : succ(a) = succ(b)) : a = b :=
induction a with d hd,
induction b with e he,
refl,
leaves me in state
state:
2 goals
case mynat.succ
e : mynat,
he : succ 0 = succ e → 0 = e,
hs : succ 0 = succ (succ e)
⊢ 0 = succ e
case mynat.succ
b d : mynat,
hd : succ d = succ b → d = b,
hs : succ (succ d) = succ b
⊢ succ d = b
Moses Schönfinkel (Nov 17 2020 at 15:03):
If you just want to continue this way, then looking at the goal, surely hs is contradictory. s 0 = s $ s e would imply 0 = s e but zero and s are different constructors. cases hs will likely be your best friend.
Lars Ericson (Nov 17 2020 at 15:04):
Thanks @Moses Schönfinkel, I just figured it out by looking at another post that gives the syntax to apply the axiom:
rw succ_inj hs,
refl,
This is more enlightening than just doing cc. I am still curious about the foo' confusion. What is the ' used to signify?
Moses Schönfinkel (Nov 17 2020 at 15:06):
It's used fluidly, often indicating it's an "alternative" version of an already existing lemma / restatement with different argument order, etc.
Kevin Buzzard (Nov 17 2020 at 15:45):
That ' in succ_inj' means nothing more than "this is quite like succ_inj but not quite the same".
Lars Ericson (Nov 17 2020 at 16:00):
I'm on Advanced Addition Level 2/13. I'm confused about a comment in the text: "remember that rw should not be used with succ_inj -- rw works only with equalities or ↔ statements, not implications or functions". But this is exactly what I did above for level 1:
rw succ_inj hs,
Kevin Buzzard (Nov 17 2020 at 16:01):
I'm trying to warn readers away from a mistake I saw constantly in beta testing, which was that if h : P -> Q then people would constantly try to rw h to turn P into Q (either in a hypothesis or a goal). You didn't rw succ_inj though, you rewrote succ_inj hs, which is an equality rather than an implication.
Kevin Buzzard (Nov 17 2020 at 16:03):
As you no doubt realise by now, an implication in Lean is a function: h : P -> Q can be thought of as a function from proofs of P to proofs of Q. You can't rw h usually, but if you feed h a proof of P (e.g. hs) then it spits out a proof of Q, and if Q is an equality, it can be rewritten.
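A tiny illustration of the distinction (a hypothetical example, assuming NNG's succ_inj : succ a = succ b → a = b): you can't rw the implication succ_inj itself, but you can rw the equality it produces once it has been fed a proof.
example (a b : mynat) (hs : succ a = succ b) : a + 1 = b + 1 :=
begin
  rw succ_inj hs  -- succ_inj hs : a = b, an equality, so rewriting a to b closes the goal
end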
Lars Ericson (Nov 17 2020 at 16:36):
@Kevin Buzzard for Advanced Addition level 5, I am here:
theorem add_right_cancel (a t b : mynat) : a + t = b + t → a = b :=
intro h,
induction t with d hd,
repeat { rw add_zero at h },
exact h,
repeat { rw add_succ at h },
the state is:
state:
case mynat.succ
a b d : mynat,
hd : a + d = b + d → a = b,
h : succ (a + d) = succ (b + d)
⊢ a = b
I want to apply succ_inj to rewrite the hypothesis h. It is an implication, not an equality. How do I apply an implication to a hypothesis?
Note: I can finish the proof with cc but that is magic. I want to apply succ_inj to simplify h and then apply hd h to finish. So something like this, but this doesn't work:
apply hd (rw succ_inj at h),
Kevin Buzzard (Nov 17 2020 at 16:46):
succ_inj is a function, so you can just do something like have h2 := succ_inj h,
Kevin Buzzard (Nov 17 2020 at 16:48):
Alternatively you could write the proof backwards with apply hd and then apply succ_inj. Mathematicians are trained to write forwards, but this results in the state getting clogged up with once-used-then-forgotten hypotheses. In Lean it's sometimes easier to work on the goal rather than defining new things.
Kevin Buzzard (Nov 17 2020 at 16:49):
Probably exact hd (succ_inj h) would work. hd is a function too.
Lars Ericson (Nov 17 2020 at 16:49):
Thanks with your first hint I did it like this:
intro h,
induction t with d hd,
repeat { rw add_zero at h },
exact h,
repeat { rw add_succ at h },
have h2 := succ_inj h,
apply hd h2,
Kevin Buzzard (Nov 17 2020 at 16:50):
Probably even exact hd h2
would work, because hd h2
is exactly a proof of the goal. apply h
is used when h : P -> Q
and the goal is Q
, to turn the goal to P
.
Kevin Buzzard (Nov 17 2020 at 16:50):
However it's a very smart tactic and sometimes works outside this situation :-)
Lars Ericson (Nov 17 2020 at 16:55):
@Kevin Buzzard I am on to Advanced Addition level 6. I think I just figured out how to put Lean in an infinite loop, as follows:
theorem add_left_cancel (t a b : mynat) : t + a = t + b → a = b :=
intro h,
repeat { rw add_comm at h },
where the state before the repeat is:
state:
t a b : mynat,
h : t + a = t + b
⊢ a = b
I want to rewrite h : t+a=t+b to h : a+t=b+t. I can only do half of this. I don't know how to get it to apply to the right hand side. The following doesn't work; only the first one works, on the left:
intro h,
rw add_comm at h,
rw add_comm at h.right,
Kevin Buzzard (Nov 17 2020 at 17:03):
Yes, it is possible to put computers in infinite loops. Try rw add_comm t b at h
, that will explicitly change t+b into b+t and not change anything else into anything.
Kevin Buzzard (Nov 17 2020 at 17:04):
add_comm is a function too. It takes as input two numbers x and y, and returns a proof of x+y=y+x.
Kevin Buzzard (Nov 17 2020 at 17:06):
rw is a clever tactic. If you try rw add_comm Lean will think "the user didn't tell me which numbers, so I will just look through the goal until I find something add something, and assume they meant that".
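Putting Kevin's two points together, a sketch of how this level might go (a guess, assuming NNG's add_comm and the add_right_cancel proved in the previous level):
theorem add_left_cancel (t a b : mynat) : t + a = t + b → a = b :=
begin
  intro h,
  rw add_comm t a at h,   -- explicitly: only t + a becomes a + t
  rw add_comm t b at h,   -- and only t + b becomes b + t
  exact add_right_cancel a t b h
end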
Lars Ericson (Nov 17 2020 at 17:38):
@Kevin Buzzard on the looping I was curious whether repeat automatically detected repetitive reasoning chains as a heuristic in the internals, i.e. a cycle where it is clear that a sequence of states is repeating in an observable way which computably indicates non-termination. So I guess the answer is No for repeat. Not that there's anything wrong with that, I was just curious.
So I've moved along a little bit further:
theorem add_left_cancel (t a b : mynat) : t + a = t + b → a = b :=
begin
intro h,
rw add_comm at h,
rw add_comm t b at h,
end
The state is
state:
t a b : mynat,
h : a + t = b + t
⊢ a = b
I want to do
rw add_right_cancel h at h,
to turn a+t=b+t into a=b, but it is not matching:
type mismatch at application
add_right_cancel h
term
h
has type
a + t = b + t : Prop
but is expected to have type
mynat : Type
state:
t a b : mynat,
h : a + t = b + t
⊢ a = b
Or this, but this also doesn't work:
apply add_right_cancel h,
giving
type mismatch at application
add_right_cancel h
term
h
has type
a + t = b + t : Prop
but is expected to have type
mynat : Type
state:
t a b : mynat,
h : a + t = b + t
⊢ a = b
How do I apply this correctly?
P.S. I accidentally found a back door going the other way, but I'm still interested in why I can't apply add_right_cancel directly to rewrite h. Here is the back door:
apply add_right_cancel a t b,
exact h,
Kevin Buzzard (Nov 17 2020 at 18:18):
You can read the error message. The errors are saying "you're giving me h but I don't want h yet"
Kevin Buzzard (Nov 17 2020 at 18:19):
"I want a, t and b first"
Lars Ericson (Nov 17 2020 at 18:29):
Thanks @Kevin Buzzard . I am on level 7 now. I am here:
theorem add_right_cancel_iff (t a b : mynat) : a + t = b + t ↔ a = b :=
begin
split,
intro h,
apply add_right_cancel a t b,
exact h,
intro h,
end
My state is
state:
t a b : mynat,
h : a = b
⊢ a + t = b + t
I guess I need to turn h : a=b into h : a+t=b+t. This is what I'm trying to prove though. Do I need an induction step here?
NOTE: I can finish with cc but as before that is less explainable.
Alex J. Best (Nov 17 2020 at 18:33):
Looks like rw h would do this?
Lars Ericson (Nov 17 2020 at 18:41):
Thanks @Alex J. Best I figured it out:
split,
intro h,
apply add_right_cancel a t b,
exact h,
intro h,
induction t with d hd,
repeat { rw add_zero },
exact h,
rw h,
refl,
Lars Ericson (Nov 17 2020 at 18:51):
I've got another right-hand-side of hypothesis rewrite question, for Advanced Addition level 8. I am here:
lemma eq_zero_of_add_right_eq_self {a b : mynat} : a + b = a → b = 0 :=
begin
intro h,
end
My state is
state:
a b : mynat,
h : a + b = a
⊢ b = 0
I want to apply rw ← add_zero a at h, only to the right-hand side of h, so I can get h : a+b=a+0. However, I'm getting h : a+0+b=a+0. Is there a way to restrict the rewrite to only the right-hand-side occurrence of a?
EDIT: Never mind, I figured it out, but it's not as compact as being able to point to the right hand side of h only:
intro h,
rw ← add_zero a at h,
rw add_assoc at h,
rw zero_add at h,
Reid Barton (Nov 17 2020 at 18:56):
There is a way to do this but I don't know whether it works in the natural number game.
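One technique of the kind Reid may have in mind is conv, which lets you zoom in on one side of a hypothesis before rewriting; to_lhs/to_rhs come from mathlib's conv mode and may or may not be importable in the NNG, so this is only a sketch:
example (a b : mynat) (h : a + b = a) : b = 0 :=
begin
  conv at h { to_rhs, rw ← add_zero a },  -- h becomes a + b = a + 0
  exact add_left_cancel a b 0 h
end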
Lars Ericson (Nov 17 2020 at 19:01):
@Reid Barton I am now here:
lemma eq_zero_of_add_right_eq_self {a b : mynat} : a + b = a → b = 0 :=
intro h,
rw ← add_zero a at h,
rw add_assoc at h,
rw zero_add at h,
My state is:
state:
a b : mynat,
h : a + b = a + 0
⊢ b = 0
I want to do
rw add_left_cancel at h,
to rewrite h to b=0. How do I do that? rw add_left_cancel at h, doesn't work.
EDIT: Never mind, I got it:
apply add_left_cancel a b 0 h,
Lars Ericson (Nov 17 2020 at 22:32):
I've just finished Inequality 1. A few notes:
- ring is mentioned but isn't listed in the Tactics menu that I can see. use 1, ring, makes a nice short proof. On the other hand tauto and revert are there, but I don't recall any example where those were suggested or necessary. I've done every example of every preceding level up to this point.
- On all the levels, there is a big lag between 'no goals left' and 'Proof complete'. What's happening that I'm waiting for? It's a small point, but I don't think it marks the level complete until it prints the 'Proof complete'.
Kevin Buzzard (Nov 17 2020 at 23:05):
The game is laggy because it's running a hacked together Javascript version of a supercomplicated C++ program all within a web browser. It's a miracle it works at all. Lean has a ton of tactics and rather than disable all the ones I've not mentioned I just left them in but didn't always import them, so random tactics may or may not work.
Lars Ericson (Nov 18 2020 at 00:07):
@Kevin Buzzard it's great, thanks for putting this together! I'm almost done. I'm on level 5 of Inequality World. I am here:
theorem le_trans (a b c : mynat) (hab : a ≤ b) (hbc : b ≤ c) : a ≤ c :=
rw le_iff_exists_add,
rw le_iff_exists_add at hab,
rw le_iff_exists_add at hbc,
I'm doing these rewrites so I can think more clearly about the system of equations that results. That leaves me in this state:
state:
a b c : mynat,
hab : ∃ (c : mynat), b = a + c,
hbc : ∃ (c_1 : mynat), c = b + c_1
⊢ ∃ (c_1 : mynat), c = a + c_1
Now I would like to do a replacement for c in the goal. I try this, and it doesn't work:
rw hbc,
This also doesn't work:
rw ← hbc
What should I really be doing at this point? The other approach I'm looking at is to start with an 8-case induction:
induction a with d hd,
induction b with e he,
induction c with f hf,
Lars Ericson (Nov 18 2020 at 02:40):
I would greatly appreciate help on level 6 of Inequality World. I am here:
theorem le_antisymm (a b : mynat) (hab : a ≤ b) (hba : b ≤ a) : a = b :=
begin
cases hab with c hc,
cases hba with d hd,
induction a with e he,
induction b with f hf,
refl,
exfalso,
rw succ_add at hd,
exact (zero_ne_succ (f+d) hd),
end
This leaves me in state:
state:
case mynat.succ
b c d e : mynat,
he : b = e + c → e = b + d → e = b,
hc : b = succ e + c,
hd : succ e = b + d
⊢ succ e = b
I want to pick d=0 earlier on, but cases hba with d hd doesn't allow me to create the equivalent of a use 0. How do I get a use 0 into the picture so that I can simplify hd to succ e = b?
Yakov Pechersky (Nov 18 2020 at 02:50):
Well, if you did cases on your hab to get a hypothesis b = a + c, and then on hba to get a = b + d, why would you do induction on a or b? You can induct directly on c or d.
.
Yakov Pechersky (Nov 18 2020 at 02:51):
You can use the new equalities you've generated directly to rw in the other.
Lars Ericson (Nov 18 2020 at 04:19):
Thanks @Yakov Pechersky that takes me all the way!
theorem le_antisymm (a b : mynat) (hab : a ≤ b) (hba : b ≤ a) : a = b :=
begin
cases hab with c hc,
cases hba with d hd,
induction c with e he,
induction d with f hf,
rw add_zero at hd,
exact hd,
rw add_zero at hc,
rw hc,
refl,
apply he,
rw hc at hd,
rw add_assoc at hd,
rw add_succ at hc,
rw succ_add e d at hd,
exfalso,
symmetry at hd,
have h := eq_zero_of_add_right_eq_self hd,
have hp :=
end
Lars Ericson (Nov 18 2020 at 20:21):
Hi I'm stuck on Level 8. I think I'm going in circles, any hints would be greatly appreciated. I am here:
lemma succ_le_succ (a b : mynat) (h : a ≤ b) : succ a ≤ succ b :=
begin
induction a with d hd,
induction b with e he,
have hp := le_refl(succ 0),
exact hp,
have hp := zero_le(e),
have hp1 := he(hp),
exact (le_succ (succ 0) (succ e) hp1),
cases b with f hf,
cases d with g hg,
exfalso,
rw ← one_eq_succ_zero at h,
have h2 := zero_le(1),
have h3 := le_antisymm 0 1 h2 h,
rw one_eq_succ_zero at h3,
exact ne_succ_self 0 h3,
exfalso,
have h4 := zero_le (succ (succ g)),
have h5 := le_antisymm 0 (succ (succ g)) h4 h,
exact (zero_ne_succ (succ g) h5),
end
This leaves me in state
state:
case mynat.succ
d f : mynat,
hd : d ≤ succ f → succ d ≤ succ (succ f),
h : succ d ≤ succ f
⊢ succ (succ d) ≤ succ (succ f)
which is right back to what I am supposed to prove to begin with, plus a succ. Where am I going wrong?
Ruben Van de Velde (Nov 18 2020 at 20:27):
You don't actually need induction here. \le is defined in the game as the existence of a c that you add to a to get b. What's the relation between the c's for the \le in the hypothesis and the goal?
Yakov Pechersky (Nov 18 2020 at 20:39):
Lars, your preference to employ induction makes you prove things you've already proven before. Through the worlds, you've proven and made accessible an API about le. Now all you need to do is use it.
Yakov Pechersky (Nov 18 2020 at 20:40):
That is, what does it mean for a <= b?
Lars Ericson (Nov 18 2020 at 22:35):
@Ruben Van de Velde and @Yakov Pechersky starting over this way, I am having problems applying rules in the presence of existential quantifiers. For example if I start with
lemma succ_le_succ (a b : mynat) (h : a ≤ b) : succ a ≤ succ b :=
begin
rw le_iff_exists_add,
rw le_iff_exists_add at h,
end
my state is
state:
a b : mynat,
h : ∃ (c : mynat), b = a + c
⊢ ∃ (c : mynat), succ b = succ a + c
If I then try to rewrite the goal from succ a + c to succ (a+c) using
rw succ_add a c,
I get the error
unknown identifier 'c'
Similarly if I try to rewrite the hypothesis b=a+c as succ b = succ(a+c) using
rw succ_eq_succ_of_eq b (a+c) h,
I also get unknown identifier 'c'. If I try to do just
use c,
it also complains failed to instantiate goal with c. It also doesn't really help, in the sense that what I need to do is a use c of some sort on the hypothesis
h : ∃ (c : mynat), b = a + c
That is, I don't know how to apply a rewrite rule to an existentially quantified hypothesis, and I don't know how to get rid of the quantifier. Which is why I was trying induction, because it seemed like my only way to get rid of a quantifier.
I have the cases tactic but that seems to be syntactic sugar for induction. I really want to be able to apply succ_eq_succ_of_eq inside quantified expressions. Is there a way to do that?
Sorry, I am getting really lost here.
Kevin Buzzard (Nov 18 2020 at 22:36):
There is no c. h : ∃ (c : mynat), b = a + c is the same as h : ∃ (w : mynat), b = a + w. You need to extract the natural from h using cases.
Lars Ericson (Nov 18 2020 at 22:42):
Thank you @Kevin Buzzard, I see now that the starting pattern I need, to be able to apply a rule like succ_eq_succ_of_eq, is
rw le_iff_exists_add at h,
cases h,
Yakov Pechersky (Nov 18 2020 at 22:43):
In the NNG, you don't even need to do the first rw, you can just cases h with c hc if you want.
Yakov Pechersky (Nov 18 2020 at 22:43):
"Give me the c : mynat
such that hc : b = a + c
"
Kevin Buzzard (Nov 18 2020 at 22:44):
Ooh I discourage use of definitional equalities like that in NNG. But it is true :-)
Mario Carneiro (Nov 18 2020 at 22:45):
You could make le irreducible if you want to block that
Reid Barton (Nov 18 2020 at 22:46):
then you could just write rw le at h :upside_down:
Mario Carneiro (Nov 18 2020 at 22:46):
that's still using the lemma though
Mario Carneiro (Nov 18 2020 at 22:47):
you could also give it a really obscure definition, filter all the operations through an opaque typeclass, or axiomatize everything
Mario Carneiro (Nov 18 2020 at 22:49):
I think you can also break rw le if you remove the equation lemma or add the definition by tactics so it doesn't have one
Lars Ericson (Nov 18 2020 at 22:55):
Alas I am still on the edge but not there. If I do
lemma succ_le_succ (a b : mynat) (h : a ≤ b) : succ a ≤ succ b :=
begin
cases h with c hc,
have hp := succ_eq_succ_of_eq hc,
rw ← succ_add a c at hp,
end
then I get to state
a b c : mynat,
hc : b = a + c,
hp : succ b = succ a + c
⊢ succ a ≤ succ b
My hp seems to be the fact I'm looking for, but it's not in the form of an inequality. If I say
exact hp
it says
invalid type ascription, term has type
succ b = succ a + c
but is expected to have type
succ a ≤ succ b
On the other hand if I say
rw le_iff_exists_add,
exact hp
I get to state
a b c : mynat,
hc : b = a + c,
hp : succ b = succ a + c
⊢ ∃ (c : mynat), succ b = succ a + c
but it still refuses to match hp with the goal. This is really hard.
Yakov Pechersky (Nov 18 2020 at 22:59):
Have you used use in any level?
Lars Ericson (Nov 18 2020 at 23:01):
Thank you @Yakov Pechersky that got me over the line
cases h with c hc,
have hp := succ_eq_succ_of_eq hc,
rw ← succ_add a c at hp,
rw le_iff_exists_add,
use c,
exact hp,
Yakov Pechersky (Nov 18 2020 at 23:02):
Abusing the definitional equalities that Kevin is trying to avoid:
cases h with c hc,
use c,
rw hc,
rw succ_add,
refl
Lars Ericson (Nov 18 2020 at 23:33):
@Yakov Pechersky nice! The cases h with c hc is declaring a c which is now available for use c, which is quietly "seeing" the existential quantifier in ⊢ succ a ≤ succ b and then eliminating it without it ever being displayed. I will try to remember that. The quantifier is there hiding behind the ≤. If I remember the pattern then I don't have to bring it out to reason about it. There's a lot to keep in mind.
Yakov Pechersky (Nov 18 2020 at 23:35):
But as Kevin said, it's not polite to access the existential behind the le definition the way I did it. So you could do
rw le_iff_exists_add at h ⊢,
cases h with c hc,
use c,
rw hc,
rw succ_add,
refl
and that is more polite.
Lars Ericson (Nov 19 2020 at 00:24):
Hi, I'm on level 9 of Inequality World. The goal is ⊢ a ≤ b ∨ b ≤ a. I see that as an application of the Law of the Excluded Middle, which was not discussed in Proposition World or Advanced Proposition World. This would be more clear if the goal was
⊢ a ≤ b ∨ b < a
which seems to be equivalent, because
⊢ a ≤ b ∨ b ≤ a
is really saying
⊢ a < b ∨ a = b ∨ b < a ∨ a = b
which is saying
⊢ a ≤ b ∨ b < a
when you take out the extra a = b, and that is really
⊢ a ≤ b ∨ ¬ (a ≤ b)
How can I apply the Law of Excluded Middle in this example? Is it necessary? I clearly can't operate on each goal individually because a ≤ b and b ≤ a do not stand on their own; they are connected by the Law of Excluded Middle. Just saying left or right gives me unprovable statements. There isn't any other discussion in the Natural Number Game of how to deal with what might be called a "mutually dependent disjunction".
Mario Carneiro (Nov 19 2020 at 01:00):
You can use cases on excluded middle to reason separately about the two branches
Lars Ericson (Nov 19 2020 at 01:16):
@Mario Carneiro cases works on a hypothesis. My opening state is
state:
a b : mynat
⊢ a ≤ b ∨ b ≤ a
so there is no hypothesis. I can't invoke cases in this context. I need to do an induction first, but in a previous problem it was suggested I skip the induction. In this case it seems like I can't.
Mario Carneiro (Nov 19 2020 at 01:16):
cases works on an expression. cases (em (a <= b)) will case on whether a <= b is true or false
Mario Carneiro (Nov 19 2020 at 01:17):
you can also use have and then cases if you want a hypothesis:
have := em (a <= b),
cases this
Lars Ericson (Nov 19 2020 at 01:17):
I don't have em yet.
Goals
Messages
14:7: error:
unknown identifier 'em'
state:
a b : mynat
⊢ a ≤ b ∨ b ≤ a
Lars Ericson (Nov 19 2020 at 01:18):
trying cases (em (a ≤ b)),
Mario Carneiro (Nov 19 2020 at 01:18):
You said that EM comes up in this proof? What does the description call the theorem?
Mario Carneiro (Nov 19 2020 at 01:18):
the lean name is classical.em, so that probably works regardless
Mario Carneiro (Nov 19 2020 at 01:19):
but NNG changes some things and it would have been reasonable to open classical for this theorem
Lars Ericson (Nov 19 2020 at 01:19):
It only comes up in the sense that I look at "a ≤ b ∨ b ≤ a" and it intuitively looks like a 1-step application of excluded middle.
Lars Ericson (Nov 19 2020 at 01:19):
NNG doesn't talk about excluded middle at all which makes me think I am going at this wrong.
Mario Carneiro (Nov 19 2020 at 01:19):
It is also true that you can prove this without EM by induction
Mario Carneiro (Nov 19 2020 at 01:20):
you do induction on a generalizing b
Mario Carneiro (Nov 19 2020 at 01:20):
and then try to prove a <= b \/ b <= a -> succ a <= succ b \/ succ b <= succ a
Mario Carneiro (Nov 19 2020 at 01:21):
with special cases when either a or b is 0
Lars Ericson (Nov 19 2020 at 01:21):
It let me do the cases in this form
cases (classical.em (a ≤ b))
Lars Ericson (Nov 19 2020 at 01:21):
I think that will get me done, it looks easy from there.
Mario Carneiro (Nov 19 2020 at 01:22):
If you go at it that way, you will want to prove that not (a <= b) -> b <= a, which might not be as easy as you think
Mario Carneiro (Nov 19 2020 at 01:23):
depending on what theorems you have proven so far
Lars Ericson (Nov 19 2020 at 01:23):
In this section of NNG it would be helpful for other students to introduce classical.em.
It's giving me these two goals:
2 goals
case or.inl
a b : mynat,
h : a ≤ b
⊢ a ≤ b ∨ b ≤ a
case or.inr
a b : mynat,
h : ¬a ≤ b
⊢ a ≤ b ∨ b ≤ a
The first one looks immediate and the other one could get interesting.
Mario Carneiro (Nov 19 2020 at 01:23):
that's right
Mario Carneiro (Nov 19 2020 at 01:25):
NNG doesn't really hide much of the tactic framework from you. I can give you tips equivalent to skipping a dozen levels accidentally :grinning_face_with_smiling_eyes:
Lars Ericson (Nov 19 2020 at 01:25):
Thanks for the help I think it looks do-able.
Lars Ericson (Nov 19 2020 at 02:03):
I am getting this error, which was supposed to be fixed in Inequality World 9. I would rather convert ¬ a ≤ b to an implication than carry the negation around, which ends up outside of existential quantifiers, which becomes confusing.
Lars Ericson (Nov 19 2020 at 02:04):
That is, not_iff_imp_false is not directly available.
Mario Carneiro (Nov 19 2020 at 02:19):
you can use unfold not instead
Mario Carneiro (Nov 19 2020 at 02:21):
but mostly I would suggest getting used to it as a logical connective
Lars Ericson (Nov 19 2020 at 02:21):
Without being able to get rid of negation, I am stuck. I can get rid of ≤, but then I've got negative existential quantifiers. I am doing this:
theorem le_total (a b : mynat) : a ≤ b ∨ b ≤ a :=
begin
cases (classical.em(a ≤ b)),
left,
exact h,
right,
rw le_iff_exists_add,
rw le_iff_exists_add at h,
end
which brings me to state
state:
a b : mynat,
h : ¬∃ (c : mynat), b = a + c
⊢ ∃ (c : mynat), a = b + c
I don't know how to deal with this. Any advice would be greatly appreciated.
Mario Carneiro (Nov 19 2020 at 02:22):
that's what I meant earlier:
Mario Carneiro said:
If you go at it that way, you will want to prove that not (a <= b) -> b <= a, which might not be as easy as you think
Lars Ericson (Nov 19 2020 at 02:22):
Yes, so I am out of ideas. Let me try the unfold not.
Yakov Pechersky (Nov 19 2020 at 02:23):
The induction idea is probably a better path
Mario Carneiro (Nov 19 2020 at 02:23):
ultimately, you need to prove that le is a total order, and pure logical manipulation isn't going to get you far because it's not true that all partial orders are total
Lars Ericson (Nov 19 2020 at 02:24):
OK so induction and arithmetic relationships.
Mario Carneiro (Nov 19 2020 at 02:24):
I don't think you need properties of +, you probably have everything you need about le already
Mario Carneiro (Nov 19 2020 at 02:25):
you just need to know that 0 <= n and a <= b -> succ a <= succ b
Lars Ericson (Nov 19 2020 at 02:27):
Yes because this path:
cases (classical.em(a ≤ b)),
left,
exact h,
right,
brings me to
state:
a b : mynat,
h : ¬a ≤ b
⊢ b ≤ a
which is equivalent to a ≤ b ∨ b ≤ a, so that's just a round trip.
Mario Carneiro (Nov 19 2020 at 02:35):
right
Mario Carneiro (Nov 19 2020 at 02:35):
em is a tautology, so it's not going to take you anywhere logically different from the goal
Lars Ericson (Nov 19 2020 at 02:52):
From this:
induction b with c hc,
right,
exact (zero_le a),
cases hc,
left,
have hp := le_succ a c hc,
exact hp,
it leaves me having to prove
a c : mynat,
hc : c ≤ a
⊢ a ≤ succ c ∨ succ c ≤ a
In my head I can reason that either c < a or c = a. If c = a then a ≤ succ c = succ a. If c < a then either succ c = a or succ c < a. I'm not quite sure how to get the mechanics of that done in Lean. If I do
induction b with c hc,
right,
exact (zero_le a),
cases hc,
left,
have hp := le_succ a c hc,
exact hp,
cases hc with d h2,
rw h2,
rw succ_eq_add_one,
then I am in state
state:
a c d : mynat,
h2 : a = c + d
⊢ c + d ≤ c + 1 ∨ c + 1 ≤ c + d
This seems like it should reduce to ⊢ d ≤ 1 ∨ 1 ≤ d. I don't have the equivalent of add_left_cancel for inequalities, so I have to get rid of the inequalities at this point. Then I have a univariate form of the original goal. I should be able to do induction on the univariate form to finish. I hope that doesn't result in another round trip.
Mario Carneiro (Nov 19 2020 at 02:53):
Have you learned how to generalize variables in an induction?
Mario Carneiro (Nov 19 2020 at 02:54):
for this theorem you want to do induction on a generalizing b
Mario Carneiro (Nov 19 2020 at 02:56):
you want your goal to look like succ a <= succ b \/ succ b <= succ a
Mario Carneiro (Nov 19 2020 at 02:56):
after the induction and cases
Lars Ericson (Nov 19 2020 at 03:28):
I am trying it but it introduces a ∀ quantifier, not discussed up to this point. I start with
theorem le_total (a b : mynat) : a ≤ b ∨ b ≤ a :=
begin
induction a generalizing b,
left,
exact (zero_le b),
right,
cases b with p q,
exact (zero_le (succ a_n)),
end
leading to state
state:
case mynat.succ
a_n : mynat,
a_ih : ∀ (b : mynat), a_n ≤ b ∨ b ≤ a_n,
p : mynat
⊢ succ p ≤ succ a_n
I don't know how to work with the disjunction a_ih. The left disjunct leads to proof. The right disjunct does not.
Lars Ericson (Nov 19 2020 at 20:24):
So I have to think of
theorem le_total (a b : mynat) : a ≤ b ∨ b ≤ a :=
as saying
Theorem. For all a,b in N, either
- TA. a=b
- TB. Exists z > 0. a = b + z
- TC. Exists z > 0. b = a + z
Proof.
By induction on a
Base Case: a=0
0 = b => a = b (TA)
a != b => b = a + b (TC)
Inductive Step:
Assume a > 0, show s(a).
Case. a=b
s(a) = s(b) = b+z so s(a) = b+z for z=q (TB)
Case. a = b + z
s(a) = s(b+z) = b + s(z) (TB)
Case b = a + z
Subcase z=1
s(a) = b (TB)
Subcase z != 1 and Exists v != 0 : z = s(v)
b = a+s(v) = s(a) + v (TC)
QED
Still a lot of mental machinery to translate that into Lean but it's closer to looking automatable.
Mario Carneiro (Nov 19 2020 at 20:34):
The proof I am leading you towards doesn't need any of that
Mario Carneiro (Nov 19 2020 at 20:35):
you need to cases b not cases a_ih. The point is to separate into the cases 0 and succ b so that your goal is succ a <= succ b \/ succ b <= succ a
Mario Carneiro (Nov 19 2020 at 20:36):
The proofwiki version is proving a < b \/ a = b \/ b < a instead of a <= b \/ b <= a and it is doing it with a different definition of <
Yakov Pechersky (Nov 19 2020 at 20:55):
There are two proofs that I've worked out, one that uses induction ... generalizing ..., which is the simplest one and the one that Mario is referring to. There is another one that does not need generalizing that has you prove some other things along the way like le_succ_self.
Lars Ericson (Nov 19 2020 at 21:06):
The above proof is from Franz Lemmermeyer at Bilkent University in Turkey, not from Proofwiki but from his course notes: http://www.fen.bilkent.edu.tr/~franz/nt/ch1.pdf
@Mario Carneiro I follow your lead. Lemmermeyer's proof may give me some guidance on how to work the inductive step.
Lars Ericson (Nov 19 2020 at 21:16):
Now I am here:
theorem le_total (a b : mynat) : a ≤ b ∨ b ≤ a :=
begin
induction a generalizing b,
left,
exact (zero_le b),
right,
cases b with hp hq,
exact (zero_le (succ a_n)),
rw le_iff_exists_add,
use 0,
rw add_zero,
end
which brings me to
state:
a_n : mynat,
a_ih : ∀ (b : mynat), a_n ≤ b ∨ b ≤ a_n,
hp : mynat
⊢ succ a_n = succ hp
How do I instantiate the universal quantifier with succ hp in place of b to get something I can work on further?
Mario Carneiro (Nov 19 2020 at 21:19):
have := a_ih (succ hp)
Mario Carneiro (Nov 19 2020 at 21:20):
although hp is a weird name for a nat
Lars Ericson (Nov 19 2020 at 21:20):
Thanks @Mario Carneiro, I'm getting used to the syntax for cases; I didn't realize I was naming variables.
Lars Ericson (Nov 19 2020 at 21:20):
I will fix it.
Mario Carneiro (Nov 19 2020 at 21:21):
I prefer obtain's syntax, which should also be available in NNG: obtain _ | b := b
Mario Carneiro (Nov 19 2020 at 21:22):
I don't think hq names anything there
Mario Carneiro (Nov 19 2020 at 21:23):
alternatively, rcases b with _ | b looks a bit more like cases
Lars Ericson (Nov 19 2020 at 21:24):
Now I'm here:
induction a generalizing b,
left,
exact (zero_le b),
right,
cases b with e f,
exact (zero_le (succ a_n)),
have h1 := a_ih e,
with state
state:
a_n : mynat,
a_ih : ∀ (b : mynat), a_n ≤ b ∨ b ≤ a_n,
e : mynat,
h1 : a_n ≤ e ∨ e ≤ a_n
⊢ succ e ≤ succ a_n
I think I am busted again because right of h1 proves the goal but left of h1 does not.
Mario Carneiro (Nov 19 2020 at 21:24):
maybe don't use right so early
Mario Carneiro (Nov 19 2020 at 21:25):
otherwise this is looking good
Ruben Van de Velde (Nov 19 2020 at 22:16):
Yeah, left and right are unsafe tactics - they can transform a provable goal into an unprovable one
Donald Sebastian Leung (Nov 20 2020 at 01:57):
@Lars Ericson Expanding on Mario's point, from my experience, the safest thing to do when trying to prove a disjunction is often to try to perform case analysis on existing hypotheses as far as possible before attempting to do left/right on the goal, since this leaves you with more flexibility with which side to prove later on.
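A toy illustration of why left and right are "unsafe" (a hypothetical example in plain Lean 3):
example (P Q : Prop) (hq : Q) : P ∨ Q :=
begin
  right,     -- commit to the side we can actually prove;
             -- `left` would have left us with ⊢ P and no way forward
  exact hq
end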
Lars Ericson (Nov 20 2020 at 02:36):
Thank you @Donald Sebastian Leung . This snippet of proof from Lemmermeyer is a pretty clear guide to what I should be driving Lean to do. Lean's goal summary on the right hand panel, the notation, maps very closely into the proof's notation:
Inductive Step:
Assume a > 0, show s(a).
Case. a=b
s(a) = s(b) = b+z so s(a) = b+z for z=q
Case. a = b + z
s(a) = s(b+z) = b + s(z)
Case b = a + z
Subcase z=1
s(a) = b
Subcase z != 1 and Exists v != 0 : z = s(v)
b = a+s(v) = s(a) + v
There is a lot to learn about the tactics and their application. I can clearly see from the proof state whether the tactics I apply are taking the state in the direction of the above proof snippet or not, so I think I'm pretty close now.
Mario Carneiro (Nov 20 2020 at 03:27):
I'm telling you it's a lot easier than that
Mario Carneiro (Nov 20 2020 at 03:27):
that's not the same theorem anyway
Lars Ericson (Nov 20 2020 at 04:00):
@Mario Carneiro what is the material difference between
theorem le_total (a b : mynat) : a ≤ b ∨ b ≤ a :=
and
Theorem. For any a,b in N, exactly one of the following is true:
(i) a=b,
(ii) a=b+c for c > 0,
(iii) b=a+c for c > 0.
given
le_iff_exists_add (a b : mynat) : a ≤ b ↔ ∃ (c : mynat), b = a + c
They seem to describe exactly the same situation.
Mario Carneiro (Nov 20 2020 at 04:02):
They are both true of course, but they logically have very distinct forms. (The "exactly one of these three" makes the second one even more different, but I will ignore that.) If we use the iff statement you have identified, the actual translation will be something more like:
Theorem. For any a,b in N, at least one of the following is true:
(i) a=b+c for some c >= 0,
(ii) b=a+c for some c >= 0.
Mario Carneiro (Nov 20 2020 at 04:04):
While it is useful to ignore minor differences in formulation to an extent, it makes a big difference to how to go about proving a theorem so you shouldn't overdo it. At the extreme, all of these theorems just look equivalent to "true" so it provides no help on how to go about proving them
Mario Carneiro (Nov 20 2020 at 04:08):
In this case, the biggest difference with the inductive proof I'm suggesting is that it doesn't use le_of_exists_add at all; so you never need any theorems about +
Lars Ericson (Nov 20 2020 at 04:30):
@Mario Carneiro I am starting to develop a better sense of the "state of the proof engine" and how various rules move that state.
While in one sense all provable theorems just look equivalent to true, what I was trying to say about the le_total versus the Lemmermeyer formulation is this: if le_total or Lemmermeyer's statement was the only thing you knew about N, is there anything that you could prove knowing the one that you could not prove knowing the other, or are the sets of provable statements exactly the same? So that's more than saying true iff true.
In that sense, the two statements looked equivalent to me. One of the suggestions above was to always write down the proof before trying to do it in Lean. Lemmermeyer gave me a proof of an equivalent statement. Under the hood, the text says that a ≤ b is really just syntactic sugar for ∃ (c : mynat), b = a + c. Although it may not be the shortest path, the Lemmermeyer proof gives a way of thinking about induction on the successor operation.
I'm still struggling with it and I greatly appreciate your suggestions, I'm just trying to explain why I looked at this -- basically because I was looking for any and all written proofs that would give me a guide, and after quite a lot of Googling, this was the only satisfying, directly related proof that I came across.
Mario Carneiro (Nov 20 2020 at 04:33):
While in one sense all provable theorems just look equivalent to true, what I was trying to say about the le_total versus the Lemmermeyer formulation is this: If le_total or Lemmermeyer was the only thing you knew about N, is there anything that you could prove knowing the one that you could not prove knowing the other, or are the sets of provable statements exactly the same? So that's more than saying true iff true.
The thing is, in a formal proof setting (well more generally in mathematics but you sense this especially in a formal setting), the set of things you "know" is constantly changing, and with it your sense of "equivalent" also changes. So things can be not equivalent at the beginning of a file and become equivalent by the end. Plus there is a cost metric associated with the equivalence, the amount of proof work that would be needed to go from A to B. When you find a random proof off the internet the set of previously proved facts will be different and with it the costs will be off; so an optimal proof in that setting may not be an optimal proof in your setting.
Mario Carneiro (Nov 20 2020 at 04:35):
Generally, if you want to follow a book you should stick with it for a whole sequence of proofs, so that the set of facts available to you matches up well with the set of facts available to the book. In your case NNG picks the theorems and the order, so you should try to find a proof that leverages recently proven theorems
Mario Carneiro (Nov 20 2020 at 04:40):
I don't have NNG, but here's a plain lean proof of the theorem in a way that should translate well to NNG:
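The proof itself is not reproduced in the archive. As a hedged reconstruction only (not the code that was actually posted), an induction-only argument of the kind being described could look like this, written against ordinary ℕ with the core names nat.zero_le and nat.succ_le_succ standing in for NNG's zero_le and succ_le_succ on mynat:
-- Reconstruction, not the original message's proof.
-- Induction on a, keeping b universally quantified; no lemmas about + are used,
-- only zero_le and succ_le_succ.
example : ∀ a b : ℕ, a ≤ b ∨ b ≤ a :=
begin
  intro a,
  induction a with a ih,
  { intro b, left, exact nat.zero_le b },
  { intro b,
    cases b with b,
    { right, exact nat.zero_le _ },
    { cases ih b with h h,
      { left, exact nat.succ_le_succ h },
      { right, exact nat.succ_le_succ h } } }
end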
Yakov Pechersky (Nov 20 2020 at 14:41):
NNG proof with induction generalizing
Yakov Pechersky (Nov 20 2020 at 14:42):
NNG proof with induction, no generalizing, uses underlying addition
Lars Ericson (Nov 20 2020 at 16:57):
Thank you @Mario Carneiro and @Yakov Pechersky for these spoilers. There is a lot of new notation in these, such as underscores, nested cases, and the ability to name variables in the induction, that I will learn from. The last two lines of Mario's proof need to be rewritten slightly to work in NNG:
{ left, exact succ_le_succ a b h },
{ right, exact succ_le_succ b a h },
I am still working on my own proof. It starts like this:
rw le_iff_exists_add,
rw le_iff_exists_add,
induction a,
left,
use b,
symmetry,
exact (zero_add b),
induction b,
cases a_ih with p q,
right,
This leaves me with 3 goals:
state:
3 goals
a_n : mynat,
p : ∃ (c : mynat), 0 = a_n + c
⊢ ∃ (c : mynat), succ a_n = 0 + c
case or.inr
a_n : mynat,
q : ∃ (c : mynat), a_n = 0 + c
⊢ (∃ (c : mynat), 0 = succ a_n + c) ∨ ∃ (c : mynat), succ a_n = 0 + c
case mynat.succ
a_n b_n : mynat,
b_ih :
((∃ (c : mynat), b_n = a_n + c) ∨ ∃ (c : mynat), a_n = b_n + c) →
((∃ (c : mynat), b_n = succ a_n + c) ∨ ∃ (c : mynat), succ a_n = b_n + c),
a_ih : (∃ (c : mynat), succ b_n = a_n + c) ∨ ∃ (c : mynat), a_n = succ b_n + c
⊢ (∃ (c : mynat), succ b_n = succ a_n + c) ∨ ∃ (c : mynat), succ a_n = succ b_n + c
I know this is the "wrong approach" but I will try to learn from techniques in your examples to make it go through. I will feel like I am cheating if I don't either carry the thought process through or come to a self-realization that it is wrong headed. What I am trying to avoid is a situation where I am turning the crank on the proof engine without insight into the meaning of what I am trying to prove, i.e. I would like to avoid a gaming mindset where my skill is in finding the right reduction rules to eliminate a goal, as opposed to knowing the meaning of the underlying space and using that meaning to guide tactic selection.
In that sense, if I can finish the above proof, then I think I can claim I got some insight from Lemmerman's proof. I understand Mario's comments about the setup providing the direction, but I really want to avoid relying on that: I should be able to step back from one trail of breadcrumbs laid down in the forest onto a different trail through the forest, and see how both paths lead to the same spot.
I have a language question. I have two theorems X and Y. I have a logic with a set of axioms A and a proof engine P which allows me to apply the axioms of A to a goal state to generate a proof. There is some infinite set of theorems that can be derived from A. Let's call that infinite set Proofs(P,A). X and Y are both in Proofs(P,A). I'm trying to say something like Proofs(P,{X}) = Proofs(P,{Y}). I don't know what the usual logicianly language is for discussing that. It's not the same as saying all models of X are models of Y. I'm not talking about models, just enumerable sets of proofs. So, without arguing the merits of context, what is the correct logicianly way of expressing the idea Proofs(P,{X}) = Proofs(P,{Y})?
Kevin Buzzard (Nov 20 2020 at 17:41):
Your first two goals look easy; your third just looks like what you would get if you tried to prove the original question by induction. Note that you are using induction twice, and every complete proof uses it only once. As for your logic question, isn't the answer just ?
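For what it's worth, here is a hedged sketch of how those first two goals could be closed, transcribed as self-contained analogues over ordinary ℕ with nat.zero_add standing in for NNG's zero_add (so this is not paste-able into the game as-is, and the names a_n, p, q just mirror the goal state above):
-- First goal: witness succ a_n and rewrite away the 0 + _.
example (a_n : ℕ) (p : ∃ c : ℕ, 0 = a_n + c) : ∃ c : ℕ, nat.succ a_n = 0 + c :=
⟨nat.succ a_n, (nat.zero_add _).symm⟩
-- Second goal: pick the right-hand disjunct and do the same thing.
example (a_n : ℕ) (q : ∃ c : ℕ, a_n = 0 + c) :
  (∃ c : ℕ, 0 = nat.succ a_n + c) ∨ ∃ c : ℕ, nat.succ a_n = 0 + c :=
or.inr ⟨nat.succ a_n, (nat.zero_add _).symm⟩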
Lars Ericson (Nov 20 2020 at 18:58):
Thank you @Mario Carneiro, @Yakov Pechersky, @Kevin Buzzard. I didn't get where I was going with the Turkish proof. I was able to slightly personalize Yakov's second proof and learn some new notation. This is what I got; you can give me a D- for asking for too many hints:
induction a with a ha,
left,
exact zero_le b,
cases ha with hab hba,
rw le_iff_exists_add a b at hab,
cases hab with c hc,
rw hc,
cases c,
right,
ring,
have hasa := le_refl a,
exact (le_succ a a hasa),
left,
use c,
rw ← add_one_eq_succ c,
rw ← add_one_eq_succ a,
ring,
right,
exact le_succ b a hba,
Kevin Buzzard (Nov 20 2020 at 19:00):
One criticism of the natural number game is that it does not encourage people to write well-structured proofs (I felt it was something which didn't need to be emphasized to people who were just playing for fun). But really you should only ever have one goal, and use { }s when suddenly you have two. It makes proofs easier to read.
Kevin Buzzard (Nov 20 2020 at 19:07):
theorem le_total (a b : mynat) : a ≤ b ∨ b ≤ a :=
begin
  induction a with a ha,
  { left,
    exact zero_le b },
  { cases ha with hab hba,
    { rw le_iff_exists_add a b at hab,
      cases hab with c hc,
      rw hc,
      cases c,
      { right,
        rw add_zero,
        have hasa := le_refl a,
        exact (le_succ a a hasa) },
      { left,
        use c,
        rw ← add_one_eq_succ c,
        rw ← add_one_eq_succ a,
        ring } },
    { right,
      exact le_succ b a hba, } }
end
Kevin Buzzard (Nov 20 2020 at 19:51):
I golfed your proof a bit:
induction a with a ha,
{ left,
  exact zero_le b },
{ rw succ_eq_add_one,
  cases ha with hab hba,
  { cases hab with c hc,
    subst hc,
    cases c,
    { right,
      use 1,
      ring },
    { left,
      use c,
      ring } },
  { right,
    apply le_succ,
    assumption } }
By the way, the game itself knows all the solutions. If you want to see the solutions I wrote last year, you can check out the github repository
Kevin Buzzard (Nov 20 2020 at 19:55):
Note that my proof does something called "abusing definitional equality". This is a non-mathematical idea. There is more than one kind of equality in computer science, as I discovered to my genuine surprise a few years ago. You might recall from the first addition world that add_zero is an axiom but zero_add had to be proved by induction. As a consequence, n + 0 = n is true by definition, but 0 + n = n is true because of a theorem. If two things are equal by definition then Lean will often treat them as being identical, and I used this in e.g. the last assumption. As a mathematician I would not like to distinguish between the two theorems really, but Lean really treats them differently.
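The same asymmetry is easy to see with ordinary ℕ in plain Lean 3 (core names, so slightly different from NNG's): the first example is closed by rfl because n + 0 = n holds by definition of addition, while the second needs nat.zero_add, a lemma proved by induction.
example (n : ℕ) : n + 0 = n := rfl            -- true by definition of +
example (n : ℕ) : 0 + n = n := nat.zero_add n -- rfl does not close this one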
Lars Ericson (Nov 21 2020 at 04:30):
Thanks @Kevin Buzzard, I have finished the NNG and I can finally close this thread with a tactic-style proof of the theorem I started with, which is one of the exercises in 3.7.1. Here is the proof:
open classical
variables p q r : Prop
theorem or_distributivity : p ∨ (q ∧ r) ↔ (p ∨ q) ∧ (p ∨ r) :=
begin
  split,
  { intro h,
    split,
    { cases h,
      { simp *,
      },
      { cases h with f g,
        simp *,
      }
    },
    { cases h with f g,
      { simp *,
      },
      { simp *,
      }
    }
  },
  { intro h,
    cases h with f g,
    cases f,
    { simp *,
    },
    { simp *,
    }
  },
end
I am using simp a lot because I still don't understand some basic things. For example, when I get down to here:
theorem or_distributivity : p ∨ (q ∧ r) ↔ (p ∨ q) ∧ (p ∨ r) :=
begin
  split,
  { intro h,
    split,
    { cases h,
      {
then I have goal state
case or.inl
p q r : Prop,
h : p
⊢ p ∨ q
I don't know the syntax or tactic that allows me to go from h : p to ⊢ p ∨ q. From the above goal state, how do I prove the disjunction using a tactic without invoking simp *?
There is no example that I can find in Proposition World or Advanced Proposition World that proves a disjunction. I couldn't find a clear example of a tactic that results in a disjunction in Chapter 5. I did stumble across simp *, which works fine but is a black box. The end of section 3.6 gives a proof, but it is in term style. I can't step through the term-style proof in the same way as in tactic mode, so I can't see where the disjunction is introduced. Also, I can't look at the terms and directly map them onto tactics; I don't know the equivalences. Is there a cheat sheet somewhere that maps between term style and tactic style? And is there a compact master list of tactics where I can look up things like the tactic that proves a disjunction by proving one of the disjuncts?
Damiano Testa (Nov 21 2020 at 06:21):
The tactics left and right convert a goal of the form p ∨ q to either p or q. From there, exact _ should work. Alternatively, you can use exact or.inl _ or exact or.inr _, replacing the _ with the appropriate assumption.
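A minimal self-contained illustration of both styles (plain Lean 3, nothing NNG-specific):
-- Tactic style: left turns the goal p ∨ q into p, and the hypothesis closes it.
example (p q : Prop) (h : p) : p ∨ q :=
begin
  left,
  exact h,
end
-- Term style: or.inl does the same in one step.
example (p q : Prop) (h : p) : p ∨ q := or.inl h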
Lars Ericson (Nov 21 2020 at 15:20):
Thank you @Damiano Testa, here is the de-simp-ified pure tactic proof:
open classical
variables p q r : Prop
theorem or_distributivity : p ∨ (q ∧ r) ↔ (p ∨ q) ∧ (p ∨ r) :=
begin
  split,
  { intro h,
    split,
    { cases h,
      { left,
        exact h,
      },
      { cases h with f g,
        right,
        exact f,
      }
    },
    { cases h with f g,
      { left,
        exact f,
      },
      { cases g with hq hr,
        right,
        exact hr,
      }
    }
  },
  { intro h,
    cases h with f g,
    cases f,
    { left,
      exact f,
    },
    { cases g,
      { left,
        exact g,
      },
      { right,
        split,
        exact f,
        exact g,
      }
    }
  },
end
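As a point of comparison only (assuming mathlib is available, which neither NNG nor plain Lean gives you), the same statement appears to exist there as or_and_distrib_left, and the tauto! tactic can also discharge it; neither was needed for the exercise above.
import tactic  -- mathlib; an assumption, not part of NNG or core Lean
example (p q r : Prop) : p ∨ (q ∧ r) ↔ (p ∨ q) ∧ (p ∨ r) := or_and_distrib_left
example (p q r : Prop) : p ∨ (q ∧ r) ↔ (p ∨ q) ∧ (p ∨ r) := by tauto!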
Last updated: Dec 20 2023 at 11:08 UTC