Zulip Chat Archive

Stream: new members

Topic: Hypotheses and avoiding repetition


Wrenna Robson (Aug 20 2020 at 23:44):

Suppose I have theorems (P or Q) and (P -> Q -> R). I have R as a goal. I also have a proof of P -> Q and a proof of Q -> P (which are similar but not the same). What's the most efficient way of putting these all together?

Kevin Buzzard (Aug 20 2020 at 23:53):

finish?

Kevin Buzzard (Aug 20 2020 at 23:53):

tauto!?

Kevin Buzzard (Aug 20 2020 at 23:54):

You know, you could just state this question completely in Lean, sorry the proof, and post it here in triple backticks

Kevin Buzzard (Aug 20 2020 at 23:55):

Then it's much easier for other people to play along

Kevin Buzzard (Aug 20 2020 at 23:58):

I'm on mobile right now but I'll gamble.

import tactic

variables (P Q R : Prop)

example (h1 : or P Q) (h2 : P -> Q -> R) (h3 : P -> Q) (h4 : Q -> P) : R := sorry

Kevin Buzzard (Aug 20 2020 at 23:59):

I don't have Unicode on my phone. Can someone recommend a better keyboard for Android?

Yury G. Kudryashov (Aug 21 2020 at 00:00):

If you want a term-mode proof, then h1.elim (λ hp, h2 hp (h3 hp)) (λ hq, h2 (h4 hq) hq) should work.
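
As a self-contained check, that term drops straight into Kevin's (corrected) statement above; the hypothesis names are the same, and nothing beyond core Lean is needed:

example (P Q R : Prop) (h1 : P ∨ Q) (h2 : P → Q → R)
  (h3 : P → Q) (h4 : Q → P) : R :=
h1.elim (λ hp, h2 hp (h3 hp)) (λ hq, h2 (h4 hq) hq)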

Wrenna Robson (Aug 21 2020 at 00:04):

Sorry! I was on my phone too!

Wrenna Robson (Aug 21 2020 at 00:07):

To be more precise, then: I'm on the 3rd tutorial sheet.

example (f : ℝ → ℝ) (h : non_decreasing f) (h' : ∀ x, f (f x) = x) : ∀ x, f x = x :=
begin
  sorry,
end

It recommends using:
le_total x y : x ≤ y ∨ y ≤ x

and of course, I have le_antisymm, although I don't know if I will need it.

Given the hypotheses, it's not hard to show that f(x) ≤ x if and only if x ≤ f(x)! What I want to avoid is splitting the cases wrongly and ending up repeating myself! I had a proof but it felt insufficiently optimised.

Yury G. Kudryashov (Aug 21 2020 at 00:25):

This is not an #mwe because it lacks the definition of non_decreasing.

Wrenna Robson (Aug 21 2020 at 00:29):

Ah, I'm sorry.

def non_decreasing (f : ℝ → ℝ) := ∀ x₁ x₂, x₁ ≤ x₂ → f x₁ ≤ f x₂
example (f : ℝ → ℝ) (h : non_decreasing f) (h' : ∀ x, f (f x) = x) : ∀ x, f x = x :=
begin
  intro x,
  have bw : x ≤ f(x) ↔ f(x) ≤ x,
  split,
  {
    intro t,
    calc f(x) ≤ f(f(x)) : h x (f x) t
    ...       = x       : h' x,
  },
  {
    intro t,
    calc x = f(f(x))  : by rw (h' x)
    ...    ≤ f(x)     : h (f x) x t,
  },
  have j := le_total x (f x),
  apply le_antisymm,
  finish,
  finish,
end

Kyle Miller (Aug 21 2020 at 00:29):

Yury G. Kudryashov said:

This is not an #mwe because it lacks the definition of non_decreasing.

That should be ok; this is Massot's tutorial, exercise 0028 in 03_forall_or.lean (though it would have been better if @Wrenna Robson mentioned this :smile:)

Wrenna Robson (Aug 21 2020 at 00:30):

That's what I now have. Which is... OK. But the end feels untidy.
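
One possible way to tidy that ending while keeping the bw iff (a sketch, not the tutorial's official solution): do the cases on le_total once and feed the two directions of bw to le_antisymm, so neither finish is needed. Everything before the cases is the proof above, unchanged:

import data.real.basic

def non_decreasing (f : ℝ → ℝ) := ∀ x₁ x₂, x₁ ≤ x₂ → f x₁ ≤ f x₂

example (f : ℝ → ℝ) (h : non_decreasing f) (h' : ∀ x, f (f x) = x) : ∀ x, f x = x :=
begin
  intro x,
  have bw : x ≤ f(x) ↔ f(x) ≤ x,
  split,
  {
    intro t,
    calc f(x) ≤ f(f(x)) : h x (f x) t
    ...       = x       : h' x,
  },
  {
    intro t,
    calc x = f(f(x))  : by rw (h' x)
    ...    ≤ f(x)     : h (f x) x t,
  },
  -- le_total gives one of the two inequalities; bw turns it into the other,
  -- so le_antisymm closes each case directly
  cases le_total x (f x) with j j,
  { exact le_antisymm (bw.mp j) j },
  { exact le_antisymm j (bw.mpr j) },
end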

Yury G. Kudryashov (Aug 21 2020 at 00:30):

import order.basic

example {α : Type*} [linear_order α] {f : α → α} (hf : monotone f) (hf₂ : ∀ x, f (f x) = x) (x) :
  f x = x :=
(le_total x (f x)).elim (λ hx, le_antisymm (by simpa only [hf₂] using hf hx) hx)
  (λ hx, le_antisymm hx $ by simpa only [hf₂] using hf hx)
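
For anyone new to the idiom, by simpa only [hf₂] using hf hx means: simplify the type of hf hx (and the goal, where possible) using only the lemma hf₂, then close the goal with the result. A toy version with made-up hypotheses:

import tactic

example (f : ℕ → ℕ) (h' : ∀ x, f (f x) = x) (hx : f (f 0) ≤ 1) : 0 ≤ 1 :=
by simpa only [h'] using hx   -- h' rewrites f (f 0) to 0 in hx, which then closes the goal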

Kyle Miller (Aug 21 2020 at 00:33):

I found my solution to this. It looks like it's pretty much the same, except things are not annunciated with calc statements, and I used cases rather than proving an iff first:

example (f : ℝ → ℝ) (h : non_decreasing f) (h' : ∀ x, f (f x) = x) : ∀ x, f x = x :=
begin
  intro x,
  unfold non_decreasing at h,
  have h₁ := h (f x) x,
  have h₂ := h x (f x),
  rw h' at h₁, rw h' at h₂,
  cases le_total (f x) x with hc hc,
  { specialize h₁ hc, linarith},
  { specialize h₂ hc, linarith},
end

Wrenna Robson (Aug 21 2020 at 00:34):

Oh, that's quite nice. You avoid the use of le_antisymm (I assume that is cracked by the hammer that is linarith?)

Kyle Miller (Aug 21 2020 at 00:34):

It can be simplified a little:

example (f : ℝ → ℝ) (h : non_decreasing f) (h' : ∀ x, f (f x) = x) : ∀ x, f x = x :=
begin
  intro x,
  have h₁ := h (f x) x,
  have h₂ := h x (f x),
  rw h' at h₁ h₂,
  cases le_total (f x) x with hc hc,
  { specialize h₁ hc, linarith},
  { specialize h₂ hc, linarith},
end

Wrenna Robson (Aug 21 2020 at 00:35):

Yes; so the unfold doesn't actually give Lean any benefit?

Kyle Miller (Aug 21 2020 at 00:35):

Wrenna Robson said:

You avoid the use of le_antisymm (I assume that is cracked by the hammer that is linarith?)

Yeah, that's what it looks like is happening.

Wrenna Robson (Aug 21 2020 at 00:36):

In rw h' at h₁ h₂, what is this doing: applying rw h' at each of them in turn?

Kyle Miller (Aug 21 2020 at 00:36):

The unfold must have been there for me to notice I could apply f x and x as arguments to it, but Lean will unfold the definition for you as you do the application.

Yury G. Kudryashov (Aug 21 2020 at 00:36):

I guess linarith won't work for a linear_order α; it needs real
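
A sketch of that point, using the statement from Yury's snippet above: over an arbitrary linear_order, the same case split goes through if each linarith call is replaced by an explicit le_antisymm.

import order.basic

example {α : Type*} [linear_order α] {f : α → α} (hf : monotone f)
  (hf₂ : ∀ x, f (f x) = x) (x : α) : f x = x :=
begin
  cases le_total (f x) x with hc hc,
  { -- f x ≤ x: monotonicity gives f (f x) ≤ f x, i.e. x ≤ f x after rewriting
    have h₁ := hf hc,
    rw hf₂ at h₁,
    exact le_antisymm hc h₁ },
  { -- x ≤ f x: the symmetric argument gives f x ≤ x
    have h₂ := hf hc,
    rw hf₂ at h₂,
    exact le_antisymm h₂ hc },
end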

Kyle Miller (Aug 21 2020 at 00:37):

And yes, rw can be applied at multiple hypotheses to do multiple rewrites. You can also explicitly rewrite the goal with at ⊢ (entered with \|-).
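
A toy illustration (the hypotheses here are made up): a single rw rewrites in two hypotheses and in the goal in one step, and needs no imports.

example (a b : ℕ) (h : a = b) (h₁ : a ≤ 2) (h₂ : 1 ≤ a) : 1 ≤ a ∧ a ≤ 2 :=
begin
  rw h at h₁ h₂ ⊢,   -- now h₁ : b ≤ 2, h₂ : 1 ≤ b, and the goal is 1 ≤ b ∧ b ≤ 2
  exact ⟨h₂, h₁⟩,
end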


Last updated: Dec 20 2023 at 11:08 UTC