Zulip Chat Archive

Stream: new members

Topic: more basics


Iocta (Jan 19 2020 at 02:14):

open classical

variables (α : Type) (p q : α → Prop)
variable a : α
variable r : Prop

example : (∃ x : α, p x) → α :=
λ h, exists.elim h, _

Iocta (Jan 19 2020 at 02:16):

@Bryan Gin-ge Chen continuing

Bryan Gin-ge Chen (Jan 19 2020 at 02:17):

(link to my past reply in the other thread)

Since α lives in Type, you have to use something like exists.classical_rec_on from mathlib's logic.basic. Also, the example becomes noncomputable because it uses the choice axiom.
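
A minimal sketch of what that looks like, assuming mathlib's logic.basic is imported (the name extract_witness is made up for illustration; classical.some h would give the same witness):

import logic.basic

-- noncomputable because it goes through the axiom of choice
noncomputable def extract_witness {α : Type} {p : α → Prop} (h : ∃ x : α, p x) : α :=
exists.classical_rec_on h (λ a _, a)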

Iocta (Jan 19 2020 at 02:18):

Hmm, maybe this isn't what I need then. I'm doing this one from the exercises

open classical

variables (α : Type) (p q : α → Prop)
variable a : α
variable r : Prop

example : (∃ x, p x ∧ r) ↔ (∃ x, p x) ∧ r :=
iff.intro
(λ h,
and.intro (

(exists.elim h (λ a' h', (exists.intro a') h'.left)))
(exists.elim h (λ a' h', h'.right)))
(λ h,
exists.intro (have ex: _, from h.left, _) _)

Iocta (Jan 19 2020 at 02:21):

Is the hole I need to fill here equivalent to the one in the smaller snippet?

Iocta (Jan 19 2020 at 02:28):

or maybe I'm just going about this the wrong way..

Bryan Gin-ge Chen (Jan 19 2020 at 02:31):

This one can be done without classical. This should hopefully get you on the right track:

example : (∃ x, p x ∧ r) ↔ (∃ x, p x) ∧ r :=
iff.intro
(λ h,
  and.intro
    (exists.elim h (λ a' h', (exists.intro a') h'.left))
    (exists.elim h (λ a' h', h'.right))
)
(λ h, exists.elim h.1 (λ x hx, _))
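
For reference, one way the remaining hole might be filled (a sketch, with the variables written out explicitly):

example (α : Type) (p : α → Prop) (r : Prop) :
  (∃ x, p x ∧ r) ↔ (∃ x, p x) ∧ r :=
iff.intro
  (λ h, exists.elim h (λ a' h', ⟨⟨a', h'.1⟩, h'.2⟩))
  (λ h, exists.elim h.1 (λ x hx, ⟨x, hx, h.2⟩))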

Iocta (Jan 19 2020 at 02:38):

Got it, thanks.

Iocta (Jan 27 2020 at 04:47):

I'm trying to figure out how to use exists.elim, so I wanted to put an underscore in that (\lambda z, _) and have it tell me what type z has. How can I make it tell me?

example : (∃ x, p x ∨ q x) ↔ (∃ x, p x) ∨ (∃ x, q x) :=
iff.intro
(λ h,
(have foo: _, from exists.elim h (λ z, _),
_)
)
_

Kenny Lau (Jan 27 2020 at 05:19):

well what is the type of p?

Kenny Lau (Jan 27 2020 at 05:20):

also you should be able to see the type of z in the underscore after it?

Iocta (Jan 27 2020 at 06:16):

also you should be able to see the type of z in the underscore after it?

That's what I expected, but it doesn't appear

Mario Carneiro (Jan 27 2020 at 06:21):

When I put that in lean, there is an error over "exists" saying it can't synthesize |- Prop, but I can still hover over z to see z : α

Mario Carneiro (Jan 27 2020 at 06:22):

You can't use exists.elim unless lean knows what you are trying to prove (the _ in foo: _)

Mario Carneiro (Jan 27 2020 at 06:23):

If I remove the have and just use

example : (∃ x, p x ∨ q x) ↔ (∃ x, p x) ∨ (∃ x, q x) :=
iff.intro
(λ h,
(exists.elim h (λ z, _))
)
_

then the underscores are highlighted as you would expect

Iocta (Jan 27 2020 at 06:31):

ah putting the cursor over z does display its type

Iocta (Jan 27 2020 at 06:33):

I didn't see it originally because there's a fair amount of latency before it appears

Iocta (Jan 27 2020 at 06:35):

dunno if that's an intentional delay in lean-mode to make it less distracting or it's just slow

Mario Carneiro (Jan 27 2020 at 06:38):

I would guess it's a mix of intentional latency by vscode and lean latency. Hovers that are too aggressive can really get in your way

Mario Carneiro (Jan 27 2020 at 06:38):

there is a setting for hover delay in preferences

Iocta (Jan 27 2020 at 06:40):

i'm using emacs, it just appears in the bottom bar rather than hovering

Iocta (Jan 27 2020 at 07:43):

How can I fill the holes?

variables (α : Type) (p q : α → Prop)
variable a : α
variable r : Prop

example : (∃ x, p x ∨ q x) ↔ (∃ x, p x) ∨ (∃ x, q x) :=
iff.intro
(λ h1,
  exists.elim h1
  (λ a' hpq, or.elim hpq
    (λ pa, or.inl (exists.intro a' pa))
    (λ pq, or.inr (exists.intro a' pq))))
(λ h2, exists.intro a
  (or.elim h2
    (λz, or.inl (exists.elim z (λ s s', _)))
    (λz, or.inr _)))

Kevin Buzzard (Jan 27 2020 at 08:23):

Neither of them look true to me so you can't fill them in.

Mario Carneiro (Jan 27 2020 at 08:51):

the bad move was exists.intro a. The variable a is not necessary and has nothing to do with the x that satisfies the assumption

Iocta (Jan 27 2020 at 20:49):

Ok got it.

Iocta (Jan 27 2020 at 20:50):

Might be nice if it would warn me I was spending time trying to fill something that wasn't fillable, if it could tell

Kevin Buzzard (Jan 27 2020 at 21:40):

I am pretty sure that such a thing is undecidable!

Iocta (Jan 29 2020 at 07:08):

Am I on the right track?

open classical
example : (∀ x, p x) ↔ ¬ (∃ x, ¬ p x) :=
iff.intro
(assume p_all_x exists_npx,
  exists.elim
    exists_npx
    (λ a hnpa,
      have hpa: p a, from  p_all_x a,
      absurd hpa hnpa ))
(assume exists_npx a,
  or.elim (em  (p a))
  id
  (λ hnpa, _))

Kevin Buzzard (Jan 29 2020 at 07:14):

Well you're not using tactic mode so you're already at a big disadvantage

Kevin Buzzard (Jan 29 2020 at 07:15):

I'm not at lean right now but if you post the local context at the underscore (the assumptions and goal) it should be easy to see whether or not the hole is fillable

Iocta (Jan 29 2020 at 07:18):

 04e.lean    60  12 error           don't know how to synthesize placeholder
 context:
 α : Type,
 p : α → Prop,
 exists_npx : ¬∃ (x : α), ¬p x,
 a : α,
 hnpa : ¬p a
 ⊢ p a (lean-checker)

Johan Commelin (Jan 29 2020 at 07:19):

How would you prove it in maths?

Kevin Buzzard (Jan 29 2020 at 07:27):

Yeah -- the hole is fillable but as Johan says you need to know where you're going.

Kevin Buzzard (Jan 29 2020 at 07:29):

and it would be a little easier to fill it in tactic mode. Are you using Lean in a funny way? I'd be tempted to just type "begin end" in place of that underscore. Are you not using Lean "live"?

Iocta (Jan 29 2020 at 07:33):

I have emacs running lean-mode with flycheck, not sure if that's normal

Iocta (Jan 29 2020 at 07:35):

I haven't seen begin or end yet. been going through the tutorial. currently at https://leanprover.github.io/theorem_proving_in_lean/quantifiers_and_equality.html

Johan Commelin (Jan 29 2020 at 07:38):

@Iocta Honest question: are you a computer scientist or a mathematician? This seems to matter a lot for which style is best when learning Lean.

Johan Commelin (Jan 29 2020 at 07:39):

You are currently learning in CS mode. But there are also tutorials that approach Lean in math mode.

Johan Commelin (Jan 29 2020 at 07:40):

E.g. http://wwwf.imperial.ac.uk/~buzzard/xena/natural_number_game/

Iocta (Jan 29 2020 at 07:46):

Oh interesting, I hadn't seen that, will take a look.

Kevin Buzzard (Jan 29 2020 at 08:19):

I haven't seen begin or end yet. been going through the tutorial. currently at https://leanprover.github.io/theorem_proving_in_lean/quantifiers_and_equality.html

You are just one chapter away from Nirvana :-)

Iocta (Feb 07 2020 at 04:22):

I'm returning to this problem. I got one step forward(?) but not seeing how to proceed.

variables (α : Type) (p q : α → Prop)
variable r : Prop


theorem contrapositive {p q : Prop} : (p → q) → (¬q → ¬p) :=
λ h hnq hp, hnq (h hp)


example : (∀ x, p x) ↔ ¬ (∃ x, ¬ p x) :=
iff.intro
(assume p_all_x exists_npx,
  exists.elim
    exists_npx
    (λ a hnpa,
      have hpa: p a, from  p_all_x a,
      absurd hpa hnpa ))
(λ h,
  have cont: ¬∀ x, p x → ∃ x, ¬ p x,
  from (λ z, h (z (_) _)),
  contrapositive cont
)

Iocta (Feb 07 2020 at 04:25):

In math I'm not sure I've dug beneath the level of "when negating, just flip the quantifiers" so maybe some strategy there is what I'm missing

Yury G. Kudryashov (Feb 07 2020 at 04:34):

You can't prove ¬ (∃ x, ¬ p x) → (∀ x, p x) without using classical reasoning. Your term proves ¬ (∃ x, ¬ p x) → ¬¬(∀ x, p x).

Daniel Keys (Feb 09 2020 at 02:34):

@Iocta Here is a lengthy solution to that problem. I also had some issues with one of the directions, but ended up with two solutions after I was helped. The L_2 one using tactic mode is the one that I thought of in the first place but didn't know how to implement. The L_1 one can be found within the topics in this chat room. You'll need to open classical.

variables (α : Type) (p q : α → Prop)
variable a : α

theorem T05R : (∀ x, p x) → ¬ (∃ x, ¬ p x) :=
begin
    intro hall,
    intro hexi,
    cases hexi with w hw,
    exact hw (hall w)
end

theorem T05L_1 : ¬ (∃ x, ¬ p x) → (∀ x, p x) :=
    (assume hnExnpx : ¬ (∃ x, ¬ p x),
    (λ x, or.elim (em (p x))
    (λ hpx : p x, hpx)
    (λ hnpx : ¬ p x,
    false.elim (hnExnpx (exists.intro x (λ hpx : p x, hnpx hpx))))))

local attribute [instance] classical.prop_decidable
theorem T05L_2 : ¬ (∃ x, ¬ p x) → (∀ x, p x) :=
begin
    intros h x,
    by_contradiction H,
    apply h,
    existsi x,
    exact H
end
theorem T05 : (∀ x, p x) ↔ ¬ (∃ x, ¬ p x) :=
begin
    exact iff.intro (T05R α p a) (T05L_2 α p a)
end

Dan Stanescu (Feb 17 2020 at 14:14):

Newbie question: is there a built-in way to change a goal of the form a ≠ b to a = b -> false? I tried doing by_contradiction on it, but it changes instead to ¬a ≠ b. I can get what I want by something like:

theorem zNEo1 : (0 ≠ 1) = ¬ (0 = 1) := rfl
theorem zNEo2 : ¬ (0 = 1) → (0 ≠ 1) :=
begin
    intro h,
    rw zNEo1,
    exact h,
end

but I feel there should be a better way.

Patrick Massot (Feb 17 2020 at 14:16):

theorem zNEo2 : 0 ≠ 1 :=
begin
    change 0 = 1 → false,
end

Is this what you want?

Dan Stanescu (Feb 17 2020 at 14:38):

Yes, apparently this takes care of it. Didn't encounter change before. Thank you!

Chris Hughes (Feb 17 2020 at 14:39):

It might also be useful to just intro h, and it automatically unfolds \ne

Patrick Massot (Feb 17 2020 at 14:39):

It changes the goal to anything which is the same by definition (in a very precise sense).
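
A small sketch of both points on a concrete goal (0 ≠ 1 is a made-up example):

example : (0 : ℕ) ≠ 1 :=
begin
  intro h,   -- ¬(0 = 1) is definitionally 0 = 1 → false, so intro works
  cases h,   -- 0 and 1 start with different constructors, so this closes the goal
end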

Dan Stanescu (Feb 17 2020 at 14:45):

I thought I tried intro, but apparently not. So 0 ≠ 1 is indeed defined as 0 = 1 → false then?

Patrick Massot (Feb 17 2020 at 14:45):

Yes.

Patrick Massot (Feb 17 2020 at 14:46):

Are you using VScode?

Dan Stanescu (Feb 17 2020 at 14:46):

Yes.

Patrick Massot (Feb 17 2020 at 14:48):

You can right-click on the symbol, choose "Go to definition" to get to https://github.com/leanprover/lean/blob/master/library/init/logic.lean#L97-L98 then do the same on the negation symbol to go to https://github.com/leanprover/lean/blob/master/library/init/core.lean#L142-L143

Patrick Massot (Feb 17 2020 at 14:49):

This may look like a hard way of learning (sounds like: code is the documentation). But this is actually part of why proof assistants are so cool compared to traditional textbooks: you can always ask and get the answer.

Dan Stanescu (Feb 17 2020 at 15:08):

That is great advice. I didn't know about right-click, I only knew to hover the mouse over symbols to get how to type them.

Dan Stanescu (Feb 17 2020 at 15:13):

Actually one more question along the same lines. The manual talks about double negation elimination after the excluded middle em, but is there a way to have Lean replace ¬ ¬ P by P without writing our own tool?

Dan Stanescu (Feb 17 2020 at 15:27):

Just realized I can probably change it again. :smiley:

Chris Hughes (Feb 17 2020 at 15:27):

You can't just change it because they're not equal by definition. There is a lemma classical.not_not you could use to rewrite it.

Dan Stanescu (Feb 17 2020 at 15:28):

Great! Thanks again!

Patrick Massot (Feb 17 2020 at 15:33):

push_neg should do that for you as well.
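
A minimal sketch of that suggestion (P and h are just placeholder names):

import tactic.push_neg

example (P : Prop) (h : ¬¬P) : P :=
by { push_neg at h, exact h }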

Daniel Keys (Feb 25 2020 at 18:03):

Can someone explain why I can evaluate the truth value of the 'divide' but not the 'belongs' proposition below? Is there a way to evaluate tt or ff for the latter?

import data.set

def A1 : set ℕ := {1,2,3}
#check (1 ∈ A1)
#reduce (1 ∈ A1)
-- #reduce to_bool (1 ∈ A1) fails
#check (2 ∣ (0:ℕ))
#reduce to_bool (2 ∣ (0:ℕ))

Daniel Keys (Feb 25 2020 at 18:48):

Or is this a bug?

Bryan Gin-ge Chen (Feb 25 2020 at 18:48):

Hmm, I can't reproduce this. With no imports, with both Lean 3.4.2 and Lean 3.5.1, it's the last line which fails with

failed to synthesize type class instance for
⊢ decidable (2 ∣ 0)

With a recent mathlib imported (e.g. with import data.int.basic at the start of the file), I get no errors at all.

Can you say more about what version of Lean you're using / if you're importing some version of mathlib?

Bryan Gin-ge Chen (Feb 25 2020 at 18:49):

Oh, sorry, I misunderstood. You commented out the line that fails. Let me try again.

Daniel Keys (Feb 25 2020 at 18:50):

I only import set, the divides works with just that. I edited to include the import.

Bryan Gin-ge Chen (Feb 25 2020 at 18:51):

Well, it's not a bug. The error is

failed to synthesize type class instance for
⊢ decidable (1 ∈ A1)

which makes sense, since general set membership is not decidable.

Daniel Keys (Feb 25 2020 at 18:54):

Is there something I can do about actually evaluating it? The #reduce does show a Prop which should evaluate to tt.

Bryan Gin-ge Chen (Feb 25 2020 at 18:54):

For explicit finite sets like this, you might want to use finsets instead:

import data.finset

def F1 : finset ℕ := {1,2,3}
#reduce to_bool (1 ∈ F1) -- tt

Daniel Keys (Feb 25 2020 at 18:55):

OK, thanks! That works!

Kevin Buzzard (Feb 25 2020 at 18:56):

There's a difference between "I know how to prove this in this one case" and "Lean has an algorithm which will solve all questions of this type", that's what's going on

Bryan Gin-ge Chen (Feb 25 2020 at 18:58):

There is a question here that I don't know how to answer. How would you tell Lean to reduce the argument (1 ∈ A1) before attempting to apply to_bool to it? Somehow, Lean can reduce (1 ∈ A1) to (1 = 3 ∨ 1 = 2 ∨ 1 = 1 ∨ false) and #reduce to_bool (1 = 3 ∨ 1 = 2 ∨ 1 = 1 ∨ false) returns tt, as expected.

Daniel Keys (Feb 25 2020 at 18:58):

What is interesting is that the #reduce alone seems to show the same output for both finset and set. Like 1 = 3 ∨ 1 = 2 ∨ 1 = 1 ∨ false for both.

Daniel Keys (Feb 25 2020 at 19:00):

Obviously I was typing at the same time as Bryan Gin-ge Chen.

Bryan Gin-ge Chen (Feb 25 2020 at 19:02):

You might find it interesting to include set_option pp.all true at the start of the file and then look at #print A1 and #print F1 to see how their internals differ.

Kevin Buzzard (Feb 25 2020 at 19:04):

def A3 : set ℕ := {n | ∃ a b c : ℕ, n + a ^ 3 = b ^ 3 + c ^ 3}

#reduce to_bool (42 ∈ A3)

Here's an illustration of the original set problem. Until a few months ago, it was not known whether 42 was in this set or not.

Kevin Buzzard (Feb 25 2020 at 19:06):

import tactic

def A3 : set ℕ := {n | ∃ a b c : ℕ, n + a ^ 3 = b ^ 3 + c ^ 3}

example : 42 ∈ A3 :=
begin
  use 80538738812075974,
  use 80435758145817515,
  use 12602123297335631,
  norm_num
end

Kevin Buzzard (Feb 25 2020 at 19:11):

import tactic

def A1 : set ℕ := {1, 2, 3}

instance {n : ℕ} : decidable (n ∈ A1) := by unfold A1; simp; apply_instance

#eval to_bool (1 ∈ A1) -- tt

Daniel Keys (Feb 25 2020 at 19:11):

This result is very recent indeed, Kevin.

I saw the difference in finset versus set. Although I can't read through all of it, finset has several places with decidable. I guess that makes for the different result.

Daniel Keys (Feb 25 2020 at 19:12):

Thank you both!

Bryan Gin-ge Chen (Feb 25 2020 at 19:14):

It may be more enlightening to look at how finsets are implemented as compared to sets. In short, finsets are backed by explicit lists modulo ordering and a proof that each element occurs once, whereas sets are simply abstract functions to Prop.

Daniel Keys (Feb 25 2020 at 19:15):

Yes, indeed - this makes it clear.

Kevin Buzzard (Feb 25 2020 at 19:16):

finset.decidable_mem : Π {α : Type u_1} [h : decidable_eq α] (a : α) (s : finset α), decidable (a ∈ s)

Note that you still need decidable equality on your type, it's not just finiteness. The naturals have decidable equality.

Kevin Buzzard (Feb 25 2020 at 19:17):

import tactic
import data.finset
import data.real.basic

noncomputable def A1 : finset ℝ := {1, 2, 3}

#eval to_bool ((1 : ℝ) ∈ A1) -- fails

The reals don't.

Iocta (Mar 01 2020 at 07:06):

Thanks.

Iocta (Mar 01 2020 at 07:06):

When applying simp, is there a way to find out what it did?

Bryan Gin-ge Chen (Mar 01 2020 at 07:13):

Yes, to a certain extent. See the options described here.

Bryan Gin-ge Chen (Mar 01 2020 at 07:14):

There's also squeeze_simp if you import mathlib's tactics.

Iocta (Mar 01 2020 at 07:21):

aha

Reid Barton (Mar 01 2020 at 15:16):

I usually use set_option trace.simplify.rewrite true instead, that shows you only what simp actually did (at least approximately, I remember there being some weird caveats). The trace.simplify output is everything simp tried (which is to say, everything) even if it didn't work.
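
A minimal sketch of turning the trace on (add_zero is a standard simp lemma here):

set_option trace.simplify.rewrite true

example (n : ℕ) : n + 0 = n := by simp
-- the info view then shows something like: [simplify.rewrite] [add_zero]: n + 0 ==> n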

Reid Barton (Mar 01 2020 at 15:16):

Oh I see that's also documented a bit further down.

Daniel Keys (Mar 01 2020 at 20:23):

For a general a :ℝ, do we have anything that can be used to produce 0 ≤ a ^ 2? I've been trying with norm_num, but it doesn't work for all a.

Alex J. Best (Mar 01 2020 at 20:28):

pow_two_nonneg in algebra/group_power.lean

Daniel Keys (Mar 01 2020 at 20:29):

Thanks! That is what I was after, but didn't know where to find it.

Alex J. Best (Mar 01 2020 at 20:32):

Yeah I had to guess the name and then search for it in the vscode menu, which requires a little experience, but in this case it is a mathlib convention that squaring is always called pow_two and that 0 ≤ is called nonneg.

Kevin Buzzard (Mar 01 2020 at 20:45):

You could probably find this with library_search

Daniel Keys (Mar 01 2020 at 20:52):

Didn't use library_search yet. Not sure how to use it. Don't you need to know something to start with, like pow_two for example?

Alex J. Best (Mar 01 2020 at 20:56):

Kevin is right you don't need to know anything for this really:

import data.real.basic

example (a : ℝ) : 0 ≤ a^2 := by library_search

Alex J. Best (Mar 01 2020 at 20:56):

and library_search tells you exactly the same lemma name

Daniel Keys (Mar 01 2020 at 20:58):

I see! Great, thanks!

Daniel Keys (Mar 01 2020 at 20:59):

This will help a lot, it was very frustrating sometimes as I was searching through lots of files for almost everything.

Iocta (Mar 06 2020 at 22:14):

-- With simp
example : (p ∧ q) ∧ r ↔ p ∧ (q ∧ r) :=
iff.intro
begin
intro h,
simp *
end
begin
intro h,
simp *
end
 05.1.lean    87   1 info            0. [simplify.rewrite] [h]: p ==> true
 0. [simplify.rewrite] [h]: q ==> true
 0. [simplify.rewrite] [h]: r ==> true
 0. [simplify.rewrite] [and_self]: true ∧ true ==> true
 0. [simplify.rewrite] [and_self]: true ∧ true ==> true
 (lean-checker)
 05.1.lean    91   1 info            0. [simplify.rewrite] [h]: p ==> true
 0. [simplify.rewrite] [h]: q ==> true
 0. [simplify.rewrite] [and_self]: true ∧ true ==> true
 0. [simplify.rewrite] [h]: r ==> true
 0. [simplify.rewrite] [and_self]: true ∧ true ==> true
 (lean-checker)

How do I translate what trace is saying into code?

Johan Commelin (Mar 06 2020 at 22:16):

Use squeeze_simp?

Johan Commelin (Mar 06 2020 at 22:17):

It will turn it into code for you.

Iocta (Mar 08 2020 at 00:52):

squeeze_simp doesn't seem to reduce the options much here.

example : (p ∧ q) ∧ r ↔ p ∧ (q ∧ r) :=
iff.intro
begin
intro h,
squeeze_simp *, -- Try this: simp only [*, and_self]
end
begin
intro h,
simp *
end

How do I turn the output of set_option trace.simplify.rewrite true (in the example above) into code manually? I'm not sure what it's trying to tell me. How do I do the rewrite [h]: q ==> true myself?

Reid Barton (Mar 08 2020 at 01:03):

This example is kind of too simple and relies more on internal behavior of simp than on lemmas.

Reid Barton (Mar 08 2020 at 01:04):

But you could start with rw h.1.2

Reid Barton (Mar 08 2020 at 01:04):

Wait no

Kevin Buzzard (Mar 08 2020 at 01:04):

cases h

Reid Barton (Mar 08 2020 at 01:05):

You need whatever lemma it is that says if h : p, then p = true

Kevin Buzzard (Mar 08 2020 at 01:05):

just use cases and split

Reid Barton (Mar 08 2020 at 01:05):

This rewriting strategy is not a natural way to prove this particular example

Kevin Buzzard (Mar 08 2020 at 01:06):

high-powered tactics like simp and rw are not really needed for basic logic questions like this, just use the basic tactics.

Iocta (Mar 08 2020 at 01:54):

example : (p ∧ q) ∧ r ↔ p ∧ (q ∧ r) :=
iff.intro
begin
intro h,
cases h with hpq hr,
cases hpq,
repeat {apply and.intro},
repeat {assumption},
end
begin
intro h,
cases h with hp hqr,
cases hqr,
repeat {apply and.intro},
repeat {assumption},
end

works. What would I do with split here?

Kevin Buzzard (Mar 08 2020 at 01:55):

split stops you having to do repeat

Kevin Buzzard (Mar 08 2020 at 01:55):

it splits an and goal into two goals

Iocta (Mar 08 2020 at 02:17):

Not quite seeing how to use it here. What's the function I want to apply to both branches?
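
For comparison, the first direction written with split might look like this (a sketch):

example (p q r : Prop) : (p ∧ q) ∧ r → p ∧ (q ∧ r) :=
begin
  intro h,
  cases h with hpq hr,
  cases hpq with hp hq,
  split,
  { exact hp },
  { split,
    { exact hq },
    { exact hr } },
end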

Iocta (Mar 08 2020 at 07:02):

There must be a shorter way to write this?

example : ¬p ∨ ¬q → ¬(p ∧ q) :=
begin
intros h hpq,
cases h with hnp hnq,
cases hpq with hp hq,
apply hnp hp,
cases hpq with hp hq,
apply hnq hq,
end

Kenny Lau (Mar 08 2020 at 07:07):

import logic.basic tactic.tauto

variables (p q : Prop)
example : ¬p ∨ ¬q → ¬(p ∧ q) := not_and_of_not_or_not
example : ¬p ∨ ¬q → ¬(p ∧ q) := λ h, h.cases_on (mt and.left) (mt and.right)
example : ¬p ∨ ¬q → ¬(p ∧ q) := by tauto

Iocta (Mar 08 2020 at 07:08):

neat

Iocta (Mar 08 2020 at 23:57):

Am I down the wrong path?

example : ¬(p → q) → p ∧ ¬q :=
begin
intro h,
cases (em p) with hp hnp,
cases (em q) with hq hnq,
constructor,
exact hp,
intro hq',
apply h,
intro hp',
exact hq,
constructor,
exact hp,
exact hnq,
cases (em q) with hq hnq,
constructor,
sorry,
end

Iocta (Mar 08 2020 at 23:58):

 example   335   1 error           tactic failed, there are unsolved goals
 state:
 2 goals
 p q : Prop,
 h : ¬(p → q),
 hnp : ¬p,
 hq : q
 ⊢ ¬q

 case or.inr
 p q : Prop,
 h : ¬(p → q),
 hnp : ¬p,
 hnq : ¬q
 ⊢ p ∧ ¬q (lean-checker)

Kevin Buzzard (Mar 09 2020 at 00:06):

You know not X is by definition X -> false? So in the first goal you can intro hq2 and then revert h! When your hypotheses can be used to get a contradiction as yours can, in both cases) then exfalso is sometimes a useful tactic; it replaces the goal by false.
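
A sketch along the lines of that hint, written out in full (classical.em is used directly so no open classical is needed):

example (p q : Prop) : ¬(p → q) → p ∧ ¬q :=
begin
  intro h,
  cases (classical.em p) with hp hnp,
  { split,
    { exact hp },
    { intro hq,       -- goal ¬q: assume q and contradict h
      apply h,
      intro hp',
      exact hq } },
  { exfalso,          -- ¬p also contradicts h, since ¬p gives p → q
    apply h,
    intro hp',
    exact absurd hp' hnp },
end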

Iocta (Mar 09 2020 at 00:28):

thanks

Iocta (Mar 09 2020 at 01:21):

 05.01.b.    15   3 error           solve1 tactic failed, focused goal has not been solved
 state:
 α : Type,
 p q : α → Prop,
 h : ∀ (x : α), p x ∧ q x,
 x : α
 ⊢ p x (lean-checker)

In tactic mode: What's the equivalent of intro h here? I want to get inside of the \forall.

Iocta (Mar 09 2020 at 01:22):

(or is that not idiomatic?)

Mario Carneiro (Mar 09 2020 at 01:35):

It is generally easier to work forward at this point; have h2 := h x will give you a proof of p x ∧ q x

Iocta (Mar 09 2020 at 02:05):

ok

Reid Barton (Mar 09 2020 at 04:32):

(what would you expect the tactic state to look like after intro h?)

Iocta (Mar 09 2020 at 18:40):

I guess intro isn't really the right word. What Mario said does what I wanted.

Iocta (Mar 09 2020 at 19:05):

How can I avoid repeating assumption?

example (p q r : Prop) (hp : p) :
(p ∨ q ∨ r) ∧ (q ∨ p ∨ r) ∧ (q ∨ r ∨ p) :=
by split; try {split}; {left, assumption} <|> {right, left, assumption} <|> {right, right, assumption}

Johan Commelin (Mar 09 2020 at 19:19):

@Iocta I'm not sure if there is much you can do.

Johan Commelin (Mar 09 2020 at 19:20):

But if you want to golf this, then certainly by tauto is a lot shorter...

Iocta (Mar 09 2020 at 19:23):

Alright

Johan Commelin (Mar 09 2020 at 19:26):

I mean... I don't know if you wanted such a logic hammer...

Mario Carneiro (Mar 10 2020 at 01:33):

It's actually not a bad example of an introductory level tactic

namespace tactic.interactive
meta def leftright (t : itactic) : tactic unit :=
t <|> (tactic.left >> leftright) <|> (tactic.right >> leftright)
end tactic.interactive

example (p q r : Prop) (hp : p) :
(p ∨ q ∨ r) ∧ (q ∨ p ∨ r) ∧ (q ∨ r ∨ p) :=
by repeat {split}; leftright {assumption}

Mario Carneiro (Mar 10 2020 at 01:35):

There is a completely different style of proof represented by

example (p q r : Prop) (hp : p) :
(p ∨ q ∨ r) ∧ (q ∨ p ∨ r) ∧ (q ∨ r ∨ p) :=
by simp [hp]

Here the idea is that you use rewrite rules p ==> true, plus true \/ a ==> true, a \/ true ==> true and true /\ true ==> true to simplify the entire goal to true
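
Roughly the same idea spelled out with simp only and named lemmas (true_or, or_true and and_self are standard simp lemmas):

example (p q r : Prop) (hp : p) :
  (p ∨ q ∨ r) ∧ (q ∨ p ∨ r) ∧ (q ∨ r ∨ p) :=
by simp only [hp, true_or, or_true, and_self]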

Iocta (Mar 10 2020 at 04:40):

Ah cool

Iocta (Mar 11 2020 at 00:38):

How to fix this?

import tactic.squeeze

namespace hidden

inductive natural : Type
| zero : natural
| succ : natural → natural


open natural


def add (m n : natural) : natural :=
natural.rec_on n m (λ n add_m_n, succ add_m_n)

def mul (m n : natural) : natural :=
natural.rec_on n zero (λ n mul_m_n, add mul_m_n m)


def pred (n : natural) : natural :=
natural.rec_on n zero (λ n pred_n, n)

def sub (m n : natural) : natural :=
natural.rec_on n m (λ n sub_m_n, pred sub_m_n)

local infix ` <*> ` := mul

theorem mul_dist_add (m n k : natural ) : k <*>  add m n = add (k <*> m) (k <*> n) :=
sorry

theorem mul_assoc (m n k : natural ) : m <*> ( n <*> k ) = ( m <*> n ) <*> k :=
natural.rec_on k
(show m <*> (n <*> zero) = (m <*> n) <*> zero, from rfl)
(assume k,
  (assume ih:  m <*> ( n <*> k ) = ( m <*> n ) <*> k,
    show  m <*> ( n <*> (succ k) ) = ( m <*> n ) <*> (succ k),
    from calc
      m <*> ( n <*> (succ k) ) = m <*> (add (n <*> k) n ) : by refl
      ... = add (m <*> (n <*> k)) (m <*> n)  : by apply mul_dist_add
      ... = add ((m <*> n) <*> k) (m <*> n) : by apply ih

  ))

end hidden
 06.1.lean    40  50 error           invalid apply tactic, failed to unify
   add (m <*> (n <*> k)) (m <*> n) = add (m <*> n <*> k) (m <*> n)
 with
   m <*> (n <*> k) = m <*> n <*> k
 state:
 m n k k : natural,
 ih : m <*> (n <*> k) = m <*> n <*> k
  add (m <*> (n <*> k)) (m <*> n) = add (m <*> n <*> k) (m <*> n) (lean-checker)

Reid Barton (Mar 11 2020 at 00:41):

The easy way is by rw ih

Reid Barton (Mar 11 2020 at 00:41):

there's also a harder way

Iocta (Mar 11 2020 at 00:47):

How do I do the harder way? I tried using trace to see what simp * was doing but it wasn't all that helpful

Reid Barton (Mar 11 2020 at 00:47):

Using congr_arg somehow

Reid Barton (Mar 11 2020 at 00:48):

maybe congr_arg (\lam x, add x (m <*> n)) ih

Reid Barton (Mar 11 2020 at 00:48):

anyways, this is where rw starts to become indispensable

Iocta (Mar 11 2020 at 00:53):

ok that works

Mario Carneiro (Mar 11 2020 at 00:54):

You can probably use \t for this

Mario Carneiro (Mar 11 2020 at 00:55):

ih ▸ rfl works

Mario Carneiro (Mar 11 2020 at 01:02):

The unnecessarily compressed proof:

theorem mul_assoc (m n k) : m <*> (n <*> k) = (m <*> n) <*> k :=
natural.rec_on k rfl $ λ k ih,
(mul_dist_add _ _ _).trans (ih.symm ▸ rfl)

Iocta (Mar 11 2020 at 01:04):

What's the difference between exact and apply? Example: exact (eq.subst ih rfl) works but apply (eq.subst ih rfl) doesn't.

Mario Carneiro (Mar 11 2020 at 01:05):

apply f means refine f or refine f _ or refine f _ _ or ... depending on the type of f. Sometimes it can't guess right, and more importantly for your example, it can't communicate the expected type of f when elaborating it because it doesn't know yet

Mario Carneiro (Mar 11 2020 at 01:05):

On the other hand eq.subst requires that the expected type be known

Iocta (Mar 11 2020 at 01:08):

ah ok

Iocta (Mar 11 2020 at 19:24):

theorem zero_add (n : natural) : add n zero = n :=
natural.cases_on n
(by refl)
(assume m: natural,
  (assume ih : add m zero = m,
  show add (succ m) zero = succ m,
  from calc
    add (succ m) zero = succ (add m zero) : by refl
    ... = succ m : by simp * at *))
 06.1.lean    31   1 error           "eliminator" elaborator type mismatch, term
   λ (m : natural) (ih : add m zero = m),
     show add (succ m) zero = succ m, from
       eq.trans (eq.refl (add (succ m) zero))
         (eq.mpr
            (id
               (eq.trans
                  (eq.trans
                     ((λ (a a_1 : natural) (e_1 : a = a_1) (a_2 a_3 : natural) (e_2 : a_2 = a_3),
                         congr (congr_arg eq e_1) e_2)
                        (succ (add m zero))
                        (succ m)
                        ((λ (a a_1 : natural) (e_1 : a = a_1), congr_arg succ e_1) (add m zero) m ih)
                        (succ m)
                        (succ m)
                        (eq.refl (succ m)))
                     succ.inj_eq)
                  (propext (eq_self_iff_true m))))
            trivial)
 has type
    ∀ (m : natural), add m zero = m → add (succ m) zero = succ m
 but is expected to have type
    ∀ (a : natural), add (succ a) zero = succ a

Iocta (Mar 11 2020 at 19:25):

How to fix this?

Iocta (Mar 11 2020 at 19:26):

I think it's saying "you're trying to depend on ih but I'm not providing it"

Mario Carneiro (Mar 11 2020 at 19:26):

use rec_on not cases_on

Iocta (Mar 11 2020 at 19:26):

oh. that was easy :-)

Iocta (Mar 11 2020 at 19:29):

is it a good idea to introduce that m to avoid confusing that variable with the enclosing n, or should I just use n again?

Iocta (Mar 11 2020 at 19:30):

(really just asking what the convention is)

Iocta (Mar 11 2020 at 19:40):

aha, the code I can find looks to pretty uniformly reuse n

Reid Barton (Mar 11 2020 at 20:02):

Yes, this seems to be the normal convention

Iocta (Mar 12 2020 at 00:57):

namespace hidden

universe u

inductive list (α : Type u)
| nil {} : list
| cons : α → list → list

namespace list

variable {α : Type}

notation h :: t  := cons h t

def append (s t : list α) : list α :=
list.rec t (λ x l u, x::u) s

notation s ++ t := append s t

theorem nil_append (t : list α) : nil ++ t = t := rfl

theorem cons_append (x : α) (s t : list α) :
  x::s ++ t = x::(s ++ t) := rfl

theorem append_nil (t : list α) : t ++ nil = t :=
list.rec_on t
rfl
(assume t,
  (assume ih: t ++ nil = t, sorry ))


end list

end hidden
 06.2.lean    29  17 error           none of the overloads are applicable
 error for hidden.list.append
 type mismatch at application
   append t
 term
   t
 has type
   α
 but is expected to have type
   list ?m_1

 error for has_append.append
 failed to synthesize type class instance for
 α : Type,
 t : list α,
 t : α
  has_append α

Iocta (Mar 12 2020 at 00:58):

I'm guessing I'm doing the namespacing wrong?

Reid Barton (Mar 12 2020 at 01:23):

I think your code is just wrong

Reid Barton (Mar 12 2020 at 01:25):

in the inductive case, what happened to the element that is the first argument to cons? You aren't taking enough arguments

Mario Carneiro (Mar 12 2020 at 01:55):

I think the issue is that the ++ notation is conflicting with the one from core (which uses the has_append typeclass)

Mario Carneiro (Mar 12 2020 at 01:57):

oh, reid is right, there is another argument to list.rec_on. The setup should look like this:

theorem append_nil (t : list α) : t ++ nil = t :=
list.rec_on t
rfl
(assume a t,
  (assume ih: t ++ nil = t,
    show (a :: t) ++ nil = a :: t, from
    sorry ))
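
One way the sorry might be filled in (a sketch; the cons case is definitional, so congr_arg on the tail is enough):

theorem append_nil' (t : list α) : t ++ nil = t :=
list.rec_on t
rfl
(assume a t,
  (assume ih : t ++ nil = t,
    show (a :: t) ++ nil = a :: t, from
    congr_arg (cons a) ih))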

Iocta (Mar 12 2020 at 05:07):

Oh, I see

Iocta (Mar 12 2020 at 20:11):

https://leanprover.github.io/theorem_proving_in_lean/inductive_types.html

def length : Π {α : Type u}, list α → nat :=
(λ α, (λ s, list.rec_on s 0 (λ a s len, len + 1)))

How do I state a theorem "length nil = 0"?

Iocta (Mar 12 2020 at 20:13):

example : length  nil  = 0 :=
sorry
 06.2.lean    55  19 error           don't know how to synthesize placeholder
 context:
 ⊢ Type ? (lean-checker)

Patrick Massot (Mar 12 2020 at 20:16):

Lean complains it doesn't know the type of nil.

Patrick Massot (Mar 12 2020 at 20:16):

Because there is no way it can reconstruct α from your statement.

Patrick Massot (Mar 12 2020 at 20:17):

Good news is: once the statement type-checks, the proof will be pretty short.

Reid Barton (Mar 12 2020 at 20:17):

There's no implicit generalization like in Haskell and some other languages. You have to write something like example (a : Type) : length (nil : list a) = 0

Iocta (Mar 12 2020 at 20:19):

aha, that works

Patrick Massot (Mar 12 2020 at 20:20):

Because Reid gave away the statement, you are allowed only three characters to write the proof.

Iocta (Mar 12 2020 at 20:22):

yeah, got it

Iocta (Mar 12 2020 at 21:09):

def reverse : Π {α : Type u}, list α → list α :=
(λ α,
  (λ s,
    list.rec_on s
    nil
    (λ (x: α) (s: list α) (reversed: list α),
      append reversed (x::nil)  )))
 06.2.lean    90   7 error           type mismatch at application
   append reversed
 term
   reversed
 has type
   list α : Type u
 but is expected to have type
   list ?m_1 : Type
...
   type mismatch, term
     ?m_2 ++ ?m_3
   has type
     list ?m_1 : Type
   but is expected to have type
     list α : Type u (lean-checker)

Iocta (Mar 12 2020 at 21:11):

I can just delete the u and it works, but what is the difference between u and no u?

Iocta (Mar 12 2020 at 21:12):

because the def length : Π {α : Type u}, list α → nat := one does have a u

Reid Barton (Mar 12 2020 at 21:16):

Somewhere you must have Type with no u

Iocta (Mar 12 2020 at 21:18):

yeah I do

Iocta (Mar 13 2020 at 22:26):

I'm not getting the picture here. "Define an inductive data type consisting of terms built up from the following constructors..." https://leanprover.github.io/theorem_proving_in_lean/inductive_types.html

namespace hidden

universe u
variable {α : Type u}


inductive expr (α : Type u)
| const (n: nat) : expr
| var (n: nat) : expr
| plus (s t : expr)  : expr
| times (s t: expr)  : expr

namespace expr
def eval (x: expr α) : nat :=
begin
cases x,
case const : {exact x},
case var : {exact x},
case plus : s t {exact ((eval s) + (eval t))},
case times : s t  {exact ((eval s) * (eval t))},
end
end expr
end hidden
 06.03.le    19  26 error           unknown identifier 'eval'
 state:
 case hidden.expr.plus
 α : Type u,
 s t : expr α
 ⊢ ℕ (lean-checker)

Kevin Buzzard (Mar 13 2020 at 22:28):

You can't use eval in the definition of eval

Iocta (Mar 13 2020 at 22:29):

Then how do I recurse?

Kevin Buzzard (Mar 13 2020 at 22:31):

This is all a bit CS for me, so don't take what I have to say too seriously.

Kevin Buzzard (Mar 13 2020 at 22:31):

but when you made the inductive type, Lean made a recursor for you

Kevin Buzzard (Mar 13 2020 at 22:31):

After the definition of expr, try #print prefix expr

Kevin Buzzard (Mar 13 2020 at 22:32):

wait, better is #print prefix hidden.expr

Kevin Buzzard (Mar 13 2020 at 22:33):

All that crazy stuff got made when you defined expr. In particular hidden.expr.rec got made, and if that doesn't do the job then nothing will

Kevin Buzzard (Mar 13 2020 at 22:35):

or you can use match but maybe that's not covered until chapter 8

Kevin Buzzard (Mar 13 2020 at 22:35):

hmm maybe you can't use match, I should stop talking. Does the recursor do it for you?

Kevin Buzzard (Mar 13 2020 at 22:37):

I think you're supposed to be eating f : nat -> nat as well, and var n is supposed to evaluate to f n

Iocta (Mar 13 2020 at 22:37):

If you mean like this

def eval (x: expr α) : nat :=
begin
cases x,
case const : n {exact n },
case var : n {exact n},
case plus : s t {exact (s.rec + t.rec)},
case times : s t  {exact (s.rec ) * (t.rec)},
end

then

 06.03.le    21  18 error           type mismatch at application
   expr.rec s
 term
   s
 has type
   expr α : Type
 but is expected to have type
   Π (n : ℕ), ?m_1 (const ?m_2 n) : Sort (imax 1 ?)

Chris Hughes (Mar 13 2020 at 22:37):

This is how you recurse

def eval : expr α → nat
| (const _ x) := x
| (var _ x) := x
| (plus s t) := eval s + eval t
| (times s t) := eval s * eval t

Kevin Buzzard (Mar 13 2020 at 22:38):

You're not really supposed to be making data using tactic mode, but that is not the issue right now. How about induction x instead of cases x?

Chris Hughes (Mar 13 2020 at 22:38):

You can use induction instead of cases as well, but eval won't be called eval it will be called something like x_ih

Iocta (Mar 13 2020 at 22:39):

Aha, that helps

Kevin Buzzard (Mar 13 2020 at 22:41):

That usage of the equation compiler isn't covered until chapter 8

Chris Hughes (Mar 13 2020 at 22:44):

The chapter 7 approved way

def eval (x : expr α) : nat :=
expr.rec_on x id id (λ s t ns nt, ns + nt) (λ s t ns nt, ns * nt)

Kevin Buzzard (Mar 13 2020 at 22:46):

I think the idea is that the second id should be an f

Kevin Buzzard (Mar 13 2020 at 22:46):

which is an input function
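
A sketch of that variant: eval also takes an assignment f for the variables (the name f and this exact signature are an assumption, not something fixed by the thread):

def eval (f : ℕ → ℕ) (x : expr α) : ℕ :=
expr.rec_on x (λ n, n) f (λ s t ns nt, ns + nt) (λ s t ns nt, ns * nt)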

Iocta (Mar 13 2020 at 22:49):

Oh that's pretty close to something I had earlier

def eval (x: expr α) : nat :=
expr.rec_on x
id
id
(λ s t, s + t)
(λ s t, s * t)

Iocta (Mar 13 2020 at 22:51):

which doesn't work because the latter functions are supposed to take ( outside outside inside inside) not just (inside inside)

Kevin Buzzard (Mar 13 2020 at 22:51):

You know you can use _ to figure out what is going on?

Iocta (Mar 13 2020 at 22:51):

yeah I just didn't grok what it was telling me

Kevin Buzzard (Mar 13 2020 at 22:53):

def eval (x: expr α) : nat :=
expr.rec_on x
id
id
(λ s t, _)
(_)

->

scratch2.lean:18:8: error
don't know how to synthesize placeholder
context:
α : Type u,
x s t : expr α
⊢ ℕ → ℕ → ℕ
▹
scratch2.lean:19:1: error
don't know how to synthesize placeholder
context:
α : Type u,
x : expr α
⊢ expr α → expr α → ℕ → ℕ → ℕ

so the first hole is expecting two more inputs and the second is expecting four

Kevin Buzzard (Mar 13 2020 at 22:54):

and you can see that s and t have type expr alpha not nat

Iocta (Mar 13 2020 at 22:54):

I do see it now :-)

Kevin Buzzard (Mar 13 2020 at 22:54):

:-)

Kevin Buzzard (Mar 13 2020 at 22:54):

My ability to debug soared once I realised I could understand the error messages

Kevin Buzzard (Mar 13 2020 at 22:55):

when I was a beginner I was just all "oh crap it still doesn't work" but now I learnt how to read the error messages I can go "oh look, type class inference failed" or "oh look, I put the wrong number of variables" etc. It is liberating :-)

Iocta (Mar 13 2020 at 22:56):

That's consistent with my limited experience so far as well

Iocta (Mar 13 2020 at 23:05):

now should I be able to use eval? #reduce expr.eval (const 0)

 06.03.le    18  26 error           failed to synthesize type class instance for
 α : Type u
 ⊢ has_zero (Type ?) (lean-checker)

Kevin Buzzard (Mar 13 2020 at 23:05):

What is the type of const?

Iocta (Mar 13 2020 at 23:06):

const : Π (α : Type u_1), ℕ → expr α

Kevin Buzzard (Mar 13 2020 at 23:07):

The error says "you wrote 0, so I'll try and figure out a term of the type I was expecting called 0, oh wait, I was expecting a term of type Type u_1

Iocta (Mar 13 2020 at 23:07):

aha, so #reduce expr.eval (const nat 0)

Kevin Buzzard (Mar 13 2020 at 23:07):

const alpha 0

Kevin Buzzard (Mar 13 2020 at 23:08):

what is this alpha doing in expr? That's the problem

Kevin Buzzard (Mar 13 2020 at 23:08):

Why not just remove the alpha?

Iocta (Mar 13 2020 at 23:09):

yeah that works

Iocta (Mar 13 2020 at 23:09):

I haven't got the hang of the alphas and us yet

Kevin Buzzard (Mar 13 2020 at 23:10):

I had to read TPIL three times before I got the hang of it (but then again I suspect I had less of a CS background than you). What helped with me was just coming up with projects and working on them.

Chris Hughes (Mar 13 2020 at 23:18):

You have to write const α 0

Chris Hughes (Mar 13 2020 at 23:19):

You can change the definition of expr to fix this. Use brackets like this | const {} (n: nat) : expr

Kevin Buzzard (Mar 13 2020 at 23:19):

But is alpha playing any role at all? How about we remove all occurrences of alpha?
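
A sketch of what removing α entirely could look like (renamed to aexpr here only to avoid clashing with the definition above; eval takes a variable assignment f as discussed):

inductive aexpr : Type
| const (n : ℕ) : aexpr
| var (n : ℕ) : aexpr
| plus (s t : aexpr) : aexpr
| times (s t : aexpr) : aexpr

open aexpr

def eval (f : ℕ → ℕ) : aexpr → ℕ
| (const n)   := n
| (var n)     := f n
| (plus s t)  := eval s + eval t
| (times s t) := eval s * eval t

#eval eval id (plus (const 2) (var 0)) -- 2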

Iocta (Mar 15 2020 at 21:04):

I need some help with "Use pattern matching to prove that the composition of surjective functions is surjective"

open function

#print surjective

universes u v w
variables {α : Type u} {β : Type v} {γ : Type w}
open function

lemma surjective_comp {g : β → γ} {f : α → β}
  (hg : surjective g) (hf : surjective f) :
surjective (g ∘ f) :=
sorry

Iocta (Mar 15 2020 at 21:07):

if I intro gam: \gamma can I destruct it into an element of beta that it came from?

Patrick Massot (Mar 15 2020 at 21:59):

Do you want to use mathlib or are you doing TPIL exercises?

Patrick Massot (Mar 15 2020 at 22:07):

It seems to be a TPIL exercise. I have no idea what Jeremy had in mind. The closest to a term-mode proof I could do is

lemma surjective_comp {g : β → γ} {f : α → β}
  (hg : surjective g) (hf : surjective f) :
surjective (g ∘ f) :=
assume z,
let ⟨y, hy⟩ := hg z, ⟨x, hx⟩ := hf y in
⟨x, by rwa ← hx at hy⟩

Patrick Massot (Mar 15 2020 at 22:07):

You could replace let with match, adjusting the syntax as needed.

Patrick Massot (Mar 15 2020 at 22:08):

But I would rather import mathlib tactics and write:

begin
  intro z,
  rcases hg z with ⟨y, rfl⟩,
  rcases hf y with ⟨x, rfl⟩,
  use x,
end

Iocta (Mar 16 2020 at 01:22):

that's pretty slick

Iocta (Mar 16 2020 at 04:08):

What's the match version look like? I'm not getting the syntax right.

match hg, hf with
⟨y, hy⟩, ⟨x, hx⟩ := hg g, hf f
end
⟨x, by simp [hx, hy]⟩
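
One guess at what the match version might look like (a sketch: nested matches destructuring each existential in turn, with the same variables and open function as above):

lemma surjective_comp' {g : β → γ} {f : α → β}
  (hg : surjective g) (hf : surjective f) :
  surjective (g ∘ f) :=
assume z,
match hg z with ⟨y, hy⟩ :=
  match hf y with ⟨x, hx⟩ :=
    ⟨x, show g (f x) = z, by rw [hx, hy]⟩
  end
end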

Iocta (Mar 18 2020 at 06:23):

instance sum_inhabited {a b : Type u} [inhabited a] [inhabited b] : inhabited (sum a b) :=
⟨ default _ ⟩

works but how do I find out what's going in the _?

Iocta (Mar 18 2020 at 06:36):

or, (independent of how to find out), what is going in the _?

Bryan Gin-ge Chen (Mar 18 2020 at 06:39):

#print sum_inhabited helps here:

@[instance]
protected def sum_inhabited : Π {a b : Type u} [_inst_1 : inhabited a] [_inst_2 : inhabited b], inhabited (a ⊕ b) :=
λ {a b : Type u} [_inst_1 : inhabited a] [_inst_2 : inhabited b], {default := default (a ⊕ b) sum.inhabited_right}

(Sometimes you have to turn off pretty printing and notation with #print with set_option pp.all true to get the info you want, but this seems like enough.)

Bryan Gin-ge Chen (Mar 18 2020 at 06:41):

Compare the output with set_option pp.notation false:

@[instance]
protected def sum_inhabited : Π {a b : Type u} [_inst_1 : inhabited a] [_inst_2 : inhabited b], inhabited (sum a b) :=
λ {a b : Type u} [_inst_1 : inhabited a] [_inst_2 : inhabited b], {default := default (sum a b) sum.inhabited_right}

Iocta (Mar 18 2020 at 06:44):

I see

Mario Carneiro (Mar 18 2020 at 06:51):

By the way, the only way a proof of an inhabited instance like ⟨ default _ ⟩ can work is if there is already an inhabited instance for that type

Mario Carneiro (Mar 18 2020 at 06:51):

in other words, the instance is useless because it's already there, and by apply_instance should also work

Iocta (Mar 18 2020 at 06:52):

mhmm

Iocta (Mar 19 2020 at 22:00):

can I import a module import foo.bar and access foo.bar.baz without opening bar?

Iocta (Mar 19 2020 at 22:05):

import logic.basic seems to open it automatically?

Kevin Buzzard (Mar 19 2020 at 22:06):

The definitions in logic.basic are probably not in any namespace called logic or logic.basic

Kevin Buzzard (Mar 19 2020 at 22:07):

Definition X in file Y is just called X after you import Y

Iocta (Mar 19 2020 at 22:07):

#check not_or_distrib doesn't work unless I import logic.basic

Kevin Buzzard (Mar 19 2020 at 22:08):

that's because this function is defined in logic.basic

Kevin Buzzard (Mar 19 2020 at 22:09):

but the only namespace used in logic.basic is classical, so after you import logic.basic you have stuff like not_or_distrib (in the root namespace) and classical.not_forall (in the classical namespace)

Kevin Buzzard (Mar 19 2020 at 22:10):

there is no logic.basic namespace, even after importing it

Iocta (Mar 19 2020 at 22:12):

I see

Iocta (Mar 20 2020 at 03:54):

Doing #3 from https://leanprover.github.io/logic_and_proof/sets_in_lean.html

import data.set
import logic.basic

variables {I U : Type}
variables (A : I → set U) (B : I → set U) (C : set U)



theorem Inter.intro {x : U} (h : ∀ i, x ∈ A i) : x ∈ ⋂ i, A i :=
by simp; assumption

@[elab_simple]
theorem Inter.elim {x : U} (h : x ∈ ⋂ i, A i) (i : I) : x ∈ A i :=
by simp at h; apply h

theorem Union.intro {x : U} (i : I) (h : x ∈ A i) :
  x ∈ ⋃ i, A i :=
by {simp, existsi i, exact h}

theorem Union.elim {b : Prop} {x : U}
(h₁ : x ∈ ⋃ i, A i) (h₂ : ∀ (i : I), x ∈ A i → b) : b :=
by {simp at h₁, cases h₁ with i h, exact h₂ i h}



example : (⋂ i, A i) ∩ (⋂ i, B i) ⊆ (⋂ i, A i ∩ B i) :=
assume x h T h2,
begin
simp * at *,
apply exists.elim h2,
intro i,
intro teq,
apply and.elim h,
intros fa fb,
apply eq.subst teq,
apply and.intro,
apply fa i,
apply fb i,
end

I have a proof using the regular logic functions, but I'm not sure how to write the proof using Inter.elim etc.

example : (⋂ i, A i) ∩ (⋂ i, B i) ⊆ (⋂ i, A i ∩ B i) :=
begin
intros x h S h2,

end

Iocta (Mar 20 2020 at 03:55):

I U : Type,
A B : I → set U,
x : U,
h : x ∈ (⋂ (i : I), A i) ∩ ⋂ (i : I), B i,
S : set U,
h2 : S ∈ set.range (λ (i : I), A i ∩ B i)
⊢ x ∈ S

Kenny Lau (Mar 20 2020 at 04:15):

thanks for creating a copy-paste-able example.

Kenny Lau (Mar 20 2020 at 04:17):

you might want to start from here:

example : (⋂ i, A i) ∩ (⋂ i, B i) ⊆ (⋂ i, A i ∩ B i) :=
begin
  intros x h,
  have h1 : x ∈ (⋂ (i : I), A i) := h.1,
  have h2 : x ∈ (⋂ (i : I), B i) := h.2,
end
state:
I U : Type,
A B : I → set U,
x : U,
h : x ∈ (⋂ (i : I), A i) ∩ ⋂ (i : I), B i,
h1 : x ∈ ⋂ (i : I), A i,
h2 : x ∈ ⋂ (i : I), B i
⊢ x ∈ ⋂ (i : I), A i ∩ B i

Kevin Buzzard (Mar 20 2020 at 07:17):

Note that you can also do this directly:

example : (⋂ i, A i) ∩ (⋂ i, B i) ⊆ (⋂ i, A i ∩ B i) :=
begin
rintros x ⟨h1, h2⟩,

end

Iocta (Mar 22 2020 at 03:35):

example : (⋂ i, A i) ∩ (⋂ i, B i) ⊆ (⋂ i, A i ∩ B i) :=
begin
intros j h ,
cases h,
apply Inter.intro _ _,
simp only [set.mem_inter_eq],
intro i,
split,
apply Inter.elim,
apply h_left,
apply Inter.elim,
apply h_right,
end

works but I'm guessing there's a better solution, this isn't easy to follow.

Iocta (Mar 22 2020 at 07:44):

section
parameters {A : Type} {R : A → A → Prop}
parameter (irreflR : irreflexive R)
parameter (transR : transitive R)

local infix < := R

def R' (a b : A) : Prop := R a b ∨ a = b
local infix ≤ := R'




theorem reflR' (a : A) : a ≤ a :=
or.inr rfl

theorem transR' {a b c : A} (h1 : a ≤ b) (h2 : b ≤ c):
  a ≤ c :=
or.elim h1
(assume rab, or.elim h2
  (assume rbc,  (or.inl (transR rab rbc)))
  (assume eqbc, (or.inl (eq.subst eqbc rab) )) )
(assume eqab, or.elim h2
  (assume rbc, (or.inl (eq.substr eqab rbc)))
  (assume eqbc, (or.inr (eq.substr eqab eqbc))))


theorem transR'' {a b c : A} (h1 : a ≤ b) (h2 : b ≤ c):
  a ≤ c :=
begin
intros,
cases h1,
cases h2,
left,
apply transR
end
 lap14.4.…    35   1 error           unknown identifier 'transR'

I did a term proof, now trying the tactic proof. In the last line, how come it doesn't know about transR?

Shing Tak Lam (Mar 22 2020 at 08:29):

@Iocta

Add include transR below the parameter (transR : transitive R)

So the beginning looks like

section
parameters {A : Type} {R : A → A → Prop}
parameter (irreflR : irreflexive R)
parameter (transR : transitive R)
include transR

(I found it in this GitHub issue here, and the page number referred to doesn't seem to be correct... Maybe there was a prior edition to TPIL?)

Kevin Buzzard (Mar 22 2020 at 09:24):

Don't use parameters, use variables

Iocta (Mar 22 2020 at 22:12):

that was from https://leanprover.github.io/logic_and_proof/relations_in_lean.html

Iocta (Apr 01 2020 at 22:01):

I want to say "a < b and b < a so a < a but that's not allowed so by contradiction a = b" but I'm not sure how to write that in Lean at the sorry.

section
parameters {A : Type} {R : A → A → Prop}
variable (irreflR : irreflexive R)
variable (transR : transitive R)

local infix < := R

def R' (a b : A) : Prop := R a b ∨ a = b
local infix ≤ := R'


theorem reflR' (a : A) : a ≤ a :=
or.inr rfl


include transR

theorem transR'' {a b c : A} (h1 : a ≤ b) (h2 : b ≤ c):
  a ≤ c :=
begin
intros,
cases h1,
cases h2,
left,
apply transR h1 h2,
left,
apply eq.subst h2,
exact h1,
apply eq.substr h1,
exact h2,
end



theorem antisymmR' {a b : A} (h1 : a ≤ b) (h2 : b ≤ a) :
  a = b :=
begin
cases h1,
cases h2,
sorry
end


end

Iocta (Apr 01 2020 at 22:09):

Specifically, why doesn't this work

include irreflR
theorem antisymmR' {a b : A} (h1 : a ≤ b) (h2 : b ≤ a) :
  a = b :=
begin
cases h1,
cases h2,
apply irreflR,
end

Yury G. Kudryashov (Apr 01 2020 at 22:21):

Try to apply transitivity first

Iocta (Apr 01 2020 at 22:23):

theorem antisymmR' {a b : A} (h1 : a ≤ b) (h2 : b ≤ a) :
  a = b :=
begin
cases h1,
cases h2,
apply transR h1 h2,
end

"invalid apply tactic"

Kevin Buzzard (Apr 01 2020 at 22:41):

After cases h2 you have three goals. The tactic state for your first goal is

case or.inl
A : Type,
R : A → A → Prop,
transR : transitive R,
a b : A,
h1 : R a b,
h2 : R b a
⊢ a = b

Kevin Buzzard (Apr 01 2020 at 22:41):

apply irreflR doesn't work because irreflR says R x x -> false and your goal is not false.

Kevin Buzzard (Apr 01 2020 at 22:42):

apply transR h1 h2 does not work because transR h1 h2 has type R a a which is not a function (so you can't apply it) and is not equal to a = b anyway.

Kevin Buzzard (Apr 01 2020 at 22:43):

The way to make progress with this first goal is exfalso. Although I am slightly concerned here because I don't see irreflR in your local context. Did you mean to include it?

Kevin Buzzard (Apr 01 2020 at 22:46):

You're playing a bit fast and loose with multiple goals as well. Sometimes this can come back to bite you -- e.g. some tactics randomly apply to all goals or whatever, and things change when you don't mean them to. It's maybe better to write something like this:

theorem antisymmR' {a b : A} (h1 : a ≤ b) (h2 : b ≤ a) :
  a = b :=
begin
  cases h1,
  { cases h2,
    { exfalso,
      sorry},
    { sorry}
  },
  { sorry}
end

Iocta (Apr 02 2020 at 00:09):

Ah exfalso is what I needed

Iocta (Apr 02 2020 at 00:09):

Is there a shorter way to write apply eq.subst foo?

Alex J. Best (Apr 02 2020 at 00:11):

rw foo?

Iocta (Apr 02 2020 at 18:40):

What do I do here?

section
parameters {A : Type} {R : A → A → Prop}
parameter (reflR : reflexive R)
parameter (transR : transitive R)

def S (a b : A) : Prop := R a b ∧ R b a

include reflR
include transR

open classical

example : transitive S :=
begin
intros a b c h1 h2,

end

end

Kevin Buzzard (Apr 02 2020 at 19:08):

Do you understand the goal you are trying to prove? unfold S if you don't?

Kevin Buzzard (Apr 02 2020 at 19:08):

hmm rw [S] seems to work better

Kevin Buzzard (Apr 02 2020 at 19:09):

rw [S] at *, maybe even better. But you probably don't need to do this as far as Lean is concerned -- if you can finish the proof from here then try deleting this line and see if the proof still works.

Iocta (Apr 02 2020 at 19:44):

Ah ok

example : transitive S :=
begin
intros a b c h1 h2,
cases h1 with rab rba,
cases h2 with rbc rcb,
apply and.intro,
apply transR rab rbc,
apply transR rcb rba,
end

Iocta (Apr 02 2020 at 19:44):

am I supposed to put { } somewhere in there for the reasons you mentioned yesterday?

Reid Barton (Apr 02 2020 at 19:45):

Looks like around each of the last two lines.

Reid Barton (Apr 02 2020 at 19:46):

After apply and.intro you have two goals, so I usually add

  { },
  { }

to my tactic block immediately

Reid Barton (Apr 02 2020 at 19:47):

(Actually, I add

  {
},
  {
}

then fix up the } when I finish the subgoals.)

Iocta (Apr 02 2020 at 19:48):

alright

Iocta (Apr 02 2020 at 19:52):

there's no autoformatter for lean is there?

Daniel Keys (Apr 03 2020 at 01:26):

Actually VS Code does do some general formatting. For example, if you type { it inserts {}; if you then put your cursor in between the braces and press Enter, the closing brace is indented to the same level as its opening counterpart. With a little practice you can adjust your style to it (or the other way around).

Johan Commelin (Apr 03 2020 at 05:06):

@Iocta If you mean, take a file with "bad" formatting style, and automatically turn it into something that stylistically will make mathlib maintainers happy.... no, I'm not aware of such a tool. And I would be very happy to know about it if you find one!

Iocta (Apr 03 2020 at 19:57):

What's this '' notation mean?

def surj_on (f : X → Y) (A : set X) (B : set Y) := B ⊆ f '' A

Ryan Lahfa (Apr 03 2020 at 19:58):

Iocta said:

What's this '' notation mean?

def surj_on (f : X → Y) (A : set X) (B : set Y) := B ⊆ f '' A

It means take the image of f over A

Ryan Lahfa (Apr 03 2020 at 19:58):

f '' A := f(A) if you prefer

Ryan Lahfa (Apr 03 2020 at 19:58):

You have the same for preimage using a ''⁻¹ if I'm not wrong.

Iocta (Apr 03 2020 at 20:01):

Oh I see

Kevin Buzzard (Apr 03 2020 at 20:06):

You can always put #print notation '' etc to see definition of notation in lean. You can also hover over it, and even jump to the definition of the underlying term

Iocta (Apr 03 2020 at 20:08):

nice! the online books don't have that hover-info feature but copying into my editor isn't hard

Iocta (Apr 04 2020 at 07:46):

This feels too manual. Is there a simp_harder or something?

open nat

example : ∀ m n k : nat, m * (n + k) = m * n + m * k :=
assume m n k,
nat.rec_on m
(show 0 * (n + k) = 0 * n + 0 * k, from by rw [zero_mul, zero_mul, zero_mul])
(assume m,
  (assume ih:  m * (n + k) = m * n + m * k,
  (show (nat.succ m) * (n + k) = (nat.succ m) * n + (nat.succ m) * k,
  from calc
  (nat.succ m) * (n + k) = m * (n + k) + (n + k)   : by rw [nat.succ_mul]
  ... = (m * n) + (m * k) + (n + k) : by rw [ih]
  ... = (m * n) + ((m * k) + n) + k : by simp
  ... = (m * n) + (n + (m * k)) + k : by simp [add_comm]
  ... = (nat.succ m) * n + (nat.succ m) * k  : by simp [succ_mul] )))

Johan Commelin (Apr 04 2020 at 07:48):

Replace all the nat.succ bla with bla + 1

Johan Commelin (Apr 04 2020 at 07:48):

After that by ring should do it.

Johan Commelin (Apr 04 2020 at 07:49):

But I don't know if you have access to ring...

Johan Commelin (Apr 04 2020 at 07:50):

Also... ring will just use mul_add which is the statement that you are trying to prove.

Johan Commelin (Apr 04 2020 at 07:50):

So it would be cheating

Kenny Lau (Apr 04 2020 at 08:59):

induct on k

Kenny Lau (Apr 04 2020 at 09:01):

open nat

example : ∀ m n k : nat, m * (n + k) = m * n + m * k :=
λ m n k, nat.rec_on k rfl $ λ k ih, show m * (n + k) + m = m * n + (m * k + m), by rw [ih, add_assoc]

Kenny Lau (Apr 04 2020 at 09:02):

here is the official proof in core:

private meta def sort_add :=
`[simp [nat.add_assoc, nat.add_comm, nat.add_left_comm]]

protected lemma left_distrib : ∀ (n m k : ℕ), n * (m + k) = n * m + n * k
| 0        m k := by simp [nat.zero_mul]
| (succ n) m k :=
  begin simp [succ_mul, left_distrib n m k], sort_add end

Kevin Buzzard (Apr 04 2020 at 10:28):

This is a level in the natural number game and it might be interesting to compare with the tactic mode proof I wrote

Kevin Buzzard (Apr 04 2020 at 10:29):

@Scott Morrison said to me recently that he thought I should exploit more automation in the natural number game but when I wrote the lean code I didn't understand anything about automation

Iocta (Apr 04 2020 at 18:21):

Since this works,

open nat
example : ∀ m n k : nat, m * (n + k) = m * n + m * k :=
assume m n k,
nat.rec_on m
(show 0 * (n + k) = 0 * n + 0 * k, from by rw [zero_mul, zero_mul, zero_mul])
(assume m,
  (assume ih:  m * (n + k) = m * n + m * k,
  (show (nat.succ m) * (n + k) = (nat.succ m) * n + (nat.succ m) * k,
  from calc
  (nat.succ m) * (n + k) = (m * n) + ((m * k) + n) + k : by simp [*, succ_mul]
  ... = (m * n) + (n + (m * k)) + k : by simp [*, succ_mul, ih, add_comm]
  ... = (nat.succ m) * n + (nat.succ m) * k  : by simp [succ_mul] )))

how come I can't delete between the first two = in the calc?

open nat
example : ∀ m n k : nat, m * (n + k) = m * n + m * k :=
assume m n k,
nat.rec_on m
(show 0 * (n + k) = 0 * n + 0 * k, from by rw [zero_mul, zero_mul, zero_mul])
(assume m,
  (assume ih:  m * (n + k) = m * n + m * k,
  (show (nat.succ m) * (n + k) = (nat.succ m) * n + (nat.succ m) * k,
  from calc
  (nat.succ m) * (n + k) = (m * n) + (n + (m * k)) + k : by simp [*, succ_mul, ih, add_comm]
  ... = (nat.succ m) * n + (nat.succ m) * k  : by simp [succ_mul] )))

Mario Carneiro (Apr 04 2020 at 18:24):

The answer to "why didn't simp work" is almost always visible from the final goal state in the error message, which is where simp got stuck

 n + (k + (m * n + m * k)) = k + (m * n + (n + m * k))

Mario Carneiro (Apr 04 2020 at 18:24):

It needs to reassociate some things to finish, so add add_left_comm

Iocta (Apr 04 2020 at 18:26):

How come it didn't find the path I used?

Mario Carneiro (Apr 04 2020 at 18:28):

because it found another path first

Mario Carneiro (Apr 04 2020 at 18:28):

simp doesn't explore all paths, it just rewrites with random things from its set

Iocta (Apr 04 2020 at 18:28):

once it makes any progress on one path, it won't try others?

Mario Carneiro (Apr 04 2020 at 18:29):

right

Iocta (Apr 04 2020 at 18:29):

so is there a simp_harder?

Mario Carneiro (Apr 04 2020 at 18:29):

so it is important that you give simp a confluent system of rewrites

Mario Carneiro (Apr 04 2020 at 18:30):

the point is that it's not supposed to be a tree search, it's more like a net where all paths lead to the desired result

Mario Carneiro (Apr 04 2020 at 18:31):

You can use simp_rw to control the order of rewrites

Mario Carneiro (Apr 04 2020 at 18:31):

or simp [foo, bar]; simp [baz]

Mario Carneiro (Apr 04 2020 at 18:32):

but if you put all the rewrites in one bag don't expect to get any particular order
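
A small illustration of the difference (a sketch, not from the thread; it assumes mathlib's simp_rw via import tactic, and h₁, h₂ are made-up hypotheses where the second rewrite can only fire after the first):

import tactic

example (f : ℕ → ℕ) (a b : ℕ) (h₁ : a = b) (h₂ : f b = b) : f a = b :=
by simp_rw [h₁, h₂]        -- rewrites strictly in the given order: h₁, then h₂

example (f : ℕ → ℕ) (a b : ℕ) (h₁ : a = b) (h₂ : f b = b) : f a = b :=
by { simp only [h₁], simp only [h₂] }   -- the same effect by chaining two simp calls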

Iocta (Apr 04 2020 at 18:35):

is tactic.interactive.solve_by_elim the tree-search one?

Mario Carneiro (Apr 04 2020 at 18:35):

That does implicational theorems, not rewrites

Mario Carneiro (Apr 04 2020 at 18:36):

cc is maybe closer to this

Mario Carneiro (Apr 04 2020 at 18:36):

But I don't think there is any mechanism that explores all paths in the simp set. The number of paths grows very rapidly

Kevin Buzzard (Apr 04 2020 at 18:38):

Here's the nng proof fwiw:

lemma mul_add (t a b : mynat) : t * (a + b) = t * a + t * b :=
begin [nat_num_game]
  induction b with d hd,
  { rewrite [add_zero, mul_zero, add_zero],
  },
  {
    rw add_succ,
    rw mul_succ,
    rw hd,
    rw mul_succ,
    rw add_assoc, -- ;-)
    refl,
  }
end

You don't need the last refl if you remove [nat_num_game]

Iocta (Apr 04 2020 at 18:39):

(nat.succ m) * (n + k) = (m * n) + (n + (m * k)) + k : by cc

doesn't work

Mario Carneiro (Apr 04 2020 at 18:39):

Given h : f 0 = f 1, there are 2^n points in the space of all rewrites of [f 0, ..., f 0]

Iocta (Apr 04 2020 at 18:41):

sure. and I guess with practice i'll remember what all the relevant theorems are called so I won't have to think about it

Mario Carneiro (Apr 04 2020 at 18:41):

Or just be more controlled in your rewrites so it does what you want

Mario Carneiro (Apr 04 2020 at 18:42):

I don't actually use cc, there is some trick to making it useful

Mario Carneiro (Apr 04 2020 at 18:42):

you probably have to put all the relevant lemmas in the context

Iocta (Apr 04 2020 at 18:44):

with include or something?

Mario Carneiro (Apr 04 2020 at 18:44):

with have

Iocta (Apr 04 2020 at 19:01):

as in have succ_mul, from succ_mul,?

Kevin Buzzard (Apr 04 2020 at 19:02):

it's have H, from <proof of H>

Mario Carneiro (Apr 04 2020 at 19:02):

or just have := succ_mul

Reid Barton (Apr 04 2020 at 19:06):

I don't think cc will instantiate lemmas, does it?

Iocta (Apr 04 2020 at 19:11):

open nat
example : ∀ m n k : nat, m * (n + k) = m * n + m * k :=
assume m n k,
nat.rec_on m
(show 0 * (n + k) = 0 * n + 0 * k, from by rw [zero_mul, zero_mul, zero_mul])
(assume m,
  (assume ih:  m * (n + k) = m * n + m * k,
  (show (nat.succ m) * (n + k) = (nat.succ m) * n + (nat.succ m) * k,
  from
  begin
  have := nat.succ_mul,
  have := nat.add_left_comm,
  cc,
  end)))

doesn't work

Kevin Buzzard (Apr 04 2020 at 19:12):

ie

m n k m : ℕ,
ih : m * (n + k) = m * n + m * k,
this : ∀ (n m : ℕ), succ n * m = n * m + m,
this : ∀ (n m k : ℕ), n + (m + k) = m + (n + k)
⊢ succ m * (n + k) = succ m * n + succ m * k

cc failing

Iocta (Apr 04 2020 at 19:13):

^

Kevin Buzzard (Apr 04 2020 at 19:13):

(I don't know anything about cc, I was just posting so people could diagnose)

Iocta (Apr 04 2020 at 19:16):

for comparison,

open nat
example : ∀ m n k : nat, m * (n + k) = m * n + m * k :=
assume m n k,
nat.rec_on m
(show 0 * (n + k) = 0 * n + 0 * k, from by rw [zero_mul, zero_mul, zero_mul])
(assume m,
  (assume ih:  m * (n + k) = m * n + m * k,
  (show (nat.succ m) * (n + k) = (nat.succ m) * n + (nat.succ m) * k,
  from
  begin
  simp [succ_mul, ih, add_left_comm],
  end)))

does work

Iocta (Apr 04 2020 at 19:20):

need another theorem in order to use simp only

example : ∀ m n k : nat, m * (n + k) = m * n + m * k :=
assume m n k,
nat.rec_on m
(show 0 * (n + k) = 0 * n + 0 * k, from by rw [zero_mul, zero_mul, zero_mul])
(assume m,
  (assume ih:  m * (n + k) = m * n + m * k,
  (show (nat.succ m) * (n + k) = (nat.succ m) * n + (nat.succ m) * k,
  from
  begin
  simp only [nat.succ_mul, ih, nat.add_left_comm, nat.add_assoc],
  end)))

Iocta (Apr 04 2020 at 19:22):

but the cc version of that doesn't work either

open nat
example : ∀ m n k : nat, m * (n + k) = m * n + m * k :=
assume m n k,
nat.rec_on m
(show 0 * (n + k) = 0 * n + 0 * k, from by rw [zero_mul, zero_mul, zero_mul])
(assume m,
  (assume ih:  m * (n + k) = m * n + m * k,
  (show (nat.succ m) * (n + k) = (nat.succ m) * n + (nat.succ m) * k,
  from
  begin
  have := nat.succ_mul,
  have := nat.add_left_comm,
  have := nat.add_assoc,
  cc,
  end)))
m n k m : ℕ,
ih : m * (n + k) = m * n + m * k,
this : ∀ (n m : ℕ), succ n * m = n * m + m,
this : ∀ (n m k : ℕ), n + (m + k) = m + (n + k),
this : ∀ (n m k : ℕ), n + m + k = n + (m + k)
⊢ succ m * (n + k) = succ m * n + succ m * k

Mario Carneiro (Apr 04 2020 at 19:23):

reid says you need to instantiate the lemmas, i.e. have := nat.succ_mul m (n+k) and so on

Iocta (Apr 04 2020 at 19:34):

open nat
example : ∀ m n k : nat, m * (n + k) = m * n + m * k :=
assume m n k,
nat.rec_on m
(show 0 * (n + k) = 0 * n + 0 * k, from by rw [zero_mul, zero_mul, zero_mul])
(assume m,
  (assume ih:  m * (n + k) = m * n + m * k,
  (show (nat.succ m) * (n + k) = (nat.succ m) * n + (nat.succ m) * k,
  from
  begin
  rw [succ_mul, ih, add_left_comm, add_assoc, succ_mul, succ_mul],
  cc,
  end)))

works; deleting cc makes it fail with

m n k m : ℕ,
ih : m * (n + k) = m * n + m * k
⊢ n + (m * n + (m * k + k)) = m * n + n + (m * k + k)

Iocta (Apr 04 2020 at 19:39):

if I'm interpreting that right it did instantiate the lemmas

Reid Barton (Apr 04 2020 at 19:41):

I have no idea what to make of this

Reid Barton (Apr 04 2020 at 19:42):

from the proof term, it looks like maybe cc has built-in knowledge that + is commutative and associative

Iocta (Apr 04 2020 at 19:47):

using rw [add_left_comm, add_assoc] instead of cc works there, so it does seem that way
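
A quick way to check that guess in isolation (a sketch; it relies on cc picking up the is_commutative / is_associative instances for addition on ℕ):

example (a b c : ℕ) : a + (b + c) = b + (a + c) := by cc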

Iocta (Apr 04 2020 at 19:48):

I wondered if the lemmas in the rw list above were being pulled into cc's scope so I tried deleting the last succ_mul

open nat
example : ∀ m n k : nat, m * (n + k) = m * n + m * k :=
assume m n k,
nat.rec_on m
(show 0 * (n + k) = 0 * n + 0 * k, from by rw [zero_mul, zero_mul, zero_mul])
(assume m,
  (assume ih:  m * (n + k) = m * n + m * k,
  (show (nat.succ m) * (n + k) = (nat.succ m) * n + (nat.succ m) * k,
  from
  begin
  rw [succ_mul, ih, add_left_comm, add_assoc, succ_mul],
  cc,
  end)))

but no

m n k m : ℕ,
ih : m * (n + k) = m * n + m * k
⊢ n + (m * n + (m * k + k)) = m * n + n + succ m * k

Iocta (Apr 04 2020 at 21:01):

variable m : nat
#check m * m

when I put the cursor over * why does it tell me about has_mul.mul instead of nat.mul?

Kevin Buzzard (Apr 04 2020 at 21:02):

because * is notation for has_mul.mul

Iocta (Apr 04 2020 at 21:03):

I dunno what answer I was looking for :-P

Kevin Buzzard (Apr 04 2020 at 21:03):

:-)

Iocta (Apr 04 2020 at 21:11):

looks like Logic and Proof doesn't have "... in Lean" versions for the last few chapters. is that because nobody's gotten around to writing them or because those chapters are too hard to port to Lean or some other reason?

Kevin Buzzard (Apr 04 2020 at 21:12):

My guess is the former

Kevin Buzzard (Apr 04 2020 at 21:12):

Actually, they do look hard

Kevin Buzzard (Apr 04 2020 at 21:13):

Chapter 19 alone would be quite an effort for a beginner.

Kevin Buzzard (Apr 04 2020 at 21:14):

They could certainly be done in Lean but it would be far more effort than the earlier chapters.

Iocta (Apr 04 2020 at 22:19):

import data.real.basic seems to cause compilation and I'm running out of ram. That's not supposed to happen, right?

Bryan Gin-ge Chen (Apr 04 2020 at 22:20):

Are you using leanproject to download mathlib olean files? What version of Lean are you using?

Iocta (Apr 04 2020 at 22:25):

I was using 3.7.0 with leanproject. I'll try upgrading to 3.7.2.

Kevin Buzzard (Apr 04 2020 at 22:26):

Try restarting Lean. There is no 3.7.0; don't upgrade to 3.7.2.

Kevin Buzzard (Apr 04 2020 at 22:26):

Sorry, I take that back. Just use leanproject up

Iocta (Apr 04 2020 at 22:40):

deleted everything and rebuilt, working now

Bryan Gin-ge Chen (Apr 04 2020 at 22:42):

Lean 3.7.0c and 3.7.1c have some issues that prevent olean files from being loaded properly in Windows. They should all be fixed in 3.7.2c.

Iocta (Apr 04 2020 at 22:45):

er not quite working


Iocta (Apr 04 2020 at 22:45):

_target/deps/mathlib/src/data/real/basic.lean:240:13: error: unexpected token (lean-checker)

Kevin Buzzard (Apr 04 2020 at 22:46):

Restart Lean? Did you use leanproject up?

Iocta (Apr 04 2020 at 22:46):

i switched to 3.7.2 and ran leanproject up and restarted lean

Kevin Buzzard (Apr 04 2020 at 22:46):

leanproject up would have switched for you

Kevin Buzzard (Apr 04 2020 at 22:47):

Exit VS Code and restart. What errors do you get?

Iocta (Apr 04 2020 at 22:47):

am i not supposed to have lean installed outside leanproject?

Kevin Buzzard (Apr 04 2020 at 22:47):

no

Kevin Buzzard (Apr 04 2020 at 22:48):

You are supposed to let elan do the work of deciding which Lean binary you will be using

Kevin Buzzard (Apr 04 2020 at 22:48):

I don't know why people insist on using non-default set-ups. I don't have any source code for any Lean installed on my computer and I never compile it.

Iocta (Apr 04 2020 at 22:49):

ok but i mean leanproject doesn't provide the lean, right?

Kevin Buzzard (Apr 04 2020 at 22:49):

elan provides the lean binary

Kevin Buzzard (Apr 04 2020 at 22:49):

elan and leanproject are designed to work together

Kevin Buzzard (Apr 04 2020 at 22:50):

anyone who starts telling VS Code where their Lean installation is, or changing their LEAN_PATH or anything like that is asking for trouble

Kevin Buzzard (Apr 04 2020 at 22:50):

https://github.com/leanprover-community/mathlib/blob/master/docs/install/debian_details.md

Kevin Buzzard (Apr 04 2020 at 22:51):

No download of Lean source code required

Iocta (Apr 04 2020 at 22:54):

using elan with leanproject works for me

Kevin Buzzard (Apr 04 2020 at 23:04):

So the error is gone?

Kevin Buzzard (Apr 04 2020 at 23:05):

When mathlib upgrades to 3.8.0 or whatever, leanproject up will edit your toml and then when you run your project elan will download it automatically

Iocta (Apr 04 2020 at 23:41):

yeah the error is gone

Iocta (Apr 05 2020 at 01:13):

I see my editor is running ~/.elan/toolchains/leanprover-community-lean-3.7.2/bin/lean --server -M1024 -T100000 /path/to/project/leanpkg.path. I don't see that last argument documented in lean --help. Is it doing what it looks like, namely providing the path to a file that lists the directories where the project source and deps are?

Iocta (Apr 05 2020 at 01:27):

looks like yes
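
For reference, the leanpkg.path file that leanpkg/leanproject generates typically contains a builtin_path line followed by one path line per source directory, something like the following (paths are illustrative):

builtin_path
path ./src
path ./_target/deps/mathlib/src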

Iocta (Apr 05 2020 at 03:02):

how do I do this?

inductive X : Type
| one : X
| two : X

namespace X

def R : X → X → Prop
| one one := true
| one two := true
| two one := true
| two two := false


example : not (reflexive R) :=
begin
intro reflR,
sorry
end

Shing Tak Lam (Apr 05 2020 at 03:09):

You can use reflR to get a term of type R two two, which is false.

example : not (reflexive R) :=
begin
intro reflR,
exact (reflR two)
end

Also see below, if all you have is intro and exact, you don't need tactic mode.

Uranus Testing (Apr 05 2020 at 03:12):

Or even shorter

example : not (reflexive R) :=
λ r, r two

Iocta (Apr 05 2020 at 03:14):

Thanks

Stephanie Zhou (Apr 06 2020 at 18:33):

What's the command for taking the derivative?

Kenny Lau (Apr 06 2020 at 18:36):

Lean is not an imperative programming language, and you can find derivatives here

Kenny Lau (Apr 06 2020 at 18:36):

import analysis.calculus.deriv

Kevin Buzzard (Apr 06 2020 at 18:36):

Is derivative in the kind of state where users can use it? For example, can I prove that the derivative of sin(cos(x)) is what the undergraduates think it is?

Patrick Massot (Apr 06 2020 at 18:37):

Kenny, you should try to get into the habit of posting links to the docs rather than source code.

Patrick Massot (Apr 06 2020 at 18:39):

as in https://leanprover-community.github.io/mathlib_docs/analysis/calculus/deriv.html

Johan Commelin (Apr 06 2020 at 18:39):

@Kevin Buzzard Yes, by simp

Johan Commelin (Apr 06 2020 at 18:40):

https://leanprover-community.github.io/mathlib_docs/analysis/complex/exponential.html#real.deriv_cos

Johan Commelin (Apr 06 2020 at 18:41):

https://leanprover-community.github.io/mathlib_docs/analysis/calculus/deriv.html#deriv.comp

Kenny Lau (Apr 06 2020 at 18:41):

@Patrick Massot ah, I didn't know that they exist

Johan Commelin (Apr 06 2020 at 18:41):

Hmm, maybe by simp won't work... because of the composition

Kenny Lau (Apr 06 2020 at 18:41):

are they automatically generated?

Kenny Lau (Apr 06 2020 at 18:42):

wow this is very nice

Johan Commelin (Apr 06 2020 at 18:42):

Welcome back!

Ryan Lahfa (Apr 06 2020 at 18:42):

Quick question, what is the state with integration/integrals?

Johan Commelin (Apr 06 2020 at 18:42):

WIP

Johan Commelin (Apr 06 2020 at 18:43):

We have a PR on FTC

Johan Commelin (Apr 06 2020 at 18:43):

There is a very general definition, and a bunch of general API, but it's not complete

Ryan Lahfa (Apr 06 2020 at 18:43):

Is it going to be a general integral and we would have Lebesgue, Riemann, Kurzweil-Henstock?

Johan Commelin (Apr 06 2020 at 18:43):

Bochner

Ryan Lahfa (Apr 06 2020 at 18:44):

Every day is a learning experience here

Yury G. Kudryashov (Apr 06 2020 at 18:46):

More-or-less it's Lebesgue.

Ryan Lahfa (Apr 06 2020 at 18:47):

Yury G. Kudryashov said:

More-or-less it's Lebesgue.

Yes, I pulled Wikipedia and makes sense to adopt this one.

Ryan Lahfa (Apr 06 2020 at 18:47):

(but I think that a Riemann integral tutorial would be nice… :-°)

Yury G. Kudryashov (Apr 06 2020 at 18:47):

It's already formalized for general measure spaces in measure_theory/.

Yury G. Kudryashov (Apr 06 2020 at 18:50):

There is a PR that glues it to integrals over intervals but its author @Joe disappeared. I'm going to wait for an update for about a month. If there will be no updates, I'll probably take over this PR.

Iocta (Apr 10 2020 at 08:05):

is there a way to know which moves preserve provability?

Kevin Buzzard (Apr 10 2020 at 08:06):

That's surely undecidable

Iocta (Apr 10 2020 at 08:07):

we seem to have decided for some of them though

Kenny Lau (Apr 10 2020 at 08:07):

what is a move?

Iocta (Apr 10 2020 at 08:08):

adding anything to a proof

Mario Carneiro (Apr 10 2020 at 08:09):

I think that for the most part you can determine for each "move" whether it necessarily preserves provability or not

Mario Carneiro (Apr 10 2020 at 08:10):

Anything reversible preserves provability. intro, revert, have, split, rw, simp
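
A tiny sketch of the "reversible" point (not from the thread): intro and revert undo each other, so no provability is lost.

example (P Q : Prop) (h : P → Q) : P → Q :=
begin
  intro hp,    -- move P into the context
  revert hp,   -- and move it back: the goal is exactly what it was before
  exact h
end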

Mario Carneiro (Apr 10 2020 at 08:11):

apply is the main source of potentially lossy moves, but it depends on the theorem. clear is also not reversible

Mario Carneiro (Apr 10 2020 at 08:13):

cases preserves provability, but only if you case on a variable. If you case on an expression you might lose information about what term it was unless you use the form cases h : e

Mario Carneiro (Apr 10 2020 at 08:14):

induction is also not reversible, since if you do the induction wrong you can get unprovable goals

Iocta (Apr 10 2020 at 08:21):

I see

Iocta (Apr 10 2020 at 08:45):

are "target" and "goal" synonymous?

Mario Carneiro (Apr 10 2020 at 09:02):

Depends on who you ask. Lean usually uses the term to mean a target expression together with its list of hypotheses

Bhavik Mehta (Apr 21 2020 at 10:01):

Iocta said:

looks like Logic and Proof doesn't have "... in Lean" versions for the last few chapters. is that because nobody's gotten around to writing them or because those chapters are too hard to port to Lean or some other reason?

For whatever it's worth, I think almost all of the combinatorics section has proofs in mathlib, and I've done the ones which don't

Pit Sinning (Apr 21 2020 at 10:13):

Iocta said:

looks like Logic and Proof doesn't have "... in Lean" versions for the last few chapters. is that because nobody's gotten around to writing them or because those chapters are too hard to port to Lean or some other reason?

If you look at the book's Git repository, it certainly appears that those chapters were planned for, but the files were last edited 3 years ago so I wouldn't hold my breath.

Patrick Massot (Apr 21 2020 at 10:14):

Chapter 21 is clearly in mathlib

Patrick Massot (Apr 21 2020 at 10:14):

and 22 also

Jeremy Avigad (Apr 21 2020 at 14:26):

Rob, Floris, and I perpetually have back-burner plans to expand Logic and Proof, but other projects always seem to take precedence. I'll be using it again in the fall, and I am hoping to add more then.

Iocta (May 08 2020 at 20:37):

What's next after the books?

Patrick Massot (May 08 2020 at 20:39):

Did you do the new tutorials and the beginning of the new book?

Iocta (May 08 2020 at 20:46):

Link?

Patrick Massot (May 08 2020 at 20:49):

https://leanprover.zulipchat.com/#narrow/stream/113489-new-members/topic/Tutorials.20update/near/196123657 and https://leanprover.zulipchat.com/#narrow/stream/113488-general/topic/new.20Lean.20.2F.20mathlib.20tutorial

Iocta (May 08 2020 at 21:12):

Hadn't seen those, thanks

Iocta (May 08 2020 at 21:13):

those filenames haha :-)

Iocta (May 09 2020 at 22:21):

I'm curious, do you find it works to learn a subject by formalizing it, or do you have to understand it really well already before formalizing it?

Bryan Gin-ge Chen (May 09 2020 at 22:22):

Those aren't mutually exclusive.

Iocta (May 09 2020 at 22:23):

Do you find it works to learn a subject for the first time by formalizing it, or do you have to understand it really well already before formalizing it?

Kevin Buzzard (May 09 2020 at 22:42):

I have understood everything I've formalised really quite well, although I will be the first to admit that formalising perfectoid spaces taught me that the theory of complete topological rings was more subtle than I had realised.

Conversely @Athina told me when she was watching me teach mathematics undergraduates that she could see evidence that learning new mathematics at the same time as learning Lean was hard

Kevin Buzzard (May 09 2020 at 22:42):

I realise that this is not quite what you asked

Mario Carneiro (May 10 2020 at 02:23):

I think you should be somewhat familiar, at the level where you just had a midterm on the topic. You don't need to be an expert, but you should have the general feel for the terrain. Reading the wikipedia page is usually sufficient, if you are the type that can absorb what is written there

Carlo Cabrera (May 10 2020 at 21:56):

How do I tell Lean I've derived a contradiction? Here is a snippet from Lean's tactic state... Tactic State Snippet

Kenny Lau (May 10 2020 at 21:57):

example (R : Prop) (H : R ∧ ¬R) : false := by cc

Mario Carneiro (May 10 2020 at 21:57):

by contradiction is the correct answer

Mario Carneiro (May 10 2020 at 21:58):

or by exact notr r

Kenny Lau (May 10 2020 at 21:58):

oh I didn't notice that r and notr were there lol

Mario Carneiro (May 10 2020 at 21:59):

I suspect carlo put them together to make it more obvious for lean

Carlo Cabrera (May 10 2020 at 22:00):

Excellent, thanks, Kenny and Mario. I did try to put them together to see if that got me somewhere, but it seems it did not by itself.

Kenny Lau (May 10 2020 at 22:00):

also, #mwe

Mario Carneiro (May 10 2020 at 22:00):

but the right way to put them together is to apply the \not R proof to the R proof, because \not R is short for R -> false

Mario Carneiro (May 10 2020 at 22:00):

so notr r : false

Mario Carneiro (May 10 2020 at 22:01):

using apply you can apply notr and the goal will change to R
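
Putting those remarks together as a self-contained sketch (hypothesis names r and notr as in the screenshot):

example (R : Prop) (r : R) (notr : ¬R) : false := notr r    -- ¬R is R → false

example (R : Prop) (r : R) (notr : ¬R) : false :=
begin
  apply notr,   -- the goal changes from false to R
  exact r
end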

Carlo Cabrera (May 10 2020 at 22:02):

To be fair I noticed that "cc" actually proved what I needed to without additional work, but I was trying to get a feel for using tactics by doing it the long way

Kenny Lau (May 10 2020 at 22:03):

I notice that notr is basically h2 notq

Kenny Lau (May 10 2020 at 22:03):

so how about by exact h2 notq r

Mario Carneiro (May 10 2020 at 22:04):

kenny wants a #mwe so he can golf your theorem to oblivion

Carlo Cabrera (May 10 2020 at 22:05):

He probably already has, this is from Imperial's exercise sheets. I guess you can't make it shorter than "cc"

Kenny Lau (May 10 2020 at 22:11):

yeah no I'm not going through 1000 exercise sheets to find your theorem

Carlo Cabrera (May 10 2020 at 23:54):

I guess I'm not understanding how by contradiction works yet. Here's one case where I expect it to work, but doesn't:

example (P Q : Prop)
    (h₁ : P → Q)
    (h₂ : Q → ¬P) :
    ¬P :=
begin
    have pnotp : P  ¬P,
        intro h,
        by exact h₂ (h₁ h),
end

In a simpler example, I can show that (P → ¬P) → ¬P, but for some reason the approach doesn't work in the example I gave above. (i.e. one where instead of two hypotheses, there is a single one that says (P → ¬P))

Dan Stanescu (May 11 2020 at 00:04):

Carlo Cabrera said:

I guess I'm not understanding how by contradiction works yet. Here's one case where I expect it to work, but doesn't:

You mean something like this?

example (P Q : Prop)
    (h₁ : P → Q)
    (h₂ : Q → ¬P) :
    ¬P :=
begin
    by_contradiction H,
    exact h₂ (h₁ H) H,
end

Jalex Stark (May 11 2020 at 00:04):

why are you using by if you're already in tactic mode?

Jalex Stark (May 11 2020 at 00:06):

I think @Carlo Cabrera is talking about the contradiction tactic, which looks for P and ¬ P in the hypotheses

Jalex Stark (May 11 2020 at 00:07):

and @Dan Stanescu is talking about the by_contradiction tactic which puts the negation of the goal into the hypotheses and makes false the new goal
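
Side by side, as a small sketch of that distinction (not from the thread; the second example uses mathlib's by_contra, the classical spelling of by_contradiction, so it assumes import tactic):

import tactic

-- `contradiction` looks for a hypothesis together with its negation
example (P : Prop) (hp : P) (hnp : ¬P) : false := by contradiction

-- `by_contra h` assumes the negation of the goal and leaves false to prove
example (P Q : Prop) (hpq : P → Q) (hp : P) : Q :=
begin
  by_contra h,   -- h : ¬Q, and the goal is now false
  exact h (hpq hp)
end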

Dan Stanescu (May 11 2020 at 00:10):

@Jalex Stark I was wondering. contradiction needs something very obvious, like both P and not P, but @Carlo Cabrera didn't have both in his context.

Dan Stanescu (May 11 2020 at 00:10):

Not sure he had any.

Jalex Stark (May 11 2020 at 00:12):

@Carlo Cabrera is this the proof you want?

example (P Q : Prop)
    (h₁ : P → Q)
    (h₂ : Q → ¬P) :
    ¬P :=
begin
intro hp,
have := (h₂ ∘ h₁) hp,
contradiction,
end

Iocta (May 11 2020 at 05:57):

theorem mul_zero (a : R) : a * 0 = 0 :=
begin
  have h : a * 0 + a * 0 = a * 0 + 0,
  from by rw [mul_add, add_zero, add_zero],
  rw add_left_cancel h
end

is from doing anything?

Kenny Lau (May 11 2020 at 06:36):

https://github.com/leanprover-community/lean/blob/master/library/init/meta/interactive.lean#L297

/--
A synonym for `exact` that allows writing `have/suffices/show ..., from ...` in tactic mode.
-/
meta def «from» := exact

Kenny Lau (May 11 2020 at 06:37):

it's to make it look more like the term mode version have h : _, from _

Kenny Lau (May 11 2020 at 06:38):

but in this case it is simply unnecessary
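
Concretely, these two spellings of the same step are interchangeable (a tiny sketch, not from the thread):

example (a b : ℕ) (h : a = b) : b = a :=
begin
  have h' : b = a, from h.symm,   -- `from` here is just `exact`
  exact h'
end

example (a b : ℕ) (h : a = b) : b = a :=
begin
  have h' : b = a, exact h.symm,
  exact h'
end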

Iocta (May 11 2020 at 07:22):

ok

Patrick Massot (May 11 2020 at 07:25):

Either from or by is unnecessary

Kenny Lau (May 11 2020 at 07:27):

that's right, because from is unnecessary

Patrick Massot (May 11 2020 at 07:31):

Removing from and keeping by also works

Carlo Cabrera (May 11 2020 at 08:07):

Jalex Stark said:

Carlo Cabrera is this the proof you want?

example (P Q : Prop)
    (h₁ : P → Q)
    (h₂ : Q → ¬P) :
    ¬P :=
begin
intro hp,
have := (h₂ ∘ h₁) hp,
contradiction,
end

@Jalex Stark Yes! That was what I had in mind. I don't know why I was using by, thanks for pointing that out. Would you say that this is the simplest approach to the example?

Thanks for your comments too, @Dan Stanescu. I learned something from you both here.

Johan Commelin (May 11 2020 at 08:11):

by { exfalso, exact (h₂ ∘ h₁) id }?

Johan Commelin (May 11 2020 at 08:12):

Meh, me confused

Patrick Massot (May 11 2020 at 08:13):

The simplest approach is

example (P Q : Prop)
    (h₁ : P → Q)
    (h₂ : Q → ¬P) :
    ¬P :=
λ hp, h₂ (h₁ hp) hp

or

example (P Q : Prop)
    (h₁ : P → Q)
    (h₂ : Q → ¬P) :
    ¬P :=
by tauto

depending on taste

Carlo Cabrera (May 11 2020 at 08:21):

Excellent, thank you.

Kenny Lau (May 11 2020 at 08:21):

example (P Q : Prop)
    (h₁ : P → Q)
    (h₂ : Q → ¬P) :
    ¬P :=
assume hp : P,
  have hq : Q, from h₁ hp,
  have hnp : ¬P, from h₂ hq,
  show false, from hnp hp

Kevin Buzzard (May 11 2020 at 08:21):

Aargh

Kenny Lau (May 11 2020 at 08:21):

example (P Q : Prop)
    (h₁ : P → Q)
    (h₂ : Q → ¬P) :
    ¬P :=
assume Aargh : P,
  have hq : Q, from h₁ Aargh,
  have hnp : ¬P, from h₂ hq,
  show false, from hnp Aargh

Kevin Buzzard (May 11 2020 at 08:21):

The horribleist approach to complement the two simple ones

Kevin Buzzard (May 11 2020 at 08:27):

I can't actually see what the question is because I'm on mobile and one of the hypotheses is h-box : P box Q. From the code it looks like the box should be an implies sign, but I can see an implies sign in h2 so I'm confused

Kenny Lau (May 11 2020 at 08:31):

example (P Q : Prop)
    (h₁ : P → Q)
    (h₂ : Q → ¬P) :
    ¬P :=
implies.trans (and_self P).mpr (and.rec (implies.trans h₁ h₂))

Iocta (May 11 2020 at 19:25):

theorem zero_mul (a : R) : 0 * a  = 0 :=
begin
rw ← add_left_neg a,
rw add_mul,
rw add_comm,
rw add_left_neg a,
rw add_comm (a * a) (-a * a),
have h: (-a * a) + (a * a) = -(a * a) + (a * a),
from sorry,
rw h,
rw add_left_neg (a * a),
end

How can I fill sorry, or is there a better way to solve this problem?

Jalex Stark (May 11 2020 at 19:27):

please give a #mwe
Is R a commutative ring?

Jalex Stark (May 11 2020 at 19:27):

are you avoiding use of the ring tactic?

Kevin Buzzard (May 11 2020 at 19:27):

Are you avoiding use the of the zero_mul lemma? ;-)

Jalex Stark (May 11 2020 at 19:29):

If you want to learn the proof of zero_mul for natural numbers, play the natural number game

Jalex Stark (May 11 2020 at 19:29):

If you want to learn it for general commutative rings then you should be thinking about what the axioms are and what order you want to develop the theory in

Kevin Buzzard (May 11 2020 at 19:29):

but currently your question is unanswerable because you didn't supply the typeclass of R in a mwe

Iocta (May 11 2020 at 19:30):

variables {R : Type*} [ring R]

theorem zero_mul (a : R) : 0 * a  = 0 :=
begin
rw ← add_left_neg a,
rw add_mul,
rw add_comm,
rw add_left_neg a,
rw add_comm (a * a) (-a * a),
have h: (-a * a) + (a * a) = -(a * a) + (a * a),
from sorry,
rw h,
rw add_left_neg (a * a),
end

Kevin Buzzard (May 11 2020 at 19:31):

Kevin Buzzard said:

I can't actually see what the question is because I'm on mobile and one of the hypotheses is h-box : P box Q. From the code it looks like the box should be an implies sign, but I can see an implies sign in h2 so I'm confused

Oh -- back to this -- on mobile the implication for h1 didn't render but the one for h2 did?

Kenny Lau (May 11 2020 at 19:31):

how about one topic for one question

Reid Barton (May 11 2020 at 19:32):

if you're talking about Kenny's most recent message, I can assure you they are the same character

Kevin Buzzard (May 11 2020 at 19:32):

@Iocta can you start a new topic instead of just posting on some other one?


Last updated: Dec 20 2023 at 11:08 UTC