# Zulip Chat Archive

## Stream: new members

### Topic: noob question(s)

#### Wojciech Nawrocki (Nov 20 2018 at 00:46):

What's the command to make Lean automatically derive `decidable_eq` for some custom inductive type?

#### Chris Hughes (Nov 20 2018 at 00:47):

@[derive decidable_eq]
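
A minimal sketch of the answer above, on a hypothetical toy type (the name `color` is made up for illustration):

```lean
@[derive decidable_eq]
inductive color
| red | green | blue

-- the derived instance makes equality of `color`s decidable,
-- so e.g. `dec_trivial` can discharge concrete (dis)equalities
example : color.red ≠ color.blue := dec_trivial
```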

#### Wojciech Nawrocki (Nov 20 2018 at 00:47):

ah I should've thought of that, thanks! :)

#### Wojciech Nawrocki (Nov 20 2018 at 03:18):

Hm, when I add

```lean
open classical
local attribute [instance] prop_decidable
```

to a file, definitions below it which used to pass now fail with:

```
equation compiler failed to generate bytecode for 'subst._main'
nested exception message:
code generation failed, VM does not have code for 'classical.choice'
```

Why might this be?

Is it basically because `classical` makes things uncomputable? If so, maybe Lean should detect that `classical` is not used in a particular case and still compile the definition?

#### Chris Hughes (Nov 20 2018 at 03:21):

Try `[instance, priority 0]`. Otherwise it uses classical decidability even when there's proper decidability.
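
Concretely, the suggestion amounts to something like this (a sketch, using the Lean 3 `classical.prop_decidable` instance from earlier in the thread):

```lean
open classical
-- priority 0 makes classical decidability a last resort, so genuine
-- computable `decidable` instances still win during instance search
local attribute [instance, priority 0] prop_decidable
```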

#### Wojciech Nawrocki (Nov 20 2018 at 03:23):

Ah indeed, thanks Chris!

#### Kevin Buzzard (Nov 20 2018 at 08:22):

This trips lots of people up! I wonder where people are learning this trick. Not putting priority 0 can trip you up later in quite a confusing way.

#### Patrick Massot (Nov 20 2018 at 08:23):

@Jeremy Avigad needs to fix the very bottom of https://leanprover.github.io/theorem_proving_in_lean/axioms_and_computation.html

#### Johan Commelin (Nov 20 2018 at 08:23):

I think Lean shows an error message when it can't find an instance for `decidable`, and that error message does not include setting the priority. It would be very helpful if it did.

#### Patrick Massot (Nov 20 2018 at 08:23):

and https://leanprover.github.io/theorem_proving_in_lean/type_classes.html

#### Jeremy Avigad (Nov 20 2018 at 15:03):

I'm on it -- I'll do it tomorrow.

#### Patrick Massot (Nov 20 2018 at 16:08):

Thanks!

#### Kenny Lau (Nov 20 2018 at 16:09):

how about *not* using `classical.dec`

#### Reid Barton (Nov 20 2018 at 16:10):

Let's just agree to not not use it

#### Kenny Lau (Nov 20 2018 at 16:11):

that doesn't mean we *use* it :P

#### Patrick Massot (Nov 20 2018 at 16:11):

You may have missed Reid's point

#### Reid Barton (Nov 20 2018 at 16:11):

It doesn't mean *you* use it

#### Wojciech Nawrocki (Nov 20 2018 at 22:59):

Is it possible to make Lean display the values of constant variables in the tactic state? E.g. if I have `lst : list nat` which is also empty, it would be nice to see that it's empty.

#### Kenny Lau (Nov 20 2018 at 23:01):

either this doesn't make sense, or `subst lst`

#### Wojciech Nawrocki (Nov 20 2018 at 23:04):

Hm, I should elaborate; maybe I'm misunderstanding how `induction` works. Say I have a hypothesis `h: InductiveFoo list.nil`, where `InductiveFoo: list nat -> Prop`. So then running `induction h` creates cases for all the constructors of `InductiveFoo`, which take the list as an input, say `lst`. But the `lst` is empty, so it'd be nice to see that in the state.

#### Kenny Lau (Nov 20 2018 at 23:06):

you can't because it's forgotten

#### Kenny Lau (Nov 20 2018 at 23:06):

you might want to `generalize_hyp`

#### Wojciech Nawrocki (Nov 20 2018 at 23:16):

Thanks!

#### Wojciech Nawrocki (Nov 21 2018 at 15:03):

Perhaps it would be useful to have a reference sheet for translating from Coq to Lean tactics?

#### Mario Carneiro (Nov 21 2018 at 15:05):

I recall such a thing being made at one point. Maybe it's in mathlib docs?

#### Rob Lewis (Nov 21 2018 at 15:05):

https://github.com/jldodds/coq-lean-cheatsheet

#### Rob Lewis (Nov 21 2018 at 15:05):

Note the date though, it'll need updating.

#### Wojciech Nawrocki (Nov 21 2018 at 15:06):

Oh, nice!

#### Wojciech Nawrocki (Nov 21 2018 at 16:06):

Is there a general tactic for showing `false` by "this term could not have been constructed"? E.g.

```lean
inductive Foo: Prop → Prop → Prop
| FooT: Foo true true
| FooF: Foo false false

-- This term could not have been constructed
lemma impossible (h: Foo true false) : false := by sorry
```

#### Mario Carneiro (Nov 21 2018 at 16:11):

`cases`

#### Mario Carneiro (Nov 21 2018 at 16:11):

also empty match

#### Mario Carneiro (Nov 21 2018 at 16:11):

```lean
lemma impossible : Foo true false → false.
```

#### Rob Lewis (Nov 21 2018 at 16:12):

You have to make `Foo : bool -> bool -> Prop` for that.
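
A sketch of that suggestion: with `bool` indices there is no constructor whose indices unify with `tt`/`ff`, so `cases` closes the goal directly (constructor names reused from the example above):

```lean
inductive Foo : bool → bool → Prop
| FooT : Foo tt tt
| FooF : Foo ff ff

-- no constructor of `Foo` can produce `Foo tt ff`, so `cases h` yields zero goals
lemma impossible (h : Foo tt ff) : false := by cases h
```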

#### Wojciech Nawrocki (Nov 21 2018 at 16:13):

Yeah, for `Prop -> Prop -> Prop` both of these fail, but that wasn't actually my problem, so thanks!

#### Patrick Massot (Nov 21 2018 at 16:16):

It seems that variations on this topic come up again and again. We should really find a way to document that.

#### Wojciech Nawrocki (Nov 21 2018 at 20:17):

That would be nice :) I was also surprised to see that more often than not `contradiction` fails when `cases h` works. Description: `The contradiction tactic attempts to find in the current local context an hypothesis that is equivalent to an empty inductive type (e.g. false)`. I thought a hypothesis that cannot be constructed is exactly that, but maybe I'm misunderstanding it?

#### Wojciech Nawrocki (Nov 22 2018 at 01:12):

Is it possible to reserve some notation for an inductive type and then use it while defining the type, like in Coq? I tried this, but the parser seems to fail:

```lean
reserve infix `∋`:50

inductive Typeof: list Tp → Tp → Prop
| Z: ∀ (Γ: list Tp) A, Typeof (Γ.append [A]) A
| S_: ∀ (Γ: list Tp) A B, Typeof Γ A → Typeof (Γ.append [B]) A

inductive Typeof: list Tp → Tp → Prop
| Z: ∀ (Γ: list Tp) A, (Γ.append [A]) ∋ A -- fails
| S_: ∀ (Γ: list Tp) A B, Γ ∋ A → (Γ.append [B]) ∋ A

infix ∋ := Typeof
```

#### Mario Carneiro (Nov 22 2018 at 03:00):

Yes! You can just put a notation line between the inductive header and the first constructor:

```lean
inductive Typeof {Tp}: list Tp → Tp → Prop
infix `∋`:50 := Typeof
| Z: ∀ (Γ: list Tp) A, (Γ.append [A]) ∋ A
| S_: ∀ (Γ: list Tp) A B, Γ ∋ A → (Γ.append [B]) ∋ A
```

#### Jeremy Avigad (Nov 23 2018 at 16:29):

@Patrick Massot I added a discussion of the `priority 0` trick to Section 10.4 of TPIL (search on "priority 0"):

https://leanprover.github.io/theorem_proving_in_lean/type_classes.html#decidable-propositions

I also added a back reference in Section 11:

https://leanprover.github.io/theorem_proving_in_lean/axioms_and_computation.html#the-law-of-the-excluded-middle

Finally, I fixed an old issue raised by @Joseph Corneli by changing all the examples in 6.4:

https://leanprover.github.io/theorem_proving_in_lean/interacting_with_lean.html#attributes

https://github.com/leanprover/theorem_proving_in_lean/issues/62

Teaching my class next semester will give me a chance to review and expand TPIL. I am planning to add one more chapter on some of the fine points of dependent type theory, e.g. explaining how to work with equality and dependent types (the dark side of type theory), and explaining how Lean manages recursion on arbitrary well-founded relations. I'll also try to write a less ambitious but up-to-date version of Programming in Lean. But I am counting on the mathlib crew to continue documenting mathlib and all the new tactics, and to provide useful guidance on using the library and proving theorems.

#### Kevin Buzzard (Nov 23 2018 at 16:33):

I have not been using Lean seriously since term started; there are three weeks to go before it finishes. After that I fully intend to go back to the perfectoid project. But when I don't understand something, my instinct is to write docs about it, because if I work something out and don't write down what I learnt then I realise a month later that I've forgotten it all again!

#### Patrick Massot (Nov 23 2018 at 20:02):

Thank you very much Jeremy! Your documentation work is really crucial.

#### Patrick Massot (Nov 23 2018 at 21:06):

I'm now reading random pieces of TPIL, and I have a couple more suggestions about chapter 10:

- at several places, emacs is mentioned but not VScode. I guess this goes back to before the VScode extension was available, but it could be misleading
- in 10.5. Managing Type Class Inference, I think it would be nice to add the standard tricks to see what's the name of an instance Lean is finding, and sometimes what's the actual definition, as in

```lean
#check (by apply_instance : has_add ℕ)
#reduce (infer_instance : has_add ℕ)
```

maybe find a better example for the second one, since the answer is not super easy to read (every nice example coming to my mind is in mathlib...)

#### Patrick Massot (Nov 23 2018 at 21:09):

Oh, it seems `apply_instance` is never mentioned in TPIL :sad:

#### Patrick Massot (Nov 23 2018 at 21:10):

Another thing that would be worth discussing, both because it can be puzzling and because the workaround is very helpful:

```lean
def n : Type := ℕ

example : has_add ℕ := by apply_instance            -- ok
example : has_add n := by apply_instance            -- fails
example : has_add n := by unfold n; apply_instance  -- ok
```

#### Jeremy Avigad (Nov 24 2018 at 13:23):

Thanks for the input. Yes, TPIL evolved over time, and the last major rewrite was early in the days of Lean 3, before there was a VSCode extension. I'll do a global search and try to make the text less emacs-centric.

I'll discuss `apply_instance` and `infer_instance`. I am thinking of using these examples:

```lean
/- example 1: using apply_instance -/
def foo : has_add nat := by apply_instance
def bar : inhabited (nat → nat) := by apply_instance

/- example 2: using infer_instance -/
def baz : has_add nat := infer_instance
def bla : inhabited (nat → nat) := infer_instance

/- example 3: seeing them -/
#print foo   -- nat.has_add
#reduce foo  -- (unreadable)
#print bar   -- pi.inhabited ℕ
#reduce bar  -- {default := λ (a : ℕ), 0}
#print baz   -- infer_instance
#reduce baz  -- (same as for #reduce foo)
#print bla   -- infer_instance
#reduce bla  -- {default := λ (a : ℕ), 0}

/- example 4: tricks to be more concise -/
#check (by apply_instance : inhabited ℕ)
#print nat.inhabited
#reduce (infer_instance : inhabited ℕ)

/- example 5: core Lean can't find an instance for inhabited set -/
-- fails
-- example {α : Type*} : inhabited (set α) := by apply_instance

/- example 6: supplying one manually -/
def inhabited.set (α : Type*) : inhabited (set α) := ⟨∅⟩
#print inhabited.set    -- λ {α : Type u}, {default := ∅}
#reduce inhabited.set ℕ -- {default := λ (a : ℕ), false}

/- example 7: unfolding a definition so Lean can find it -/
def inhabited.set (α : Type*) : inhabited (set α) :=
by unfold set; apply_instance
#print inhabited.set    -- λ (α : Type u), eq.mpr _ (pi.inhabited α)
#reduce inhabited.set ℕ -- {default := λ (a : ℕ), true}

/- example 8: using dunfold instead -/
def inhabited.set (α : Type*) : inhabited (set α) :=
by dunfold set; apply_instance
#print inhabited.set    -- λ (α : Type u), id (pi.inhabited α)
#reduce inhabited.set ℕ -- {default := λ (a : ℕ), true}
```

#### Patrick Massot (Dec 11 2018 at 09:17):

> Yes! You can just put a notation line between the inductive header and the first constructor
>
> ```lean
> inductive Typeof {Tp}: list Tp → Tp → Prop
> infix `∋`:50 := Typeof
> | Z: ∀ (Γ: list Tp) A, (Γ.append [A]) ∋ A
> | S_: ∀ (Γ: list Tp) A B, Γ ∋ A → (Γ.append [B]) ∋ A
> ```

Is there something similar for a dependent `structure`? If one field of my structure is a binary operator, can I define an infix notation usable in the remaining fields' declarations?

#### Kevin Buzzard (Dec 11 2018 at 09:18):

I usually make the `structure` extend the notation typeclass in this situation.

#### Kevin Buzzard (Dec 11 2018 at 09:19):

In fact I have been known to make new notation typeclasses called things like `group_notation` extending `has_mul`, `has_one` and `has_inv`, and then extending these too so I get a bunch of notation at once.

#### Patrick Massot (Dec 11 2018 at 09:21):

Thanks Kevin. I know all this, but I'm still interested in an answer to my question.

#### Kevin Buzzard (Dec 11 2018 at 09:21):

Yes I understand. For example if the notation is not in the standard notation list then it would be nicer to add it directly in the definition of the structure.

#### Rob Lewis (Dec 11 2018 at 12:05):

You can define notation in structures that's used in the remaining fields. But I think it's just local to the structure declaration.

```lean
structure patrick :=
(α : Type)
(f : α → α → α)
(infix `^^^`:50 := f)
(h : ∀ a : α, a ^^^ a = a)

#check patrick.h
```

#### Patrick Massot (Dec 11 2018 at 13:29):

Thanks!

#### Jeremy Avigad (Jan 03 2019 at 18:49):

I just crossed this item off my to do list. `apply_instance` and such are now discussed here: https://leanprover.github.io/theorem_proving_in_lean/type_classes.html#managing-type-class-inference. And VS Code is now mentioned whenever Emacs is, with VS Code first.

#### Patrick Massot (Jan 03 2019 at 18:54):

> If you add this to your file in Emacs mode and use C-c C-x to run an independent Lean process on your file, the output buffer will show a trace every time the type class resolution procedure is subsequently triggered.

This paragraph (in the section you referred to) is still Emacs-centric.

#### Jeremy Avigad (Jan 03 2019 at 19:20):

Yes, that is the only one, because I don't know how to start an independent Lean process from within VS Code. I guess I'll remind people that they can run Lean from the VS Code terminal.

#### Patrick Massot (Jan 03 2019 at 19:20):

Why would you do that?

#### Patrick Massot (Jan 03 2019 at 19:20):

The trace is printed in the info view

#### Jeremy Avigad (Jan 03 2019 at 19:23):

Oh! I forgot. Good point. I'll fix that.

#### Patrick Massot (Jan 03 2019 at 19:24):

Great!

#### Jeremy Avigad (Jan 03 2019 at 19:34):

Fixed. Thanks for catching it.

https://leanprover.github.io/theorem_proving_in_lean/type_classes.html#managing-type-class-inference

#### Wojciech Nawrocki (Jan 16 2019 at 12:50):

Hello! Is there anything special I need to do to make Lean recognise my `instance : has_zero Foo` as being equivalent to `0`? I got this state while trying to use `rw [this]`:

```
rewrite tactic failed, did not find instance of the pattern in the target expression
  0 + π₂
state:
3 goals
π₂ : mult,
this : 0 + π₂ = π₂
⊢ mult.Zero + π₂ = π₂ + mult.Zero
```

even though I have

```lean
instance : has_zero mult := ⟨mult.Zero⟩
```

above.

#### Kevin Buzzard (Jan 16 2019 at 12:55):

rewrites don't recognise definitional equality, only syntactic equality.

#### Kevin Buzzard (Jan 16 2019 at 12:55):

So you could try `show 0 + π₂ = _` before the rewrite

#### Kevin Buzzard (Jan 16 2019 at 12:56):

or `change mult.Zero + _ = _ at this`. Maybe it will work after one of these changes. But not after both ;-)

#### Reid Barton (Jan 16 2019 at 12:56):

`convert this` should also work

#### Kevin Buzzard (Jan 16 2019 at 12:57):

but `this` isn't the goal

#### Kevin Buzzard (Jan 16 2019 at 12:57):

Oh!

#### Kevin Buzzard (Jan 16 2019 at 12:58):

This `convert` trick works when the thing you're rewriting is precisely one side of the equality, I guess.

#### Reid Barton (Jan 16 2019 at 12:58):

or `erw this` would also work, but it's not as nice

#### Reid Barton (Jan 16 2019 at 13:01):

In fact you can use `a' = b'` to prove `a = b` with neither side matching definitionally (you'll get two new goals `a = a'` and `b = b'`), but then you run the risk that the new goals are not actually true :smile:

#### Wojciech Nawrocki (Jan 16 2019 at 13:13):

Ah, I see, thanks! And relatedly: I'm proving that an object with three elements and custom add/mult tables is a semiring; how ugly is it to do all my proofs like this?

```lean
lemma add_assoc (π₁ π₂ π₃: mult) : π₁ + π₂ + π₃ = π₁ + (π₂ + π₃) :=
by { cases π₁; cases π₂; cases π₃; refl }
```

#### Mario Carneiro (Jan 16 2019 at 13:17):

it will work, although there are simpler proofs where you only case on one of them, I think

#### Wojciech Nawrocki (Jan 16 2019 at 13:20):

Hm, I found that without expanding every case, I had to do a bit more work by using other lemmas and so on. The definition of `add` is:

```lean
inductive mult: Type
| Zero: mult
| One: mult
| Omega: mult

instance : has_zero mult := ⟨mult.Zero⟩
instance : has_one mult := ⟨mult.One⟩
notation `ω` := mult.Omega

def add: mult → mult → mult
| 0 π := π
| π 0 := π
| 1 1 := ω
| ω _ := ω
| _ ω := ω

instance : has_add mult := ⟨add⟩
```

#### Mario Carneiro (Jan 16 2019 at 13:22):

right

#### Mario Carneiro (Jan 16 2019 at 13:24):

you should have lemmas like `x + ω = ω` as simp lemmas, which should simplify most of the cases
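
A sketch of what such simp lemmas might look like for the `mult` type above (lemma names are made up; if the equations don't hold by `refl` directly, a case split on the other argument does it):

```lean
@[simp] lemma omega_add (x : mult) : ω + x = ω := by cases x; refl
@[simp] lemma add_omega (x : mult) : x + ω = ω := by cases x; refl
```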

#### Mario Carneiro (Jan 16 2019 at 13:25):

or you could just split into 27 cases if you want

#### Wojciech Nawrocki (Jan 16 2019 at 13:26):

Ah ok, i'll try it with simp as well, thanks!

#### Johan Commelin (Jan 16 2019 at 13:55):

Scott Morrison's case-bashing tactic would probably be useful here. But I don't know where that tactic lives at the moment...

#### Mario Carneiro (Jan 16 2019 at 14:01):

if you want to do a case-bashing proof, another approach is to prove `fintype mult` and `decidable_eq mult` (you can `derive` this), and then you can just revert everything and use `dec_trivial`
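
Sketched against the `mult` example (this assumes `fintype mult` and `decidable_eq mult` instances are in scope; decidability of the bounded `∀` then lets the kernel check all 27 cases by computation):

```lean
example : ∀ a b c : mult, (a + b) + c = a + (b + c) :=
dec_trivial  -- evaluates the equation on every combination of constructors
```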

#### Johan Commelin (Jan 16 2019 at 14:02):

Should even be reasonably fast, I guess.

#### Mario Carneiro (Jan 16 2019 at 14:02):

as long as you don't have too many variables; it is still 27 cases

#### Kevin Buzzard (Jan 16 2019 at 14:18):

Kind of a stupid question, but when I'm using other computer algebra systems I would expect checking a million cases to be very quick. Mario's comments suggest that 27 is rather large for Lean. What is happening here?

#### Mario Carneiro (Jan 16 2019 at 14:37):

There is a large overhead of the expression that is generated, elaboration for it, and typechecking

#### Mario Carneiro (Jan 16 2019 at 14:38):

I don't think 27 is that large in this context, I guess it's probably less than a second to check

#### Mario Carneiro (Jan 16 2019 at 14:38):

I just think it's better to have more "human" proofs with fewer cases

#### Kevin Buzzard (Jan 16 2019 at 14:38):

This is what I don't understand. We have to check that 27 things of the form `add a (add b c) = add (add a b) c` hold, and in each case this is by `refl`.

#### Kevin Buzzard (Jan 16 2019 at 14:39):

How long does it take Lean to prove `(0 + 1) + 1 = 0 + (1 + 1)` in this type?

#### Mario Carneiro (Jan 16 2019 at 14:40):

there is also all the intermediate steps, the generation of motives, lots of abstraction and substitution going on, and large terms being built up behind the scenes before you even attack those 27 cases

#### Kevin Buzzard (Jan 16 2019 at 14:40):

So the bottleneck is elsewhere?

#### Mario Carneiro (Jan 16 2019 at 14:41):

I have heard it repeatedly asserted that the kernel is not a bottleneck

#### Kevin Buzzard (Jan 16 2019 at 14:41):

Again the idea of a "large" term is confusing to me. In python I could happily manipulate a list with 1000 elements.

#### Mario Carneiro (Jan 16 2019 at 14:41):

this term has way more than 1000 subterms

#### Kevin Buzzard (Jan 16 2019 at 14:41):

One of my kids has been learning about algorithms over the last few months and I realise now that I am far more aware of these things than I used to be.

#### Mario Carneiro (Jan 16 2019 at 14:42):

all in all it makes lean just look a lot slower to do "simple" things

#### Mario Carneiro (Jan 16 2019 at 14:42):

because there is a lot of bookkeeping in the background

#### Mario Carneiro (Jan 16 2019 at 14:44):

I would like to figure out ways to minimize the overhead, but that runs close to work on the lean compiler

#### Kevin Buzzard (Jan 16 2019 at 15:28):

> this term has way more than 1000 subterms

In the same way that a set with 10 elements has more than 1000 subsets, or in a more serious "we really need to work with way more than 1000 things" way?

#### Mario Carneiro (Jan 16 2019 at 15:50):

in the more serious way. (It's tricky to count the "size" of an expression but number of subterms is a good proxy)

#### Mario Carneiro (Jan 16 2019 at 15:51):

there is no exponential growth because subterms can't overlap, they are either disjoint or in a containment relationship

#### Wojciech Nawrocki (Jan 18 2019 at 00:04):

Is it possible to define a custom synthesis strategy for an implicit argument? I would like to define a function which extracts concrete values from concrete lists, like so:

```lean
def get': ∀ (l: list ℕ) (n: ℕ) {h: n < l.length}, ℕ
| (x::xs) 0     _ := x
| (x::xs) (n+1) h := @get' xs n (nat.lt_of_succ_lt_succ h)
| []      n     h := by { exfalso, exact nat.not_lt_zero n h }
```

and for concrete args, `h` is always derivable with a custom tactic. I'd like Lean to use that tactic to synthesise it.

OR am I doing this completely wrong and there is a much simpler way?

#### Chris Hughes (Jan 18 2019 at 00:07):

unification hints! I don't know much about them though.

#### Mario Carneiro (Jan 18 2019 at 00:51):

This function is `list.nth_le`, btw

#### Mario Carneiro (Jan 18 2019 at 00:53):

you can synthesize the argument using typeclasses, but `exact_dec_trivial` is another easy way to do it

#### Mario Carneiro (Jan 18 2019 at 00:54):

```lean
def get' (l : list ℕ) (n : ℕ) (h : n < l.length . exact_dec_trivial) : ℕ :=
l.nth_le n h

example : get' [1, 2, 3] 1 = 2 := rfl
```

#### Wojciech Nawrocki (Jan 18 2019 at 01:07):

Ah indeed, thanks Mario! Can I use something like this in a Pi-type (to make the equation compiler work)? The `foo . tactic` syntax doesn't seem to work:

```lean
def debrujin_of_nat: Π {Γ: Env} (n: ℕ) (h: n < Γ.length . tactic.exact_dec_trivial),
  (Γ ∋ Γ.nth_le n h)
-- ill-formed declaration
```

#### Mario Carneiro (Jan 18 2019 at 01:14):

If you can put it left of the colon, the dot notation should work. But if you can't write it that way, it's sugar for `auto_param`:

```lean
def get' : Π (l : list ℕ) (n : ℕ), auto_param (n < l.length) ``exact_dec_trivial → ℕ :=
list.nth_le

example : get' [1, 2, 3] 1 = 2 := rfl
```

#### Wojciech Nawrocki (Jan 18 2019 at 01:19):

Can I have a _named_ `auto_param` :sweat_smile:? I need to use the hypothesis in the type signature itself, more specifically in the return type.

#### Mario Carneiro (Jan 18 2019 at 01:21):

sure, just use a pi instead of an arrow

#### Mario Carneiro (Jan 18 2019 at 01:21):

`auto_param T n` is defeq to `T`, so it doesn't cause any problems

#### Wojciech Nawrocki (Jan 18 2019 at 01:27):

Oh, I was sure I'd tried that but apparently not, thanks! It doesn't seem to work under `#eval`, unfortunately:

```
don't know how to synthesize placeholder
context:
⊢ auto_param (0 < list.length [Tp.Nat])
    (name.mk_string "exact_dec_trivial" (name.mk_string "tactic" name.anonymous))
```

#### Mario Carneiro (Jan 18 2019 at 01:29):

what did you write?

#### Mario Carneiro (Jan 18 2019 at 01:30):

it's not really related to the context you write it in, but rather the expected type during elaboration

#### Wojciech Nawrocki (Jan 18 2019 at 01:35):

Apologies for the length, but this is the full context:

```lean
import tactic.find
import tactic.interactive
import tactic
import tactic.auto_cases
import tactic.tidy

@[derive decidable_eq]
inductive Tp
| Nat: Tp
| Bool: Tp
| Fn: Tp → Tp → Tp

local infixr ` ⇒ `:50 := Tp.Fn

def Env := list Tp

inductive TypeIn: Env → Tp → Type
infix ` ∋ `:40 := TypeIn
| ZVar: Π {Γ T}, T::Γ ∋ T
| SVar: Π {Γ T U}, Γ ∋ T → U::Γ ∋ T

local infix ` ∋ `:40 := TypeIn
open TypeIn

inductive Term: Env → Tp → Type
| Nat (n: ℕ): Π {Γ}, Term Γ Tp.Nat       -- in all environments, nat literals have type Nat
| Bool (b: bool): Π {Γ}, Term Γ Tp.Bool  -- and booleans have type Bool
| Var: Π {Γ T}, Γ ∋ T → Term Γ T -- A variable has type T given its de Brujin index
                                 -- is in the environment.
| Abs: Π {Γ T U}, Term (T::Γ) U → Term Γ (T ⇒ U)
| App: Π {Γ T U}, Term Γ (T ⇒ U) → Term Γ T → Term Γ U

open Term

def debrujin_of_nat: Π {Γ: Env} (n: ℕ)
  {h: auto_param (n < Γ.length) ``tactic.exact_dec_trivial},
  (Γ ∋ Γ.nth_le n h)
| (T::Γ) 0     _ := ZVar
| (T::Γ) (n+1) h := SVar (@debrujin_of_nat Γ n (nat.lt_of_succ_lt_succ h))
| []     n     h := by { exfalso, simp at h, exact (nat.not_lt_zero n) h }

local notation `#` n := Var (debrujin_of_nat n)

#eval (@App [] Tp.Nat Tp.Nat
  (@Abs [] Tp.Nat Tp.Nat (@Var [Tp.Nat] Tp.Nat (debrujin_of_nat 0)))
  (@Nat 3 []))
```

#### Wojciech Nawrocki (Jan 18 2019 at 01:37):

Basically given a concrete `list Tp` and a concrete `n`, I'd like it to figure out that `n` is within bounds and include the result of `lst.nth_le n _` in the return type.

#### Wojciech Nawrocki (Jan 18 2019 at 01:41):

This does work (notice the explicit proof I put in):

```lean
#eval (@App [] Tp.Nat Tp.Nat
  (@Abs [] Tp.Nat Tp.Nat
    (@Var [Tp.Nat] Tp.Nat (@debrujin_of_nat [Tp.Nat] 0 (by tactic.exact_dec_trivial))))
  (@Nat 3 []))
```

#### Mario Carneiro (Jan 18 2019 at 02:06):

so what did you write?

#### Wojciech Nawrocki (Jan 18 2019 at 02:07):

Well, the `#eval` at the bottom of that long snippet is what fails synthesis. The `#eval` with an explicit proof works.

#### Mario Carneiro (Jan 18 2019 at 02:07):

aha, you made the arg implicit

#### Mario Carneiro (Jan 18 2019 at 02:07):

auto params should be explicit

#### Mario Carneiro (Jan 18 2019 at 02:07):

```lean
def debrujin_of_nat: Π {Γ: Env} (n: ℕ)
  (h: auto_param (n < Γ.length) ``tactic.exact_dec_trivial),
  (Γ ∋ Γ.nth_le n h)
```

#### Wojciech Nawrocki (Jan 18 2019 at 02:09):

Oh thanks, now it does work, but still behaves as if it were implicit :thinking: Is this currying at work, meaning I have to place the `auto_param` last, s.t. given `foo: nat -> auto_param blah -> nat`, `(foo n): nat` (and `foo n _` still fails)?

#### Wojciech Nawrocki (Jan 18 2019 at 23:39):

(unrelated to above)

I'm seeing a weird error in an inductive type:

```
invalid occurrence of recursive arg#3 of 'context.cons', the body of the functional type depends on it.
```

The type definition is below, and as far as I know it's a perfectly legit defn, so what's wrong?

```lean
inductive context: list ℕ → Type
| nil: context []
| cons {ns: list ℕ} (_: context ns) (n: ℕ) (m: ℕ): context (n::ns)
```

EDIT: swapping two arguments makes it compile, but why?

```lean
inductive context: list ℕ → Type
| nil: context []
| cons: Π {ns: list ℕ} (n: ℕ) (_: context ns) (m: ℕ), context (n::ns)
```

#### Mario Carneiro (Jan 18 2019 at 23:48):

I think Gabriel recently pointed out an example similar to this. You have a dependent pi (`n`) after a recursive arg (`_ : context ns`) and Lean doesn't like this.

#### Wojciech Nawrocki (Jan 19 2019 at 00:37):

Ah ok, maybe this could be fixed in Lean 4? :)

#### Wojciech Nawrocki (Jan 19 2019 at 01:51):

Do I need to do something special to make the semiring-ness of my custom type available to the `ring` tactic? I have a state like this:

```
π π' : mult,
π_1 : mult
⊢ π * π' * π_1 = π * (π' * π_1)
```

which is provable by `exact mult.monoid.mul_assoc π π' π_1`, but `ring` fails. I have `instance : semiring mult` shown a few lines above.

#### Mario Carneiro (Jan 19 2019 at 02:01):

you need to prove `comm_semiring mult`

#### Wojciech Nawrocki (Jan 21 2019 at 18:36):

Given `def add (a b: foo): foo := blah`, what's the difference between ``infix `++` := add`` and `instance : has_add foo := ⟨add⟩`? If I switch from the former to the latter and replace `++` with `+`, my proofs break at the simplification stage; namely, addition seems to not be `unfold`able anymore.

#### Kevin Buzzard (Jan 21 2019 at 18:37):

They're very different in the sense that they're using different machinery to figure out what's going on.

#### Kevin Buzzard (Jan 21 2019 at 18:38):

I guess the `infix` trick is just syntax sugar, whereas the `instance` approach is using type class inference. Can you give an example of something which breaks?

#### Kevin Buzzard (Jan 21 2019 at 18:39):

I guess if you go via the instance approach then you have an extra layer of unfolding to do. `+` is `has_add.add`, which unfolds to your add.

#### Kevin Buzzard (Jan 21 2019 at 18:39):

Maybe that's the answer to your question. If you're trying to unfold things explicitly in the middle of a proof, maybe you have to insert some `unfold has_add.add`'s.

#### Kevin Buzzard (Jan 21 2019 at 18:40):

`++` unfolds directly to your `add`, whereas `+` unfolds to `has_add.add`, which unfolds to your add.
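
The two-layer chain Kevin describes can be seen on a throwaway type (everything here is hypothetical, just to show the layers):

```lean
inductive foo | mk
def foo.add (a b : foo) : foo := foo.mk
instance : has_add foo := ⟨foo.add⟩

-- `a + b` elaborates to `has_add.add a b`, which unfolds to `foo.add a b`,
-- which unfolds to `foo.mk`; hence two unfold steps (or one `refl`)
example (a b : foo) : a + b = foo.mk :=
by unfold has_add.add foo.add
```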

#### Kevin Buzzard (Jan 21 2019 at 18:42):

[NB I'm a bit of a CS noob, I don't know if "unfolds" is the right terminology for notation turning into its underlying definition]

#### Kevin Buzzard (Jan 21 2019 at 18:42):

[they might well be syntactically equal rather than just definitionally equal]

#### Wojciech Nawrocki (Jan 21 2019 at 19:29):

Ah indeed, `unfold`ing twice does make it work - thanks!

#### Wojciech Nawrocki (Jan 22 2019 at 01:12):

Does there exist a general tactic for proving `f a0 .. an = f b0 .. bn` from `a0 = b0 .. an = bn`?

#### Mario Carneiro (Jan 22 2019 at 01:33):

`congr`

#### Wojciech Nawrocki (Jan 22 2019 at 01:36):

Hm, I tried `congr` but it seems to iterate the congruence, which gives me unprovable goals. Namely, I have a goal `f (g x) = f (g y)` and `congr` gives me `x = y`, but I just want `g x = g y`. EDIT: `congr' 1` works, thx!

#### Mario Carneiro (Jan 22 2019 at 02:37):

use `congr' 1` and increase the number until you get a good result
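
A sketch of the difference (`f`, `g`, and the hypothesis are hypothetical):

```lean
example (f g : ℕ → ℕ) (x y : ℕ) (h : g x = g y) : f (g x) = f (g y) :=
by { congr' 1, exact h }  -- plain `congr` would descend further, to the goal `x = y`
```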

#### Wojciech Nawrocki (Jan 22 2019 at 22:43):

The issue of `has_add.add` and its actual value not being definitionally equal makes a lot of my proofs quite ugly - I have to expand definitions first so that the expressions can simplify, and then fold them back into the `has_add.add` version (or `has_mul.mul`, etc.), because all the ring/module/whatever laws only work on those. For example:

```lean
{ /- case context.cons
  δ γ γ₁ : precontext,
  π₁ : mult,
  T₁ : tp,
  Γ₁ : context γ₁,
  ih₁ : ∀ {Γ₂ : context γ₁} {Ξ : matrix γ₁ δ},
    vmul (Γ₁ + Γ₂) Ξ = vmul Γ₁ Ξ + vmul Γ₂ Ξ,
  Γ₂ : context (T₁ :: γ₁),
  Ξ : matrix (T₁ :: γ₁) δ
  ⊢ vmul (cons π₁ T₁ Γ₁ + Γ₂) Ξ = vmul (cons π₁ T₁ Γ₁) Ξ + vmul Γ₂ Ξ -/
  cases Γ₂ with _ π₂ _ Γ₂,
  -- unfold
  unfold vmul has_add.add context.add has_scalar.smul context.smul at *,
  simp *,
  -- fold back
  let a := vmul Γ₁ (λ (U : tp) (x : γ₁ ∋ U), Ξ U (SVar x)),
  let b := vmul Γ₂ (λ (U : tp) (x : γ₁ ∋ U), Ξ U (SVar x)),
  change (π₁ + π₂) • (Ξ T₁ ZVar) + (a + b)
       = (π₁ • (Ξ T₁ ZVar) + a) + (π₂ • (Ξ T₁ ZVar) + b),
  -- simplify using monoid laws
  simp [context.add_smul, context.add_assoc] },
```

Is there some tactic or such that I could apply to do this automatically?

#### Mario Carneiro (Jan 22 2019 at 22:51):

This is what simp lemmas are for

#### Mario Carneiro (Jan 22 2019 at 22:53):

If you define `add x (y :: z) := y :: add x z`, for example, and then install `add` as a `has_add` instance, then you can prove `x + (y :: z) = y :: (x + z)` by `rfl`, and you should state this as a simp lemma

#### Mario Carneiro (Jan 22 2019 at 22:54):

You should not ever have to unfold `has_add.add`
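
For the `mult` type from earlier in the thread, this recipe might look like the following (lemma names are made up; the lemmas are stated at the `has_add` level, so `simp` can rewrite `+` without ever unfolding it):

```lean
@[simp] lemma mult.zero_add (π : mult) : 0 + π = π := by cases π; refl
@[simp] lemma mult.add_zero (π : mult) : π + 0 = π := by cases π; refl
```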

#### Wojciech Nawrocki (Jan 22 2019 at 22:56):

Hm okay, so basically I need to "lift" the behaviour of my functions from the custom definition to one using `has_op.op`? I'll try.

#### Wojciech Nawrocki (Jan 22 2019 at 23:09):

Is it fine to unfold `has_zero.zero` though? My definition of `0` for this type is

```lean
def zeros: Π γ, context γ
| []     := nil
| (T::δ) := cons 0 T (zeros δ)
```

and I need the `cons` to prove `0 + Γ = Γ`.

#### Wojciech Nawrocki (Jan 22 2019 at 23:21):

In any case this is pretty awesome, all my proofs have shortened by half now without the unfolding, thanks a lot!

#### Mario Carneiro (Jan 23 2019 at 00:37):

For this, you should decide whether you prefer to write the empty context as `0` or `[]`, and write a simp lemma like `0 = []` if you want to get rid of the 0 everywhere. In this case you should also make sure that all your other simp lemmas use the "preferred form" of this element on the LHS.

#### Wojciech Nawrocki (Feb 23 2019 at 22:04):

What's the closest thing to the `ring` tactic when what I have is not a commutative semiring, e.g. just a monoid? Say my theorems are provable simply by repeated application of monoid or group laws; what tactic could I use?

#### Kevin Buzzard (Feb 23 2019 at 22:08):

If it's abelian then `abel`

might work, and if it's not then you're best going with `simp`

I think

#### Kevin Buzzard (Feb 23 2019 at 22:09):

You might have to write `simp [mul_assoc]`

maybe, I can't remember if mul_assoc is a simp lemma

#### Wojciech Nawrocki (Feb 23 2019 at 22:15):

`abel`

worked :) thanks. And no, `mul_assoc`

doesn't seem to be a simp lemma.
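For reference, a small self-contained example of the kind of goal `abel` closes (reshuffling by commutativity and associativity in an additive commutative monoid):

```lean
import tactic.abel

example (α : Type) [add_comm_monoid α] (a b c : α) :
  a + (b + c) + a = c + (a + a + b) :=
by abel
```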

#### Wojciech Nawrocki (Feb 23 2019 at 23:41):

Say I would like to work with a particular module which uses a variable `v`

instantiated over a concrete `v`

. For example, a file `foo.lean`

could first declare `variable {v: Type}`

and then use `v`

in all definitions/lemmas in the file. I would like to import everything in `foo`

instantiated with e.g. `nat`

for `v`

. This would be equivalent to partially applying everything in the file to `nat`

. Is such a thing possible without manually redefining everything?

#### Andrew Ashworth (Feb 23 2019 at 23:42):

You could write a tactic to do it

#### Andrew Ashworth (Feb 23 2019 at 23:43):

But, why?

#### Wojciech Nawrocki (Feb 23 2019 at 23:47):

I would like to do this in order to:

a) Avoid type class search by giving it the right instance from the start.

b) Make inference work. I found that making one of my modules more generic (I changed it from using a particular semiring to being generic over arbitrary semirings) broke inference in a lot of places where I use this module because it no longer knows which semiring to pick and I don't want to specify it everywhere manually.

For a more concrete example which is close to what I'm doing, say I define a list `inductive slist {α: Type} [semiring α]: Type`

which is a list the elements of which are also elements of a semiring and then derive a bunch of theorems about the behaviour of such a list under some transformations. I would then like to use all of these theorems only with a particular choice of `α`

, even though the `slist`

file/module is generic.

#### Kevin Buzzard (Feb 24 2019 at 00:06):

But if you're using `{alpha : Type}`

correctly, then Lean should be able to infer what alpha is from subsequent terms, so you don't need to change it to nat, Lean will just guess it for you and do it itself. Have I missed the point?

#### Andrew Ashworth (Feb 24 2019 at 00:15):

"no longer knows which semiring to pick" this sounds fishy

#### Kevin Buzzard (Feb 24 2019 at 00:18):

The `{}`

brackets aren't type class search, they are dealt with using unification.

#### Wojciech Nawrocki (Feb 24 2019 at 00:23):

You're right, Lean can still unify it and find the class instance - I was perhaps a bit rash there when figuring out what's going on. However, tactic state updates that used to take about 300ms when I was working over a concrete object now take several seconds, making it somewhat painful to work interactively. I was hoping that reinstantiating all the lemmas over the concrete objects again would alleviate this.

#### Kevin Buzzard (Feb 24 2019 at 00:24):

Again, if you're talking about `{}`

s then this isn't to do with classes, it's something else.

#### Kevin Buzzard (Feb 24 2019 at 00:24):

But I agree that sometimes things get slow, and it can sometimes be quite difficult to find out why.

#### Wojciech Nawrocki (Feb 24 2019 at 00:25):

I'm talking about both - the `{alpha}`

variable being unified and `[semiring alpha]`

which needs a class instance.

#### Kevin Buzzard (Feb 24 2019 at 00:25):

Aah yes, that is a class instance. Sorry.

#### Kevin Buzzard (Feb 24 2019 at 00:25):

Does it help to actually put the type class instances explicitly into Lean at the top of the file?

#### Kevin Buzzard (Feb 24 2019 at 00:26):

e.g. you were working with a random alpha assumed to be a semiring, and now you're working with nat, so you could put `instance : semiring nat := by apply_instance`

at the top of the file. However that instance should already be there so I can't imagine it will help :-/

#### Kevin Buzzard (Feb 24 2019 at 00:27):

oh this can't be the problem. Unless your actual instances are hard to find.

#### Wojciech Nawrocki (Feb 24 2019 at 00:30):

I couldn't say for sure without benchmarking, but I would guess that the two implicit arguments I added to *everything* make it appreciably more difficult to carry out elaboration.

#### Kevin Buzzard (Feb 24 2019 at 00:30):

I'm afraid you'll have to speak to a computer scientist about this one :-)

#### Kevin Buzzard (Feb 24 2019 at 00:31):

In maths, everything runs instantly.

#### Kevin Buzzard (Feb 24 2019 at 00:31):

You should consider moving to the Platonic universe.

#### Kevin Buzzard (Feb 24 2019 at 00:31):

We don't have engineering troubles there.

#### Wojciech Nawrocki (Feb 24 2019 at 00:55):

Oh okay, I ran into the problem that made me think unification fails again. In fact, I believe it could eventually succeed, but currently `simp *`

takes so long it times out, while it would work fine before I added the implicit parameters to one of my modules.

#### Kevin Buzzard (Feb 24 2019 at 01:00):

If you want to try to speed things up yourself, you could take a look at what simp was doing before and after by putting logging on.

#### Kevin Buzzard (Feb 24 2019 at 01:01):

`set_option trace.simplify.rewrite true`

is what you need to see what simp is doing.
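For example, with the option set, the trace pane lists every rewrite `simp` performs:

```lean
set_option trace.simplify.rewrite true

example (n : ℕ) : n + 0 + 0 = n :=
by simp  -- the trace shows each lemma used, e.g. `add_zero`
```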

#### Wojciech Nawrocki (Feb 24 2019 at 23:55):

Does Lean have an equivalent of Haskell's/Agda's `where`

? E.g. `def abc := two where two := 2`

#### Mario Carneiro (Feb 25 2019 at 00:06):

no

#### Mario Carneiro (Feb 25 2019 at 00:06):

it messes with elaboration order

#### Wojciech Nawrocki (Feb 25 2019 at 00:27):

Could you *elaborate* on what the issue is?

#### Mario Carneiro (Feb 25 2019 at 00:43):

Lean does pretty much all elaboration from left to right. If you use a postfix `let`

, then the type and value of the defined variable will not be known when it is needed

#### Mario Carneiro (Feb 25 2019 at 00:43):

Is this stupid? Yes. I think lean 4 will relax the elaboration order a bit to allow stuff like this to work

#### Wojciech Nawrocki (Feb 28 2019 at 19:47):

Does anyone know where the extra argument comes from (I do know `(n = 0)`

is probably not valid syntax for a hypothesis)?

```lean
def foo_fn := Π (n: ℕ) (n = 0), string

#print foo_fn
/- def foo_fn : Type :=
   ℕ → Π (n : ℕ), n = 0 → string -/
-- ^ what is this?
```

#### Chris Hughes (Feb 28 2019 at 19:51):

Do you have a `variable`

somewhere in your file?

#### Chris Hughes (Feb 28 2019 at 19:52):

Oh no actually. `Π (n = 0), _`

is shorthand for `Π n, n = 0 -> _`
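Spelled out, the shorthand expands exactly as the `#print` output above suggests:

```lean
-- `Π (n = 0), T` is sugar for `Π n, n = 0 → T`, so:
def foo_fn  := Π (n : ℕ) (n = 0), string
def foo_fn' := ℕ → Π (n : ℕ), n = 0 → string

example : foo_fn = foo_fn' := rfl
```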

#### Wojciech Nawrocki (Feb 28 2019 at 19:54):

Ah so it *is* valid syntax. Thanks!

#### Wojciech Nawrocki (Feb 28 2019 at 22:08):

I would like to define a subtype of Lean functions like so:

```lean
@[reducible] def foo_fn := ℕ → ℕ

inductive is_foo : foo_fn → Prop
| Id  : is_foo (λ x, x)
| Rec : Π {f : foo_fn}, is_foo f → is_foo (λ x, (f x) + 1)

structure foo := (f : foo_fn) (hf : is_foo f)
```

Where a `foo`

is a `ℕ → ℕ`

together with a proof that it's a valid kind of `foo_fn`

, i.e. either the identity or some other `foo`

plus 1. Then, I would also like to carry out transformations on the `foo`

structure by modifying the `foo_fn`

and adjusting its proof. Unfortunately the `foo_fn`

is opaque as just a Lean function, so to actually do this it seems I need the following:

```lean
inductive foo' : foo_fn → Type
| Id  : foo' (λ x, x)
| Rec : Π {f : foo_fn}, foo' f → foo' (λ x, (f x) + 1)
```

My question is, can `foo'`

be considered equivalent to `foo`

in the sense that I can extract `f: foo_fn`

out of `foo' f`

and use it as I would use the `f`

member of the structure `foo`

?

#### Kevin Buzzard (Feb 28 2019 at 23:42):

I don't understand the question. Are you not happy with this sort of thing:

```lean
def foo_fn := ℕ → ℕ

inductive is_foo : foo_fn → Prop
| Id  : is_foo (λ x, x)
| Rec : Π {f : foo_fn}, is_foo f → is_foo (λ x, (f x) + 1)

structure foo := (f : foo_fn) (hf : is_foo f)

def add_one : foo → foo :=
λ ⟨f, hf⟩, ⟨λ x, f x + 1, is_foo.Rec hf⟩
```

?

#### Wojciech Nawrocki (Mar 02 2019 at 14:09):

The problem is that I couldn't redefine `add_one`

like this:

```lean
def add_one' : foo → foo
/- induction tactic failed, recursor 'is_foo.dcases_on'
   can only eliminate into Prop -/
| ⟨f, is_foo.Id⟩      := ⟨λ x, f x + 1, is_foo.Rec is_foo.Id⟩
| ⟨f, is_foo.Rec hf'⟩ := ⟨λ x, f x + 1, is_foo.Rec (is_foo.Rec hf')⟩
```

And in general any definition which tries to extract an inner `foo_fn`

out of an `is_foo`

constructor will fail, but I need to do that sometimes in order to transform the `foo_fn`

. So far the `inductive foo': foo_fn -> Type`

definition seems to work well though.

#### Kevin Buzzard (Mar 02 2019 at 14:15):

Aah I see; the recursor for `is_foo`

only eliminates into `Prop`

. Yeah you need an expert, not a mathematician :-)

#### Chris Hughes (Mar 02 2019 at 14:26):

The reason this isn't possible in general, is that by proof irrelevance `is_foo.Id = is_foo.rec _`

if the proofs have the same type, so I cannot define functions that treat these two cases differently.

Since both cases are equal in your function, you can write

```lean
def add_one' : foo → foo
| ⟨f, h⟩ := ⟨λ x, f x + 1,
    is_foo.rec_on h (is_foo.Rec is_foo.Id)
      (λ hf', is_foo.Rec (is_foo.Rec hf'))⟩
```

#### Chris Hughes (Mar 02 2019 at 14:28):

Usually this form is preferred however

```lean
def add_one' (f : foo) : foo :=
⟨λ x, f.1 x + 1,
 is_foo.rec_on f.2 (is_foo.Rec is_foo.Id)
   (λ hf', is_foo.Rec (is_foo.Rec hf'))⟩
```

#### Chris Hughes (Mar 02 2019 at 14:30):

The second `add_one'`

will definitionally reduce in a nice way when applied to an argument `f : foo`

, whereas the first definition will only reduce when applied to an argument of the form `⟨f, h⟩ : foo`

. This makes the first definition harder to work with in proofs.

#### Wojciech Nawrocki (Mar 02 2019 at 15:12):

Thanks!

#### drocta (Mar 03 2019 at 21:57):

I am trying to say something about free objects, using an existing definition of concrete categories in mathlib.

I couldn't find an existing definition in mathlib for free objects (edit: really, I just need to define free objects over a set or type with one element), so I was going to define them myself.

I have mathlib installed using leanpkg, and it looks like because I have lean version 3.4.1 installed, leanpkg used the branch with that name from the mathlib repository.

However, I notice that the master branch and the 3.4.1 branch have the definition of concrete categories in different places.

Should I go ahead and use the version which is defined in the 3.4.1 branch, and is in the category.lean file, or ought I somehow use the version in the concrete_category.lean version currently in the master branch?

#### Kevin Buzzard (Mar 03 2019 at 22:56):

The master branch of mathlib doesn't work with Lean 3.4.1.

#### Kevin Buzzard (Mar 03 2019 at 22:58):

If I were you I'd use 3.4.2 (probably the last release of Lean 3) and mathlib master. If you install elan then it will all work by magic and you won't have to worry.

#### drocta (Mar 03 2019 at 23:56):

thank you

(edit 7:33 : that worked for me)

#### drocta (Mar 04 2019 at 02:48):

I'm not sure where to find the documentation on out_param . Could you point me towards it?

the definition in core.lean looks like it is just an identity function, but I assume I'm missing something, seeing as I think the lack of it just caused me to get a type error.

#### drocta (Mar 04 2019 at 03:50):

found a way to do what I was trying to do which didn't require that, so nvm I guess?

#### Kevin Buzzard (Mar 04 2019 at 07:03):

`out_param`

is something to do with telling Lean's elaborator what order to do type inference, or something like that. I'm afraid the only documentation that I know of is when I asked for details about it on either this chat or at the old chat, and Mario Carneiro explained it.

#### Kevin Buzzard (Mar 04 2019 at 07:52):

https://gitter.im/leanprover_public/Lobby?at=5a6e31685a9ebe4f75e77351

#### drocta (Mar 05 2019 at 02:35):

> `out_param` is something to do with telling Lean's elaborator what order to do type inference, or something like that. I'm afraid the only documentation that I know of is when I asked for details about it on either this chat or at the old chat, and Mario Carneiro explained it.
>
> https://gitter.im/leanprover_public/Lobby?at=5a6e31685a9ebe4f75e77351

Ah! Thank you!

#### Wojciech Nawrocki (Mar 05 2019 at 14:51):

What is the meaning of this error: `rewrite tactic failed, motive is not type correct`

?

I get it when trying to apply `rw [sesh_tp.dual_end_recv]`

where `sesh_tp.dual_end_recv : sesh_tp.dual End? = End!`

to the following state:

```lean
⊢ ⟦1⬝↑(sesh_tp.dual End?)⟧::?m_1 =
    matrix.identity (↑(sesh_tp.dual End?) :: γ) ↑End! (ZVar γ ↑End!)
```

I thought that `rw`

should be able to simply replace the instances of `sesh_tp.dual End?`

.

(`End?`

and `End!`

are constructors for the `sesh_tp`

inductive type.)

#### Patrick Massot (Mar 05 2019 at 15:39):

We need an emoji for this error

#### Patrick Massot (Mar 05 2019 at 15:41):

It means that doing the rewrite will give you an expression which doesn't type check, presumably because you had a dependent type and rewrote the parameter. The classical example is:

```lean
set_option pp.notation false
set_option pp.implicit true

example (n m : ℕ) (a : fin (n + m)) (b : fin (n + m)) (h : a = b) : true :=
begin
  rw add_comm at h,
  sorry
end
```

#### Patrick Massot (Mar 05 2019 at 15:41):

Try understanding this simple example and then your complicated one

#### Wojciech Nawrocki (Mar 05 2019 at 16:04):

Oh right, I get the issue, thanks! But if you `rw add_comm`

both sides of the equation, it should still type check, no? I managed to solve my problem by explicitly stating the type of the goal with both instances of `sesh_tp.dual End?`

rewritten to `End!`

. Could `rw`

not check the type after doing *both* rewrites?

#### Kevin Buzzard (Mar 05 2019 at 17:52):

`rw`

is what it is. If you want more you can try lots of tricks, e.g. `erw`

or `simp only`

or using `conv`

mode -- we have lots of workarounds :-) But this is dependent type theory, random rewriting can create terms which are not type correct and Lean pulls the plug when this happens.
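A toy example of the `conv` approach (not the actual goal above), which scopes the rewrite to one side so you control exactly where it happens:

```lean
example (n m : ℕ) (h : n = m) : n + n = m + m :=
begin
  conv_lhs { rw h },  -- rewrite only inside the left-hand side
  -- the goal is now `m + m = m + m`
  refl
end
```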

#### Plam (Mar 05 2019 at 18:58):

Is there a way to make use of the fact that pattern matches in a theorem have the same structure as in a definition? In particular, that given a catchall pattern match later in the theorem, it can't be the case that the thing being matched against matches one of the earlier pattern matches?

Concretely, when proving `simp_const_eq`

below, I want to match on `plus (const a) (const b)`

, `times (const a) (const b)`

and then a catchall. Is this possible, or do I just need some extra cases that I can rfl my way through?

```lean
inductive aexpr : Type
| const : ℕ → aexpr
| var : ℕ → aexpr
| plus : aexpr → aexpr → aexpr
| times : aexpr → aexpr → aexpr

open aexpr

def aeval (v : ℕ → ℕ) : aexpr → ℕ
| (const n)     := n
| (var n)       := v n
| (plus e₁ e₂)  := (aeval e₁ + aeval e₂)
| (times e₁ e₂) := aeval e₁ * aeval e₂

def simp_const : aexpr → aexpr
| (plus (const n₁) (const n₂))  := const (n₁ + n₂)
| (times (const n₁) (const n₂)) := const (n₁ * n₂)
| e := e

theorem simp_const_eq (v : ℕ → ℕ) :
  ∀ e : aexpr, aeval v (simp_const e) = aeval v e := sorry
```

#### Wojciech Nawrocki (Mar 18 2019 at 01:48):

While I don't think this is possible right now (correct me if I'm wrong), would it in principle be possible to extend Lean with support for custom type unification procedures? The way I defined some inductive families requires me to do *a lot* of conversions from terms of type `T a`

to a type `T b`

that Lean expects, where `a = b`

. All of these are resolvable more or less with a single tactic, and it would be awesome if I could just teach Lean a heuristic like "If type U is expected and a term of type T is given, see if T or U (or both) contain a specific expression. If so, try to use this tactic to convert T to U".

#### Johan Commelin (Mar 18 2019 at 07:15):

Search the chat for "unification hints"

#### Johan Commelin (Mar 18 2019 at 07:15):

I'm hoping that might be exactly the thing you want. Note however that we don't have much experience with them. Unification hints haven't been used outside of a demo, as far as I am aware.

#### Mario Carneiro (Mar 18 2019 at 07:34):

If by `a = b`

you mean you have a proof that they are equal, not that they are defeq, then unification hints won't help, and a lot of things will get more complicated

#### Wojciech Nawrocki (Mar 18 2019 at 12:38):

@Mario Carneiro indeed by `a = b`

I mean `eq a b`

. What I end up having in the actual terms is a bunch of `eq.mpr`

s which do the conversions from `T a`

to `T b`

. My idea here was, could I not just tell Lean to try to insert those conversions automatically when it fails to unify types?

#### Wojciech Nawrocki (Mar 18 2019 at 12:41):

@Johan Commelin I was looking into those but it seems they are not powerful enough to modify the term in question, which is what seems to be needed when the types are not algorithmically definitionally equal according to Lean. Moreover, some of the problems that Coq resolves using Canonical Structures, which seem to be a type of unification hint, Lean resolves using type class inference.

#### Kevin Buzzard (Mar 18 2019 at 13:04):

There was some discussion about these issues at Lean Together in January. Note that Lean 4 will apparently be removing unification hints, perhaps because nobody uses them in Lean.

#### Kevin Buzzard (Mar 18 2019 at 13:04):

The speakers who talked about it were Assia Mahboubi and Cyril Cohen, and I am pretty sure their slides are online and linked to here somewhere.

#### Johan Commelin (Mar 18 2019 at 13:05):

#### Jesse Michael Han (Mar 18 2019 at 14:01):

We also ran into this issue in Flypitch. I found it helpful to have a library of `simp`

lemmas to normalize the location of the casts. I also had the occasional odd experience of being unable to prove an instance of a cast lemma by `refl`

inside a larger proof, but was able to prove that lemma by `refl`

separately, mark it as `simp`

, and then `simp`

would work...

#### Patrick Massot (Mar 18 2019 at 16:18):

> Note that Lean 4 will apparently be removing unification hints, perhaps because nobody uses them in Lean.

Is this really clear? I thought that Assia and Cyril provided ample proofs that this would be a tragic mistake. I understand they were removed at the beginning of Lean 4, but many things were removed, and some of them will return at some point. Is there any update on this @Sebastian Ullrich?

#### Sebastian Ullrich (Mar 18 2019 at 16:21):

no

#### Wojciech Nawrocki (Mar 23 2019 at 18:30):

Why does `auto_param`

seem to not work as an implicit argument? I'd like to have a constructor like `foo {h: auto_param some_prop ``mytac}: Foo`

and match on it like `| foo`

rather than `| (foo _)`

, but if I put the `auto_param`

in curly brackets, it seems to not actually run the tactic when using `def abc := foo`

- it fails with "failed to synthesize placeholder" rather than trying to run `mytac`

.

#### drocta (Mar 25 2019 at 03:36):

I want the unique function from the type `empty`

to another type. I thought I might be able to do that with `empty.rec`

or `empty.rec_on`

, (going off of the tutorial's use of that with days of the week), but it looks like those both expect to be given a function from empty as an argument, when that is what I'm trying to get.

What am I missing?

#### Johan Commelin (Mar 25 2019 at 05:38):

I think this is what you want:

```lean
-- src/logic/basic.lean
def empty.elim {C : Sort*} : empty → C.
```
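A quick usage sketch (the `empty_to_nat` name is just for illustration):

```lean
import logic.basic

-- `empty.elim` is the unique map out of `empty` into any sort
def empty_to_nat : empty → ℕ := empty.elim

-- projection notation works on a hypothesis too
example (e : empty) : ℕ := e.elim
```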

#### Wojciech Nawrocki (Mar 27 2019 at 00:51):

`congr`

gave me the following goal:

```lean
γ : precontext,
Γ : context γ,
M : @term γ Γ ↑End?
⊢ @Wait γ Γ M ==
    @eq.mpr (@term γ (Γ + 0) tp.unit) (@term γ Γ tp.unit) _ (@Wait γ Γ M)
```

Having never worked with `heq`

, I'm not sure if this is provable? Clearly both sides are the same expression, but one has been passed through `eq.mpr`

to cast its type. If it's provable, what would be a good tactic/term to solve it? *EDIT*: It's probably worth noting that it has been casted to the wrong type, i.e. RHS has the same type as LHS before the cast, but a different one after. Can I undo `eq.mpr`

in here somehow?

#### Wojciech Nawrocki (Mar 27 2019 at 01:25):

Ah `h_generalize`

did it. I would still be interested in hearing if there's a more automated way of doing it.

#### Jesse Michael Han (Mar 27 2019 at 01:30):

`cc`

can sometimes handle heqs of casts which are close to being definitional equalities

#### Jesse Michael Han (Mar 27 2019 at 01:30):

however, it works better when `eq.mpr`

s are hidden beneath `cast`

s

#### Wojciech Nawrocki (Mar 27 2019 at 01:35):

What's the difference between `cast`

and `eq.mpr`

?

#### Kenny Lau (Mar 27 2019 at 01:37):

```lean
@[inline] def cast : Π {α β : Sort u}, α = β → α → β :=
λ {α β : Sort u} (h : α = β) (a : α), eq.rec a h

@[inline] def eq.mpr : Π {α β : Sort u}, α = β → β → α :=
λ {α β : Sort u} (h₁ : α = β) (h₂ : β), eq.rec_on _ h₂
```

#### Wojciech Nawrocki (Mar 27 2019 at 01:40):

Oh, it's just reversed? Okay then :)

#### Kenny Lau (Mar 27 2019 at 01:41):

```lean
@[inline] def eq.mp : Π {α β : Sort u}, α = β → α → β :=
λ {α β : Sort u}, eq.rec_on
```

#### Kevin Buzzard (Mar 27 2019 at 07:40):

There are people here who would say that if you're dealing with heqs, you're doing it wrong

#### Kevin Buzzard (Mar 27 2019 at 07:40):

Equality of types is evil in type theory

#### Kevin Buzzard (Mar 27 2019 at 07:41):

And maybe you'd be better off making an equiv if you have two types which you have identified in your mind

#### Kevin Buzzard (Mar 27 2019 at 07:41):

Many of the key facts about equality are also proved for equivs

#### Kevin Buzzard (Mar 27 2019 at 07:41):

The key missing one is `rw`

#### Kevin Buzzard (Mar 27 2019 at 07:42):

But we're working on it

#### Wojciech Nawrocki (Mar 27 2019 at 13:30):

Yeah, I (ab)use dependent types in a way that makes it necessary to prove type equality quite often. I try to give my functions built-in & semi-automated support for this by adding arguments like `(h: auto_param (expected_T = actual_T) ``some_tac)`

, but it's not always possible

#### Wojciech Nawrocki (Mar 27 2019 at 14:48):

While I'm at it, how can I `rw`

using a `heq`

? Given `Hx: M == x`

, `rw [Hx]`

tells me that `lemma is not an equality nor an iff`

.

#### Chris Hughes (Mar 27 2019 at 14:49):

With great difficulty. You can try using `heq.rec_on`

, but I think this is usually quite hard, because the resulting expression often won't type check.

#### Simon Hudon (Mar 27 2019 at 15:28):

This is a lesson you have to learn eventually: just because you can write it this way and that it is type correct doesn't mean it's a good idea. Keep the type of your definitions as simple as possible. Move your added logic to separate lemmas or separate definitions.

#### Jesse Michael Han (Mar 27 2019 at 15:47):

`heq.subst`

will let you simulate some rewriting, but as Chris said, it is very painful

#### Kevin Buzzard (Mar 27 2019 at 20:15):

If you know you are abusing dependent types then maybe you are bringing the pain on yourself. I had a maths repo where I used equality of types, because I was learning type theory and didn't understand the difference between equality and definitional equality; I ran into real trouble, which was only fixed by a refactor. What I am pushing for now is a tactic which will rewrite appropriate terms along equivs (or however you say it: if P is a predicate on rings which is mathematically reasonable then for two isomorphic rings R and S, P R will be true iff P S is, and I want a tactic which will show this -- this is something mathematicians do all the time but which seems to be painful in type theory).

#### Wojciech Nawrocki (Mar 27 2019 at 22:27):

@Kevin Buzzard Well, kind of, but not necessarily. I'm still a newcomer to type theory/theorem proving, but from what I read and some discussions I've had, my understanding is that currently Coq can handle these kinds of things much better than Lean. Adding good support for working with dependent types which are only propositionally equal could be well worth the effort. In particular, Coq has a Program framework (which I think I've mentioned here at some point) which is able to find which `eq`

s need to be proven while constructing an expression and generate them as external goals. This way we get a clean expression and all the nasty details are resolved externally. After the goals are proven, `Program`

seems to automatically insert `casts`

, more or less. Moreover, Coq can be told which tactics to use in order to automatically resolve most of such side equality goals. In PVS, I believe these things are called "Type Correctness Conditions" and are also dealt with outside the expression. HoTT also seems to provide an interesting answer to this in the form of paths, but I've only started looking into that.

#### Andrew Ashworth (Mar 28 2019 at 05:56):

Is `Program`

ready for prime time? I only read about it in CPDT, Chlipala is not big on it. http://adam.chlipala.net/cpdt/html/Subset.html

#### Wojciech Nawrocki (Mar 29 2019 at 16:21):

In Coq, `JMeq_eq`

is an axiom, but in Lean `eq_of_heq`

seems to be a lemma with no non-core axioms below it. Does that mean it follows from proof irrelevance or another axiom that Lean has built-in?

#### Mario Carneiro (Mar 29 2019 at 17:56):

@Wojciech Nawrocki Yes, it relies on proof irrelevance.

```lean
lemma eq_of_heq {α : Sort u} {a a' : α} (h : a == a') : a = a' :=
have ∀ (α' : Sort u) (a' : α') (h₁ : @heq α a α' a') (h₂ : α = α'),
    (eq.rec_on h₂ a : α') = a',
  from λ (α' : Sort u) (a' : α') (h₁ : @heq α a α' a'),
    heq.rec_on h₁ (λ h₂ : α = α, rfl),
show (eq.rec_on (eq.refl α) a : α) = a',
  from this α a' h (eq.refl α)
```

the `rfl`

on line 3 has type `eq.rec_on h₂ a = a`

, which only makes sense because `h₂ : α = α`

is defeq to `rfl`

by proof irrelevance

#### Wojciech Nawrocki (Mar 29 2019 at 18:16):

Gotcha, thanks!

#### Wojciech Nawrocki (Mar 31 2019 at 18:16):

How can I introduce a new constant into all goals using `;`

? `sometac; have a := 2,`

doesn't seem to work. *EDIT*: `all_goals { have a := 2 }`

worked. Unsure why `;`

doesn't.

#### Chris Hughes (Mar 31 2019 at 18:20):

This worked for me

```lean
example : false ∧ false :=
begin
  split; have a := 2,
end
```

However, this didn't work

```lean
example : false ∧ false :=
begin
  split,
  skip; have a := 2,
end
```

Maybe the semicolon only works for the goals generated by the last tactic.

#### Rob Lewis (Mar 31 2019 at 18:20):

`sometac; have a := 2`

works. Remember `have`

is different from `let`

. You'll get `a : ℕ`

in the context, but won't see that it's 2.

Edit: oh, I see the confusion. `;`

doesn't mean "do the second tactic to all goals." I think `t; s`

means "do `t`

to the first goal, and then do `s`

to all remaining goals."

Edit 2: I'm thinking about too many things at once and writing unclearly, sorry. Chris phrased what I meant better. `t; s`

focuses on the first goal, applies t, and then applies `s`

to all goals generated from that. "Remaining goals" meaning remaining in the focus.

#### Wojciech Nawrocki (Mar 31 2019 at 18:24):

Double-checked, it acts weirdly:

```lean
cases h: H; have a := 2,                   -- works
cases h: H, skip; have a := 2,             -- doesn't work
cases h: H, case Foo { .. }; have a := 2,  -- doesn't work
```

#### Reid Barton (Mar 31 2019 at 18:33):

What Chris said. `x; y`

applies `y`

to each goal produced by `x`

.

#### Wojciech Nawrocki (Mar 31 2019 at 18:34):

Oooh, ok! Thanks

#### Wojciech Nawrocki (Mar 31 2019 at 18:41):

Different question - in every goal I have a hypothesis of the form `h : hE' = Foo Γₑ' hΓ_1 N E_1 a`

, where `Foo`

is that goal's case. I would like to solve each goal by applying that goal/case's constructor, but with different arguments. Can I somehow "extract" which constructor was used for the case and reuse it to solve the goal without quoting expressions and all that?

#### Wojciech Nawrocki (Apr 01 2019 at 14:09):

How can I debug a `(deterministic) timeout`

when Lean tries to prove well-foundedness of a recursion given my custom tactic?

#### Wojciech Nawrocki (Apr 01 2019 at 14:13):

Oh actually I also get a "failed to prove well-foundedness" with the state:

```lean
h : hE' = EAppLeft Γₑ' hΓ_1 N E_1 a,
Γ : context γ,
M : term Γ A''
⊢ eval_ctx.sizeof
      (((⟨γ, ⟨Γₑ, ⟨Γ₁, ⟨A'', ⟨A', ⟨A_1⊸A, ⟨E, ⟨hE, ⟨E_1, ⟨a, ⟨Γₑ + Γ₁, _⟩⟩⟩⟩⟩⟩⟩⟩⟩⟩⟩.snd).snd).fst)
      (((((((((⟨γ, ⟨Γₑ, ⟨Γ₁, ⟨A'', ⟨A', ⟨A_1⊸A, ⟨E, ⟨hE, ⟨E_1, ⟨a, ⟨Γₑ + Γ₁, _⟩⟩⟩⟩⟩⟩⟩⟩⟩⟩⟩.snd).snd).snd).snd).snd).snd).snd).snd).fst)
      a
  < eval_ctx.sizeof
      (((⟨γ, ⟨Γₑ, ⟨Γₑ', ⟨A'', ⟨A', ⟨A, ⟨E, ⟨hE, ⟨E', ⟨hE', ⟨Γ, hΓ⟩⟩⟩⟩⟩⟩⟩⟩⟩⟩⟩.snd).snd).fst)
      (((((((((⟨γ, ⟨Γₑ, ⟨Γₑ', ⟨A'', ⟨A', ⟨A, ⟨E, ⟨hE, ⟨E', ⟨hE', ⟨Γ, hΓ⟩⟩⟩⟩⟩⟩⟩⟩⟩⟩⟩.snd).snd).snd).snd).snd).snd).snd).snd).fst)
      hE'
```

Since `h: hE' = EAppLeft ... a`

is in the context, why can't Lean just rewrite it using that and then show that `sizeof a < sizeof EAppLeft .. a`

?

#### Andrew Ashworth (Apr 01 2019 at 14:47):

this error log is pretty hard to understand just by looking at it

#### Andrew Ashworth (Apr 01 2019 at 14:48):

what always gets linked when these things come up: TPIL and https://github.com/leanprover-community/mathlib/blob/master/docs/extras/well_founded_recursion.md

#### Andrew Ashworth (Apr 01 2019 at 14:55):

what is not mentioned in the mathlib doc is what I usually do when I don't want to stare at a bunch of errors: prove my relation is well-founded by hand `well_founded r`

and directly use `well_founded.fix`

#### Wojciech Nawrocki (Apr 01 2019 at 15:01):

Oh yeah I'd read that, but was hoping Lean could employ a bit of automation when trying to prove the inequality. I'll try `fix`

as I don't think I can use `have`

(the fn definition is entirely in tactic mode), thanks.

#### Andrew Ashworth (Apr 01 2019 at 15:04):

oh, don't forget http://adam.chlipala.net/cpdt/html/GeneralRec.html

#### Andrew Ashworth (Apr 01 2019 at 15:04):

the definitions are sorta almost the same if you squint

#### snowbunting (Apr 02 2019 at 09:12):

What would be the correct way to define a function from ℤ to (abelian) groups? (i.e. having indexed groups G_1, G_2, ...)

I don't think that this here is working as it should. This would be a constant function, wouldn't it?

```lean
universes u
variables {α: Type u}

def my_indexes_groups (i:ℤ) := add_group α

#check my_indexes_groups 3
```

Thx :)

#### Johan Commelin (Apr 02 2019 at 09:12):

No, because then all groups have the same cardinality

#### Kevin Buzzard (Apr 02 2019 at 09:12):

You define a function from Z to `Type`

#### Johan Commelin (Apr 02 2019 at 09:12):

What you do is `G : Z → Type`

#### Johan Commelin (Apr 02 2019 at 09:13):

And then `[\for i, add_group G i]`

#### Kevin Buzzard (Apr 02 2019 at 09:13):

And then you define another function from Z to the structure of a group on the image of the first function :-)

#### Kevin Buzzard (Apr 02 2019 at 09:14):

And you also note that Lean's definition of `add_group`

doesn't require commutativity so if you're doing exact sequences you might want to use add_comm_group
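Putting the suggestions above together, a sketch of the setup (variable names invented):

```lean
universe u

-- a family of types indexed by ℤ,
-- each carrying an abelian group structure found by type class inference
variables (G : ℤ → Type u) [∀ i, add_comm_group (G i)]

-- the maps of a complex are then functions between the fibers
variables (d : Π i : ℤ, G i → G (i + 1))
```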

#### Kevin Buzzard (Apr 02 2019 at 09:15):

Johan and I are both interested in seeing exact sequences in Lean by the way, we're both mathematicians

#### Kevin Buzzard (Apr 02 2019 at 09:16):

The way to think about groups in Lean is that they are two things. First you have G, a *type*

#### Kevin Buzzard (Apr 02 2019 at 09:16):

And then you have another thing h, a term of type [group G]

#### Kevin Buzzard (Apr 02 2019 at 09:17):

And it's h that has all the data of the multiplication and the axioms

#### Kevin Buzzard (Apr 02 2019 at 09:17):

But amazingly you never have to mention h at all when you're working with groups

#### Kevin Buzzard (Apr 02 2019 at 09:18):

Because whenever Lean needs this data it gets it from G using something called type class inference

#### Kevin Buzzard (Apr 02 2019 at 09:18):

You try to do a group thing on G like multiplying two elements together

#### snowbunting (Apr 02 2019 at 09:19):

ok that works indeed! Maybe it would actually be time to ask what the difference between `Type`

and `Type u`

#### Kevin Buzzard (Apr 02 2019 at 09:19):

And Lean says "what's this multiplication they're talking about? Does my big list of type class stuff contain anything which mentions a multiplication on G? And then it looks and it finds h and says "ok I'll just use that stuff"

#### Kevin Buzzard (Apr 02 2019 at 09:20):

Type is Type 0

#### Kevin Buzzard (Apr 02 2019 at 09:20):

In type theory everything has to have a type

#### Kevin Buzzard (Apr 02 2019 at 09:20):

So unfortunately Type has to have a type

#### Kevin Buzzard (Apr 02 2019 at 09:20):

Which is Type 1

#### Kevin Buzzard (Apr 02 2019 at 09:20):

You get the picture

#### Kevin Buzzard (Apr 02 2019 at 09:21):

Type u is a random universe somewhere up in the hierarchy

#### Kevin Buzzard (Apr 02 2019 at 09:21):

It's best practice to make functions and constructions take inputs from arbitrary universes in type theory

#### Kevin Buzzard (Apr 02 2019 at 09:22):

You should do a map from Z to `Type u`

really

#### Kevin Buzzard (Apr 02 2019 at 09:23):

People might also write `Type*`

-- this is the same thing, it just saves you having to name u
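A small illustration of the universe bookkeeping described above (the definitions here are made up for the example):

```lean
universes u v

-- `Type` is notation for `Type 0`, which itself lives in `Type 1`
#check (Type : Type 1)
#check (Type 1 : Type 2)

-- a definition polymorphic over two independent universes
def pair_type (α : Type u) (β : Type v) : Type (max u v) := α × β

-- `Type*` asks Lean to invent a fresh universe variable for you
def diag (α : Type*) (a : α) : α × α := (a, a)
```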

#### snowbunting (Apr 02 2019 at 09:28):

ok so `u`

really is just a number, like `Type u`

one of `Type 0`

, `Type 1`

, ...

And if I don't want to force two variables to be both for example `Type 2`

simultaneously, then I should use `Type u`

and `Type v`

in the definition, right?

And if I write `def xxx (a: Type*) (b: Type*): Type* := sorry`

then there is no constraint forcing them into the same universe, is there?

Thx :)

#### Kevin Buzzard (Apr 02 2019 at 09:28):

That's right.

#### Kevin Buzzard (Apr 02 2019 at 09:29):

u is a natural number, but it's not a Lean natural number, it's a universe variable, which is treated differently; it's part of the core language or something.

#### snowbunting (Apr 02 2019 at 14:53):

And then

`[∀ i, add_group (G i)]`

This works great together with `variable`

, but how would I now include that into a class? This does not seem to be working:

```lean
universe u

class chain (α : Type u) :=
(group (i : ℤ) : Type u)
(diff {i : ℤ} : group i → group (i-1))
[x : ∀ i, add_comm_group (group i)]

variable (C : Type)
variable [chain C]
variables (g h : chain.group C 1)

#check g
#check chain.group C 1
#check chain.diff g
#check g + h -- has_add (chain.group C 1)
```

#### Johan Commelin (Apr 02 2019 at 14:54):

Put it with `()`

in the class. Then afterwards state:

```lean
instance foo (i : ℤ) : add_group C i := C.x i
```

#### Johan Commelin (Apr 02 2019 at 14:54):

Modulo mistakes
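A hedged sketch of what this suggestion might look like in full (the field name `ab` and the rest of the setup are guesses at what was intended):

```lean
structure chain :=
(group : ℤ → Type)
(diff : Π {i : ℤ}, group i → group (i-1))
(ab : ∀ i, add_comm_group (group i))  -- stored with (), not []

-- register the stored group structures with type class inference
instance (C : chain) (i : ℤ) : add_comm_group (C.group i) := C.ab i

-- now `+` works in each fibre
example (C : chain) (g h : C.group 1) : C.group 1 := g + h
```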

#### Johan Commelin (Apr 02 2019 at 14:56):

Also, you might be interested in `category_theory/`

#### snowbunting (Apr 02 2019 at 14:56):

Of course, I should have remembered `instance`

, used that before, thank you!

#### Kevin Buzzard (Apr 02 2019 at 17:59):

Your alpha doesn't do anything

#### Kevin Buzzard (Apr 02 2019 at 17:59):

You could just remove it, and you could change class to structure

#### Kevin Buzzard (Apr 02 2019 at 18:01):

Then a term of type `chain`

would be a bunch of abelian groups and homs

#### Kevin Buzzard (Apr 02 2019 at 18:01):

You use `class`

when there's only expected to be one term of that type

#### Scott Morrison (Apr 02 2019 at 22:05):

> Also, you might be interested in `category_theory/`

Regarding initial experiments, this isn't meant at all as discouragement --- but eventually we'd like the definition of `chain_complex`

to be as generic as possible (e.g. any category enriched in a category with_zero).

#### Wojciech Nawrocki (Apr 05 2019 at 20:42):

Is there anything I can do to debug "equation compiler failed to prove equation lemma (workaround: disable lemma generation using `set_option eqn_compiler.lemmas false`

)"?

#### Kevin Buzzard (Apr 05 2019 at 21:10):

Wow that's some error! How did you get that?

#### Patrick Massot (Apr 05 2019 at 21:11):

Kevin is so jealous...

#### Wojciech Nawrocki (Apr 05 2019 at 22:24):

:) It happens in this definition (link)

#### Kevin Buzzard (Apr 05 2019 at 22:31):

it doesn't compile for me because it needs imports I don't have :-/

#### Kevin Buzzard (Apr 05 2019 at 22:31):

but even if it did compile for me I think I wouldn't be able to help.

#### Wojciech Nawrocki (Apr 05 2019 at 23:04):

Oh, it's self-contained modulo downloading mathlib, but for some reason leanpkg.path was in my .gitignore and it's needed for import paths to resolve - fixed that

#### Reid Barton (Apr 05 2019 at 23:31):

leanpkg.path should indeed be in .gitignore (assuming you are actually using leanpkg)

#### Mario Carneiro (Apr 06 2019 at 02:15):

Is it related to the use of `let`

in the type of the function you are defining by pattern matching?

#### Wojciech Nawrocki (Apr 06 2019 at 21:09):

@Mario Carneiro it doesn't seem like it, I can inline the `let`

, same error

#### Mario Carneiro (Apr 06 2019 at 21:10):

do you have a self contained MWE?

#### Mario Carneiro (Apr 06 2019 at 21:10):

(Try putting everything needed in one file, then start deleting things that don't matter for the error)

#### Wojciech Nawrocki (Apr 09 2019 at 19:57):

@Mario Carneiro thanks, that was a good idea! It took a while but I was able to simplify it to this:

```lean
inductive foo : ℕ → ℕ → Type

def foo_fn (n m : ℕ) : Type := foo n m → foo n m

inductive is_foo_fn : Π {n m : ℕ}, foo_fn n m → Type
| IsFooEta : Π {n m : ℕ} {f : foo_fn n m},
    is_foo_fn f → is_foo_fn (λ M, f M)

open is_foo_fn

-- equation compiler failed to prove equation lemma (workaround:
-- disable lemma generation using `set_option eqn_compiler.lemmas false`)
def ext : Π {m n : ℕ} {f : foo_fn n m},
    is_foo_fn f → Σ f' : foo_fn n m, is_foo_fn f'
| _ _ _ (IsFooEta f) := ⟨_, IsFooEta (ext f).snd⟩
```

which still fails, while the following change makes the equation compiler happy:

```lean
def ext : -- good
    Π {n m : ℕ} {f : foo_fn n m},
    is_foo_fn f → Σ f' : foo_fn n m, is_foo_fn f'
| _ _ _ (IsFooEta f) := ⟨_, IsFooEta (ext f).snd⟩
```

Can you spot the difference? :) (it's the order of arguments). To me this seems like an eqn_compiler bug

And indeed this stupid-simple fix works on my actual code :D

#### Wojciech Nawrocki (Apr 09 2019 at 20:30):

@Sebastian Ullrich What do you think? Is this a bug or am I missing something obvious?

#### Kevin Buzzard (Apr 09 2019 at 20:31):

Thanks for putting in the effort. I've never seen this error before.

#### Sebastian Ullrich (Apr 09 2019 at 20:34):

Me neither

#### Patrick Massot (Apr 09 2019 at 20:34):

Can you reproduce it in Lean 4?

#### Patrick Massot (Apr 09 2019 at 20:35):

If not, then who cares about this error?

#### Kevin Buzzard (Apr 09 2019 at 20:39):

those of us who are stuck with Lean 3?

#### Kevin Buzzard (Apr 09 2019 at 20:39):

;-)

#### Andrew Ashworth (Apr 09 2019 at 22:42):

I don't have time to look into this deeply, but just remarking that I have seen 'equation compiler failed to prove equation lemma' before

#### Andrew Ashworth (Apr 09 2019 at 22:42):

a very long time ago, I think the last time I saw it there was some problem with eta reduction

#### Andrew Ashworth (Apr 09 2019 at 22:43):

but I could be hallucinating how I fixed it, since it was a while ago

#### Kevin Kappelmann (Apr 18 2019 at 09:51):

Given

```lean
def P (n : ℕ) : Prop := n > 42

example (n : ℕ) : P n = P (nat.succ n) := -- this is obviously false
begin
  unfold P, -- this unfolds both occurrences
  sorry
end
```

how can I unfold `P`

just at the right-hand side?

#### Kevin Buzzard (Apr 18 2019 at 09:52):

you can just use `change`

or `show`

#### Kevin Buzzard (Apr 18 2019 at 09:53):

```lean
def P (n : ℕ) : Prop := n > 42

example (n : ℕ) : P n = P (nat.succ n) := -- this is obviously false
begin
  show P n = (nat.succ n > 42),
  sorry
end
```

#### Kevin Buzzard (Apr 18 2019 at 09:54):

If you're planning on using the `rw`

tactic, then Lean might be very fussy about exactly which form a term is in

#### Kevin Buzzard (Apr 18 2019 at 09:55):

but conversely Lean is happy to switch between definitionally equal forms of the same term, using the `change`

and `show`

tactics (which do the same thing, `change`

having the advantage that it also works for hypotheses in your context with `change ... at H`

)

#### Kevin Kappelmann (Apr 18 2019 at 09:55):

Ah okay, thanks. If my goal statement is very long though, I end up copy-pasting a lot of text.

#### Kevin Buzzard (Apr 18 2019 at 09:56):

there are other ways

#### Kevin Buzzard (Apr 18 2019 at 09:57):

```lean
def P (n : ℕ) : Prop := n > 42

example (n : ℕ) : P n = P (nat.succ n) := -- this is obviously false
begin
  conv begin
    to_rhs,
    simp only [P],
  end,
  sorry
end
```

#### Kevin Buzzard (Apr 18 2019 at 09:57):

`conv`

mode is a mode which isn't mentioned at all in the official docs

#### Kevin Buzzard (Apr 18 2019 at 09:57):

It enables you to zoom into parts of terms

#### Kevin Buzzard (Apr 18 2019 at 09:58):

Unfortunately, when you're in conv mode, the tools available to you are completely different, and it doesn't look like `unfold`

is available yet.

#### Kevin Buzzard (Apr 18 2019 at 09:58):

So I had to use `simp only`

#### Kevin Buzzard (Apr 18 2019 at 09:58):

which is close to the same thing

#### Kevin Kappelmann (Apr 18 2019 at 09:58):

uuh, the hidden secrets. that does not seem very elegant to me though :')

#### Kevin Buzzard (Apr 18 2019 at 09:58):

The community documented `conv`

mode.

#### Kevin Kappelmann (Apr 18 2019 at 09:58):

I thought I could make use of some pattern matching or occurrence counting when I was reading https://leanprover.github.io/tutorial/A1_Quick_Reference.html A1.6.2.1

#### Kevin Kappelmann (Apr 18 2019 at 09:59):

But I am either too silly or misinterpreting what is written there.

#### Kevin Buzzard (Apr 18 2019 at 09:59):

https://github.com/leanprover-community/mathlib/blob/master/docs/extras/conv.md

#### Kevin Buzzard (Apr 18 2019 at 10:00):

Your link there is to an out of date file :-/ That's Lean 2 you're looking at

#### Kevin Buzzard (Apr 18 2019 at 10:00):

You can use `conv`

to zoom directly into the right hand side with `conv in`

or `conv at`

or something, I can never remember the details, but they're clearly documented in Patrick's conv.md write-up

#### Kevin Kappelmann (Apr 18 2019 at 10:01):

Oh damn, google fooled me

#### Kevin Buzzard (Apr 18 2019 at 10:01):

That way you avoid the `to_rhs`

conv-mode tactic

#### Kevin Buzzard (Apr 18 2019 at 10:01):

> Oh damn, google fooled me

call them up and complain

#### Kevin Buzzard (Apr 18 2019 at 10:02):

```lean
example (n : ℕ) : P n = P (nat.succ n) := -- this is obviously false
begin
  conv in (P (nat.succ n)) begin
    simp only [P],
  end,
  sorry
end
```

#### Kevin Buzzard (Apr 18 2019 at 10:03):

```lean
def P (n : ℕ) : Prop := n > 42

example (n : ℕ) : P n = P (nat.succ n) := -- this is obviously false
begin
  rw (show P (nat.succ n) = (nat.succ n > 42), from rfl),
  sorry
end
```

#### Kevin Buzzard (Apr 18 2019 at 10:03):

This way avoids conv mode and is more precise about what you want done

#### Kevin Buzzard (Apr 18 2019 at 10:04):

What exactly are you looking for? Tactic-writers here are very good.

#### Kevin Buzzard (Apr 18 2019 at 10:04):

People can just make new stuff. Lean tactics are written in Lean and some people (not me though! I'm a mathematician) can just knock them off before breakfast.

#### Kevin Kappelmann (Apr 18 2019 at 10:11):

Basically, I had a rather long equation of the form `l=r`

and I wanted to unfold an expression in `r`

by its definition, which again, is a rather long term. Hence, I neither want to re-state the whole goal nor the result of the unfolded definition. What I want to do is something like

```lean
begin
  unfold P {2}, -- this should unfold the second occurrence of P in the goal
                -- (or the third if you count from zero)
  sorry
end
```

#### Kevin Buzzard (Apr 18 2019 at 10:12):

@Gabriel Ebner is that already a tactic, do you know?

#### Kevin Buzzard (Apr 18 2019 at 10:12):

Is there any obstruction to making it?

#### Kevin Buzzard (Apr 18 2019 at 10:14):

@Kevin Kappelmann you could just do all the unfolds manually, ignore what happens to `l`

, and then once you've got `r`

into the form you want you can just delete everything and write `show _ = [what you want r to be]`

#### Kevin Buzzard (Apr 18 2019 at 10:14):

(the point being that `show`

will take `_`

for the left hand side)

#### Kevin Buzzard (Apr 18 2019 at 10:16):

I think I've now told you all the tricks I know -- but I can see that your proposed idea is nicer. It might be a pain though -- if you want to unfold a bunch of stuff in `r`

but not in `l`

you might have to keep changing the numbers, depending on which things you want to unfold in `r`

also show up in `l`

. I think that if you're adamant that you want to change `r`

and leave `l`

untouched, just use `conv`

mode and `to_rhs`

. That's only a couple of lines and I think it's better than your idea because of the issue with numbers perhaps changing depending on exactly what you want to unfold.

#### Gabriel Ebner (Apr 18 2019 at 10:17):

Yes, you can do this with `conv`

. Unfortunately, `unfold`

isn't wrapped so you need to use `simp`

.

```lean
conv { for (P _) [2] { simp [P] } },
```

#### Kevin Buzzard (Apr 18 2019 at 10:17):

OTOH I guess your idea has its merits when there is more than one occurrence of P in the RHS.

#### Kevin Buzzard (Apr 18 2019 at 10:17):

Aah, I'd forgotten `conv`

could take numbers!

#### Kevin Buzzard (Apr 18 2019 at 10:17):

Thanks Gabriel

#### Kevin Buzzard (Apr 18 2019 at 10:18):

hey what is this voodoo? `for`

? I thought functional languages didn't have `for`

loops!

#### Kevin Buzzard (Apr 18 2019 at 10:20):

Aah yes, I see now that `for`

is documented in the `conv`

document I linked to earlier.

#### Kevin Buzzard (Apr 18 2019 at 10:20):

It's about time I re-read it :D

#### Kevin Buzzard (Apr 18 2019 at 10:22):

I'll show you a Lean trick Kevin.

```lean
def P (n : ℕ) : Prop := n > 42

#print prefix P
-- P.equations._eqn_1 : ∀ (n : ℕ), P n = (n > 42)
```

When you define `P`

, Lean makes one or more "equation lemmas" for it. `unfold P`

is basically the same as `simp only [P.equations._eqn_1]`

or `simp only [P]`

for short. It's a shame there's no `unfold`

in `conv`

mode, but these are the tricks you can use to emulate it.

#### Kevin Buzzard (Apr 18 2019 at 10:23):

```lean
def P (n : ℕ) : Prop := n > 42

example (n : ℕ) : P n = P (nat.succ n) := -- this is obviously false
begin
  rw P.equations._eqn_1 (nat.succ n),
  sorry
end
```

#### Kevin Kappelmann (Apr 18 2019 at 10:23):

Okay, I summarise: `conv`

is pretty useful, I should read about it, and I guess it should also be included in the official docs at some point to avoid noobs like me wondering how to rewrite subterms :p

#### Kevin Buzzard (Apr 18 2019 at 10:24):

The Lean team is currently working on Lean 4, I don't think they're interested in documentation-writing at the minute.

#### Kevin Buzzard (Apr 18 2019 at 10:24):

If Lean 4 is a success then we'll all be porting the docs over to there

#### Kevin Buzzard (Apr 18 2019 at 10:25):

The community-written docs are really helpful for technical questions not covered by Theorem Proving In Lean.

#### Kevin Buzzard (Apr 18 2019 at 10:25):

They all live in the mathlib repo but they're not really anything to do with mathlib, they're just where these community-generated docs ended up.

#### Kevin Kappelmann (Apr 18 2019 at 10:29):

I see. I think rewriting subterms is quite common though, so I would mention it at least in the official doc. Anyway, thanks Kevin & Gabriel :)

#### Jesse Michael Han (Apr 18 2019 at 12:52):

you can use underscores with `change`

and Lean will try to infer what should be there from the current goal. that way `change _ = new_rhs`

lets you avoid copy-pasting the entire left hand side
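For instance, continuing the running `P` example from earlier in the thread, something like this should work:

```lean
def P (n : ℕ) : Prop := n > 42

example (n : ℕ) : P n = P (nat.succ n) := -- this is obviously false
begin
  -- only the new right-hand side is spelled out;
  -- Lean infers the `_` from the current goal
  change _ = (nat.succ n > 42),
  sorry
end
```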

#### Greg Conneen (Apr 21 2019 at 17:24):

How would I go about proving `∀ x : ℤ, even x ∨ odd x`? I have even and odd as

```lean
def even : ℤ → Prop := λ n, ∃ m : ℤ, 2*m = n
def odd : ℤ → Prop := λ n, ∃ m : ℤ, 2*m + 1 = n
```

I assume that I would first have to show that odd is equivalent to not even, then apply `classical.em`. However, I'm having trouble proving `odd x ↔ ¬(even x)`

#### Kevin Buzzard (Apr 21 2019 at 17:28):

My instinct would be to prove even n -> odd (n+1) etc, and then prove by induction that every integer was odd or even.

#### Kevin Buzzard (Apr 21 2019 at 17:28):

There are induction principles of the form (P 0, and (∀ n, P n → P (n+1)), and (∀ n, P n → P (n-1))) implies P n for all n

#### Greg Conneen (Apr 21 2019 at 17:29):

Ah, I see. That makes sense. I'm not familiar enough with induction in Lean yet, but I'll try it out

#### Greg Conneen (Apr 21 2019 at 17:29):

thank you, I'll respond again if I'm stuck on the inductive bit

#### Kevin Buzzard (Apr 21 2019 at 17:29):

If you just apply the induction tactic out of the box, you'll get an unpleasant surprise

#### Kevin Buzzard (Apr 21 2019 at 17:30):

You'll end up with some question about natural numbers instead, and it will be pain converting from naturals to integers.

#### Kevin Buzzard (Apr 21 2019 at 17:30):

You'll need to find an appropriate custom induction term in the library.

#### Greg Conneen (Apr 21 2019 at 17:31):

Oh, okay. I'm not familiar enough with the library to get started with that. Where would I go to look for something like that?

#### Kevin Buzzard (Apr 21 2019 at 17:31):

`int.induction_on`

should be fine for you.

#### Kevin Buzzard (Apr 21 2019 at 17:33):

`import data.int.basic`

at the top of your file (you'll need the maths library mathlib) and then apply that function, and you'll be left with goals of the form "n even or n odd -> n+1 even or n+1 odd" which you should be able to cope with. Prove n even -> n+1 odd and the other three results (e.g. n odd -> n-1 even) first and then use them all to prove the result.
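A possible skeleton along these lines, using Greg's definitions, with the inductive steps left as `sorry` (a sketch only, not a finished proof):

```lean
import data.int.basic

def even (n : ℤ) : Prop := ∃ m : ℤ, 2 * m = n
def odd (n : ℤ) : Prop := ∃ m : ℤ, 2 * m + 1 = n

example : ∀ x : ℤ, even x ∨ odd x :=
begin
  intro x,
  apply int.induction_on x,
  { left, exact ⟨0, mul_zero 2⟩ },  -- base case: 0 is even
  { intros n ih, sorry },           -- n even/odd → n+1 even/odd
  { intros n ih, sorry },           -- -n even/odd → -n-1 even/odd
end
```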

#### Greg Conneen (Apr 21 2019 at 17:34):

Neat. I have mathlib, but I find most of the algebra/analysis files unreadable despite being highly competent with the material on paper. Are there any other packages you would recommend before diving into that level of material?

#### Kevin Buzzard (Apr 21 2019 at 17:36):

I would recommend doing just what you're doing -- writing material yourself. I agree that the library is impenetrable for a beginner. Proofs are specifically shortened for efficiency reasons, they are not written to be readable. We are still lacking a bunch of good teaching material for mathematicians; I intend to make some this summer, however I also said that last summer.

#### Kevin Buzzard (Apr 21 2019 at 17:38):

Here's me trying to do some problem sheets which we give to 1st year mathematicians at Imperial: https://github.com/ImperialCollegeLondon/M1F_example_sheets

#### Greg Conneen (Apr 21 2019 at 17:39):

Oh wow, that's a substantial amount of material. Thank you so much!

#### Kevin Buzzard (Apr 21 2019 at 17:40):

The proofs are written in a far more longwinded way, they might be more instructional but there are still not many comments.

#### Kevin Buzzard (Apr 21 2019 at 17:40):

You can step through them in tactic mode and watch the state change, of course.

#### Greg Conneen (Apr 21 2019 at 17:42):

Right, I think that as long as I can just run through the tactic state I should be fine. Are there solutions for every single exercise?

#### Greg Conneen (Apr 21 2019 at 17:44):

Also, despite there being a real numbers package in mathlib, I've never seen it before. That's super useful!

#### Greg Conneen (Apr 21 2019 at 17:45):

oh wow, there's also complex numbers defined. That's amazing

#### Kevin Buzzard (Apr 21 2019 at 17:45):

I don't think I managed every single exercise. There were some which were hard to formalise.

#### Kevin Buzzard (Apr 21 2019 at 17:46):

Those answers are really just a big work in progress. They were just me trying to figure out if Lean was ready for a beginning undergraduate level maths course. It was.

#### Greg Conneen (Apr 21 2019 at 17:46):

Are you a professor at Imperial?

#### Kevin Buzzard (Apr 21 2019 at 17:46):

Yes

#### Greg Conneen (Apr 21 2019 at 17:47):

Oh, that's awesome. I'm just an undergrad math student

#### Kevin Buzzard (Apr 21 2019 at 17:48):

I felt like that when I was trying to get Lean to do my own problem sheets.

#### Greg Conneen (Apr 21 2019 at 17:50):

Haha yeah, the professor here at UVA has a tough time teaching Lean. It seems like it's one of those things where many trivial proofs on paper are just very difficult to formalise, despite familiarity

#### Kevin Buzzard (Apr 21 2019 at 17:51):

Yes. I spent about a year being very frustrated with not being able to make Lean do obvious things. I'm now finally over that initial learning hump and I can write even research level maths in Lean now, although it can take a long time.

#### Greg Conneen (Apr 21 2019 at 17:53):

Well congrats. I hope to get to that point someday. What maths research have you been able to formalise in Lean?

#### Kevin Buzzard (Apr 21 2019 at 17:53):

https://github.com/leanprover-community/lean-perfectoid-spaces

#### Kevin Buzzard (Apr 21 2019 at 17:53):

It's not finished yet but we're getting there.

#### Kevin Buzzard (Apr 21 2019 at 17:54):

I've just spent the last 30 minutes staring at https://leanprover.zulipchat.com/#narrow/stream/116395-maths/topic/Cute.20lemma/near/163135381

#### Kevin Buzzard (Apr 21 2019 at 17:54):

some technical lemma about topological spaces which we need.

#### Greg Conneen (Apr 21 2019 at 17:55):

that's... absolutely amazing. I never imagined we could do something like this in Lean so soon.

#### Kevin Buzzard (Apr 21 2019 at 17:55):

Yeah me neither

#### Kevin Buzzard (Apr 21 2019 at 17:55):

I've had a huge amount of support from other people here

#### Kevin Buzzard (Apr 21 2019 at 17:56):

18 months ago there weren't even complex numbers

#### Kevin Buzzard (Apr 21 2019 at 17:56):

Now we have schemes https://github.com/ramonfmir/lean-scheme and they work

#### Greg Conneen (Apr 21 2019 at 17:56):

Wow. Yeah, I'm now very happy that I've been introduced to this chat by Dr. Hölzl

#### Greg Conneen (Apr 21 2019 at 17:56):

Oh wow, algebraic geometry? I'll have to tell my advisor about this

#### Kevin Buzzard (Apr 21 2019 at 17:57):

we have stuff which reflects the interests of the people involved, I guess.

#### Greg Conneen (Apr 21 2019 at 17:58):

Thank you so much for all this info. I really appreciate it. I look forward to participating more in the future, when I get a little bit more acquainted with Lean.

#### Kevin Buzzard (Apr 21 2019 at 17:58):

That's the schemes thread

#### Kevin Buzzard (Apr 21 2019 at 17:58):

we're going to re-write it using the category theory library; at the minute we make all the categories by hand.

#### Kevin Buzzard (Apr 21 2019 at 17:59):

There are universe issues, as ever in category theory, so it gets a bit hairy because you have to do it all properly in a system like this

#### Greg Conneen (Apr 21 2019 at 18:00):

That makes sense. Cat theory makes for the introduction of some interesting nuances

#### Greg Conneen (Apr 21 2019 at 18:14):

How do I apply one direction of an iff? Like, if I wanted to only apply the left implication in a proof.

#### Mario Carneiro (Apr 21 2019 at 18:36):

`bla.1`

or `bla.mp`

#### Mario Carneiro (Apr 21 2019 at 18:36):

which is short for `iff.mp bla`

#### Greg Conneen (Apr 21 2019 at 18:37):

oh okay, so if my iff statement was titled bla, bla.1 would be the left implication, and bla.2 would be the right?
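For the record, the convention can be checked with two tiny examples (names hypothetical):

```lean
-- `.1` / `.mp` is the left-to-right direction
example (p q : Prop) (bla : p ↔ q) (hp : p) : q := bla.1 hp

-- `.2` / `.mpr` is the right-to-left direction
example (p q : Prop) (bla : p ↔ q) (hq : q) : p := bla.2 hq
```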

#### Greg Conneen (Apr 21 2019 at 18:48):

Also, where can I find the xenalib package?

#### Greg Conneen (Apr 21 2019 at 18:50):

nvm, found it

#### Greg Conneen (Apr 22 2019 at 13:02):

Okay, I've got a Lean file I've tinkered around with in the past, which includes some simple proofs about integers. Some propositions I've failed to prove in Lean, and the rest I'm sure are anything but efficient. I was wondering if anyone would be willing to look over this file for me and give me some pointers on how to improve my proving techniques. int_props.lean

#### Greg Conneen (Apr 22 2019 at 13:03):

It's sort of a long file, and not very organised, but any help at all would be really appreciated. Sorry in advance.

#### Greg Conneen (Apr 23 2019 at 06:12):

I'm having trouble defining structure fields. The generic construction of structures is fine, I just don't understand fields. How would I go about appending them to a given structure?

#### Johan Commelin (Apr 23 2019 at 06:16):

Do you mean extending a structure?

#### Johan Commelin (Apr 23 2019 at 06:16):

Search for "extend" in mathlib and you will find lots of examples
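The basic shape of `extends`, as a toy example (all names invented):

```lean
structure point :=
(x : ℤ) (y : ℤ)

-- `extends` copies point's fields and lets you add new ones
structure labelled_point extends point :=
(label : string)

-- structure notation accepts the inherited fields directly
def origin : labelled_point := { x := 0, y := 0, label := "O" }
```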

#### Greg Conneen (Apr 23 2019 at 06:41):

oh, cool. Thank you. I didn't even know what I was looking for until you showed me

#### Kevin Buzzard (Apr 23 2019 at 06:58):

You might want to read the chapters on inductive types and structures in TPIL.

#### Patrick Massot (Apr 23 2019 at 07:22):

@Greg Conneen what Kevin really means is: if you have any interest in learning Lean, you want to read all of TPIL. The modern way of doing that is to launch VScode, open any Lean file, type Ctrl-shift-p type lean, select "Open documentation view", click "Theorem proving in Lean". This way you can click "Try it!" links on code snippets to open them right in VScode and play with them.

#### Patrick Massot (Apr 23 2019 at 07:32):

Trying to figure out the community map: Greg, are you a student of Kevin Sullivan?

#### Greg Conneen (Apr 23 2019 at 12:47):

I was Prof. Sullivan's student last year, yes.

#### Greg Conneen (Apr 23 2019 at 12:49):

I've read the entirety of TPIL. I'm just still having trouble with implementation. I guess I'll tinker around with it a bit more

#### Greg Conneen (Apr 23 2019 at 13:09):

Okay, so if a group is defined like this

```lean
structure group : Type u → Type u
fields:
group.mul : Π {α : Type u} [c : group α], α → α → α
group.mul_assoc : ∀ {α : Type u} [c : group α] (a b c_1 : α),
  a * b * c_1 = a * (b * c_1)
group.one : Π (α : Type u) [c : group α], α
group.one_mul : ∀ {α : Type u} [c : group α] (a : α), 1 * a = a
group.mul_one : ∀ {α : Type u} [c : group α] (a : α), a * 1 = a
group.inv : Π {α : Type u} [c : group α], α → α
group.mul_left_inv : ∀ {α : Type u} [c : group α] (a : α), a⁻¹ * a = 1
```

the way I would make something like this would be

```lean
structure {u} fake_group (a : Type u) :=
(mul : Π {α : Type u} [c : group α], α → α → α)
...
```

except, I want to change `group α` to `fake_group α` and I don't understand how to do that. I also in general don't understand the bracket notation or what Π does.

#### Johan Commelin (Apr 23 2019 at 13:14):

Note, you can use

````
```lean
foobar
```
````

to get code blocks with syntax highlighting

#### Greg Conneen (Apr 23 2019 at 13:15):

oh okay, thanks

#### Johan Commelin (Apr 23 2019 at 13:16):

Note that a group is not defined the way you did. (Internally that is what Lean sees, maybe. But it isn't how it's written in the Lean source files.)

#### Johan Commelin (Apr 23 2019 at 13:16):

Tip: write `#print group`

in VScode, and `Ctrl`

-click on `group`

.

#### Johan Commelin (Apr 23 2019 at 13:16):

Or put your cursor on `group`

and hit `Ctrl - Shift - F10`

#### Greg Conneen (Apr 23 2019 at 13:17):

ohhhh I see. So a group is simply an extension of a monoid, which I assume is an extension of a semigroup.

#### Greg Conneen (Apr 23 2019 at 13:18):

...which it is, 4 lines above. Thank you so much.

#### Greg Conneen (Apr 23 2019 at 13:22):

So, in general, classes are used in the backend, but Lean interprets them as structures. What's the difference?

#### Greg Conneen (Apr 23 2019 at 13:23):

Also, is there any way to formalise has_mul and has_one without extending those classes?

#### Johan Commelin (Apr 23 2019 at 13:24):

No, the difference between classes and structures is type class inference

#### Johan Commelin (Apr 23 2019 at 13:24):

> Also, is there any way to formalise has_mul and has_one without extending those classes?

Just copy paste their definitions.
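That is, something along these lines (the `my_` names are invented to avoid clashing with the core library):

```lean
-- the core library definitions, reproduced with fresh names
class my_has_mul (α : Type*) := (mul : α → α → α)
class my_has_one (α : Type*) := (one : α)

-- or simply take the same fields directly in your own structure
structure my_monoid (α : Type*) :=
(mul : α → α → α)
(one : α)
(one_mul : ∀ a, mul one a = a)
```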

#### Greg Conneen (Apr 23 2019 at 13:24):

I don't know what you mean by that

#### Johan Commelin (Apr 23 2019 at 13:24):

The word `class`

is a red herring in this discussion

#### Johan Commelin (Apr 23 2019 at 13:24):

You could write `structure`

everywhere

#### Greg Conneen (Apr 23 2019 at 13:25):

gotcha. So the only difference is that structures aren't able to inherit the properties of other structures. So when would you use a structure over a class?

#### Johan Commelin (Apr 23 2019 at 13:26):

No, structures can inherit

#### Johan Commelin (Apr 23 2019 at 13:26):

The difference is "type class instance". Search for that in TPIL.

#### Greg Conneen (Apr 23 2019 at 13:26):

oh. Then what do you mean by "type class inference?"

#### Greg Conneen (Apr 23 2019 at 13:27):

oh okay, gotcha

#### Greg Conneen (Apr 23 2019 at 13:27):

must've skimmed over that section

#### Johan Commelin (Apr 23 2019 at 13:27):

inference, not inheritance :wink:

#### Johan Commelin (Apr 23 2019 at 13:27):

/me never read TPIL in detail. I'm a cargo cult Leaner.

#### Kevin Buzzard (Apr 23 2019 at 13:27):

The difference between a structure and a class is that classes are structures with extra magical sauce.

#### Greg Conneen (Apr 23 2019 at 13:27):

Ah, that's my confusion. Thank you for being patient with me

#### Kevin Buzzard (Apr 23 2019 at 13:27):

I was trying to write some advanced notes on type class inference

#### Kevin Buzzard (Apr 23 2019 at 13:28):

but before I wrote them I wrote some basic notes on type class inference for mathematicians

#### Kevin Buzzard (Apr 23 2019 at 13:28):

These might help. They present the same material as in TPIL but in a way far more suited to mathematicians.

#### Kevin Buzzard (Apr 23 2019 at 13:28):

They're in a branch of mathlib, hang on, I'll dig it out. I'm actually on an aeroplane at this point

#### Kevin Buzzard (Apr 23 2019 at 13:33):

I found typeclass inference really hard to learn. It was only when I began to try to do normal maths in Lean that I began to see the point of it and understand it properly.

#### Kevin Buzzard (Apr 23 2019 at 13:36):

For reasons that you don't need to worry about and which might change, Lean keeps a ring in two packages not one. The first is a type called something like R. The second is a whole bunch of data which might be called something like `_inst_1 : ring R`

and which you should never need to look at, but it's here that all the multiplication and the addition and the zero and the axioms are all stored. You will probably not need to dig out the axioms, you should just be applying lemmas, if you're doing mathematics -- but sometimes it's important to know where Lean magics up e.g. the proof of the distributivity law for the integers, and if you want to know this then you need to know about typeclass inference.
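For example, type class inference is what supplies the ring structure when you apply a general lemma like `mul_add`:

```lean
-- the `[ring R]` argument (what Kevin calls `_inst_1`) is found
-- automatically by type class inference
example {R : Type} [ring R] (a b c : R) :
  a * (b + c) = a * b + a * c := mul_add a b c

-- for ℤ the instance comes from the library instead of the context
example (a b c : ℤ) : a * (b + c) = a * b + a * c := mul_add a b c
```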

#### Greg Conneen (Apr 23 2019 at 14:10):

What if I wanted to talk about a ternary operator instead of a binary one (I want to define an algebraic heap)? I can't use has_mul, since it's strictly binary...

#### Andrew Ashworth (Apr 23 2019 at 14:12):

yup

#### Andrew Ashworth (Apr 23 2019 at 14:13):

what exactly is your question about ternary operators?

#### Greg Conneen (Apr 23 2019 at 14:17):

I want to define a class, let's start with semiheap. I only need it to have the following property, called para-associativity:

```
∀ a b c d e : α, [[a b c] d e] = [a [d c b] e] = [a b [c d e]]
```

where `[a b c]` is a ternary operator. Although, if I have to use different notation given Lean's use of square brackets, that's fine

#### Greg Conneen (Apr 23 2019 at 14:19):

I assume I have to define some sort of infix for a ternary operator, but I don't know how to implement that

#### Johan Commelin (Apr 23 2019 at 14:19):

Does the operator have a "canonical" name in the literature?

#### Andrew Ashworth (Apr 23 2019 at 14:19):

section 10.3 in TPIL describes notation

#### Johan Commelin (Apr 23 2019 at 14:19):

You will first want to define it without the ternary notation.

#### Johan Commelin (Apr 23 2019 at 14:19):

Afterward, you can define the notation, and start using it.

#### Greg Conneen (Apr 23 2019 at 14:20):

Okay. I assume that I need to give it an operation on 3 elements, then state the rule

#### Greg Conneen (Apr 23 2019 at 14:21):

And no, unfortunately there's not really a canonical name for the operator. It's just a ternary operation, in the same way that the standard operation a group deals with is just a binary one

#### Johan Commelin (Apr 23 2019 at 14:22):

Yup. So you want fields

```lean
(tern_op : X → X → X → X)
(semi_assoc : ∀ a b c d e, blabla)
```

#### Johan Commelin (Apr 23 2019 at 14:22):

> And no, unfortunately there's not really a canonical name for the operator. It's just a ternary operation, in the same way that the standard operation a group deals with is just a binary one

We "canonically" call it *multiplication*.

#### Greg Conneen (Apr 23 2019 at 14:23):

Ah, I see. That makes sense. Although, most mathematicians I know don't like to be so restrictive in terminology :P

#### Johan Commelin (Apr 23 2019 at 14:23):

I was just wondering whether there was a good alternative to `tern_op`.

#### Greg Conneen (Apr 23 2019 at 14:24):

Yeah, I figured that's what you were after. Heaps aren't really a common algebraic structure, I'm really just trying to implement this so I have a better understanding of how Lean deals with structures/classes

#### Greg Conneen (Apr 23 2019 at 14:29):

Also, am I able to say that 3 things are equal in one field without using "and"?

#### Greg Conneen (Apr 23 2019 at 14:29):

like how I wrote above

#### Patrick Massot (Apr 23 2019 at 14:31):

there are stupid indirect ways to say it, but using `and` is the reasonable one

#### Greg Conneen (Apr 23 2019 at 14:32):

Okay. Unfortunate, but I guess it's better than defining two separate fields

#### Patrick Massot (Apr 23 2019 at 14:33):

If you really want to you can build an API around this

#### Greg Conneen (Apr 23 2019 at 14:34):

No thank you lol

#### Patrick Massot (Apr 23 2019 at 14:34):

You will need lemmas about heaps saying that anyway

#### Patrick Massot (Apr 23 2019 at 14:35):

Whatever is inside your field, you'll want a lemma asserting the three relevant equalities

#### Greg Conneen (Apr 23 2019 at 14:35):

I don't know what you mean

#### Greg Conneen (Apr 23 2019 at 14:36):

Why would I need a lemma? Are you talking about showing that some object is a heap?

#### Greg Conneen (Apr 23 2019 at 14:36):

I'd expect that just implementing the classes wouldn't require lemmas

#### Johan Commelin (Apr 23 2019 at 14:36):

No, he is talking about making it usable

#### Greg Conneen (Apr 23 2019 at 14:37):

Oh, okay. So I can't refer to the fields until I define lemmas for them?

#### Patrick Massot (Apr 23 2019 at 14:39):

You will have:

```lean
class heap (X : Type*) := ...

namespace heap

variables {X : Type*} [heap X] (a b c d e : X)

lemma assoc1 : [[a b c] d e] = [a [d c b] e] := sorry
lemma assoc2 : [a [d c b] e] = [a b [c d e]] := sorry
lemma assoc3 : [[a b c] d e] = [a b [c d e]] := sorry

end heap
```

#### Chris Hughes (Apr 23 2019 at 14:39):

I would have three fields rather than an `and`

#### Johan Commelin (Apr 23 2019 at 14:39):

You can refer to them, otherwise you can't state the lemmas. What Patrick is saying is that with only the fields you'll have something that is cumbersome to use.

#### Greg Conneen (Apr 23 2019 at 14:39):

yeah I'm putting it into 3 fields. I've realised that's easier

#### Patrick Massot (Apr 23 2019 at 14:39):

The point is: whatever the implementation chosen, after writing those three lemmas you will have your three equalities

#### Greg Conneen (Apr 23 2019 at 14:40):

What does that change? Just being able to refer to the property without using semiheap.*?

#### Patrick Massot (Apr 23 2019 at 14:40):

Of course if you put all three lemmas as fields then you don't have to state these lemmas to use them (unless for some reason you want to change binder types, but that's a more advanced discussion)

#### Greg Conneen (Apr 23 2019 at 14:41):

Also, is Type* a way of referring to an arbitrary Type level without defining a universe?

#### Patrick Massot (Apr 23 2019 at 14:41):

And if you use redundant fields like that you will probably want a custom constructor

#### Greg Conneen (Apr 23 2019 at 14:43):

How would a custom constructor help me?

#### Greg Conneen (Apr 23 2019 at 14:43):

Sorry, but I'm still very new, and I don't really see the big picture yet

#### Patrick Massot (Apr 23 2019 at 14:45):

I think the discussion would be much easier if you write something and then we comment it

#### Patrick Massot (Apr 23 2019 at 14:45):

Abstract discussion won't help

#### Greg Conneen (Apr 23 2019 at 14:45):

Yeah I agree. I'll post in just a second

#### Greg Conneen (Apr 23 2019 at 15:00):

This is what I've got so far:

```lean
class semiheap (α : Type*) :=
(tern_op : α → α → α → α)
(para_assoc1 : ∀ a b c d e : α, tern_op (tern_op a b c) d e = tern_op a (tern_op d c b) e)
(para_assoc2 : ∀ a b c d e : α, tern_op (tern_op a b c) d e = tern_op a b (tern_op c d e))
(para_assoc3 : ∀ a b c d e : α, tern_op a (tern_op d c b) e = tern_op a b (tern_op c d e))

namespace semiheap

notation `[` _ _ _ `]` := λ x y z : α, tern_op x y z

variables {α : Type*} [semiheap α] (a b c d e : α)

lemma para_assoc1 : sorry := sorry
lemma para_assoc2 : sorry := sorry
lemma para_assoc3 : sorry := sorry

end semiheap
```

The semiheap as defined is fine. I'm just having trouble with the notation, since it throws an error saying I'm referencing local variables if I put it below them, and I don't know how to reference the semiheap alpha if I put it above.

#### Patrick Massot (Apr 23 2019 at 15:12):

I don't think you can use space as a delimiter in this notation. You can play with

```lean
class semiheap (α : Type*) :=
(tern_op : α → α → α → α)
(notation `[` x `|` y `|` z `]` := tern_op x y z)
(para_assoc1 : ∀ a b c d e : α, [[a | b | c] | d | e] = [a | [d | c | b] | e])
(para_assoc2 : ∀ a b c d e : α, [a | [d | c | b] | e] = [a | b | [c | d | e]])
(para_assoc3 : ∀ a b c d e : α, [[a | b | c] | d | e] = [a | b | [c | d | e]])

def semiheap.mk' {α : Type*} (op : α → α → α → α)
  (h1 : ∀ a b c d e : α, op (op a b c) d e = op a (op d c b) e)
  (h2 : ∀ a b c d e : α, op (op a b c) d e = op a b (op c d e)) : semiheap α :=
{ tern_op := op,
  para_assoc1 := h1,
  para_assoc2 := λ a b c d e, (h1 a b c d e) ▸ (h2 a b c d e),
  para_assoc3 := h2 }
```

#### Patrick Massot (Apr 23 2019 at 15:13):

The function is the custom constructor, it lets you build a structure by providing only two equalities

#### Greg Conneen (Apr 23 2019 at 15:16):

Why have we defined `.mk'`? What does it do? Also, what is `▸`?

#### Patrick Massot (Apr 23 2019 at 15:17):

Please first try to define an instance of `semiheap`

#### Greg Conneen (Apr 23 2019 at 15:17):

It clearly rewrote, but what does it do in general?
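
For reference, a minimal illustration of `▸` (this example is mine, not from the thread): `h ▸ p` rewrites the type of `p` along the equality `h`.

```lean
-- `h : a = b` lets `▸` turn a fact about `a` into a fact about `b`
example (a b : ℕ) (h : a = b) (p : a = 0) : b = 0 := h ▸ p
```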

#### Greg Conneen (Apr 23 2019 at 15:17):

Okay, I will

#### Greg Conneen (Apr 23 2019 at 16:01):

I've got it.

```lean
def tern_add : ℤ → ℤ → ℤ → ℤ := λ a b c, a+b+c

instance int_semiheap : semiheap ℤ :=
begin
  apply semiheap.mk' tern_add,
  rw tern_add, simp,
  rw tern_add, simp,
end
```

#### Greg Conneen (Apr 23 2019 at 16:01):

Thank you so much for your help.

#### Patrick Massot (Apr 23 2019 at 16:06):

Does that code compile?

#### Patrick Massot (Apr 23 2019 at 16:07):

Anyway, the point is you had to prove only two para-associativity equalities instead of three

#### Patrick Massot (Apr 23 2019 at 16:07):

You can also study (after adding `import tactic.abel` at the top of your file):

```lean
def tern_add : ℤ → ℤ → ℤ → ℤ := λ a b c, a+b+c

example : semiheap ℤ :=
{ tern_op := tern_add,
  para_assoc1 := by { intros, dsimp [tern_add], abel },
  para_assoc2 := by { intros, dsimp [tern_add], abel },
  para_assoc3 := by { intros, dsimp [tern_add], abel } }

example : semiheap ℤ :=
semiheap.mk' tern_add
  (by { intros, dsimp [tern_add], abel })
  (by { intros, dsimp [tern_add], abel })
```

#### Patrick Massot (Apr 23 2019 at 16:08):

you can also replace `dsimp [tern_add]` by `unfold tern_add`

#### Greg Conneen (Apr 23 2019 at 16:10):

Okay, I will. I'm currently implementing heaps as an extension of semiheaps. How do I avoid having to write the notation as a field over and over?

#### Patrick Massot (Apr 23 2019 at 16:12):

you need to reissue the notation command after the semiheap definition. The one inside the definition has limited scope

#### Greg Conneen (Apr 23 2019 at 16:12):

Gotcha. So at most I'll only have to write it twice

#### Patrick Massot (Apr 23 2019 at 16:13):

yes

#### Greg Conneen (Apr 23 2019 at 19:07):

Is the symmetric group defined anywhere in mathlib? If not, I'd like to go about implementing it, and in doing so would need some guidance

#### Johan Commelin (Apr 23 2019 at 19:08):

It is:

```
src/data/equiv/basic.lean: /-- `perm α` is the type of bijections from `α` to itself. -/
```

#### Greg Conneen (Apr 23 2019 at 19:09):

Thank you

#### Greg Conneen (Apr 23 2019 at 19:10):

Followup question, has Cayley's theorem been proven?

#### Johan Commelin (Apr 23 2019 at 19:13):

@Chris Hughes I guess you did Cayley somewhere, right?

#### Chris Hughes (Apr 23 2019 at 19:14):

No.

#### Chris Hughes (Apr 23 2019 at 19:14):

But it won't be hard.

#### Johan Commelin (Apr 23 2019 at 19:14):

@Greg Conneen It would be a very nice project to start with!

#### Greg Conneen (Apr 23 2019 at 19:14):

Okay. I assume it won't be. That sounds like a great thing to do.

#### Greg Conneen (Apr 23 2019 at 19:15):

I'm going to need help understanding data.equiv.basic though

#### Greg Conneen (Apr 23 2019 at 19:17):

It looks like `α ≃ β` means `α` is isomorphic to `β`, given the defn; `perm α` makes sense. I'm still not completely comfortable with set theory notation and techniques in Lean, since I was introduced almost exclusively through propositions

#### Johan Commelin (Apr 23 2019 at 19:21):

@Greg Conneen Do you have an idea how you would formalise the statement?

#### Greg Conneen (Apr 23 2019 at 19:23):

I'm looking into that now. The proof is trivial to me on paper. I should be fine in abstracting it. I'll definitely ask for help with it when I need it

#### Patrick Massot (Apr 23 2019 at 19:28):

> It looks like α ≃ β means α is isomorphic to β

This is an extremely confusing notation, but you'll get used to it

#### Patrick Massot (Apr 23 2019 at 19:29):

The first piece of truth is that `α ≃ β` is the type of isomorphisms from `α` to `β`

#### Patrick Massot (Apr 23 2019 at 19:29):

So it's not a `Prop`; it contains data

#### Greg Conneen (Apr 23 2019 at 19:29):

oh no

#### Greg Conneen (Apr 23 2019 at 19:30):

lol that's certainly unintuitive

#### Patrick Massot (Apr 23 2019 at 19:30):

The next layer of confusion is that an isomorphism from `α` to `β` is not only a map from `α` to `β` which happens to be an isomorphism; it's a bundle containing such a map, the corresponding inverse map, and two proofs
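
A paraphrase of that bundle (my own sketch; the real definition lives in `data/equiv/basic.lean` and states the two proofs via `function.left_inverse`/`function.right_inverse`):

```lean
-- the map, its inverse, and the two proofs, all packaged together
structure equiv' (α β : Sort*) :=
(to_fun    : α → β)
(inv_fun   : β → α)
(left_inv  : ∀ a, inv_fun (to_fun a) = a)
(right_inv : ∀ b, to_fun (inv_fun b) = b)
```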

#### Patrick Massot (Apr 23 2019 at 19:31):

you need to read https://github.com/leanprover-community/mathlib/blob/master/src/data/equiv/basic.lean

#### Patrick Massot (Apr 23 2019 at 19:31):

keeping the above explanation in mind

#### Greg Conneen (Apr 23 2019 at 19:32):

That's perfectly fine. I can understand that such a map needs more info packaged in with it. So, what's the actual prop that A is iso to B?

#### Patrick Massot (Apr 23 2019 at 19:32):

don't forget to first read back the section on coercions in TPIL

#### Greg Conneen (Apr 23 2019 at 19:32):

And yeah I've been reading that file

#### Patrick Massot (Apr 23 2019 at 19:33):

`nonempty (α ≃ β)`

#### Patrick Massot (Apr 23 2019 at 19:33):

This is the Prop you are looking for

#### Patrick Massot (Apr 23 2019 at 19:35):

Probably I missed that piece of information: at what stage of your math studies are you? (Knowing this will help in choosing explanations for you)

#### Greg Conneen (Apr 23 2019 at 19:38):

I don't think I've fully elaborated. I'm a 2nd year undergraduate math major currently taking my first semester of graduate real and complex analysis, and my third semester of graduate algebra. I plan on studying algebraic combinatorics, algebraic geometry, or algebraic topology. Either way, interested in algebra

#### Patrick Massot (Apr 23 2019 at 19:40):

I'm afraid my knowledge of the American university system is clearly not good enough to understand that answer. Can you give a couple of examples of theorems you recently learned in your most advanced courses?

#### Kevin Buzzard (Apr 23 2019 at 19:42):

> `nonempty (α ≃ β)`

That just says alpha bijects with beta. You want an isomorphism of what structure?

#### Johan Commelin (Apr 23 2019 at 19:43):

He doesn't need more structure. He is trying to formalise Cayley's theorem.
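
For what it's worth, one way the *statement* might be sketched with `equiv.perm` (this exact phrasing is just an illustration, not the library's):

```lean
import data.equiv.basic

-- Cayley: every group injects into its symmetric group, multiplicatively
example (G : Type*) [group G] :
  ∃ f : G → equiv.perm G, function.injective f ∧ ∀ a b, f (a * b) = f a * f b :=
sorry
```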

#### Greg Conneen (Apr 23 2019 at 19:48):

Yeah. I'm currently in a course on Wedderburn theory/commutative algebra. So, for example, we just covered the classification theorem for semisimple rings and semisimple algebras. In manifolds, we just covered the generalisation of Stokes' theorem, and we just had an ending seminar on the Poincaré lemma and de Rham cohomology.

#### Patrick Massot (Apr 23 2019 at 19:49):

Thanks, this I understand (but I have no idea how to relate this to your abstract answer)

#### Greg Conneen (Apr 23 2019 at 19:50):

Well, I'm glad I could clarify for you

#### Kevin Kappelmann (Apr 26 2019 at 10:27):

What is the recommended/most elegant way to simplify/rewrite a hypothesis/assumption while re-stating the desired outcome of the simplification/rewriting? For example, doing

```lean
example (hyp : true ∧ true = false) : false :=
have hyp : true = false, by { simp at hyp, assumption },
sorry
```

is not quite perfect, as I have to write `by { simp at hyp, assumption }` (preferably, I'd like to simply write `by simp at hyp`) and it leaves me with two hypotheses called `hyp`, namely

```lean
hyp : true ∧ true = false
hyp : true = false
```

Is there something like `change` for hypotheses?

#### Johan Commelin (Apr 26 2019 at 10:59):

There is `rwa` and `simpa`. Do those help you?

#### Kevin Kappelmann (Apr 26 2019 at 11:01):

Alright, that solves the former issue. Can I also somehow drop the non-simplified version from the list of assumptions?

#### Johan Commelin (Apr 26 2019 at 11:03):

@Kevin Kappelmann `replace hyp : bla, by foobar`?

#### Kevin Kappelmann (Apr 26 2019 at 11:08):

Yeah, that's exactly what I was looking for! Thanks :) I do not think `replace` is mentioned in the tutorial. Is there some sort of document/website with "advanced tactics/APIs" that I can consult in such cases?

#### Johan Commelin (Apr 26 2019 at 11:09):

Which tutorial?

#### Johan Commelin (Apr 26 2019 at 11:09):

See the `docs/` folder in mathlib. There is some stuff there.

#### Kevin Kappelmann (Apr 26 2019 at 11:10):

This document: https://leanprover.github.io/theorem_proving_in_lean/theorem_proving_in_lean.pdf

#### Patrick Massot (Apr 26 2019 at 11:38):

> Is there something like `change` for hypotheses?

Yes, it's called `change`

#### Patrick Massot (Apr 26 2019 at 11:39):

If you have `hyp : 1 + 1 = 2` you can `change 2 = 2 at hyp`

#### Kevin Kappelmann (Apr 29 2019 at 12:20):

It's me again :') I'm having some problems when using `let`-expressions + cases/induction. For example, this does not work:

```lean
example (n m : ℕ) : n + m = m + n :=
begin
  let s := n + m,
  induction n,
  case nat.zero : { sorry }, -- error: could not find open goal of given case
  case nat.succ : { sorry }
end
```

whereas this works

```lean
example (n m : ℕ) : n + m = m + n :=
begin
  induction n,
  case nat.zero : { sorry },
  case nat.succ : { sorry }
end
```

Does someone know what I am doing wrong?

#### Kevin Buzzard (Apr 29 2019 at 12:24):

Maybe that's a bug in `case`? You know that you don't need it, right?

```lean
example (n m : ℕ) : n + m = m + n :=
begin
  let s := n + m,
  induction n,
  { sorry },
  { sorry }
end
```

#### Kevin Kappelmann (Apr 29 2019 at 12:25):

Yep, I'd really like to keep the `case nat.xxx` annotations, though, to keep my proofs more readable.

#### Kevin Buzzard (Apr 29 2019 at 12:25):

Make it a comment within the `{}`. And complain to the devs :-)

#### Kevin Kappelmann (Apr 29 2019 at 12:27):

Haha, alright :D Thanks

#### Kevin Buzzard (Apr 29 2019 at 12:29):

@Simon Hudon is this a bug?

#### Simon Hudon (Apr 29 2019 at 12:35):

I think it is. I'll look into it

#### Greg Conneen (Apr 30 2019 at 02:51):

Does `unfold` actually do anything other than change what the goal looks like? Isn't `#print` or `Ctrl-click` on what you're unfolding a replacement, since it's just a method of getting information about your goal? It seems that any proof can have the `unfold` statement removed, and it works just the same

#### Mario Carneiro (Apr 30 2019 at 03:01):

yes and no (and no). `unfold` will rewrite with equations that are not necessarily definitional equalities, so it might do more than just change the appearance of the goal. It's really a wrapper around `simp` with particular config options. `dunfold` is closer to your description, as it only does defeq simplifications. But there are two reasons that it's still not correct to say that it "doesn't actually do anything other than change what the goal looks like" and "any proof can have the `dunfold` statement removed and it works just the same". First, it does actually do something - it inserts an `id` term into the result (the partial proof being constructed). So you can see the result of `dsimp` and `dunfold` applications in the resulting proof if you `#print` it. This is done to keep typechecking time down by remembering the simplification path in the proof term.

The more important reason why `dunfold` and friends can't necessarily be removed from a proof is that many tactics depend on the syntactic shape of the goal or a hypothesis. For example, `rw` will not match against the goal if some definitional unfolding is required to see the constant that appears. For example, if the goal is `2 * n = 0` where `n : nat`, then this is defeq to `n + n = 0`, but `rw add_comm` would only work on the second form of the goal. So anything that "changes what the goal looks like" could potentially affect whether a later tactic succeeds. That said, many tactics don't care about anything beyond defeq, in particular `apply`, `refine` and `exact`, and in many cases you can remove superfluous `unfold` and `dunfold` tactics without breaking the proof.
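
A tiny illustration of the "syntactic shape" point (my example, not Mario's): `show` changes only how the goal is displayed, up to definitional equality, yet that display can decide whether a later syntactic step applies.

```lean
example (n : ℕ) : n + 0 = n :=
begin
  show n = n,  -- allowed because `n + 0` is definitionally `n`
  refl
end
```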

#### Simon Hudon (May 03 2019 at 21:45):

@Kevin Kappelmann I now have a fix for your `let`/`case` issue. It should be in the next nightly build of Lean 3.5.0c

#### Greg Conneen (May 04 2019 at 20:07):

Real quick, what is a meta variable and why is `tactic.finish` absolutely packed with them? I'm not sure exactly what's going on, but it seems that any trivial propositional technique is proven immediately with `by finish`

#### Patrick Massot (May 04 2019 at 20:08):

What do you mean, `finish` is packed with metavariables?

#### Patrick Massot (May 04 2019 at 20:09):

A meta-variable is a kind of hole Lean will have to fill in. It can be the goal you need to solve or some implicit argument to be inferred by unification

#### Greg Conneen (May 04 2019 at 20:13):

I just mean that when I pressed `Ctrl-Click` on `tactic.finish`, I came upon a file that was filled with the keyword `meta`, and I wasn't sure what any of it meant

#### Patrick Massot (May 04 2019 at 20:14):

oohh

#### Patrick Massot (May 04 2019 at 20:14):

that's a different meta

#### Greg Conneen (May 04 2019 at 20:15):

I suppose so, I just assumed that `meta def` was defining a metavariable, although I suppose that wouldn't make sense given my preconceived notion of what a metavariable is

#### Patrick Massot (May 04 2019 at 20:15):

`meta` in this context means "unsafe". It's code that won't be checked by Lean. In particular, it doesn't have to be provably terminating

#### Greg Conneen (May 04 2019 at 20:15):

Why would one ever need such a thing?

#### Patrick Massot (May 04 2019 at 20:15):

Typically tactics (the commands you type between begin and end) are meta. They produce proofs that *will* be checked by Lean. But the way they produce those proofs is freestyle

#### Greg Conneen (May 04 2019 at 20:17):

Ah, okay. So it's not just a cheeky way of making axioms. So the reason I can solve so many things with `by finish` is because all of the structures in `finish` are using this meta keyword? I assume without it, there wouldn't be a way of taking arbitrary propositions and figuring out how to solve them

#### Greg Conneen (May 04 2019 at 20:18):

That is, `finish` wouldn't know how to solve a proposition handed to it if it weren't in a specific order of some sort

#### Scott Morrison (May 04 2019 at 20:18):

`meta` code can do non-terminating recursion, and it can also work with "reflected" versions of mathematical objects, i.e. their actual representations as `expr` objects in the C++ code.

#### Patrick Massot (May 04 2019 at 20:18):

`finish` constructs a proof for you

#### Greg Conneen (May 04 2019 at 20:19):

Okay, that's really cool @Scott Morrison

#### Patrick Massot (May 04 2019 at 20:19):

I don't think there is any risk of non-termination here. But the type of Lean expressions is meta, for reasons too long to explain (unless you understand what Scott wrote)

#### Scott Morrison (May 04 2019 at 20:20):

This is one of the lovely things about Lean --- you can write new tactics (i.e. programs that construct proofs, but aren't necessarily themselves correct) in Lean itself.

#### Greg Conneen (May 04 2019 at 20:20):

I understand C++ enough to generally know what he means

#### Scott Morrison (May 04 2019 at 20:20):

In previous interactive theorem provers you needed to step out, either into the base implementation language, or some DSL, to write new tactics.

#### Greg Conneen (May 04 2019 at 20:21):

So, even with meta, there's still no way to prove false? Because a bad tactic won't construct a good proof?

#### Greg Conneen (May 04 2019 at 20:21):

That makes sense

#### Scott Morrison (May 04 2019 at 20:21):

The `meta` keyword *protects* the real maths from these "helper" functions that we write to help construct proofs.

#### Greg Conneen (May 04 2019 at 20:21):

ahh

#### Scott Morrison (May 04 2019 at 20:22):

Exactly. The tactic framework lets you *run* meta code to produce proof terms, but those proof terms will be rejected if they refer to any `meta` functions.
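
A concrete sketch of that (my own toy example, with a hypothetical name): a one-line tactic written in Lean itself is `meta`, but the proof term it hands back is still kernel-checked.

```lean
-- `my_assumption` is unsafe code; the proofs it produces are not
meta def my_assumption : tactic unit := tactic.assumption

example (p : Prop) (h : p) : p := by my_assumption
```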

#### Greg Conneen (May 04 2019 at 20:22):

so, how would a noob go about using meta? Or, should I just not even touch the thing?

#### Patrick Massot (May 04 2019 at 20:23):

It depends on your taste, background and goals

#### Scott Morrison (May 04 2019 at 20:23):

Read https://github.com/leanprover-community/mathlib/blob/master/docs/extras/tactic_writing.md

#### Scott Morrison (May 04 2019 at 20:23):

And start reading all the files in `src/tactic/` in the mathlib repository.

#### Scott Morrison (May 04 2019 at 20:23):

and ask here :-)

#### Patrick Massot (May 04 2019 at 20:23):

Yes, reading that tutorial will give you a first glance at what it looks like

#### Patrick Massot (May 04 2019 at 20:24):

Reading `src/tactic/` will give you a lot more (after a *lot* more time)

#### Greg Conneen (May 04 2019 at 20:24):

Great, thank you. My goal generally is to git gud in Lean, but eventually I want to start implementing some actual maths. I expect that will take me some time, given I'll have to read quite a bit of mathlib

#### Patrick Massot (May 04 2019 at 20:25):

You don't have to read all of mathlib before starting to implement actual maths

#### Greg Conneen (May 04 2019 at 20:28):

sure, but I'd like to understand a chunk of what's already there on a fundamental level

#### Scott Morrison (May 04 2019 at 20:28):

Do you have an idea what maths you'd like to do?

#### Scott Morrison (May 04 2019 at 20:29):

If you're ever tempted to add documentation to stuff you're reading, documentation-only pull requests are very welcome. :-)

#### Greg Conneen (May 04 2019 at 20:29):

I'd like to define a manifold, and maybe prove the inverse/implicit function theorem

#### Greg Conneen (May 04 2019 at 20:30):

I just really need to look over how topology and euclidean space are implemented first

#### Greg Conneen (May 04 2019 at 20:30):

Also, do we have the definition of a metric space?

#### Kevin Buzzard (May 04 2019 at 20:32):

@Greg Conneen If you want to implement some actual maths in Lean, then don't try to get good at Lean first, just try to implement some actual maths in Lean, get stuck, ask for help here, and you will succeed.

#### Sebastien Gouezel (May 04 2019 at 20:32):

I am precisely working on the definition of a manifold. Work in progress (very much in a preliminary state) at https://github.com/sgouezel/mathlib/blob/aa7fbab39d1e6a01665e24e6197b09338e855467/src/geometry/manifolds/basic.lean#L1598

#### Kevin Buzzard (May 04 2019 at 20:32):

I have no idea how to write tactics. Tactics are written in meta code because they can fail. You don't need to know anything about tactic-writing (which is what people use meta code for) to write maths.

#### Kevin Buzzard (May 04 2019 at 20:33):

However, what you do need to know is what is already there and what is being worked on, because this changes *fast* here

#### Kevin Buzzard (May 04 2019 at 20:34):

Yes we have metric spaces. You can see this yourself by firing up a project with mathlib as a dependency in VS Code and searching for metric space. This is a really important beginner skill to learn.

#### Kevin Buzzard (May 04 2019 at 20:35):

Make sure that "the cog doesn't have a box around it" in "files to exclude" in VS Code. That way you can search your dependencies.

#### Kevin Buzzard (May 04 2019 at 20:35):

Several people would like to define manifolds and people are sort of in the middle of things, so it might not be a perfect project right now for a beginner.

#### Kevin Buzzard (May 04 2019 at 20:36):

Definitions are harder than theorems; proofs can be sloppy but it's important to get definitions right.

#### Kevin Buzzard (May 04 2019 at 20:36):

One big obstruction to doing much with manifolds is that we don't have enough calculus, but again people are working on this and things are changing fast.

#### Kevin Buzzard (May 04 2019 at 20:37):

Bump functions are still a little out of reach at the minute, but it won't be long until we have them.

#### Kevin Buzzard (May 04 2019 at 20:38):

For polynomials we have multivariable ones and single variable ones; I am not an expert in either analysis or Lean-analysis, but I am wondering whether we are going to need a robust theory of calculus in one real variable, proving basic theorems such as that the derivative of the integral is the function. I don't think we even have that.

#### Kevin Buzzard (May 04 2019 at 20:39):

We have no complex analysis worth mentioning, not even integrating along a reasonable path.

#### Kevin Buzzard (May 04 2019 at 20:40):

Because of our lack of analysis, this makes manifolds harder to do, however having goals like manifolds in mind is driving the library forwards in the right direction (as far as I am concerned). Lean seems to me to be focussing on the kind of mathematics mathematicians do, as opposed to the kind of mathematics that computer scientists have managed to pick up somehow.

#### Greg Conneen (May 04 2019 at 20:43):

Gotcha, thanks for the information. Do you think that complex would be a good thing to get working on, or do we not have it because it's hard to implement?

#### Kevin Buzzard (May 04 2019 at 20:49):

I am not an analyst and it's been nearly 30 years since I thought about the basics, Cauchy's integral formula etc. I've never taught the course either, so I am not the person to ask.

#### Kevin Buzzard (May 04 2019 at 20:50):

One thing I know was an issue is that someone has to make a decision about in what generality to define path integrals -- C^infinity paths isn't good enough because you want to integrate round semicircles and rectangles etc.

#### Kevin Buzzard (May 04 2019 at 20:51):

I really don't know what's there already though; I know a lot of the algebra stuff in Lean but the analysis stuff is changing quickly at the minute.

#### Patrick Massot (May 04 2019 at 20:56):

Kevin, the way this is going, we'll soon have manifolds with boundary *and* corners. Once we have differential forms, this will be much more than enough for rectangles and semi-circles

#### Kevin Buzzard (May 04 2019 at 21:17):

Woo hoo, we'll have de Rham cohomology one day. Is this in any other theorem prover?

#### Wojciech Nawrocki (Jun 10 2019 at 12:24):

Is it possible to "unimport" `core`/`prelude`, i.e. the basic definitions of `nat`, `eq`, etc., leaving an environment pretty much free of any definitions?

#### Kevin Buzzard (Jun 10 2019 at 12:25):

You can put `prelude` at the top of your file

#### Kevin Buzzard (Jun 10 2019 at 12:25):

which translates to "this file is part of the prelude" (so don't be importing all the prelude)

#### Kevin Buzzard (Jun 10 2019 at 12:25):

I've never used it, I've just seen people suggest it

#### Wojciech Nawrocki (Jun 10 2019 at 12:26):

Yup, that seems to work. Thank you!

#### Wojciech Nawrocki (Jun 11 2019 at 22:56):

Is the notation `h.fn arg`, given `class myclass (α : Type*) := (fn : α → ℕ)` and `[h : myclass α]`, discouraged? It stops working, for example, if `fn : ℕ → α`, because the type argument `α` to `myclass.fn` becomes explicit, i.e. `myclass.fn : Π (α : Type u_1) [c : myclass α], ℕ → α`.

#### Adrian Chu (Jun 19 2019 at 15:44):

Given a positive integer n, how to define a vector with n entries? Thanks

#### Kenny Lau (Jun 19 2019 at 15:45):

`vector α n`

#### Adrian Chu (Jun 19 2019 at 15:46):

Oh, and say x is the vector, how do we decribe its i-th entry?

#### Kenny Lau (Jun 19 2019 at 15:46):

```lean
import data.vector

universes u
variables (α : Type u) (n : ℕ)

#check vector α n
```

#### Kenny Lau (Jun 19 2019 at 15:47):

```lean
import data.vector

universes u
variables (α : Type u) (n : ℕ) (v : vector α n) (k : fin n)

#check v.nth k
```

#### Adrian Chu (Jun 19 2019 at 15:55):

how to do summation from i=1 to n ?

#### Kenny Lau (Jun 19 2019 at 16:00):

```lean
import data.vector2

universes u
variables (α : Type u) [has_zero α] [has_add α] (n : ℕ) (v : vector α n)

#check v.to_list.sum
```

#### Adrian Chu (Jun 19 2019 at 16:18):

One more question: how do we define a new 3-vector, say `(x, y, x+y)`, where `x` and `y` are variables?

#### Adrian Chu (Jun 19 2019 at 16:20):

and more importantly, is there some online resource where I can read and get familiarized with these simple commands? I skimmed through theorem_proving_in_lean.pdf and couldn't find what I want... I feel like i'm asking too many noob questions like an idiot lol

#### Kenny Lau (Jun 19 2019 at 16:23):

```lean
import data.vector

universes u
variables (α : Type u) [has_add α] (x y : α)

#check (⟨[x, y, x+y], rfl⟩ : vector α 3)
```

#### Adrian Chu (Jun 19 2019 at 16:25):

how about an n-vector like (x, 2x, 3x, ..., nx)? :mischievous:

#### Bryan Gin-ge Chen (Jun 19 2019 at 16:25):

> is there some online resource where I can read and get familiarized with these simple commands

Unfortunately there isn't much beyond TPiL. You'll have to get used to reading source files. Here's data.vector.

#### Adrian Chu (Jun 19 2019 at 16:26):

> is there some online resource where I can read and get familiarized with these simple commands
>
> Unfortunately there isn't much beyond TPiL. You'll have to get used to reading source files. Here's data.vector.

Oh i see, thanks

#### Floris van Doorn (Jun 19 2019 at 16:28):

Currently, if you want to figure these things out yourself, the best way is just to look through the library files in mathlib (and core). You can go to `data/vector2` in mathlib or the file Bryan mentioned in core. Also, operations on `list` might be useful to know for vectors. Other ways to find things you're searching for:

- Browse through files (either on GitHub or locally)
- "Go to definition" and "peek definition" (F12 and alt-F12 / option-F12?)
- Search in VSCode (ctrl+shift+F)
- Search on GitHub
- Execute `#print prefix vector` in Lean
- Find lemmas using the tactic `library_search`

#### Floris van Doorn (Jun 19 2019 at 16:29):

(x, 2x, 3x, ..., nx)

without looking at the library: hopefully `vector.range` and `vector.map` both exist, from which you should be able to figure it out (if not, use `list.range` and/or `list.map`)

#### Kenny Lau (Jun 19 2019 at 16:35):

how about an n-vector like (x, 2x, 3x, ..., nx)? :mischievous:

```
import algebra.group_power data.pnat data.vector

universes u
variables {α : Type u} [has_add α]

def multiple (x : α) : ℕ → α
| 0     := x -- junk value
| 1     := x
| (n+2) := multiple (n+1) + x

variables (x : α) (n : ℕ)

#check (⟨(list.range n).map (λ i, multiple x (i+1)), by simp⟩ : vector α n)
```

#### Floris van Doorn (Jun 19 2019 at 16:43):

If `α` is an `add_monoid`, you can use `add_monoid.smul` instead of `multiple`

#### Adrian Chu (Jun 19 2019 at 17:00):

```
import data.vector2

universes u
variables (α : Type u) [has_zero α] [has_add α] (n : ℕ) (v : vector α n)

#check v.to_list.sum
```

Is there a way to do summation without using vector?

#### Johan Commelin (Jun 19 2019 at 17:02):

You might be interested in `finset.sum` and `finset.range`. The first gives you sums, the second gives you the `finset` `{0, ..., n-1}`.
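
Putting those two suggestions together, a small sketch (assuming mathlib's `algebra.big_operators`; `sum_up_to` is an illustrative name):

```
import algebra.big_operators

-- (finset.range n) is {0, ..., n-1}; finset.sum adds up f over it
def sum_up_to (f : ℕ → ℕ) (n : ℕ) : ℕ :=
(finset.range n).sum f

#eval sum_up_to (λ i, i * i) 4 -- 0 + 1 + 4 + 9 = 14
```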

#### Kevin Buzzard (Jun 19 2019 at 17:04):

@Adrian Chu the problem is that TPIL just deals with core Lean, and you are asking about how to use stuff in the maths library. When I was learning the basics, I found the maths library very intimidating, but when I realised that I should just stop reading the proofs and instead just look at the definitions, I found that I could proceed with my mathematics very nicely, which in turn led to more understanding, which in turn made reading the proofs which I had been ignoring all this time much easier.

One thing I learnt fairly quickly was to abstract exactly what I wanted. For example you asked how to make (x, 2x, 3x, ..., nx). But in fact you can break this down into smaller questions. Probably you know well how to make a function like `λ n, (n + 1) * x`, so then you realise that what you really need is how to make (f 0, f 1, ..., f m). Now from general functional programming you might know that they like things like working out how to take a function f and to apply it to a vector like (0 1 2 ... n), so now you have two smaller questions -- how to make a vector like (0 1 2 ... n) and how to apply a function to every element of a vector. By this stage we are in the world where these ideas have a common name -- `range` for (0 1 2 ... n) and `map` for applying a function to a bunch of things at once. Now you can begin to guess the actual names of the functions you want.

Learning to think this way was a very important stepping stone for me; I slowly began to realise that I was developing a method for being able to answer my own questions of this nature. I am still learning, but this is the way to think about it. For every question, figure out how to build it from general tools and then figure out where those general tools are, perhaps by figuring out first what they should be called.
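
As a sketch of that decomposition (specialised to ℕ to keep it simple; `multiples` is an illustrative name):

```
-- (x, 2x, ..., nx) as a list: map (λ i, (i+1) * x) over range n
def multiples (x n : ℕ) : list ℕ :=
(list.range n).map (λ i, (i + 1) * x)

#eval multiples 3 4 -- [3, 6, 9, 12]
```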

#### Adrian Chu (Jun 19 2019 at 17:16):

I see... I feel that TPiL is like a book teaching grammar, while mathlib is a dictionary. And knowing all the grammar does not immediately guarantee one can fluently read a dictionary, let alone write sentences or paragraphs.

#### Kevin Buzzard (Jun 19 2019 at 19:27):

If you look through something like `data.list.basic` and take a look at just the names and the statements of the theorems, you can learn a lot about mathlib's naming conventions. The description above (breaking what you want down into small pieces) is I think how you're supposed to think about functional programming; the same sort of themes come up again and again; import `data.finset` and then use `#check` to look at the types of `list.map` and `multiset.map` and `finset.map` and it slowly dawns on you that `map` means a very specific thing in this game. Then try `list.range` and `multiset.range` and `finset.range`. It will slowly dawn on you why people are suggesting that you use `vector.map` and `vector.range` without even knowing if these functions are defined in Lean -- and if they're not then it's not hard to make them.

#### Luis Berlioz (Jun 20 2019 at 04:08):

Can I place a variable declaration in the middle of a proof?

For example instead of this:

```
variable s : α

example : r → (∃ x : α, r) :=
assume hr : r,
show (∃ x : α, r), from ⟨s, hr⟩
```

I want the variable `s : α` declared inside the proof, like:

```
example : r → (∃ x : α, r) :=
assume hr : r,
variable s : α
show (∃ x : α, r), from ⟨s, hr⟩
```

#### Johan Commelin (Jun 20 2019 at 04:10):

No, that's not possible. Maybe you mean `have` or `let`?

#### Johan Commelin (Jun 20 2019 at 04:11):

The goal is to prove that some `α` exists. You can't just grab it out of nowhere.

#### Luis Berlioz (Jun 20 2019 at 04:13):

Ok, that makes sense.

#### Adrian Chu (Jun 20 2019 at 10:14):

how to fix

```
def y := [1,2,3]
#eval 4+y.nth 2
```

? why can't I add?

#### Mario Carneiro (Jun 20 2019 at 10:16):

nth returns `option nat`

#### Mario Carneiro (Jun 20 2019 at 10:17):

you should be able to use `y.nth_le 2 dec_trivial`

#### Adrian Chu (Jun 20 2019 at 10:25):

oh it works, thanks! but what do `.nth_le` and `dec_trivial` mean?

#### Mario Carneiro (Jun 20 2019 at 10:26):

no one wants an array out of bounds exception

#### Mario Carneiro (Jun 20 2019 at 10:26):

the `dec_trivial` is a proof that `2 < 3`
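
Putting the two answers together (core `list` only; the comments note what each call gives back):

```
def y : list ℕ := [1, 2, 3]

#check y.nth 2                    -- option ℕ, so `4 + y.nth 2` doesn't typecheck
#eval 4 + y.nth_le 2 dec_trivial  -- 7; `dec_trivial` proves 2 < y.length
```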

#### Adrian Chu (Jun 20 2019 at 10:32):

i see

#### Adrian Chu (Jun 20 2019 at 10:52):

How about

```
def y := [1,2,3]
def g (i : fin 3) : ℕ := y.nth_le i-1
def f (i : fin 3) : ℕ := 4 + y.nth_le i-1 dec_trivial
```

? What's wrong?

#### Reid Barton (Jun 20 2019 at 11:42):

Clearly you intended to have parentheses around `i-1`, but I'm not sure what you are trying to achieve

#### Wojciech Nawrocki (Jun 20 2019 at 11:45):

Yep, function application binds tighter than subtraction, so to express what you want you need to put parentheses around `i-1`. You're also missing a proof that `i-1 < list.length y` in `g` (no second argument is given).

#### Adrian Chu (Jun 20 2019 at 12:03):

Yep, function application binds tighter than subtraction, so to express what you want you need to put parentheses around `i-1`. You're also missing a proof that `i-1 < list.length y` in `g` (no second argument is given).

How exactly should I give the proof then?

#### Mario Carneiro (Jun 20 2019 at 12:04):

You could use `fin.last` instead

#### Mario Carneiro (Jun 20 2019 at 12:06):

If you were doing this from scratch, you would be proving `i-1 < 3` given `i < 3`. There are theorems in `data.nat.basic` to help with this

#### Mario Carneiro (Jun 20 2019 at 12:06):

alternatively, you can skip the proof and just handle the possibility of error using `nth`

#### Wojciech Nawrocki (Jun 20 2019 at 12:14):

I'm kinda surprised that `∀ (n m k : ℕ), n < m → n - k < m` isn't already in mathlib.

#### Mario Carneiro (Jun 20 2019 at 12:36):

it's just the composition of `lt_of_le_of_lt` and `sub_le`
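
Spelled out, the composition looks like this (the lemma name is illustrative; `nat.sub_le n k` is the core fact `n - k ≤ n`):

```
lemma sub_lt_of_lt (n m k : ℕ) (h : n < m) : n - k < m :=
lt_of_le_of_lt (nat.sub_le n k) h
```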

#### Scott Morrison (Jun 20 2019 at 22:45):

That said, I think we should add it. Finding these lemmas is excruciating for beginners (who haven't even learnt that you can in principle guess the names of most basic facts, let alone learnt how to do it).

#### Scott Morrison (Jun 20 2019 at 22:47):

How bad is the downside of swamping Lean with these sort of "composition of two lemmas" lemmas?

#### Mario Carneiro (Jun 20 2019 at 22:47):

there is a combinatorial explosion

#### Mario Carneiro (Jun 20 2019 at 22:48):

it's not even clear which compositions are the most common

#### Mario Carneiro (Jun 20 2019 at 22:48):

and we don't even have the complete set of basic lemmas

#### Mario Carneiro (Jun 20 2019 at 22:49):

I would rather put more smarts into `library_search` type provers than flood the library with random facts

#### Scott Morrison (Jun 20 2019 at 23:27):

/me needs to get back to `back`

#### Tim Daly (Jun 21 2019 at 00:35):

There need not be a combinatorial explosion, nor does the naming convention have to be so baroque.

It appears to me that Lean is using a flat namespace. The naming convention makes it really hard to guess what you need.

Farmer and Carette (McMaster U.) have the notion of "tiny theories" which introduce a single axiom or a single signature. These can be combined to form larger objects which inherit the axioms and signatures. So, for instance, there would be a "commutative" axiom which can be inherited everywhere it makes sense.

This hierarchical organization enables re-use of axioms and signatures and allows the same name to occur in different paths in the inheritance graph.

#### Mario Carneiro (Jun 21 2019 at 00:37):

Lean has namespaces, which are used to label the major components in the theorem, or the broad area in which it lives

#### Mario Carneiro (Jun 21 2019 at 00:39):

The naming convention is designed to be easy to guess based on the statement, so you can think of a statement that you think is true and then search for the corresponding name. I'm curious how you would propose to improve on that scheme

#### Mario Carneiro (Jun 21 2019 at 00:41):

The number of theorems does increase exponentially as the depth increases. If there are n theorems, then there are O(n^2) ways to compose two of them, O(n^3) ways to compose three, and so on. Even 2n theorems is a major increase; n^2 is totally intractable

#### Tim Daly (Jun 21 2019 at 00:44):

Namespaces don't implement an inheritance graph in any generality. You'd like to state the commutative axiom in one place in the graph and have it available wherever it is used. What you really want is general "name polymorphism" so that the name 'factor' can have different semantics in different contexts. Using 'dotted namespace notation' forces the user to manage the namespace. This doesn't scale.

#### Scott Morrison (Jun 21 2019 at 00:45):

Well... only a fraction of those n^2 theorems actually make sense. I wish I had a better sense of what that fraction was. I guess this could be automated. :-)

#### Scott Morrison (Jun 21 2019 at 00:45):

@Tim Daly, of course Lean does state the commutative axiom only once.

#### Scott Morrison (Jun 21 2019 at 00:46):

(Okay, twice, once for "additive" structures and once for "multiplicative", which is lame, but still, it's only 2. :-)

#### Tim Daly (Jun 21 2019 at 00:58):

How does Lean handle a Lie algebra? Lie brackets are skew-symmetric and non-associative. So

`x * y = -y * x` and

`x * (y * z) = (x * y) * z + y * (x * z)`

Do you have to "build it from nothing" using an odd naming convention for things like "algebra.lie.cross_product_skew_symmetric"?

Axiom, which uses something similar to Farmer/Carette "tiny theories" has 10,000 functions but a much smaller set of names.

#### Scott Morrison (Jun 21 2019 at 01:01):

As far as I'm aware no one has done anything on Lie algebras yet.

#### Mario Carneiro (Jun 21 2019 at 01:08):

We would normally use some naming convention based on the symbols in the axiom, but for core laws that have names we have a small set of name segments to describe the law. We've not dealt with these laws before, so I guess they would require a new terminology; I would use something like `lie_algebra.mul_anticomm` and `lie_algebra.jacobi` for these

#### Mario Carneiro (Jun 21 2019 at 01:10):

You should take a look at https://github.com/leanprover-community/mathlib/blob/master/docs/contribute/naming.md

#### Scott Morrison (Jun 21 2019 at 01:10):

I wonder, actually, if we'd even want to use `mul` for a Lie algebra. It's true that it distributes over addition just as a "normal" multiplication does, but otherwise has little in common.

#### Tim Daly (Jun 21 2019 at 01:23):

This places the burden of finding multiplication in an algebra on the user's ability to infer names. Polymorphic names could be disambiguated by a prolog-like match algorithm. The naming convention "is what it is", of course. But it seems to me that I have to read the original sources and "internalize" the names in my mind, which is a very small place.

#### Mario Carneiro (Jun 21 2019 at 01:24):

I'm not sure what problem you are trying to solve. There are multiple theorems with the same name in different namespaces, and we take care to have them be the same or clearly analogous

#### Mario Carneiro (Jun 21 2019 at 01:26):

If you have multiple theorems with the same name in scope (because you have opened multiple namespaces), it will disambiguate them based on the type

#### Andrew Ashworth (Jun 21 2019 at 01:29):

isn't this tiny theory system the type class inference system in disguise?

#### Tim Daly (Jun 21 2019 at 01:29):

Namespaces are a kind of type in that case. If they were all "in scope" then there is no need for namespaces?

#### Andrew Ashworth (Jun 21 2019 at 01:30):

specifically the often linked "unbundled type classes" section here: https://github.com/leanprover/lean/wiki/Refactoring-structures

#### Mario Carneiro (Jun 21 2019 at 01:30):

well no, namespaces are separate from types. The type system is DTT, the namespace system is just names and overloading

#### Tim Daly (Jun 21 2019 at 01:30):

Disguise? It is the type class inference problem.

#### Tim Daly (Jun 21 2019 at 01:31):

I admit ignorance of the full details. Further study on my part is needed.

#### Mario Carneiro (Jun 21 2019 at 01:32):

You can always refer to a theorem by its full name if the file that defines it has been imported. Opening a namespace just makes the name a bit shorter to reference

#### Adrian Chu (Jun 21 2019 at 03:35):

Btw, when Lean 4 comes out one day, what will happen to mathlib?

#### Mario Carneiro (Jun 21 2019 at 03:48):

Lean 4 is already "out", but not production ready. When it is solid we will start working on a port

#### Mario Carneiro (Jun 21 2019 at 03:49):

there isn't much point working on it at scale yet because it's too unstable and unfinished, although testing out a single file might be worthwhile

#### Adrian Chu (Jun 21 2019 at 03:52):

I see, then when it's stable enough, how much manual work is needed to update mathlib?

#### Bryan Gin-ge Chen (Jun 21 2019 at 03:53):

(deleted)

#### Mario Carneiro (Jun 21 2019 at 03:53):

somewhere between "a significant endeavor" and "a total overhaul"

#### Mario Carneiro (Jun 21 2019 at 03:54):

If we can get any refactoring tools in lean 3.5c then it may make this a lot easier. There is a lot of silly busywork like changing the naming convention

#### Adrian Chu (Jun 21 2019 at 09:12):

```
variable n : ℕ

def vec_exp1 (i : fin n) (x : vector ℕ n) (y : vector ℕ n) : ℕ :=
(x.nth i)^(y.nth i)

def vec_exp2 (x : vector ℕ n) (y : vector ℕ n) : vector ℕ n :=
⟨(list.range n).map (λ i, vec_exp1 i x y), by simp⟩
```

#### Adrian Chu (Jun 21 2019 at 09:13):

I am trying to compute the entry-wise exponential of two n-vectors, but the last line has an error: the term `x` has type `vector ℕ n` but is expected to have type `fin i`. what is wrong??

#### Adrian Chu (Jun 21 2019 at 09:14):

btw if i change n to 3, it works

#### Mario Carneiro (Jun 21 2019 at 09:17):

the variable `n : ℕ` is getting added to `vec_exp1`, so it actually takes four arguments

#### Mario Carneiro (Jun 21 2019 at 09:17):

you should make it implicit by writing it in braces

#### Adrian Chu (Jun 21 2019 at 09:24):

ah i see! but now I'm getting another error in the last line at `vec_exp1 i`: the term `i` has type `ℕ` but is expected to have type `fin ?m_1`. how can I fix it?

#### Kevin Buzzard (Jun 21 2019 at 09:48):

If you hover over `vec_exp1` you can see it has type `vec_exp1 : Π {n : ℕ}, fin n → vector ℕ n → vector ℕ n → ℕ`. So it wants an input of type `fin n` and an input of type `vector ℕ n`. If you hover over `i` you see it has type `ℕ`. So now you can understand the error. How to fix it -- make the types correct ;-) Do you have a more specific question?

#### Kevin Buzzard (Jun 21 2019 at 09:54):

Do you understand how to interpret `(list.range n).map`? Hover over `list.range` to see it has type `ℕ → list ℕ`. So `list.range n` has type `list ℕ`. Now this clever `l.map` thing means `list.map l`, and doing `#check list.map` you can see the type of that too. You can just unravel everything. Your code doesn't work because when you unravel it it doesn't make sense.

#### Kevin Buzzard (Jun 21 2019 at 09:57):

Figuring out yourself what you want to do and then the types of the functions you want which will do them is a really good exercise for Lean beginners. It gets you thinking in the "functional programming" way.

#### Kevin Buzzard (Jun 21 2019 at 10:01):

I think you would be better off using `vector.map` than `list.map` because you're dealing with vectors. I see there is no `vector.range` though. It would be a good exercise to define `vector.range` first. Its type should be this:

`def vector.range : Π (n : ℕ), vector ℕ n := sorry`
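
For reference, one way the `sorry` could be filled in (a sketch; it relies on the list lemma `list.length_range : (list.range n).length = n`):

```
import data.vector

def vector.range (n : ℕ) : vector ℕ n :=
⟨list.range n, list.length_range n⟩
```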

#### Adrian Chu (Jun 21 2019 at 10:28):

If you hover over `vec_exp1` you can see it has type `vec_exp1 : Π {n : ℕ}, fin n → vector ℕ n → vector ℕ n → ℕ`. So it wants an input of type `fin n` and an input of type `vector ℕ n`. If you hover over `i` you see it has type `ℕ`. So now you can understand the error. How to fix it -- make the types correct ;-) Do you have a more specific question?

yeah, I understand the error but just don't know how to make `i` of type `fin n`. Should I use something like `max i n`? Also, why does `#check max 3 4` work while `#eval max 3 4` doesn't?

#### Adrian Chu (Jun 21 2019 at 10:29):

I think you would be better off using `vector.map` than `list.map` because you're dealing with vectors. I see there is no `vector.range` though. It would be a good exercise to define `vector.range` first. Its type should be this:

`def vector.range : Π (n : ℕ), vector ℕ n := sorry`

OK! I will try...

#### Johan Commelin (Jun 21 2019 at 10:35):

Actually... why do you want to work with `vector`s?

#### Johan Commelin (Jun 21 2019 at 10:35):

That's the annoying thing: you have to choose a data representation.

#### Johan Commelin (Jun 21 2019 at 10:35):

It could be `list`, or `vector`, or maps from `fin n`.

#### Johan Commelin (Jun 21 2019 at 10:35):

Possibly other choices.

#### Johan Commelin (Jun 21 2019 at 10:36):

What is the end goal?

#### Johan Commelin (Jun 21 2019 at 10:36):

"Just goofing around" *is* a valid answer, in which case using `vector`s is totally fine.

#### Adrian Chu (Jun 21 2019 at 10:46):

What is the end goal?

The goal is simply to have an n-tuple of numbers. Now you mention it, `fin n → ℕ` does sound a lot simpler. I will use this....

#### Johan Commelin (Jun 21 2019 at 11:11):

`fin n → ℕ` does sound a lot simpler.

Depends...

#### Johan Commelin (Jun 21 2019 at 11:11):

If you want to write down an explicit 5-tuple, I guess `vector` is easier.

#### Adrian Chu (Jun 21 2019 at 12:02):

`fin n → ℕ` does sound a lot simpler.

Depends...

say now I have `f : fin n → ℕ` and want to calculate f(1)+...+f(n). do I have to convert f to a vector and use `.to_list.sum`? or is there a more direct way?

#### Alexander Bentkamp (Jun 21 2019 at 12:29):

Try `finset.univ.sum`.
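
A sketch of the suggestion (assuming `data.fintype` and `algebra.big_operators`; `total` is an illustrative name, and `finset.univ` here is all of `fin n`):

```
import data.fintype algebra.big_operators

def total {n : ℕ} (f : fin n → ℕ) : ℕ :=
finset.univ.sum f

#eval total (λ i : fin 3, i.val + 1) -- 1 + 2 + 3 = 6
```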

#### Adrian Chu (Jun 21 2019 at 14:02):

I know that `int.gcd` can find the gcd of two numbers. How about finding the gcd of n numbers? What is the best way?

#### Marc Huisinga (Jun 21 2019 at 14:12):

since gcd is an associative function, you can foldr or foldl over the collection of numbers
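
For instance, folding `nat.gcd` over a list, with `0` as the base case (since `gcd 0 n = n`):

```
def gcd_list (l : list ℕ) : ℕ :=
l.foldr nat.gcd 0

#eval gcd_list [12, 18, 30] -- 6
```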

#### Kevin Buzzard (Jun 21 2019 at 14:59):

yeah, I understand the error but just don't know how to make `i` of type `fin n`. Should I use something like `max i n`?

Do you understand how to make a term of type `fin n`? By definition, a term of type `fin n` is a pair, consisting of a natural number `i` and a proof that `i < n`. In your function you seem to have a random natural number `i` with no conditions at all, so it will be impossible to prove that `i < n`. That was why I was encouraging you to step back a bit and think about the functions you're using and what you're doing.

Also, why does `#check max 3 4` work while `#eval max 3 4` doesn't?

`#eval max 3 4` works for me. Can you post an MWE?

#### Adrian Chu (Jun 21 2019 at 15:17):

What does MWE stand for? lol

#### Adrian Chu (Jun 21 2019 at 15:18):

And the error message for `#eval max 3 4` is "code generation failed, VM does not have code for 'lattice.lattice.conditionally_complete_linear_order_bot'"

#### Kevin Buzzard (Jun 21 2019 at 15:18):

Minimal working example. I typed `#eval max 3 4` and it worked fine. I was wondering exactly what you did. An MWE is something you can cut and paste so I can see the error you're seeing. Currently I can't do that because you're only posting the line which gave the error.

#### Adrian Chu (Jun 21 2019 at 15:27):

Oh, I figured it out. It is because I imported `ring_theory.principal_ideal_domain`

#### Adrian Chu (Jun 21 2019 at 16:49):

given positive integers a and b, how can we prove that the sum of a b's is a*b?

#### Kevin Buzzard (Jun 21 2019 at 16:50):

How are you defining the sum of `a` `b`'s? The devil is in the detail for this one.

#### Reid Barton (Jun 21 2019 at 16:50):

And why do you want to prove it?

#### Reid Barton (Jun 21 2019 at 16:53):

You might decide the definition of `a * b` already answers your question

#### Reid Barton (Jun 21 2019 at 16:53):

Or possibly `b * a`

#### Adrian Chu (Jun 21 2019 at 16:54):

How are you defining the sum of `a` `b`'s? The devil is in the detail for this one.

@Kevin Buzzard I'm using `finset.univ.sum (λ i : fin a, b)`

#### Adrian Chu (Jun 21 2019 at 16:56):

And why do you want to prove it?

@Reid Barton This is a crucial lemma in a theorem I wanna prove. Specifically, I need to prove that given positive integers b, c, the sum of b (b^c)'s is b^(c+1)

#### Reid Barton (Jun 21 2019 at 16:57):

Well this statement still has the phrase "the sum of `x` `y`s" in it so I could ask the same question again

#### Reid Barton (Jun 21 2019 at 16:59):

We all know the sum of X Ys is X * Y, so why not just formalize the statement using *

#### Adrian Chu (Jun 21 2019 at 16:59):

Well this statement still has the phrase "the sum of `x` `y`s" in it so I could ask the same question again

I need to show `finset.univ.sum (λ i : fin a, b) = a*b`

#### Reid Barton (Jun 21 2019 at 16:59):

OK, that's a statement that we should be able to prove

#### Kevin Buzzard (Jun 21 2019 at 17:05):

I feel like this should be in the library. Is it `sum_const` or something? *shrug*

#### Kevin Buzzard (Jun 21 2019 at 17:06):

@Adrian Chu I would be tempted to find where `finset.sum` is defined in mathlib (just right click on it in VS Code and go to the definition) and take a look at the next 50 lemmas after its definition, to see if there is anything useful there.

#### Adrian Chu (Jun 21 2019 at 17:08):

I found this

```
theorem sum_const (ι : Type u) (a : cardinal.{u}) : sum (λ _:ι, a) = mk ι * a :=
quotient.induction_on a $ λ α, by simp; exact quotient.sound ⟨equiv.sigma_equiv_prod _ _⟩
```

in `set_theory/cardinal`

#### Kevin Buzzard (Jun 21 2019 at 17:12):

That's the wrong kind of sum. This is exactly why the devil is in the detail.

#### Kevin Buzzard (Jun 21 2019 at 17:14):

Was there nothing near finset.sum?

#### Adrian Chu (Jun 21 2019 at 17:21):

I can't see what I want (possibly due to my own ignorance)

#### Kevin Buzzard (Jun 21 2019 at 17:24):

Ok I'll take a look.

#### Kevin Buzzard (Jun 21 2019 at 17:24):

It is indeed `finset.sum_const`

#### Kevin Buzzard (Jun 21 2019 at 17:25):

You could have just tried `#check finset.sum_const`

#### Kevin Buzzard (Jun 21 2019 at 17:35):

Aah, I bet you didn't have the right import.

#### Kevin Buzzard (Jun 21 2019 at 17:36):

`import algebra.big_operators`

#### Kevin Buzzard (Jun 21 2019 at 17:36):

Yeah, I don't know how to search for it if you don't have the right import. It would be nice if it was easy just to import everything temporarily.

#### Kevin Buzzard (Jun 21 2019 at 17:37):

Oh wait, but you need that import for `finset.sum`...

#### Kenny Lau (Jun 21 2019 at 17:52):

```
import algebra.big_operators data.fintype

example {a b : ℕ} : (finset.univ : finset $ fin a).sum (λ i, b) = a * b :=
by rw [finset.sum_const, finset.card_univ, fintype.card_fin,
       add_monoid.smul_eq_mul, nat.cast_id]
```

#### Kevin Buzzard (Jun 21 2019 at 18:10):

That's the way (or you can use simp instead of the last couple of things, I should think)

#### Kevin Buzzard (Jun 21 2019 at 18:11):

```
example (a b : ℕ) : finset.univ.sum (λ i : fin a, b) = a * b :=
by simp [finset.sum_const, finset.card_univ, fintype.card_fin]
```

#### Adrian Chu (Jun 22 2019 at 04:49):

Thanks guys!

#### Adrian Chu (Jun 22 2019 at 08:01):

What command should I use to define a function f(x) from ℕ to ℕ with two cases, namely x ≥ some fixed n and x < n?

#### Kevin Buzzard (Jun 22 2019 at 08:02):

Lean has `if ... then ... else`

#### Kevin Buzzard (Jun 22 2019 at 08:03):

The tactic you need to know to introduce it is `split_ifs`.

#### Kevin Buzzard (Jun 22 2019 at 08:04):

`if...then...else` is harder to use than you might think, because Lean will by default refuse to do a case split on a condition unless it knows an algorithm for deciding whether the condition is true or not. In your case this should be no trouble.
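
A minimal sketch of the two-case shape (here `n` is just a parameter; `x < n` on ℕ is decidable, so no classical instances are needed):

```
def f (n x : ℕ) : ℕ :=
if x < n then x + 1 else 0

#eval f 5 3 -- 4
#eval f 5 7 -- 0
```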

#### Adrian Chu (Jun 22 2019 at 08:15):

given a function `f : fin n → ℕ`, how can I extend it to `ℕ → ℕ` by defining it to be zero when the input is ≥ n?

#### Mario Carneiro (Jun 22 2019 at 08:16):

`if h : x < n then f ⟨x, h⟩ else 0`
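
In context, that dependent `if` (the `h : x < n` form, which makes the proof available to the `then` branch) might look like this (`extend` is an illustrative name):

```
def extend {n : ℕ} (f : fin n → ℕ) (x : ℕ) : ℕ :=
if h : x < n then f ⟨x, h⟩ else 0

#eval extend (λ i : fin 3, i.val * 10) 2 -- 20
#eval extend (λ i : fin 3, i.val * 10) 5 -- 0
```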

#### Kevin Buzzard (Jun 22 2019 at 10:01):

You shouldn't think of `fin n` as some sort of subset of nat. There's a map from `fin n` to nat which forgets part of the structure. From your recent questions you seem to think that a term of type `fin n` is a nat. It's not -- it's a pair.

#### Adrian Chu (Jun 22 2019 at 10:09):

You shouldn't think of `fin n` as some sort of subset of nat. There's a map from `fin n` to nat which forgets part of the structure. From your recent questions you seem to think that a term of type `fin n` is a nat. It's not -- it's a pair.

ya, I know, I just didn't know that we can directly write `if h : x < n` instead of `if x < n`

#### Adrian Chu (Jun 22 2019 at 10:14):

What is the correct syntax to define such a function `def f (x : ℕ) : ℕ := if x < 10 then x else f (x-3)` recursively? (since I don't want to use mod for some other reason)

#### Kevin Buzzard (Jun 22 2019 at 11:11):

You will have to prove, somehow, that your function is well-defined. One thing you could do is just use the equation compiler, but that would stink a bit in this situation. You could define f zero = 0, f 1 = 1, ..., and then f (n+10) to be f(n+7). That might work.

#### Mario Carneiro (Jun 22 2019 at 11:13):

you only have to go up to n+3

#### Kevin Buzzard (Jun 22 2019 at 11:13):

I'm not so sure it's as easy as that, because f(7) is not f(4)

#### Mario Carneiro (Jun 22 2019 at 11:14):

```
def f : ℕ → ℕ
| x@(y+3) := if x < 10 then x else f y
| x       := x
```

#### Kevin Buzzard (Jun 22 2019 at 11:15):

Nice!

#### Mario Carneiro (Jun 22 2019 at 11:15):

it still has 4 cases though

#### Kevin Buzzard (Jun 22 2019 at 11:15):

I have never had to push the equation compiler to its limits, so I don't really know these tricks.

#### Mario Carneiro (Jun 22 2019 at 11:16):

try replacing `3` with `100`

#### Adrian Chu (Jun 22 2019 at 11:48):

I see... Thanks.

#### Adrian Chu (Jun 22 2019 at 11:59):

```
def f : ℕ → ℕ
| x@(y+3) := if x < 10 then x else f y
| x       := x
```

But can someone please explain a bit what this means?

#### Kevin Buzzard (Jun 22 2019 at 12:00):

Have you read the section about the equation compiler in TPIL?

#### Kevin Buzzard (Jun 22 2019 at 12:00):

It says "if x = y + 3 for some other nat y then do the first thing, else do the second thing"

#### Adrian Chu (Jun 22 2019 at 12:03):

well not yet. you are right, i should read it first :)

#### Adrian Chu (Jun 22 2019 at 14:58):

Now I have read the relevant sections in TPIL. But I still have a problem: how can I check the well-foundedness of

```
def f : ℕ × ℕ → ℕ × ℕ
| (y+3, z+3) := if ((y+3) < 10) ∧ ((z+3) < 10) then (y+3, z+3) else f (y, z)
| (y, z)     := (y, z)
```

?

#### Kevin Buzzard (Jun 22 2019 at 16:22):

Some hints are here: https://github.com/leanprover-community/mathlib/blob/master/docs/extras/well_founded_recursion.md

#### Bryan Gin-ge Chen (Jun 23 2019 at 02:19):

I had a go at this here (web editor link):

```
import data.prod
/- set_option pp.all true -/

def f : ℕ × ℕ → ℕ × ℕ
| (y+3, z+3) :=
  have h : (y, z).lex (<) (<) (y+3, z+3) :=
  begin
    simp [prod.lex_def],
    have hy : y < y + 3 := nat.lt_add_of_pos_right (dec_trivial),
    exact or.inl hy,
  end,
  if ((y+3) < 10) ∧ ((z+3) < 10) then (y+3, z+3) else f (y, z) -- error here, see below
| (y, z) := (y, z)
```

However, I'm not able to satisfy the equation compiler either:

```
failed to prove recursive application is decreasing, well founded relation
  @has_well_founded.r (ℕ × ℕ)
    (@prod.has_well_founded ℕ ℕ (@has_well_founded_of_has_sizeof ℕ nat.has_sizeof)
       (@has_well_founded_of_has_sizeof ℕ nat.has_sizeof))
Possible solutions:
  - Use 'using_well_founded' keyword in the end of your definition
    to specify tactics for synthesizing well founded relations and decreasing proofs.
  - The default decreasing tactic uses the 'assumption' tactic, thus hints
    (aka local proofs) can be provided using 'have'-expressions.
The nested exception contains the failure state for the decreasing tactic.
nested exception message:
match failed
state:
f : ℕ × ℕ → ℕ × ℕ,
y z : ℕ,
h : prod.lex has_lt.lt has_lt.lt (y, z) (y + 3, z + 3)
⊢ prod.lex has_lt.lt has_lt.lt (y, z) (y + 3, z + 3)
```

(Isn't `h` the same thing as the goal? Using `pp.all true` doesn't reveal any differences either)

I think it's easier to define `f` by working with the curried version, like this:

```
def f_curry : ℕ → ℕ → ℕ × ℕ
| (y+3) (z+3) := if ((y+3) < 10) ∧ ((z+3) < 10) then (y+3, z+3) else f_curry y z
| y     z     := (y, z)

def f' (p : ℕ × ℕ) : ℕ × ℕ := f_curry p.1 p.2
```

#### Mario Carneiro (Jun 23 2019 at 02:26):

I think, from the trajectory of the questions, that @Adrian Chu wants to know how to do general recursions, rather than breaking them down into structural recursions. It looks like this:

```
def f : ℕ × ℕ → ℕ × ℕ
| (x, y) :=
  if h : x < 10 ∧ y < 10 then (x, y)
  else
    have has_well_founded.r (x - 3, y - 3) (x, y), from sorry,
    f (x - 3, y - 3)
using_well_founded { dec_tac := `[assumption] }
```

You have to give a proof that `(x - 3, y - 3) < (x, y)` according to some well founded relation. The default one here is lexicographic order on the natural numbers, which works in this case but may need to be replaced with something else in other examples

#### Adrian Chu (Jun 23 2019 at 03:57):

I see... I didn't know we have `` `[assumption] ``

#### Adrian Chu (Jun 23 2019 at 03:59):

I am trying to prove the following variant of Bézout's lemma:

```
lemma baz (a : ℕ) (b : ℕ) (h : nat.coprime a b) : ∃ x : ℕ, ∃ y : ℕ, a*x + 1 = b*y :=
sorry
```

As pointed out before, we can use `gcd_a` and `gcd_b`

#### Adrian Chu (Jun 23 2019 at 04:04):

but the trouble is `gcd_a`, `gcd_b` can be negative

#### Adrian Chu (Jun 23 2019 at 04:05):

so my plan is to keep adding multiples of a*b to `-gcd_a a b` and `gcd_b a b` respectively until they are both ≥ 0

#### Adrian Chu (Jun 23 2019 at 04:07):

But I run into 2 problems. First, (as an example,) given a recursive function like

```lean
def f : ℕ → ℕ
| (y+3) := if h : (y+3 < 10) then (y+3) else f y
| y := y
```

how can we show that `f x < 10` for all `x`?

#### Adrian Chu (Jun 23 2019 at 04:09):

second, given a function `f` from ℤ to ℤ and a proof that `f x ≥ 0` for all `x`, how can we define a new function `F` from ℤ to ℕ using `f`?

#### Mario Carneiro (Jun 23 2019 at 04:35):

Rather than "repeatedly adding multiples of a*b", just add `k*a*b`

for some k. This avoids all the mess of determining any well founded recursion, but it leaves the question - what is k?

#### Mario Carneiro (Jun 23 2019 at 04:36):

Also `baz`

is really obviously false when `a = b = 0`

#### Mario Carneiro (Jun 23 2019 at 04:36):

or more generally when they aren't coprime

#### Adrian Chu (Jun 23 2019 at 04:44):

Also

`baz`

is really obviously false when `a = b = 0`

oops, I carelessly left out the coprime condition. now fixed

#### Mario Carneiro (Jun 23 2019 at 04:46):

If a and b are both nonzero, then `a*b`

is also nonzero, so it is at least 1 and hence `k*a*b`

is at least k. So you can pick k to be `gcd_a a b`

and then `k*a*b - gcd_a a b`

will be nonnegative

#### Adrian Chu (Jun 23 2019 at 04:49):

Rather than "repeatedly adding multiples of a*b", just add

`k*a*b`

for some k. This avoids all the mess of determining any well founded recursion, but it leaves the question - what is k?

Yes, I have thought of that, we can choose `k=max (nat_abs (nat.gcd_a x y)) (nat_abs (nat.gcd_b x y))`

#### Adrian Chu (Jun 23 2019 at 04:49):

yes yours is simpler!

#### Mario Carneiro (Jun 23 2019 at 04:50):

You can use either `to_nat`

or `nat_abs`

to turn a nonnegative int into a nat
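
Both come with round-trip lemmas for the nonnegative case; a minimal sketch, assuming the usual mathlib names `int.to_nat_of_nonneg` and `int.nat_abs_of_nonneg`:

```lean
import data.int.basic
open int

-- casting back to ℤ recovers the original nonnegative integer
example (a : ℤ) (h : 0 ≤ a) : (↑(to_nat a) : ℤ) = a := to_nat_of_nonneg h
example (a : ℤ) (h : 0 ≤ a) : (↑(nat_abs a) : ℤ) = a := nat_abs_of_nonneg h
```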

#### Adrian Chu (Jun 23 2019 at 04:52):

ok, let me try now

#### Adrian Chu (Jun 23 2019 at 05:02):

If a and b are both nonzero, then

`a*b`

is also nonzero, so it is at least 1 and hence `k*a*b`

is at least k. So you can pick k to be `gcd_a a b`

and then `k*a*b - gcd_a a b`

will be nonnegative

i was mistaken. we should add `k*b`

to `-gcd_a a b`

and add `k*a`

to `gcd_b a b`

#### Adrian Chu (Jun 23 2019 at 05:04):

and take `k = max (nat_abs (nat.gcd_a a b)) (nat_abs (nat.gcd_b a b))`

#### Adrian Chu (Jun 23 2019 at 11:26):

May I ask how to prove

`theorem test (a : ℤ) (ha : a >= 0) : a = nat_abs a`

?

#### Kevin Buzzard (Jun 23 2019 at 11:26):

Does `cases` on `a` work?

#### Kevin Buzzard (Jun 23 2019 at 11:26):

Alternatively just use `library_search`.

#### Kevin Buzzard (Jun 23 2019 at 11:27):

You might want to import data.int.basic before you search

#### Kevin Buzzard (Jun 23 2019 at 11:28):

Here's where I'm up to: https://github.com/kbuzzard/xena/blob/cd8e0de23adf8c0e7c56d39f1b6f5a55d93bf6ef/Examples/mario_glueing.lean#L144

#### Kenny Lau (Jun 23 2019 at 12:17):

```lean
open int

theorem test : Π a : ℤ, 0 ≤ a → a = nat_abs a
| (of_nat n) h := rfl
```

#### Kenny Lau (Jun 23 2019 at 12:18):

```lean
import data.int.basic
open int

theorem test (a : ℤ) (ha : 0 ≤ a) : a = nat_abs a :=
(nat_abs_of_nonneg ha).symm
```

#### Adrian Chu (Jun 23 2019 at 14:22):

Ah, so we already have this theorem. Thanks

#### Adrian Chu (Jun 23 2019 at 14:50):

I am proving using `calc`

; the first and last expressions have type `nat`

but some of the intermediate steps have type `int`

#### Mario Carneiro (Jun 23 2019 at 14:51):

apply `int.coe_nat_inj`

#### Adrian Chu (Jun 23 2019 at 14:51):

which hence give type mismatch error

#### Mario Carneiro (Jun 23 2019 at 14:51):

before starting the calc block

#### Adrian Chu (Jun 23 2019 at 15:27):

can someone kindly write a simple example to illustrate how a proof involving `int.coe_nat_inj`

and `calc`

should look like? since I can't find this in mathlib

#### Mario Carneiro (Jun 23 2019 at 15:40):

you can't find the theorem?

#### Mario Carneiro (Jun 23 2019 at 15:41):

show me your proof and I'll fix it

#### Mario Carneiro (Jun 23 2019 at 15:44):

```lean
example (x y : ℕ) (a : ℤ) (h1 : a = x) (h2 : a = y) : x = y :=
int.coe_nat_inj $ calc
  ↑x = a  : h1.symm
  ... = ↑y : h2
```

#### Adrian Chu (Jun 23 2019 at 15:47):

wow thanks.

#### Adrian Chu (Jun 23 2019 at 15:48):

```lean
lemma baz (a : ℕ) (b : ℕ) (h : nat.coprime a b) : ∃ x : ℕ, ∃ y : ℕ, a*x + 1 = b*y :=
begin
  let k := max (nat_abs (gcd_a a b)) (nat_abs (gcd_b a b)),
  let x := -nat.gcd_a a b + (gcd_b a b)*k,
  let y := nat.gcd_b a b + (gcd_a a b)*k,
  have hx : x = nat_abs x := sorry,
  have hy : y = nat_abs y := sorry,
  fapply exists.intro,
  exact nat_abs x,
  fapply exists.intro,
  exact nat_abs y,
  calc a*(nat_abs x) + 1 = b*(nat_abs y) : by sorry
end
```

#### Adrian Chu (Jun 23 2019 at 15:48):

bit by bit, I'm first dealing with the last `sorry`

. do you think i should use `int.coe_nat_inj`

here?

#### Mario Carneiro (Jun 23 2019 at 15:52):

```lean
lemma baz (a : ℕ) (b : ℕ) (h : nat.coprime a b) : ∃ x : ℕ, ∃ y : ℕ, a*x + 1 = b*y :=
begin
  let k := max (nat_abs (gcd_a a b)) (nat_abs (gcd_b a b)),
  let x := -nat.gcd_a a b + (gcd_b a b)*k,
  let y := nat.gcd_b a b + (gcd_a a b)*k,
  have hx : x = nat_abs x := sorry,
  have hy : y = nat_abs y := sorry,
  existsi nat_abs x,
  existsi nat_abs y,
  apply int.coe_nat_inj,
  calc (a*(nat_abs x) + 1 : ℤ)
      = a*(nat_abs x) + 1   : by simp
  ... = a * x + 1           : by rw ← hx
  ... = b * y               : by sorry
  ... = b * (nat_abs y)     : by rw ← hy
  ... = (b * (nat_abs y) : ℤ) : by simp
end
```

#### Adrian Chu (Jun 23 2019 at 15:59):

thx, i'll keep working on it

#### Adrian Chu (Jun 23 2019 at 16:20):

what should replace `sorry`

in this step? +_+ why doesn't `left_distrib`

work?

```lean
    a*(-nat.gcd_a a b + b*k) + 1
... = - a*nat.gcd_a a b + a*b*k + 1 : by sorry
```

#### Mario Carneiro (Jun 23 2019 at 16:21):

it's not just `left_distrib`

, you also associated + and distributed `-`

over `*`

#### Mario Carneiro (Jun 23 2019 at 16:22):

`simp [mul_add]`

should do it

#### Adrian Chu (Jun 23 2019 at 16:23):

`by simp [mul_add]`

doesn't work...

#### Mario Carneiro (Jun 23 2019 at 16:24):

look at what you get, and add theorems that make them look more alike

#### Adrian Chu (Jun 23 2019 at 16:28):

i need `mul_assoc`

!!
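
For rearrangements like this one, an alternative worth knowing (assuming `tactic.ring` is available in this mathlib version) is the `ring` tactic, which closes such polynomial identities in one step instead of hunting for the right simp set:

```lean
import tactic.ring

-- the same kind of distribute-and-reassociate goal, discharged by ring
example (a g b k : ℤ) : a * (-g + b * k) + 1 = -a * g + a * b * k + 1 :=
by ring
```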

#### Adrian Chu (Jun 24 2019 at 05:05):

```lean
lemma baz (a : ℕ) (b : ℕ) (h : nat.gcd a b = 1) : ∃ x : ℕ, ∃ y : ℕ, a*x + 1 = b*y :=
begin
  let k := max (nat_abs (gcd_a a b)) (nat_abs (gcd_b a b)),
  let x := -nat.gcd_a a b + b*k,
  let y := nat.gcd_b a b + a*k,
  have hx : x = nat_abs x := sorry,
  have hy : y = nat_abs y := sorry,
  existsi nat_abs x,
  existsi nat_abs y,
  apply int.coe_nat_inj,
  calc (a*(nat_abs x) + 1 : ℤ)
      = a * x + 1 : by simp [hx.symm]
  ... = - a*nat.gcd_a a b + nat.gcd a b + a*b*k : by simp [mul_add, mul_assoc, add_assoc, h]
  ... = - a*nat.gcd_a a b + a*nat.gcd_a a b + b*nat.gcd_b a b + a*b*k : by sorry
  ... = b*nat.gcd_b a b + b*a*k : by simp [add_neg_self, mul_comm]
  ... = b * y : by simp [mul_add, mul_assoc]
  ... = (b * (nat_abs y) : ℤ) : by simp [hy.symm]
end
```

#### Adrian Chu (Jun 24 2019 at 05:06):

I'm having trouble finishing the last sorry (on the 5th-to-last line)

#### Adrian Chu (Jun 24 2019 at 05:07):

I know that i should use `gcd_eq_gcd_ab`

, but type problems between ℤ and ℕ keep coming up, and I don't know how to use ↑ correctly

#### Mario Carneiro (Jun 24 2019 at 06:34):

make sure to bracket things correctly so that the work is isolated

#### Mario Carneiro (Jun 24 2019 at 06:34):

when you write `a+b+c+d`

that gets associated as `((a+b)+c)+d`

, so you are mixing different concerns

#### Mario Carneiro (Jun 24 2019 at 06:36):

```lean
lemma baz (a : ℕ) (b : ℕ) (h : nat.gcd a b = 1) : ∃ x : ℕ, ∃ y : ℕ, a*x + 1 = b*y :=
begin
  let k := max (nat_abs (gcd_a a b)) (nat_abs (gcd_b a b)),
  let x := -nat.gcd_a a b + b*k,
  let y := nat.gcd_b a b + a*k,
  have hx : x = nat_abs x := sorry,
  have hy : y = nat_abs y := sorry,
  existsi nat_abs x,
  existsi nat_abs y,
  apply int.coe_nat_inj,
  calc (a*(nat_abs x) + 1 : ℤ)
      = a * x + 1 : by simp [hx.symm]
  ... = - a*nat.gcd_a a b + nat.gcd a b + a*b*k : by simp [mul_add, mul_assoc, add_assoc, h]
  ... = - a*nat.gcd_a a b + (a*nat.gcd_a a b + b*nat.gcd_b a b) + a*b*k : by rw gcd_eq_gcd_ab
  ... = b*nat.gcd_b a b + b*a*k : by simp [add_neg_self, mul_comm]
  ... = b * y : by simp [mul_add, mul_assoc]
  ... = (b * (nat_abs y) : ℤ) : by simp [hy.symm]
end
```

#### Adrian Chu (Jun 24 2019 at 06:38):

nope, this doesn't work, it has the error message:

```
rewrite tactic failed, did not find instance of the pattern in the target expression
  gcd ?m_4 ?m_5
state:
a b : ℕ,
h : nat.gcd a b = 1,
k : ℕ := max (nat_abs (gcd_a ↑a ↑b)) (nat_abs (gcd_b ↑a ↑b)),
x : ℤ := -nat.gcd_a a b + ↑b * ↑k,
y : ℤ := nat.gcd_b a b + ↑a * ↑k,
hx : x = ↑(nat_abs x),
hy : y = ↑(nat_abs y)
⊢ -↑a * nat.gcd_a a b + ↑(nat.gcd a b) + ↑a * ↑b * ↑k =
    -↑a * nat.gcd_a a b + (↑a * nat.gcd_a a b + ↑b * nat.gcd_b a b) + ↑a * ↑b * ↑k

state:
a b : ℕ,
h : nat.gcd a b = 1,
k : ℕ := max (nat_abs (gcd_a ↑a ↑b)) (nat_abs (gcd_b ↑a ↑b)),
x : ℤ := -nat.gcd_a a b + ↑b * ↑k,
y : ℤ := nat.gcd_b a b + ↑a * ↑k,
hx : x = ↑(nat_abs x),
hy : y = ↑(nat_abs y)
⊢ ↑(a * nat_abs x + 1) = ↑(b * nat_abs y)
```

#### Mario Carneiro (Jun 24 2019 at 06:52):

this works for me:

```lean
import data.int.gcd
open nat int

lemma baz (a : ℕ) (b : ℕ) (h : nat.gcd a b = 1) : ∃ x : ℕ, ∃ y : ℕ, a*x + 1 = b*y :=
begin
  let k := max (nat_abs (gcd_a a b)) (nat_abs (gcd_b a b)),
  let x := -nat.gcd_a a b + b*k,
  let y := nat.gcd_b a b + a*k,
  have hx : x = nat_abs x := sorry,
  have hy : y = nat_abs y := sorry,
  existsi nat_abs x,
  existsi nat_abs y,
  apply int.coe_nat_inj,
  calc (a*(nat_abs x) + 1 : ℤ)
      = a * x + 1 : by simp [hx.symm]
  ... = - a*nat.gcd_a a b + nat.gcd a b + a*b*k : by simp [mul_add, mul_assoc, add_assoc, h]
  ... = - a*nat.gcd_a a b + (a*nat.gcd_a a b + b*nat.gcd_b a b) + a*b*k : by rw gcd_eq_gcd_ab
  ... = b*nat.gcd_b a b + b*a*k : by simp [add_neg_self, mul_comm]
  ... = b * y : by simp [mul_add, mul_assoc]
  ... = (b * (nat_abs y) : ℤ) : by simp [hy.symm]
end
```

#### Mario Carneiro (Jun 24 2019 at 06:54):

it's not clear to me what you have open, but it seems to work with `nat`

and `int`

open

#### Mario Carneiro (Jun 24 2019 at 06:55):

maybe you have something else imported so that `gcd`

means something else?

#### Adrian Chu (Jun 24 2019 at 07:14):

I opened `euclidean_domain`, that's why. But actually I don't need it. Thanks!

#### Adrian Chu (Jun 24 2019 at 09:45):

```lean
lemma baz (a : ℕ) (b : ℕ) (h : nat.gcd a b = 1) : ∃ x : ℕ, ∃ y : ℕ, a*x + 1 = b*y :=
begin
  let k := max (nat_abs (gcd_a a b)) (nat_abs (gcd_b a b)),
  let x := -nat.gcd_a a b + b*k,
  let y := nat.gcd_b a b + a*k,
  have hxx : 0 ≤ x := calc
      0 ≤ -nat.gcd_a a b + b*nat_abs (gcd_a a b) : by sorry
  ... ≤ -nat.gcd_a a b + b*(max (nat_abs (gcd_a a b)) (nat_abs (gcd_b a b))) : by sorry --simp [le_max_left]
  ... = -nat.gcd_a a b + b*k : by sorry --simp
  ... = x : by simp,
```

this time is about the 2nd and 3rd `sorry`

. Why don't the corresponding commands (which I commented out above) work?

#### Kevin Buzzard (Jun 24 2019 at 11:32):

I insert my canonical remark. [It would be easier for people like me if you could just post fully working code. I need to open things, maybe import things, etc; can you do this part of the job for me please?].

#### Kevin Buzzard (Jun 24 2019 at 11:34):

But the answer to your question might be that the first sorry won't work because `simp`

is designed to prove equalities, not inequalities, and the second proof should probably be `rfl`

because it looks to me like it's true by definition.

#### Kevin Buzzard (Jun 24 2019 at 11:36):

All the simplifier does is that it proves things of the form X = Y by attempting to simplify both sides into a canonical form and then checking that they're true by definition. A simp lemma is of the form A = B, and if A is a sub-term in X then the simplifier will replace A by B; that's why simp lemmas should have (complicated) = (simpler) in that order.
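
For instance, a simp lemma oriented (complicated) = (simpler) might look like this (a made-up illustration, not from the thread, assuming mathlib's `two_mul : 2 * a = a + a`):

```lean
-- oriented so simp rewrites occurrences of n + n into the normal form 2 * n
@[simp] lemma double_eq_two_mul (n : ℕ) : n + n = 2 * n := (two_mul n).symm
```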

#### Adrian Chu (Jun 24 2019 at 11:42):

I insert my canonical remark. [It would be easier for people like me if you could just post fully working code. I need to open things, maybe import things, etc; can you do this part of the job for me please?].

ok sure!

```lean
import data.int.basic data.int.gcd
open nat int

lemma baz (a : ℕ) (b : ℕ) (h : nat.gcd a b = 1) : ∃ x : ℕ, ∃ y : ℕ, a*x + 1 = b*y :=
begin
  let k := max (nat_abs (gcd_a a b)) (nat_abs (gcd_b a b)),
  let x := -gcd_a a b + b*k,
  let y := gcd_b a b + a*k,
  have hxx : x ≥ 0 := calc
      0 ≤ -gcd_a a b + b*nat_abs (gcd_a a b) : by sorry
  ... ≤ -gcd_a a b + b*(max (nat_abs (gcd_a a b)) (nat_abs (gcd_b a b))) : by sorry --simp [le_max_left]
  ... = -gcd_a a b + b*k : sorry
  ... = x : by simp,
  have hx : x = nat_abs x := (nat_abs_of_nonneg hxx).symm,
  have hy : y = nat_abs y := sorry,
  existsi nat_abs x,
  existsi nat_abs y,
  apply int.coe_nat_inj,
  calc (a*(nat_abs x) + 1 : ℤ)
      = a * x + 1 : by simp [hx.symm]
  ... = - a*gcd_a a b + nat.gcd a b + a*b*k : by simp [mul_add, mul_assoc, add_assoc, h]
  ... = - a*gcd_a a b + (a*gcd_a a b + b*gcd_b a b) + a*b*k : by rw gcd_eq_gcd_ab
  ... = b*gcd_b a b + b*a*k : by simp [add_neg_self, mul_comm]
  ... = b * y : by simp [mul_add, mul_assoc]
  ... = (b * (nat_abs y) : ℤ) : by simp [hy.symm]
end
```

#### Adrian Chu (Jun 24 2019 at 11:44):

For the 3rd sorry, `rfl` doesn't work...

#### Adrian Chu (Jun 24 2019 at 11:44):

All the simplifier does is that it proves things of the form X = Y by attempting to simplify both sides into a canonical form and then checking that they're true by definition. A simp lemma is of the form A = B, and if A is a sub-term in X then the simplifier will replace A by B; that's why simp lemmas should have (complicated) = (simpler) in that order.

ah, i see

#### Kevin Buzzard (Jun 24 2019 at 11:47):

```lean
calc 0 ≤ -gcd_a a b + b*nat_abs (gcd_a a b) : by sorry
```

a=1 and b=0 is a counterexample to this. This is the problem with skipping stuff -- even if your later two sorries are fixed the code might be unusable anyway.

#### Adrian Chu (Jun 24 2019 at 11:49):

No, we can't have a = 1, b = 0 because of the hypothesis `h`

#### Mario Carneiro (Jun 24 2019 at 11:49):

1 and 0 are coprime

#### Kevin Buzzard (Jun 24 2019 at 11:49):

The problem with the final sorry is that `k`

is defined using the max on nat, and the other max is on int.

#### Adrian Chu (Jun 24 2019 at 11:50):

oh no! :(

#### Adrian Chu (Jun 24 2019 at 11:50):

i need to add more hypothesis then

#### Adrian Chu (Jun 24 2019 at 11:51):

The problem with the final sorry is that

`k`

is defined using the max on nat, and the other max is on int.

how should I get around this?

#### Kevin Buzzard (Jun 24 2019 at 11:54):

There might be some magic tactic which does it, I've not tried those new cast tactics. If not, then you have to fix things up yourself.

My recommendation would be to change every single variable into an int as soon as possible, make all the problems go away, and then just prove that various things are >= 0 at the end and then cast them back to nats.

#### Mario Carneiro (Jun 24 2019 at 11:54):

I would keep the max on nat, because that's the definition of k

#### Kevin Buzzard (Jun 24 2019 at 11:54):

You're just in the typical nat/int hell which several of my students found themselves in over the summer. In my mind, if your proof uses ints, then why even use nats at all? Just use ints with a proof that they're >= 0.

#### Mario Carneiro (Jun 24 2019 at 11:55):

it might help to write some up arrows because it's important to know where they are

#### Kevin Buzzard (Jun 24 2019 at 11:55):

Maybe this would be a great place to test out the cast tactics. Did they get written up in the tactics docs?

#### Kevin Buzzard (Jun 24 2019 at 11:56):

https://github.com/leanprover-community/mathlib/blob/master/docs/tactics.md

They're right at the bottom. Maybe these can help. I've never used them though.

#### Mario Carneiro (Jun 24 2019 at 11:56):

to use the cast tactics, you have to actually have a specific goal that you can use them with

#### Mario Carneiro (Jun 24 2019 at 11:57):

distributing the arrows in this case is easily enough done by simp, that's already working in the example

#### Adrian Chu (Jun 24 2019 at 13:27):

Yes, I should definitely use int instead of nat, because I have to add the condition that a, b > 0 anyway, as you pointed out

#### Adrian Chu (Jun 24 2019 at 13:36):

but since gcd_a and gcd_b are for nat instead of int, other troubles will arise...

#### Adrian Chu (Jun 24 2019 at 14:24):

```lean
import data.int.basic data.int.gcd
open nat int

lemma baz (a : ℕ) (b : ℕ) (ha : a > 0) (hb : b > 0) (h : nat.gcd a b = 1) :
  ∃ x : ℕ, ∃ y : ℕ, a*x + 1 = b*y :=
begin
  let k := max ↑(nat_abs (gcd_a a b)) ↑(nat_abs (gcd_b a b)),
  let x := -gcd_a a b + b*k,
  let y := gcd_b a b + a*k,
  have hxx : x ≥ 0 := calc
      0 ≤ -gcd_a a b + b*(nat_abs (gcd_a a b)) : by sorry
  ... = -gcd_a a b + b*↑(nat_abs (gcd_a a b)) : by simp
  ... ≤ -gcd_a a b + b*(max ↑(nat_abs (gcd_a a b)) ↑(nat_abs (gcd_b a b))) : by sorry -- rw le_max_left
  ... = -gcd_a a b + b*k : by simp
  ... = x : by simp,
  have hx : x = nat_abs x := (nat_abs_of_nonneg hxx).symm,
  have hy : y = nat_abs y := sorry,
  existsi nat_abs x,
  existsi nat_abs y,
  apply int.coe_nat_inj,
  calc (a*(nat_abs x) + 1 : ℤ)
      = a * x + 1 : by simp [hx.symm]
  ... = - a*gcd_a a b + nat.gcd a b + a*b*k : by simp [mul_add, mul_assoc, add_assoc, h]
  ... = - a*gcd_a a b + (a*gcd_a a b + b*gcd_b a b) + a*b*k : by rw gcd_eq_gcd_ab
  ... = b*gcd_b a b + b*a*k : by simp [add_neg_self, mul_comm]
  ... = b * y : by simp [mul_add, mul_assoc]
  ... = (b * (nat_abs y) : ℤ) : by simp [hy.symm]
end
```

#### Adrian Chu (Jun 24 2019 at 14:25):

Now I have changed the definition of `k` so that everything should be in int. Why does `rw le_max_left`

still not work?

#### Adrian Chu (Jun 24 2019 at 15:05):

Oh, I know why: it's because I need a proof of `(b:nat)(a:int)(c:int)(a<=c) : b*a <= b*c`

. I guess this is proven somewhere already?

#### Adrian Chu (Jun 24 2019 at 15:25):

```lean
import data.int.basic data.int.gcd
open nat int

lemma baz (a : ℕ) (b : ℕ) (ha : a ≥ 1) (hb : b ≥ 1) (h : nat.gcd a b = 1) :
  ∃ x : ℕ, ∃ y : ℕ, a*x + 1 = b*y :=
begin
  let k := max ↑(nat_abs (gcd_a a b)) ↑(nat_abs (gcd_b a b)),
  let x := -gcd_a a b + b*k,
  let y := gcd_b a b + a*k,
  have hxx : x ≥ 0 := calc
      (0 : ℤ) = 0 * nat_abs (gcd_a a b) : by simp
  ... ≤ (b - 1) * (nat_abs (gcd_a a b)) : by sorry -- rw hb ...?
  ... ≤ (↑b - ↑1) * (nat_abs (gcd_a a b)) : by simp
  ... = ↑b*↑(nat_abs (gcd_a a b)) - ↑1*↑(nat_abs (gcd_a a b)) : sub_mul (↑b) (↑1) ↑(nat_abs (gcd_a a b))
  ... ≤ b*(nat_abs (gcd_a a b)) - gcd_a a b : by sorry -- ...?
  ... = -gcd_a a b + b*↑(nat_abs (gcd_a a b)) : by simp
  ... ≤ -gcd_a a b + b*(max ↑(nat_abs (gcd_a a b)) ↑(nat_abs (gcd_b a b))) : by sorry -- rw le_max_left ...?
  ... = -gcd_a a b + b*k : by simp
  ... = x : by simp,
  have hx : x = nat_abs x := (nat_abs_of_nonneg hxx).symm,
  have hy : y = nat_abs y := sorry,
  existsi nat_abs x,
  existsi nat_abs y,
  apply int.coe_nat_inj,
  calc (a*(nat_abs x) + 1 : ℤ)
      = a * x + 1 : by simp [hx.symm]
  ... = - a*gcd_a a b + nat.gcd a b + a*b*k : by simp [mul_add, mul_assoc, add_assoc, h]
  ... = - a*gcd_a a b + (a*gcd_a a b + b*gcd_b a b) + a*b*k : by rw gcd_eq_gcd_ab
  ... = b*gcd_b a b + b*a*k : by simp [add_neg_self, mul_comm]
  ... = b * y : by simp [mul_add, mul_assoc]
  ... = (b * (nat_abs y) : ℤ) : by simp [hy.symm]
end
```

#### Adrian Chu (Jun 24 2019 at 15:29):

OK, I can finally see the end. Only 3 sorry's (the first 3) are left to be filled, which I believe can be done using existing theorems. Can someone please tell me what the right commands are? Thanks!

#### Kevin Buzzard (Jun 24 2019 at 15:53):

Did you try library search?

#### Adrian Chu (Jun 24 2019 at 15:58):

How to do `library_search`

? :exhausted:

#### Reid Barton (Jun 24 2019 at 15:58):

Read the documentation for `library_search`

?

#### Reid Barton (Jun 24 2019 at 16:00):

in https://github.com/leanprover-community/mathlib/blob/master/docs/tactics.md

#### Adrian Chu (Jun 24 2019 at 16:08):

Read the documentation for

`library_search`

?

Ya, I just noticed this, embarrassing lol

#### Adrian Chu (Jun 24 2019 at 16:16):

library_search times out :(

#### Kevin Buzzard (Jun 24 2019 at 16:24):

So your first sorry is a proof of

```
0 * nat_abs (gcd_a a b) ≤ (b - 1) * (nat_abs (gcd_a a b))
```

You wrote `rw hb`

, and here `hb : b >= 1`

. NB that's not the canonical way to write that inequality, the canonical way is `1 <= b`

. Yes, it does make a difference :-/

But you probably know what rw really does. For `rw h`

to work, `h`

must really be of the form `A = B`

, and then `rw`

takes the A's and replaces them with B's. So this is not a rewrite at all. What it is is a theorem. It's the theorem that if x ≤ y and c ≥ 0 then x * c ≤ y * c. And that theorem will have a name, which you can either guess (once you understand the naming conventions), remember (once you've used Lean for a while) or look up using `library_search` (if it works).
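
That theorem, stated on its own (the name `mul_le_mul_of_nonneg_right` is the one used elsewhere in this thread):

```lean
-- not a rewrite: a lemma applied to two hypotheses
example (x y c : ℤ) (h : x ≤ y) (hc : 0 ≤ c) : x * c ≤ y * c :=
mul_le_mul_of_nonneg_right h hc
```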

#### Kevin Buzzard (Jun 24 2019 at 16:25):

```lean
example (a b c : ℤ) (h : 0 ≤ c) (h2 : a ≤ b) : a * c ≤ b * c :=
by library_search -- works!
```

#### Kevin Buzzard (Jun 24 2019 at 16:30):

Your second sorry seems to be of the form

```
↑b*↑(nat_abs (gcd_a a b)) - ↑1*↑(nat_abs (gcd_a a b)) ≤ b*(nat_abs (gcd_a a b)) - gcd_a a b
```

Why are you proving an inequality? It looks to me like this should be an equality. It seems to me that there are three issues here. The first is the inequality. The second is that a bunch of arrows disappeared (either because you didn't write them and they're still there really, or because you really are switching from integers to naturals). To sort that out you'll either need some cast tactic, or you should just work with integers all the way through. The final thing is the issue that you are implicitly assuming `1 * x = x`

. That's a theorem, it's not true by definition, so it will have a name (I guess the name is `one_mul`

), and that theorem needs to be applied somehow (by rewriting I guess) to get rid of that 1.

#### Kevin Buzzard (Jun 24 2019 at 16:39):

The third sorry is

```
-gcd_a a b + b*↑(nat_abs (gcd_a a b)) ≤ -gcd_a a b + b*(max ↑(nat_abs (gcd_a a b)) ↑(nat_abs (gcd_b a b)))
```

You want to use `le_max_left`

but again there are several steps which you're leaving out. As well as le_max_left there's the analogous theorem for the first sorry, but this time with conclusion `c * a <= c * b`

(different conclusion = different theorem name), and you're also using the fact that a <= b implies c + a <= c + b. I think the simplifier is not well suited for these questions -- but I might be wrong. In some sense this is the problem with calc mode. You could just enter tactic mode again with a begin end, and then do the rewriting yourself.

```lean
calc (0 : ℤ) = 0 * nat_abs (gcd_a a b) : by simp
... ≤ (b - 1) * (nat_abs (gcd_a a b)) : by sorry -- rw hb ...?
... ≤ (↑b - ↑1) * (nat_abs (gcd_a a b)) : by simp
... = ↑b*↑(nat_abs (gcd_a a b)) - ↑1*↑(nat_abs (gcd_a a b)) : sub_mul (↑b) (↑1) ↑(nat_abs (gcd_a a b))
... ≤ b*(nat_abs (gcd_a a b)) - gcd_a a b : by sorry -- ...?
... = -gcd_a a b + b*↑(nat_abs (gcd_a a b)) : by simp
... ≤ -gcd_a a b + b*(max ↑(nat_abs (gcd_a a b)) ↑(nat_abs (gcd_b a b))) :
  begin
    apply add_le_add_left,
    apply mul_le_mul_of_nonneg_left,
    exact le_max_left _ _,
    -- etc
    sorry
  end
... = -gcd_a a b + b*k : by simp
```

Tactic mode is the best mode really. You wrote `rw le_max_left`

but that doesn't even make sense; you can only rewrite equalities. You are not thinking about what is really happening. What is really happening is that you are constantly applying little lemmas. The proof of `c + d * a <= c + d * (max a b)`

is really not a rewrite. It is an application of several unrelated facts about inequalities, each of which has been painstakingly proved by the library creators. One of them is le_max_left but there are others.
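
Spelled out as a term, the chain of little lemmas for that step might look like this (a sketch; `add_le_add_left`, `mul_le_mul_of_nonneg_left` and `le_max_left` are the assumed mathlib names):

```lean
-- three library facts assembled: le_max_left, then multiply by a
-- nonnegative factor, then add a constant on the left
example (a b c d : ℤ) (hd : 0 ≤ d) : c + d * a ≤ c + d * max a b :=
add_le_add_left (mul_le_mul_of_nonneg_left (le_max_left a b) hd) c
```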

#### Adrian Chu (Jun 24 2019 at 16:43):

```lean
import data.int.basic data.int.gcd tactic.library_search algebra.ordered_ring
open nat int

lemma baz (a : ℕ) (b : ℕ) (ha : a ≥ 1) (hb : b ≥ 1) (h : nat.gcd a b = 1) :
  ∃ x : ℕ, ∃ y : ℕ, a*x + 1 = b*y :=
begin
  let k := max ↑(nat_abs (gcd_a a b)) ↑(nat_abs (gcd_b a b)),
  let x := -gcd_a a b + b*k,
  let y := gcd_b a b + a*k,
  have hg : 0 ≤ nat_abs (gcd_a a b) := by simp,
  have hb : 0 ≤ ↑b - 1 := by simp,
  have hxx : x ≥ 0 := calc
      (0 : ℤ) = 0 * nat_abs (gcd_a a b) : by simp
  ... ≤ (↑b - 1) * (nat_abs (gcd_a a b)) : by exact mul_le_mul_of_nonneg_right hb hg
  ... ≤ (↑b - ↑1) * (nat_abs (gcd_a a b)) : by simp
  ... = ↑b*↑(nat_abs (gcd_a a b)) - ↑1*↑(nat_abs (gcd_a a b)) : sub_mul (↑b) (↑1) ↑(nat_abs (gcd_a a b))
  ... = b*(nat_abs (gcd_a a b)) - nat_abs (gcd_a a b) : by simp
  ... ≤ b*(nat_abs (gcd_a a b)) - gcd_a a b : by sorry
  ... = -gcd_a a b + b*↑(nat_abs (gcd_a a b)) : by simp
  ... ≤ -gcd_a a b + b*(max ↑(nat_abs (gcd_a a b)) ↑(nat_abs (gcd_b a b))) : by sorry -- rw le_max_left ...?
  ... = -gcd_a a b + b*k : by simp
  ... = x : by simp,
  have hx : x = nat_abs x := (nat_abs_of_nonneg hxx).symm,
  have hy : y = nat_abs y := sorry,
  existsi nat_abs x,
  existsi nat_abs y,
  apply int.coe_nat_inj,
  calc (a*(nat_abs x) + 1 : ℤ)
      = a * x + 1 : by simp [hx.symm]
  ... = - a*gcd_a a b + nat.gcd a b + a*b*k : by simp [mul_add, mul_assoc, add_assoc, h]
  ... = - a*gcd_a a b + (a*gcd_a a b + b*gcd_b a b) + a*b*k : by rw gcd_eq_gcd_ab
  ... = b*gcd_b a b + b*a*k : by simp [add_neg_self, mul_comm]
  ... = b * y : by simp [mul_add, mul_assoc]
  ... = (b * (nat_abs y) : ℤ) : by simp [hy.symm]
end
```

#### Adrian Chu (Jun 24 2019 at 16:44):

This is my attempt to resolve the first sorry, but I got an enormous error message. As for the second sorry, it should be an inequality since gcd_a can be negative. I will come back to this tomorrow, it's so late at night. Anyway, thanks~

#### Kevin Buzzard (Jun 24 2019 at 16:48):

As for the second sorry, it should be an inequality since gcd_a can be negative.

Oh, apologies! I thought these were the nat ones. In which case my comments apply about how things like a <= b implies c - a >= c - b are not immediate or automatic, they are theorems which need applying. You'll also need one_mul, and the fact that z <= abs z.
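
For reference, the two small facts mentioned, as one would state them (assuming the usual mathlib names `one_mul` and `le_abs_self`):

```lean
example (z : ℤ) : 1 * z = z := one_mul z
example (z : ℤ) : z ≤ abs z := le_abs_self z
```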

#### Kevin Buzzard (Jun 24 2019 at 16:52):

The enormous error message is only enormous because Lean has expanded everything out for you to explain what the problem is. If it just said "I was expecting a proof that something was <= something, and you gave me a proof that something was <= something" then it would be confusing. But

```
type mismatch at application
  mul_le_mul_of_nonneg_right hb
term
  hb
has type
  @has_le.le nat nat.has_le 0
    (@has_sub.sub nat nat.has_sub
       (@coe nat nat
          (@coe_to_lift nat nat
             (@coe_base nat nat (@nat.cast_coe nat nat.has_zero nat.has_one nat.has_add)))
          b)
       1)
but is expected to have type
  @has_le.le int (@preorder.to_has_le int ...
```

says that `hb`

has type `@has_le.le nat ...`

(i.e. it's a proof that one nat is <= another nat) and you attempted to insert it into a function which was expecting something of type `@has_le.le int ...`

i.e. expecting a proof that one int is <= another int.

#### Kevin Buzzard (Jun 24 2019 at 16:55):

replacing `by exact mul_le_mul_of_nonneg_right`

with the more refined "attempt to solve the goal with this function but give me the inputs as new goals" `refine`

tactic

```lean
begin
  refine mul_le_mul_of_nonneg_right _ _,
  sorry,
  sorry
end
```

shows you what the problem is. You now need to fill in those two holes, and the results you have are not good enough because they have the wrong type. Did I mention the idea of just using integers everywhere by the way?

#### Adrian Chu (Jun 24 2019 at 17:00):

Did I mention the idea of just using integers everywhere by the way?

but gcd_a, gcd_b are defined for nat instead of int

#### Kevin Buzzard (Jun 24 2019 at 17:00):

But this issue is just a general pain in type theory. Do you understand how to use these new cast tactics? The first hole needs a resolution of this issue

```
hb : 0 ≤ ↑b - 1
⊢ 0 ≤ ↑b - 1
```

Here your hypothesis `hb`

has an arrow in, which turns out to be a cast from `nat`

to `nat`

. You can see all the gory details of everything by `set_option pp.all true`

. Setting this option will give you some real insight into how the computer is thinking about what you are doing. In particular you cannot cast from a nat to an int by just putting an up-arrow -- Lean doesn't know where you're casting to.
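
As a minimal illustration of how the cast tactics mentioned later in this thread resolve this kind of mismatch (a sketch, assuming `tactic.norm_cast` is available in this mathlib version):

```lean
import tactic.norm_cast

-- exact_mod_cast strips the ↑ coercions and matches the nat-level statement
example (a b : ℕ) (h : (a : ℤ) ≤ b) : a ≤ b := by exact_mod_cast h
```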

#### Kevin Buzzard (Jun 24 2019 at 17:01):

Did I mention the idea of just using integers everywhere by the way?

but gcd_a, gcd_b are defined for nat instead of int

Feed them absolute values of ints? I don't know if this is a feasible solution.

#### Adrian Chu (Jun 24 2019 at 17:04):

Did I mention the idea of just using integers everywhere by the way?

but gcd_a, gcd_b are defined for nat instead of int

Feed them absolute values of ints? I don't know if this is a feasible solution.

i will try tomorrow

#### Kevin Buzzard (Jun 24 2019 at 17:11):

I don't know if this is feasible. All I see when I look at your code though is a bunch of stuff about ints, with subtractions etc. You might well find the int analogue of the `gcd_a`

functions in `data.int.basic`

somewhere or maybe even in core.

#### Kevin Buzzard (Jun 24 2019 at 17:27):

Maybe now we have `norm_cast`

you don't need to make everything an integer. This little lemma might be a nice test case. I just tried it on the first sorry and had positive results:

```lean
calc (0 : ℤ) = 0 * nat_abs (gcd_a a b) : by simp
... ≤ (↑b - 1) * (nat_abs (gcd_a a b)) :
  begin
    apply mul_le_mul_of_nonneg_right,
    { norm_cast at hb ⊢, exact hb },
    { norm_cast, simp }
  end
```

#### Kevin Buzzard (Jun 24 2019 at 17:32):

For the next one you need the theorem that a <= b implies c - b <= c - a (note how I always stick with <=, otherwise there would be lots of ways of saying inequalities; there is a "canonical form" for many expressions in Lean and it is only slowly dawning on me how important this is in practice). To find that theorem you see that the conclusion is of the form "a subtraction is less than or equal to a subtraction" so I type "apply sub_le_sub" and then I press ctrl-space in VS Code and see a list of all Lean's theorems that start `sub_le_sub`

and all their types too, so it's easy to find the one I want. That's another way of finding out the right name for a theorem.

#### Patrick Massot (Jun 24 2019 at 18:57):

@Adrian Chu I don't think you're going in the right direction. The first thing to understand is that natural numbers are bad. You never noticed because the real world always inserts coercions to integers and all the coercion-related lemmas. My advice is to first state and prove the integer version. Then we'll talk about deducing the evil version. Here is the exercise I propose:

```lean
import algebra.euclidean_domain
import tactic.linarith
open euclidean_domain

lemma foo (a b : ℤ) (ha : a ≥ 1) (hb : b ≥ 1) (h : gcd a b = 1) :
  ∃ x y, a*x + 1 = b*y ∧ 0 ≤ x ∧ 0 ≤ y :=
begin
  let u := gcd_a a b,
  let v := gcd_b a b,
  let k := max (abs u) (abs v),
  use [-u+b*k, v + a*k],
  repeat { split },
  { rw show 1 = a*u + b*v, from sorry,
    ring },
  { suffices : u ≤ b * max (abs u) (abs v),
      by linarith,
    -- now you can `calc`
    sorry },
  { sorry }
end
```

#### Patrick Massot (Jun 24 2019 at 18:58):

each `calc`

block is 4 lines long

#### Patrick Massot (Jun 24 2019 at 18:59):

The first sorry is 20 characters (including spaces)

#### Patrick Massot (Jun 24 2019 at 19:16):

Hmm, do we have any lemma relating `nat.gcd`

and `euclidean_domain.gcd`

?

#### Patrick Massot (Jun 24 2019 at 19:19):

@Johan Commelin maybe?

#### Johan Commelin (Jun 24 2019 at 19:19):

I'm not too familiar with that part of the lib...

#### Patrick Massot (Jun 24 2019 at 19:19):

@Chris Hughes ?

#### Johan Commelin (Jun 24 2019 at 19:20):

I don't know, but I think there is glue between `nat.gcd`

and `int.gcd`

, and then also between `int.gcd`

and `euclidean_domain.gcd`

...

#### Patrick Massot (Jun 24 2019 at 19:20):

Where would that be?

#### Johan Commelin (Jun 24 2019 at 19:21):

Not sure... my memories might be wrong.

#### Patrick Massot (Jun 24 2019 at 19:22):

Going from the int version of Adrian's lemma to the evil version is trivial except for:

```
a b : ℕ,
h : nat.gcd a b = 1
⊢ gcd ↑a ↑b = 1
```

#### Johan Commelin (Jun 24 2019 at 19:25):

Hmm... if library_search can't close that, then we probably don't have it.

#### Patrick Massot (Jun 24 2019 at 19:27):

I'm sure British people can prove this

#### Kevin Buzzard (Jun 24 2019 at 19:32):

Maybe show they both have the same universal property? ;-)

#### Kevin Buzzard (Jun 24 2019 at 19:36):

```lean
example (a b : ℕ) (h : nat.gcd a b = 1) : int.gcd (a : ℤ) (b : ℤ) = 1 :=
by rw ←h; unfold int.gcd; congr'
```

#### Kevin Buzzard (Jun 24 2019 at 19:38):

```lean
example (a b : ℕ) (h : nat.gcd a b = 1) : int.gcd (a : ℤ) (b : ℤ) = 1 :=
by convert h
```

#### Kevin Buzzard (Jun 24 2019 at 19:39):

I was slightly surprised this worked until I realised that `int.nat_abs (\u m) = m`

was defeq for m a nat.
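For instance, the reduction can be checked with `rfl` (a minimal sketch, assuming Lean 3 core `int`):

```lean
-- `int.nat_abs (↑m)` unfolds to `m` by computation, so `rfl` closes the goal
example (m : ℕ) : int.nat_abs (m : ℤ) = m := rfl
```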

#### Patrick Massot (Jun 24 2019 at 19:40):

Nice! This bridges to `int.gcd`

but not to `euclidean_domain.gcd`

#### Patrick Massot (Jun 24 2019 at 19:49):

and the `by convert`

is useless...

#### Kevin Buzzard (Jun 24 2019 at 19:50):

```lean
example (a b : ℕ) (h : nat.gcd a b = 1) : int.gcd (a : ℤ) (b : ℤ) = 1 := h
```

Oh yeah! I used `convert` because I "knew" there would be trouble with the cast to int and back.

#### Patrick Massot (Jun 24 2019 at 19:51):

but the game is to use `euclidean_domain.gcd`

#### Kevin Buzzard (Jun 24 2019 at 20:00):

```lean
#eval euclidean_domain.gcd (-1 : ℤ) (1 : ℤ) -- -1
#eval int.gcd (-1 : ℤ) (1 : ℤ)              -- +1
```

:-(

#### Kevin Buzzard (Jun 24 2019 at 20:08):

```lean
example (a b : ℤ) : int.nat_abs (euclidean_domain.gcd a b) = int.gcd a b :=
begin
  apply gcd.induction a b,
  intro x,
  rw gcd_zero_left,
  exact (nat.gcd_zero_left (int.nat_abs x)).symm,
  intros a b ha hab,
  convert hab using 1,
  rw gcd_val a b,
  -- ⊢ int.gcd a b = int.gcd (b % a) a
  sorry
end
```

This code (thanks, whoever wrote gcd.induction!) reduces the question to ` int.gcd a b = int.gcd (b % a) a`

.

#### Kevin Buzzard (Jun 24 2019 at 20:09):

...which is almost a question about nat.gcd apart from the fact that one needs to relate |b%a| to |b|%|a|.

#### Patrick Massot (Jun 24 2019 at 20:13):

What you are doing is not what I asked for

#### Kevin Buzzard (Jun 24 2019 at 20:13):

why don't you formalise the question?

#### Patrick Massot (Jun 24 2019 at 20:13):

I did!

#### Kevin Buzzard (Jun 24 2019 at 20:14):

Oh -- I misunderstood the arrows.

#### Patrick Massot (Jun 24 2019 at 20:14):

Going from the int version of Adrian's lemma to the evil version is trivial except for:

```lean
a b : ℕ,
h : nat.gcd a b = 1
⊢ gcd ↑a ↑b = 1
```

#### Kevin Buzzard (Jun 24 2019 at 20:14):

I thought I did that one.

#### Kevin Buzzard (Jun 24 2019 at 20:14):

I have Lean code which looks like that on my screen right now, with int open :-)

#### Patrick Massot (Jun 24 2019 at 20:14):

You need to open `euclidean_domain`

instead

#### Kevin Buzzard (Jun 24 2019 at 20:15):

you didn't post a MWE ;-)

#### Patrick Massot (Jun 24 2019 at 20:15):

I did

#### Patrick Massot (Jun 24 2019 at 20:15):

a few messages above

#### Patrick Massot (Jun 24 2019 at 20:15):

anyway

#### Kevin Buzzard (Jun 24 2019 at 20:15):

It was too minimal -- I guessed you'd opened the wrong thing :-)

I still propose we prove the thing I said about int. You can't use the inductive predicate for euclidean_domain.gcd on nats.

#### Kevin Buzzard (Jun 24 2019 at 20:16):

So unless you want to get your hands dirty, we prove some statement which is valid for all ints (as I was in the middle of doing) and then deduce the nat thing via some dirty work.

#### Patrick Massot (Jun 24 2019 at 20:16):

The context was my first message to Adrian, which included all imports and open

#### Kevin Buzzard (Jun 24 2019 at 20:17):

And the best statement I could find which was true for all ints was the one I posted.

#### Kevin Buzzard (Jun 24 2019 at 20:20):

bleurgh we need (a b : int), both >=0 implies euclidean_domain.gcd a b >=0 :-/

#### Kevin Buzzard (Jun 24 2019 at 20:37):

```lean
theorem useful (a b : ℤ) :
  int.nat_abs (b % a) = (int.nat_abs b) % (int.nat_abs a) := sorry
```

Meh.

#### Kevin Buzzard (Jun 24 2019 at 20:56):

```lean
import algebra.euclidean_domain
open euclidean_domain

theorem useful (a b : ℤ) :
  int.nat_abs (b % a) = (int.nat_abs b) % (int.nat_abs a) := sorry

theorem useful2 (a b : ℤ) : a ≥ 0 → b ≥ 0 → euclidean_domain.gcd a b ≥ 0 := sorry

theorem abs_gcd_eq_int_gcd (a b : ℤ) :
  int.nat_abs (euclidean_domain.gcd a b) = int.gcd a b :=
begin
  apply gcd.induction a b,
  intro x,
  rw gcd_zero_left,
  exact (nat.gcd_zero_left (int.nat_abs x)).symm,
  intros a b ha hab,
  convert hab using 1,
  rw gcd_val a b,
  -- goal now : a ≠ 0 → int.gcd a b = int.gcd (b % a) a
  -- should be in mathlib
  unfold int.gcd,
  rw useful,
  have h : ∀ x y : ℕ, x ≠ 0 → nat.gcd x y = nat.gcd (y % x) x,
  { intros x y hx,
    cases x with x,
    revert hx, simp,
    simp }, -- nat.gcd equation lemma
  have h2 : int.nat_abs a ≠ 0 := λ h3, ha (int.eq_zero_of_nat_abs_eq_zero h3),
  -- missing a trick here
  generalize h3 : int.nat_abs a = A,
  generalize : int.nat_abs b = B,
  apply h,
  rwa ←h3,
end

theorem patrick (a b : ℕ) (h : nat.gcd a b = 1) :
  euclidean_domain.gcd (a : ℤ) b = 1 :=
begin
  show gcd (a : ℤ) b = (1 : ℕ),
  rw ←h,
  show _ = (int.gcd a b : ℤ),
  rw ←abs_gcd_eq_int_gcd (a : ℤ) b,
  convert (int.of_nat_nat_abs_eq_of_nonneg _).symm,
  -- ⊢ euclidean_domain.gcd ↑a ↑b ≥ 0
  apply useful2; exact int.of_nat_nonneg _,
end
```

Two sorries at the top. @Chris Hughes @Mario Carneiro what tricks am I missing?

#### Kevin Buzzard (Jun 24 2019 at 20:56):

The goal is to prove theorem `patrick`

.

#### Kevin Buzzard (Jun 24 2019 at 21:37):

```lean
theorem useful2 (a b : ℤ) : a ≥ 0 → b ≥ 0 → euclidean_domain.gcd a b ≥ 0 :=
begin
  apply gcd.induction a b,
  intros x h hx,
  rwa gcd_zero_left,
  intros c d hcn h hc hd,
  rw gcd_val,
  apply h _ hc,
  exact int.mod_nonneg d hcn,
end
```

#### Chris Hughes (Jun 24 2019 at 21:48):

Personally I would question why you want to prove that theorem?

#### Mario Carneiro (Jun 24 2019 at 21:57):

I don't see why euclidean_domain.gcd is getting involved at all if the goal is to prove adrian's original statement

#### Kevin Buzzard (Jun 24 2019 at 22:17):

Yeah I already tried that but Patrick moaned

#### Kevin Buzzard (Jun 24 2019 at 22:18):

I'm having trouble working with `-A % B`

with (A B : nat) and `int.mod`

#### Kevin Buzzard (Jun 24 2019 at 22:19):

```lean
1 goal
B : ℕ,
A : ℕ
⊢ int.nat_abs (-↑A % ↑B) = int.nat_abs (-↑A) % B
```

#### Kevin Buzzard (Jun 24 2019 at 22:34):

That's the last goal but it's bedtime

#### Kevin Buzzard (Jun 24 2019 at 22:36):

The left mod is int.mod

#### Kevin Buzzard (Jun 24 2019 at 22:45):

Aargh I don't think some of these are true :-/

#### Kevin Buzzard (Jun 24 2019 at 22:45):

No wonder I was struggling :-)

#### Kevin Buzzard (Jun 24 2019 at 22:46):

Do we have a tool which checks statements like these for a few random values of the integers in question?

#### Mario Carneiro (Jun 24 2019 at 22:47):

sure, you can use list ops to make a mini quickcheck

#### Mario Carneiro (Jun 24 2019 at 22:49):

```lean
#eval do A ← list.range 10, B ← list.range 10,
  guard (int.nat_abs (-↑A % ↑B) ≠ int.nat_abs (-↑A) % B),
  return (A, B)
-- [(1, 3), ...]
```

#### Adrian Chu (Jun 25 2019 at 15:14):

Adrian Chu I don't think you're going in the right direction. The first thing to understand is that natural numbers are bad. You never noticed because in the real world one always inserts coercions to integers and all the coercion-related lemmas. My advice is to first state and prove the integer version. Then we'll talk about deducing the evil version. Here is the exercise I propose:

```lean
import algebra.euclidean_domain
import tactic.linarith

open euclidean_domain

lemma foo (a b : ℤ) (ha : a ≥ 1) (hb : b ≥ 1) (h : gcd a b = 1) :
  ∃ x y, a*x + 1 = b*y ∧ 0 ≤ x ∧ 0 ≤ y :=
begin
  let u := gcd_a a b,
  let v := gcd_b a b,
  let k := max (abs u) (abs v),
  use [-u+b*k, v + a*k],
  repeat { split },
  { rw show 1 = a*u + b*v, from sorry,
    ring },
  { suffices : u ≤ b * max (abs u) (abs v), by linarith,
    -- now you can `calc`
    sorry },
  { sorry }
end
```

So i guess i should try finishing this version of my lemma, and forget everything about nat.gcd

#### Patrick Massot (Jun 25 2019 at 15:17):

If you can afford that then of course everything becomes much easier. I still think it's a problem that mathlib has at least three gcds with no lemmas relating them in the case of integers

#### Patrick Massot (Jun 25 2019 at 15:17):

Did you manage to fill in the sorries in my exercise?

#### Adrian Chu (Jun 25 2019 at 15:18):

I am working on them now

#### Patrick Massot (Jun 25 2019 at 15:20):

Ok, I hope I inserted them wisely. I first wrote the full proof and then removed it to help you getting better training. Don't hesitate to ask questions if it's still too painful

#### Adrian Chu (Jun 25 2019 at 15:50):

```lean
import algebra.euclidean_domain
import tactic.linarith tactic.library_search

open euclidean_domain

lemma foo (a b : ℤ) (ha : a ≥ 1) (hb : b ≥ 1) (h : gcd a b = 1) :
  ∃ x y, a*x + 1 = b*y ∧ 0 ≤ x ∧ 0 ≤ y :=
begin
  let u := gcd_a a b,
  let v := gcd_b a b,
  let k := max (abs u) (abs v),
  have hk : 0 ≤ k :=
    calc 0 ≤ abs u : abs_nonneg u
       ... ≤ max (abs u) (abs v) : le_max_left (abs u) (abs v)
       ... = k : by simp,
  use [-u+b*k, v + a*k],
  repeat { split },
  { rw show 1 = a*u + b*v, from sorry,
    ring },
  { suffices : u ≤ b * max (abs u) (abs v), by linarith,
    calc u ≤ abs u : by exact le_max_left u (-u)
       ... ≤ max (abs u) (abs v) : le_max_left (abs u) (abs v)
       ... = 1 * max (abs u) (abs v) : by simp
       ... ≤ b * max (abs u) (abs v) : by exact mul_le_mul_of_nonneg_right hb hk },
  { suffices : -v ≤ a * max (abs u) (abs v), by linarith,
    calc -v ≤ abs (-v) : by exact le_max_left (-v) (-(-v))
        ... = abs v : abs_neg v
        ... ≤ max (abs u) (abs v) : le_max_right (abs u) (abs v)
        ... = 1 * max (abs u) (abs v) : by simp
        ... ≤ a * max (abs u) (abs v) : by exact mul_le_mul_of_nonneg_right ha hk }
end
```

#### Adrian Chu (Jun 25 2019 at 15:50):

I'm not able to figure out the first sorry (with 20 characters only). i know we need to use `gcd_eq_gcd_ab`

@Patrick Massot

#### Adrian Chu (Jun 25 2019 at 15:55):

Personally I would question why you want to prove that theorem?

This is a crucial lemma in a theorem i want to prove

#### Kevin Buzzard (Jun 25 2019 at 15:57):

Personally I would question why you want to prove that theorem?

This is a crucial lemma in a theorem i want to prove

I think that "that theorem" might have referred to the compatibility of the various notions of `gcd`

which we have in Lean.

#### Patrick Massot (Jun 25 2019 at 16:06):

You can shorten your first calc block to

```lean
calc 0 ≤ abs u : abs_nonneg u
   ... ≤ _ : le_max_left _ _
```

(or of course you can say `have hk : 0 ≤ k, from le_trans (abs_nonneg u) (le_max_left _ _),`

)

#### Patrick Massot (Jun 25 2019 at 16:06):

The sorry you couldn't do is `h ▸ gcd_eq_gcd_ab a b`

#### Patrick Massot (Jun 25 2019 at 16:07):

I hope you've learned a couple of tricks (those are not so easy to document)

#### Adrian Chu (Jun 25 2019 at 16:09):

wow thanks a lot!

#### Adrian Chu (Jun 25 2019 at 16:09):

i can finally move on to the second lemma, and then my main theorem :)

#### Kevin Buzzard (Jun 25 2019 at 16:09):

The triangle is term mode's version of the rewrite tactic.
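A minimal sketch of the idiom (the hypothesis names here are just illustrative):

```lean
-- `h ▸ p` rewrites `a` to `b` in the type of `p`, like `rw h at p` in tactic mode
example (a b : ℕ) (h : a = b) (p : 0 < a) : 0 < b := h ▸ p
```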

#### Adrian Chu (Jun 25 2019 at 16:53):

```lean
import algebra.euclidean_domain algebra.big_operators data.finset
open euclidean_domain

def fin_n_to_list {n : ℕ} (x : fin n → ℤ) : list ℤ :=
(list.range n).map (λ i, if h : i < n then x ⟨i, h⟩ else 0)

def n_lcm {n : ℕ} (x : fin n → ℤ) : ℤ :=
list.foldl lcm 1 (fin_n_to_list x)

lemma n_lcm_coprime {n : ℕ} (x : fin n → ℤ) (y : ℤ)
  (hx : ∀ i : fin n, 1 ≤ x i) (hy : 1 ≤ y)
  (coprime : ∀ i : fin n, gcd (x i) y = 1) :
  gcd (n_lcm x) y = 1 := sorry
```

#### Adrian Chu (Jun 25 2019 at 16:54):

i will start to prove this lemma. is my formulation of definitions and the statement of lemma appropriate?

#### Kevin Buzzard (Jun 25 2019 at 17:07):

Why are you using maps `fin n -> int`

at all? You are carrying around the length of your list, but you could just read it off by looking at the length of the list.

#### Kevin Buzzard (Jun 25 2019 at 17:13):

```lean
import algebra.euclidean_domain algebra.big_operators data.finset
open euclidean_domain

def list.lcm (x : list ℤ) : ℤ := list.foldl lcm 1 x

lemma list.lcm_coprime (x : list ℤ) (y : ℤ)
  (hx : ∀ s ∈ x, (1 : ℤ) ≤ s) (hy : 1 ≤ y)
  (coprime : ∀ s ∈ x, gcd s y = 1) :
  gcd (x.lcm) y = 1 := sorry
```

No `n`

in sight -- it's never needed.

#### Kevin Buzzard (Jun 25 2019 at 17:14):

Note also `x.lcm`

for `list.lcm x`

, it's a cool thing which I only recently understood.
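The same dot-notation trick works for any definition placed in the `list` namespace; a small sketch (the helper name `list.double` is made up):

```lean
def list.double (x : list ℤ) : list ℤ := x.map (λ n, 2 * n)

-- because `x : list ℤ`, `x.double` elaborates to `list.double x`
#eval ([1, 2, 3] : list ℤ).double -- [2, 4, 6]
```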

#### Kevin Buzzard (Jun 25 2019 at 17:15):

The CS purists might even say that these should all be theorems about multisets not lists, because the lcm does not depend on the order, so why are you carrying that around?

#### Adrian Chu (Jun 26 2019 at 04:04):

a naive question: can a list have infinite length?

#### Mario Carneiro (Jun 26 2019 at 04:13):

No

#### Johan Commelin (Jun 26 2019 at 04:16):

Nope, they can not.

#### Johan Commelin (Jun 26 2019 at 04:16):

`list.length`

is defined for all lists, and it is a function to `nat`

.

#### Johan Commelin (Jun 26 2019 at 04:16):

Infinite lists are usually called "streams", and they need a different implementation. Not sure if we have them in Lean.

#### Johan Commelin (Jun 26 2019 at 04:16):

Sorry... laggy internet connection...

#### Mario Carneiro (Jun 26 2019 at 04:20):

we have streams, they are just defined as `stream A := nat -> A`
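A sketch of how that definition behaves (redefined locally here so it doesn't clash with mathlib's `stream`):

```lean
def stream' (A : Type) := ℕ → A

-- an infinite constant stream; "indexing" is just function application
def ones : stream' ℕ := λ n, 1

#eval ones 42 -- 1
```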

#### Adrian Chu (Jun 26 2019 at 04:21):

i see. all I want is an n-tuple of numbers. so i will use list Z instead of fin n -> Z.

#### Mario Carneiro (Jun 26 2019 at 04:21):

There are also lazy lists, which are more like haskell lists in implementation but are still finite according to the theory. But in meta land you can construct infinite lazy lists

#### Johan Commelin (Jun 26 2019 at 04:22):

i see. all I want is an n-tuple of numbers. so i will use list Z instead of fin n -> Z.

The question is... is your `n`

fixed? Because with a list, you don't know if it has length `n`

.

#### Adrian Chu (Jun 26 2019 at 04:25):

no, the lemma (and my main theorem) just need an arbitrary finite number of positive integers as inputs.

#### Adrian Chu (Jun 26 2019 at 04:26):

so list should be good

#### Adrian Chu (Jun 26 2019 at 04:48):

i just realized a problem. in my main theorem, i have a statement involving a^b, where a and b are positive integers. but since we are now letting a, b be int instead of nat, i cant use the default a^b function

#### Adrian Chu (Jun 26 2019 at 04:53):

well i can use `(nat_abs a)^(nat_abs b)`. it's a bit ugly, but nvm

#### Mario Carneiro (Jun 26 2019 at 05:13):

I think you should keep them as nats rather than do something like that

#### Mario Carneiro (Jun 26 2019 at 05:13):

if they are actually nats then there's nothing wrong with that

#### Johan Commelin (Jun 26 2019 at 05:28):

@Adrian Chu You can let `a`

and `b`

be `nat`

s but nevertheless use `int.gcd`

instead of `nat.gcd`

.

#### Johan Commelin (Jun 26 2019 at 05:28):

I agree that it's quite messy.

#### Adrian Chu (Jun 26 2019 at 05:45):

Adrian Chu You can let `a` and `b` be `nat`s but nevertheless use `int.gcd` instead of `nat.gcd`.

I don't think we can, since this gives an error:

```lean
theorem mythm (r : list ℕ) (s : ℕ)
  (hr : ∀ z ∈ r, 1 ≤ z) (hs : 1 ≤ s)
  (coprime : ∀ z ∈ r, int.gcd z s = 1) : 1 = 1 := sorry
```

#### Mario Carneiro (Jun 26 2019 at 05:47):

what error? You may need to put some up arrows in there

#### Adrian Chu (Jun 26 2019 at 05:49):

OK, fixed

#### Adrian Chu (Jun 26 2019 at 06:01):

let x : list nat and I want to sum it. if for some reason I want to use `finset.univ.sum`

instead of `x.sum`

, what is the correct syntax?

#### Adrian Chu (Jun 26 2019 at 06:04):

```lean
import data.fintype

variable x : list ℕ

#check finset.univ.sum (λ i, x.nth i)
#check finset.univ.sum (λ i : fin x.range, x.nth i)
```

this doesnt work

#### Johan Commelin (Jun 26 2019 at 06:05):

You need to `range`

of the `length`

of the list.
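Concretely, something like this (a sketch; `list.nth` returns an `option`, so indexing with `nth_le` and the bound carried by `fin` is cleaner):

```lean
import data.fintype

variable (x : list ℕ)

-- sum over valid indices: `i : fin x.length` carries the proof `i.2 : i.1 < x.length`
#check finset.univ.sum (λ i : fin x.length, x.nth_le i.1 i.2)
```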

#### Johan Commelin (Jun 26 2019 at 06:05):

Why do you want to use `finset.sum`

?

#### Johan Commelin (Jun 26 2019 at 06:05):

If so... shouldn't you be using `finset nat`

instead of `list nat`

?

#### Adrian Chu (Jun 26 2019 at 06:09):

well I have `x : list nat`

and `r : list nat`

, lists of positive integers of the same length, and i want to sum (i-th term of x)^(i-th term of r) over i

#### Adrian Chu (Jun 26 2019 at 06:09):

i thought using `finset.univ.sum`

is the easiest

#### Mario Carneiro (Jun 26 2019 at 06:10):

I would do something like `zip_with`

to put the powers together and `list.sum`

to add them up
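A sketch of that suggestion (the name `pow_sum` is made up):

```lean
-- pair up the two lists with `^` and sum the results; no length `n` in sight
def pow_sum (x r : list ℕ) : ℕ := (list.zip_with (^) x r).sum

#eval pow_sum [2, 3] [3, 2] -- 2^3 + 3^2 = 17
```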

#### Johan Commelin (Jun 26 2019 at 06:10):

well I have `x : list nat` and `r : list nat`, lists of positive integers of the same length, and i want to sum (i-th term of x)^(i-th term of r) over i

??? why are they of the same length?

#### Adrian Chu (Jun 26 2019 at 06:11):

well I have `x : list nat` and `r : list nat`, lists of positive integers of the same length, and i want to sum (i-th term of x)^(i-th term of r) over i

??? why are they of the same length?

by assumption

#### Scott Morrison (Jun 26 2019 at 07:13):

Can you arrange to have a `list (nat x nat)`

?

#### Adrian Chu (Jun 26 2019 at 07:29):

Can you arrange to have a `list (nat x nat)`?

how will this help?

#### Marc Huisinga (Jun 26 2019 at 07:31):

it ensures that you have two lists of the same size, but is this really the easiest way to do it? why not pass a proof that the lengths are equal instead?

#### Mario Carneiro (Jun 26 2019 at 07:31):

it separates the information somewhat and generally makes the proofs harder

#### Mario Carneiro (Jun 26 2019 at 07:32):

I don't know about the direction of this thread though. What is the actual goal? I think the encoding decisions are being made without a good idea of the target theorem and that's bad news

#### Adrian Chu (Jun 26 2019 at 07:33):

okok, let me write out the complete thm

#### Adrian Chu (Jun 26 2019 at 07:37):

```lean
import data.fintype

theorem mythm (n : ℕ) (r : fin n → ℕ) (s : ℕ)
  (hr : ∀ i, 1 ≤ r i) (hs : 1 ≤ s)
  (coprime : ∀ i : fin n, nat.coprime (r i) s) :
  ∃ x : fin n → ℕ, ∃ y : ℕ, (∀ i, 1 ≤ x i) ∧ (0 ≤ y) ∧
    (finset.univ.sum (λ i, (x i)^(r i)) = y^s) := sorry
```

#### Adrian Chu (Jun 26 2019 at 07:37):

this is the original form of my theorem

#### Adrian Chu (Jun 26 2019 at 07:38):

and we can discuss whether (1) to use nat or int, and (2) to use fin n -> nat or list nat

#### Marc Huisinga (Jun 26 2019 at 07:40):

the only adv of fin n -> nat i can think of is that fin n -> nat can be handy when dealing with nested inductive types

#### Adrian Chu (Jun 26 2019 at 07:43):

and I will use these 2 lemmas

```lean
import data.fintype
import algebra.euclidean_domain
import algebra.big_operators
open euclidean_domain

def fin_n_to_list {n : ℕ} (x : fin n → ℕ) : list ℕ :=
(list.range n).map (λ i, if h : i < n then x ⟨i, h⟩ else 0)

def n_lcm {n : ℕ} (x : fin n → ℕ) : ℕ :=
list.foldl nat.lcm 1 (fin_n_to_list x)

lemma n_lcm_coprime {n : ℕ} (x : fin n → ℕ) (y : ℕ)
  (coprime : ∀ i : fin n, nat.coprime (x i) y) :
  nat.coprime (n_lcm x) y := sorry -- not yet proven

lemma bazout (a b : ℤ) (ha : a ≥ 1) (hb : b ≥ 1) (h : gcd a b = 1) :
  ∃ x y, a*x + 1 = b*y ∧ 0 ≤ x ∧ 0 ≤ y := sorry -- already proven
```

#### Mario Carneiro (Jun 26 2019 at 07:47):

The statement of `mythm`

looks fine, except that you can replace `fin n`

with any fintype `A`

, and the `0 <= y`

is redundant
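Spelled out, that generalization might look like this (a sketch, with the redundant `0 ≤ y` dropped):

```lean
import data.fintype

theorem mythm' (A : Type*) [fintype A] (r : A → ℕ) (s : ℕ)
  (hr : ∀ i, 1 ≤ r i) (hs : 1 ≤ s)
  (coprime : ∀ i : A, nat.coprime (r i) s) :
  ∃ x : A → ℕ, ∃ y : ℕ, (∀ i, 1 ≤ x i) ∧
    finset.univ.sum (λ i, (x i)^(r i)) = y^s := sorry
```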

#### Adrian Chu (Jun 26 2019 at 09:58):

no... i just tried, the conversion between int and nat is so frustrating (since my bazout lemma is using int). i want to use int in mythm. do we have a^b for a, b in int ?

#### Mario Carneiro (Jun 26 2019 at 10:02):

You want to have your bezout lemma on nat too

#### Adrian Chu (Jun 26 2019 at 10:13):

good idea, i can make a new lemma bazout2 for nat using bazout

#### Adrian Chu (Jun 26 2019 at 10:22):

my strategy now is to use a, b : nat, but use euclidean_domain.gcd (a:int) b

#### Mario Carneiro (Jun 26 2019 at 10:30):

Since you seem to already have a solution, I will show you how I would have proven your original bezout theorem:

```lean
lemma bezout (a : ℕ) (b : ℕ) (ha : a ≥ 1) (hb : b ≥ 1)
  (h : nat.gcd a b = 1) : ∃ x : ℕ, ∃ y : ℕ, a*x + 1 = b*y :=
begin
  let k := max (int.nat_abs (nat.gcd_a a b)) (int.nat_abs (nat.gcd_b a b)) + 1,
  let x : ℤ := b*k - nat.gcd_a a b,
  let y : ℤ := a*k + nat.gcd_b a b,
  refine ⟨int.to_nat x, int.to_nat y, int.coe_nat_inj _⟩,
  suffices : (a * int.to_nat x + 1 : ℤ) = b * int.to_nat y, {simpa},
  have k1 : 1 ≤ k := nat.le_add_left _ _,
  have ha' : (1:ℤ) ≤ a := int.coe_nat_le.2 ha,
  have hb' : (1:ℤ) ≤ b := int.coe_nat_le.2 hb,
  have x0 : 0 ≤ x,
  { refine sub_nonneg.2 _,
    have := mul_le_mul_of_nonneg_right hb' (int.coe_nat_nonneg k),
    rw one_mul at this,
    refine le_trans (le_trans int.le_nat_abs _) this,
    refine int.coe_nat_le.2 _,
    exact nat.le_succ_of_le (le_max_left _ _) },
  have y0 : 0 ≤ y,
  { refine sub_le_iff_le_add.1 _,
    rw zero_sub,
    have := mul_le_mul_of_nonneg_right ha' (int.coe_nat_nonneg k),
    rw one_mul at this,
    refine le_trans (le_trans int.le_nat_abs _) this,
    rw [int.nat_abs_neg, int.coe_nat_le],
    exact nat.le_succ_of_le (le_max_right _ _) },
  rw [int.to_nat_of_nonneg x0, int.to_nat_of_nonneg y0],
  have := nat.gcd_eq_gcd_ab a b,
  rw [h, int.coe_nat_one] at this,
  rw [this, mul_sub, ← add_assoc, sub_add_cancel, mul_left_comm, ← mul_add],
end
```

I don't think it's necessary to ban the use of nats as long as you are conscientious in your use of the up arrow. In particular if it makes you use more unusual functions with a worse interface then it's not worth it

#### Patrick Massot (Jun 26 2019 at 11:00):

I think this is completely crazy. How can it be a good idea to have `nat.gcd`

, `int.gcd`

and `euclidean_domain.gcd`

and not a single lemma relating those?

#### Mario Carneiro (Jun 26 2019 at 11:00):

Of course there should be such lemmas

#### Mario Carneiro (Jun 26 2019 at 11:01):

but we don't need the other gcds for this theorem

#### Patrick Massot (Jun 26 2019 at 11:02):

The proof of this theorem is much nicer with `euclidean_domain.gcd`

. Your proof is hideous, even if it could be nicer using `norm_cast`

#### Mario Carneiro (Jun 26 2019 at 11:02):

None of the proof has to do with gcd

#### Mario Carneiro (Jun 26 2019 at 11:03):

It's quite possible that norm_cast can be used in a few places, but it's mostly about dealing with max and simple algebra

#### Mario Carneiro (Jun 26 2019 at 11:03):

also I don't really care about calc blocks

#### Mario Carneiro (Jun 26 2019 at 11:05):

Are you saying that there is a completely different proof that uses `euclidean_domain.gcd`

with some different lemmas about it? I'm mostly following adrian's proof sketch here

#### Kevin Buzzard (Jun 26 2019 at 11:06):

The fact that we have

```lean
theorem gcd_eq_gcd_ab : (gcd a b : ℤ) = a * gcd_a a b + b * gcd_b a b := ...
```

and this proof is still nearly 30 lines is what is so jarring.

#### Mario Carneiro (Jun 26 2019 at 11:06):

This theorem is proving something completely different

#### Kevin Buzzard (Jun 26 2019 at 11:06):

Hmm, I guess gcd_a is probably an int, so there's some content getting all the signs right

#### Mario Carneiro (Jun 26 2019 at 11:07):

The important part is the fact that you can shift around solutions by a multiple

#### Kevin Buzzard (Jun 26 2019 at 11:07):

Right.

#### Mario Carneiro (Jun 26 2019 at 11:07):

and then you have some inequalities to check

#### Patrick Massot (Jun 26 2019 at 11:07):

I mean

```lean
lemma foo (a b : ℤ) (ha : a ≥ 1) (hb : b ≥ 1) (h : gcd a b = 1) :
  ∃ x y, a*x + 1 = b*y ∧ 0 ≤ x ∧ 0 ≤ y :=
begin
  let u := gcd_a a b,
  let v := gcd_b a b,
  let k := max (abs u) (abs v),
  have hk : 0 ≤ k := le_trans (abs_nonneg u) (le_max_left _ _),
  use [-u+b*k, v + a*k],
  repeat { split },
  { rw show 1 = a*u + b*v, from h ▸ gcd_eq_gcd_ab a b,
    ring },
  { suffices : u ≤ b * max (abs u) (abs v), by linarith,
    calc u ≤ abs u : le_abs_self u
       ... ≤ max (abs u) (abs v) : le_max_left _ _
       ... = 1*max (abs u) (abs v) : by simp
       ... ≤ b*max (abs u) (abs v) : mul_le_mul_of_nonneg_right hb hk },
  { suffices : -v ≤ a * max (abs u) (abs v), by linarith,
    calc -v ≤ abs v : neg_le_abs_self _
        ... ≤ max (abs u) (abs v) : le_max_right _ _
        ... = 1*max (abs u) (abs v) : by simp [le_max_right]
        ... ≤ a*max (abs u) (abs v) : mul_le_mul_of_nonneg_right ha hk }
end
```

#### Kevin Buzzard (Jun 26 2019 at 11:08):

I think our views on the meaning of the phrase "completely different" might have diverged a bit but I do take your point that more needs to be said.

#### Mario Carneiro (Jun 26 2019 at 11:08):

Ah, you linarith'd

#### Patrick Massot (Jun 26 2019 at 11:08):

And then, modulo relating different versions of `gcd`

, the proof of the `nat`

version is:

```lean
lemma baz (a : ℕ) (b : ℕ) (ha : a ≥ 1) (hb : b ≥ 1)
  (h : nat.gcd a b = 1) : ∃ x : ℕ, ∃ y : ℕ, a*x + 1 = b*y :=
begin
  rcases foo a b (by exact_mod_cast ha) (by exact_mod_cast hb) sorry
    with ⟨x, y, h, hx, hy⟩,
  use [x.to_nat, y.to_nat],
  rw [show x = (x.to_nat : ℤ), by simp [hx],
      show y = (y.to_nat : ℤ), by simp [hy]] at h,
  exact_mod_cast h
end
```

#### Kevin Buzzard (Jun 26 2019 at 11:08):

and he rung.

#### Patrick Massot (Jun 26 2019 at 11:10):

Of course I linarith, I ring, and norm_cast. I very strongly believe all this is essential for the future of proof assistants for mathematicians

#### Adrian Chu (Jun 26 2019 at 11:15):

million thanks. i am building up the framework of the proof of mythm currently.

#### Kevin Buzzard (Jun 26 2019 at 11:27):

It's quite possible that norm_cast can be used in a few places, but it's mostly about dealing with max and simple algebra

This is just the metamath-mario trying to get out. You can see a proof from very low-level principles and so figure that this is a good way to do it. The mathematicians see the theorem for `int`

and figure that the boring work is already done, and that we should just use that.

#### Mario Carneiro (Jun 26 2019 at 11:29):

I don't use a high level tactic until I understand very well how effective it is and in what circumstances it can be used. AKA "no magic"

#### Kevin Buzzard (Jun 26 2019 at 11:30):

That's exactly what I'm saying. You were brought up on metamath and this has formed the way you think about how to solve goals.

#### Mario Carneiro (Jun 26 2019 at 11:30):

That said there are also a few proof tricks in Patrick's proof that I missed

#### Patrick Massot (Jun 26 2019 at 11:30):

We need both. We need Mario on one side, and we need Rob and Paul-Nicolas (and hopefully many more) on the other side

#### Kevin Buzzard (Jun 26 2019 at 11:31):

Yes absolutely. I'm just saying that it's interesting.

#### Kevin Buzzard (Jun 26 2019 at 11:32):

I think Mario wrote `ring`

;-)

#### Mario Carneiro (Jun 26 2019 at 11:33):

The difference between us is I feel the pressure of the actual proof term at all times

#### Mario Carneiro (Jun 26 2019 at 11:33):

If I use a complicated tactic that spews some gigantic... *thing*... in for my proof, I think it's a long and ugly proof

#### Kevin Buzzard (Jun 26 2019 at 11:33):

That's an interesting comment. I have no concept of what proof terms look like.

#### Mario Carneiro (Jun 26 2019 at 11:33):

even if the proof script is just `by magic`

#### Mario Carneiro (Jun 26 2019 at 11:34):

When I use `refine`

and `apply`

and `rw`

, I know exactly what proof term is getting generated and how heavy that's going to be for the kernel

#### Kevin Buzzard (Jun 26 2019 at 11:35):

With Olympiad training as a kid I was taught the importance of getting the solution out as quickly as possible, who cares about other stuff like elegance.

#### Kevin Buzzard (Jun 26 2019 at 11:35):

I'm hoping that `refine`

and `apply`

don't add too much weight to the proof term! I know how to do them in term mode :-)

#### Mario Carneiro (Jun 26 2019 at 11:35):

that's exactly the point

#### Patrick Massot (Jun 26 2019 at 11:36):

Then we need more CS people working on the magic tactics, so that they produce nicer terms

#### Mario Carneiro (Jun 26 2019 at 11:36):

You don't really need them - you could do them in term mode. They are just slight shorthands for building that proof term

#### Kevin Buzzard (Jun 26 2019 at 11:37):

why do I care about the size of a proof term? It's immediately forgotten.

#### Mario Carneiro (Jun 26 2019 at 11:37):

I wish that was a bigger concern in lean-land. I care about it very much, but lean fights me when it comes to proof optimization

#### Mario Carneiro (Jun 26 2019 at 11:37):

It's not forgotten, it's stored and passed around and checked hundreds of time on travis and burns many CPU hours around the world

#### Patrick Massot (Jun 26 2019 at 11:45):

I'm seriously thinking about buying a bigger CPU tonight

#### Patrick Massot (Jun 26 2019 at 11:46):

I think I want more cores and more RAM, right?

#### Kevin Buzzard (Jun 26 2019 at 11:46):

gone are the days of more pixels. I remember wanting more colours once!

#### Kevin Buzzard (Jun 26 2019 at 11:46):

I upgraded from 2 to 8

#### Reid Barton (Jun 26 2019 at 11:47):

But some of the 8 colors were different on every other pixel so it was really more like... 12

#### Patrick Massot (Jun 26 2019 at 11:47):

I think VScode doesn't use much more than 8 colors, so you should be fine now. Let's go for CPU cores instead

#### Kevin Buzzard (Jun 26 2019 at 11:47):

Is a GPU of any use?

#### Patrick Massot (Jun 26 2019 at 11:47):

Unfortunately no

#### Patrick Massot (Jun 26 2019 at 11:48):

But wait until @Gabriel Ebner starts working on Lean 4 on GPU

#### Kevin Buzzard (Jun 26 2019 at 11:51):

This isn't really a noob question and probably deserves its own thread. My laptop has something like 4 cores and 16 gigs of ram and even if I compile mathlib I don't ever get close to those 16 gigs.

#### Patrick Massot (Jun 26 2019 at 11:51):

I have RAM problems when I want to compile mathlib while using VScode on some other project

#### Kevin Buzzard (Jun 26 2019 at 11:52):

In particular, I wonder whether somehow there's a theorem of the form "if you have x cores, then don't buy any more than c * x gigs of ram because Lean won't use that much" for some constant c.

#### Kevin Buzzard (Jun 26 2019 at 11:53):

When I want to compile mathlib and do something else too, I see how much free ram I have and then compile from the command line with the -M flag. Occasionally the build just runs out of memory and stops, and then I just start it again.

#### Johan Commelin (Jun 26 2019 at 11:54):

I haven't compile mathlib in the last 6 weeks. `cache-olean`

is quite awesome.

#### Kevin Buzzard (Jun 26 2019 at 11:54):

Using the M flag also stopped my desktop (which had no swap) from randomly crashing when Lean suddenly goes beserk and fills up all memory.

#### Kevin Buzzard (Jun 26 2019 at 11:55):

IIRC it still didn't stop Lean going crazy when invoked via VS Code, so I added some swap anyway.

#### Patrick Massot (Jun 26 2019 at 11:55):

compiling mathlib is still required when working on mathlib

#### Kevin Buzzard (Jun 26 2019 at 12:56):

It's not forgotten, it's stored and passed around and checked hundreds of time on travis and burns many CPU hours around the world

You make it sound like I'm responsible for climate change!

#### Patrick Massot (Jun 26 2019 at 12:57):

This is clearly what he means.

#### Patrick Massot (Jun 26 2019 at 12:57):

And now you'll make it worse by flying to Portland

#### Adrian Chu (Jun 26 2019 at 14:49):

```lean
import algebra.euclidean_domain data.vector
import tactic.linarith tactic.library_search algebra.big_operators data.fintype
import data.finset data.nat.basic

open euclidean_domain

def fin_n_to_list {n : ℕ} (x : fin n → ℕ) : list ℕ :=
(list.range n).map (λ i, if h : i < n then x ⟨i, h⟩ else 0)

def n_lcm {n : ℕ} (x : fin n → ℕ) : ℕ :=
list.foldl nat.lcm 1 (fin_n_to_list x)

lemma sum_to_mul (a b : ℕ) : finset.univ.sum (λ i : fin a, b) = a*b :=
by simp [finset.sum_const, finset.card_univ, fintype.card_fin]

lemma n_lcm_coprime {n : ℕ} (x : fin n → ℕ) (y : ℕ)
  (coprime : ∀ i : fin n, nat.gcd (x i) y = 1) :
  nat.gcd (n_lcm x) y = 1 :=
sorry -- need to prove

lemma bazout_int (a b : ℤ) (ha : a ≥ 1) (hb : b ≥ 1) (h : gcd a b = 1) :
  ∃ x : ℤ, ∃ y : ℤ, a*x + 1 = b*y ∧ 0 ≤ x ∧ 0 ≤ y :=
sorry -- already proven

-- this is Patrick's version, or we can use Mario's version above
lemma bazout_nat (a b : ℕ) (ha : a ≥ 1) (hb : b ≥ 1) (h : nat.gcd a b = 1) :
  ∃ x : ℕ, ∃ y : ℕ, a*x + 1 = b*y :=
begin
  rcases bazout_int a b (by exact_mod_cast ha) (by exact_mod_cast hb) sorry
    with ⟨x, y, h, hx, hy⟩, -- need to prove
  use [x.to_nat, y.to_nat],
  rw [show x = (x.to_nat : ℤ), by simp [hx],
      show y = (y.to_nat : ℤ), by simp [hy]] at h,
  exact_mod_cast h
end

theorem mythm (n : ℕ) (r : fin n → ℕ) (s : ℕ) (hn : n ≥ 1) (hr : ∀ i, r i ≥ 1)
  (hs : s ≥ 1) (coprime : ∀ i : fin n, nat.gcd (r i) s = 1) :
  ∃ x : fin n → ℕ, ∃ y : ℕ,
    (∀ i, x i ≥ 1) ∧ (finset.univ.sum (λ i, (x i)^(r i)) = y^s) :=
begin
  let t := n_lcm r,
  have t_geq_1 : t ≥ 1 := sorry, -- need to prove
  have ri_div_t : (∀ i : fin n, t % (r i) = 0) := sorry, -- need to prove
  let t_s_coprime := n_lcm_coprime r s coprime,
  cases (bazout_nat t s t_geq_1 hs t_s_coprime) with a ha,
  cases ha with b hab,
  let x := λ i : fin n, n^((a*t)/r i),
  let y := n^b,
  have trivial1 : a*t + 1 = t*a + 1 := by simp [mul_comm],
  fapply exists.intro, exact x,
  fapply exists.intro, exact y,
  repeat {split},
  { intro i, sorry }, -- for all i, x i ≥ 1
  calc finset.univ.sum (λ i, (n^((a*t)/r i))^(r i))
        = finset.univ.sum (λ i, (n^(((a*t)/r i)*(r i)))) : by sorry
    ... = finset.univ.sum (λ i : fin n, n^(a*t)) : by sorry
    ... = n*(n^(a*t)) : sum_to_mul n (n^(a*t))
    ... = n^(a*t + 1) : by exact mul_comm n (n^(a*t))
    ... = n^(t*a + 1) : by exact congr_arg (pow n) trivial1
    ... = n^(s*b) : by exact congr_arg (pow n) hab
    ... = n^(b*s) : by exact congr_arg (pow n) (mul_comm s b)
    ... = (n^b)^s : nat.pow_mul b s n
end
```
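One possible starting point for discharging the `bazout_int` sorry: mathlib's extended gcd in `data.int.gcd` (`nat.gcd_a`, `nat.gcd_b`, `nat.gcd_eq_gcd_ab`) already provides a Bézout identity over ℤ. A hedged, untested sketch:

```lean
import data.int.gcd

-- Hedged sketch: `nat.gcd_eq_gcd_ab` states
-- `(nat.gcd a b : ℤ) = a * nat.gcd_a a b + b * nat.gcd_b a b`,
-- which specializes to a Bézout identity when the gcd is 1.
example (a b : ℕ) (h : nat.gcd a b = 1) :
  (a : ℤ) * nat.gcd_a a b + (b : ℤ) * nat.gcd_b a b = 1 :=
begin
  have := nat.gcd_eq_gcd_ab a b,
  rw h at this,
  exact_mod_cast this.symm
end
```

Note that `nat.gcd_a` and `nat.gcd_b` are integers that may be negative, so producing the nonnegative witnesses demanded by `bazout_int` still needs extra work on top of this.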

#### Adrian Chu (Jun 26 2019 at 14:50):

This is a record of my (our) progress; I am (we are) getting closer! But for me it's time to rest.

#### Reid Barton (Jun 26 2019 at 14:54):

I know Lean doesn't care, but the name is `bezout`

(or even better, Bézout)

#### Johan Commelin (Jun 26 2019 at 14:55):

I'm inclined to say *baz…* (oh, well, never mind…)

#### Patrick Massot (Jun 26 2019 at 15:25):

I assumed all along this was an intentional reference to the foo, bar, baz sequence, being a variation on Bézout

#### Patrick Massot (Jun 26 2019 at 15:26):

but maybe I was wrong and Adrian wants to also credit Bachet

#### Kevin Kappelmann (Jun 26 2019 at 19:14):

When using `induction t : e`

, I get a weird inductive hypothesis. For example,

```lean
example {n : ℕ} : n ≤ n * n :=
begin
  induction n_eq : n with m IH,
  { simp },
  { sorry } -- ill-formed IH here
end
```

will create a goal

```
case nat.succ
n m : ℕ,
IH : n = m → m ≤ m * m,
n_eq : n = nat.succ m
⊢ nat.succ m ≤ nat.succ m * nat.succ m
```

but clearly, `n ≠ m`

so I cannot use the IH for my proof. Am I doing something wrong? Note: I need `n_eq`

in the theorem I am actually proving, so just dropping `n_eq :`

is not an option.

#### Kevin Buzzard (Jun 26 2019 at 19:20):

What does "I need `n_eq`

in the theorem I am actually proving" mean? What's wrong with just `induction n with m IH`

?

#### Johan Commelin (Jun 26 2019 at 19:21):

~~Wrong thread?~~

#### Kevin Buzzard (Jun 26 2019 at 19:21):

(deleted)

#### Kevin Buzzard (Jun 26 2019 at 19:22):

> What does "I need `n_eq` in the theorem I am actually proving" mean? What's wrong with just `induction n with m IH`?

I mean, what hypothesis do you actually want, if it's not what you got?

#### Kevin Buzzard (Jun 26 2019 at 19:27):

But yeah something is weird there. It's as if `induction n_eq : n`

will always cause you trouble.

#### Kevin Kappelmann (Jun 26 2019 at 19:27):

That was ill-phrased, let me clarify. I did not want to say that I need `n_eq`

, but rather, I still want to be able to refer to `n`

in both cases. Basically, I want to do some work for both case nat.zero and nat.succ using `all_goals`

and referring to `n`

like this:

```lean
example {n : ℕ} : n ≤ n * n :=
begin
  induction n_eq : n with m IH,
  all_goals { have : 0 ≤ n, from n.zero_le },
  { simp },
  { sorry }
end
```
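One possible workaround (a hedged Lean 3 sketch, untested): prove a generalized statement about a fresh variable `k`, so that `induction` consumes `k` while the original `n` stays in the context.

```lean
-- Hedged sketch: `induction` clears the variable it acts on, so we
-- induct on a fresh `k` inside a `have`; `n` remains available outside.
example {n : ℕ} : n ≤ n * n :=
begin
  have h : ∀ k : ℕ, k ≤ k * k,
  { intro k,
    induction k with m IH,
    { simp },
    { calc nat.succ m = nat.succ m * 1 : (mul_one _).symm
        ... ≤ nat.succ m * nat.succ m :
              nat.mul_le_mul_left _ (nat.succ_le_succ (nat.zero_le m)) } },
  -- `n` is still in context here, so facts about it remain usable:
  have : 0 ≤ n := n.zero_le,
  exact h n
end
```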

#### Kevin Buzzard (Jun 26 2019 at 19:28):

But when you do `induction n`

, doesn't `n`

literally disappear from the context?

#### Kevin Kappelmann (Jun 26 2019 at 19:28):

Yep, that's what I want to avoid!

#### Kevin Buzzard (Jun 26 2019 at 19:29):

But isn't that how induction works?

#### Kevin Buzzard (Jun 26 2019 at 19:29):

I see what you're trying to do.

#### Kevin Buzzard (Jun 26 2019 at 19:29):

Why don't you just do what you want to do with `n` before you start on the induction?

#### Kevin Kappelmann (Jun 26 2019 at 19:32):

Because then it introduces these statements as premises in my IH as well. Cf:

example {n : ℕ} : n ≤ n * n := begin have : 0