Zulip Chat Archive

Stream: maths

Topic: How to formalize recursion theorem?


Rui Liu (Nov 13 2020 at 00:15):

I have defined the natural numbers with Peano axioms, rather than as an inductive type. Any hints on how to formalize the natural-number recursion theorem in this framework?

The reference book I have is Elements of Set Theory, in which the recursion theorem is proved by taking a union of functions. This operation doesn't seem very native to the language of type theory. Is there another way to formalize it?

Johan Commelin (Nov 13 2020 at 06:49):

@Rui Liu Could you share some code? That will make it easier to help you.

Rui Liu (Nov 13 2020 at 12:31):

Johan Commelin said:

Rui Liu Could you share some code? That will make it easier to help you.

I have defined the following code for Peano axioms:

constant Nat: Type
constant Zero: Nat
constant Successor: Nat → Nat
constant AxiomNat1: ∀ n:Nat, Successor n ≠ Zero
constant AxiomNat2: ∀ n:Nat, ∀ m:Nat, Successor n = Successor m → n = m
constant AxiomNat3: ∀ P: Nat → Prop, (P Zero ∧ ∀ n:Nat, P n → P (Successor n)) → ∀ n:Nat, P n

Now I would like to prove the recursion theorem, which takes a : A and f : A → A and gives a unique recursion function g on Nat.

theorem recursion: ∀ A:Type, Πa:A, Πf:(A → A), ∃ g: Nat → A, (g Zero = a ∧ ∀ n:Nat, g (Successor n) = f (g n)) := sorry

I know a proof is documented in "Elements of Set Theory", but it's too heavily based on sets and feels unnatural to do in type theory (I'm not even sure it would work). Is there a more natural way to prove it in type theory that only uses the axioms above, without resorting to inductive types?

Reid Barton (Nov 13 2020 at 13:44):

I think you can approach the problem like this:

  • there's no way we can produce the required function g directly, so we have to use (unique) choice to obtain it from its graph
  • so let's try to produce the relation r : Nat -> A -> Prop which is supposed to be the graph of g
  • now if we used the inductive type nat, we could define r as an inductive predicate with two constructors r 0 a and r n x -> r (n+1) (f x)
  • by a general procedure, we can replace the inductive description of this predicate by the following "impredicative" one: take the intersection of all relations s satisfying s 0 a and forall n x, s n x -> s (n + 1) (f x), and call it r
  • now try to prove by induction on n that this r is the graph of a function g
  • since it satisfies the conditions r 0 a and r n x -> r (n+1) (f x) (basically by definition) we conclude the function g will satisfy the desired recurrence relation.

Reid Barton (Nov 13 2020 at 13:46):

in the next to last point by "is the graph of a function" I just mean that for each n, there is a unique y such that r n y
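A sketch of that statement in Lean, assuming the constants (A, the relation r) from the snippets later in the thread; the name r_functional is hypothetical:

-- "r is the graph of a function": every n is related to exactly one y;
-- ∃! is Lean's built-in unique-existence quantifier
theorem r_functional : ∀ n : Nat, ∃! y : A, r n y := sorry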

Reid Barton (Nov 13 2020 at 13:54):

Probably you could approximate the relation r "from below" rather than "from above", as well

Mario Carneiro (Nov 13 2020 at 15:33):

If you want a minimal addition to your setup that allows you to construct such functions, I suggest adding a constant the : unique A -> A

Mario Carneiro (Nov 13 2020 at 15:34):

or possibly even a first order version of that to avoid needing subtypes
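A minimal sketch of what such a first-order unique-choice constant could look like (the names UniqueChoice and UniqueChoiceSpec are hypothetical, not Mario's exact proposal):

-- given a proof that P has exactly one witness, produce that witness,
-- together with a proof that it satisfies P
constant UniqueChoice {A : Type} {P : A → Prop} : (∃! x, P x) → A
constant UniqueChoiceSpec {A : Type} {P : A → Prop} (h : ∃! x, P x) :
  P (UniqueChoice h)

This mirrors the UniqueObj/UniqueProp constants that appear in the full proof further down the thread.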

Rui Liu (Nov 13 2020 at 20:50):

Reid Barton said:

I think you can approach the problem like this:

  • there's no way we can produce the required function g directly, so we have to use (unique) choice to obtain it from its graph
  • so let's try to produce the relation r : Nat -> A -> Prop which is supposed to be the graph of g
  • now if we used the inductive type nat, we could define r as an inductive predicate with two constructors r 0 a and r n x -> r (n+1) (f x)
  • by a general procedure, we can replace the inductive description of this predicate by the following "impredicative" one: take the intersection of all relations s satisfying s 0 a and forall n x, s n x -> s (n + 1) (f x), and call it r
  • now try to prove by induction on n that this r is the graph of a function g
  • since it satisfies the conditions r 0 a and r n x -> r (n+1) (f x) (basically by definition) we conclude the function g will satisfy the desired recurrence relation.

How do you define "intersection of all relations", when we're working in a synthetic theory rather than reducing everything to sets?

Reid Barton (Nov 13 2020 at 20:52):

Intersection meaning "and", if you will. The relations are all effectively subsets of Nat × A.

Reid Barton (Nov 13 2020 at 20:53):

let r : Nat -> A -> Prop := \lam n x,
\all (s : Nat -> A -> Prop), (s 0 a /\ \all n x, s n x -> s (n + 1) (f x)) -> s n x

Rui Liu (Nov 14 2020 at 21:12):

Reid Barton said:

let r : Nat -> A -> Prop := \lam n x,
\all (s : Nat -> A -> Prop), (s 0 a /\ \all n x, s n x -> s (n + 1) (f x)) -> s n x

Thanks, I'm trying out this approach. Intuitively, with this definition I should be able to prove:

h0: r 0 a
h1: forall n x, r n x -> r (n + 1) (f x)

These properties are directly provable when we define r as an intersection in set theory. However, in type theory I managed to prove h0, but I don't see how to prove h1 yet. Any ideas?

Reid Barton (Nov 14 2020 at 21:15):

I think just expanding everything and applying the hypotheses will do it

Reid Barton (Nov 14 2020 at 21:19):

we know every "good" s (satisfying the inner condition) satisfies s n x, and we need to show every good s satisfies s (n+1) (f x), but by definition, for a good s, s n x implies s (n+1) (f x)

Reid Barton (Nov 14 2020 at 21:24):

in general, any set theory that doesn't rely on a global membership relation (like ordinals) should translate directly to type theory

Rui Liu (Nov 14 2020 at 21:28):

Reid Barton said:

in general, any set theory that doesn't rely on a global membership relation (like ordinals) should translate directly to type theory

I might have missed something... but the following code doesn't type check:

constant Nat: Type
constant Zero: Nat
constant Succ: Nat → Nat
constant AxiomNat1: ∀ n:Nat, Succ n ≠ Zero
constant AxiomNat2: ∀ n:Nat, ∀ m:Nat, Succ n = Succ m → n = m
constant AxiomNat3: ∀ P: Nat → Prop, (P Zero ∧ ∀ n:Nat, P n → P (Succ n)) → ∀ n:Nat, P n

constant A: Type
constant a: A
constant f: A → A
def r: Nat → A → Prop :=
    λ (n: Nat) (x:A),
    ∀ s: Nat → A → Prop, (s Zero a ∧ ∀ (n: Nat) (x: A), s n x → s (Succ n) (f x)) → s n x

theorem h0: r Zero a :=
    assume s: Nat → A → Prop,
    assume h: s Zero a ∧ ∀ n x, s n x → s (Succ n) (f x),
    and.elim_left h

theorem h1: ∀ n x, r n x → r (Succ n) (f x) :=
    assume n: Nat,
    assume x: A,
    assume h: r n x,
    assume s: Nat → A → Prop,
    assume h': s Zero a ∧ ∀ (n: Nat) (x: A), s n x → s (Succ n) (f x),
    and.elim_right h' n x h

Reid Barton (Nov 14 2020 at 21:31):

You're using term mode = challenge mode.

theorem h1: ∀ n x, r n x → r (Succ n) (f x) :=
    assume n: Nat,
    assume x: A,
    assume h: r n x,
    assume s: Nat → A → Prop,
    assume h': s Zero a ∧ ∀ (n: Nat) (x: A), s n x → s (Succ n) (f x),
    begin
      apply h'.2,
      apply h,
      exact h',
    end

Reid Barton (Nov 14 2020 at 21:34):

Or in term mode, it's easier if you keep the proof type-checking, aside from holes (_).

Rui Liu (Nov 14 2020 at 21:45):

@Reid Barton

Thank you! I just spent like an hour staring at the screen...

Or in term mode, it's easier if you keep the proof type-checking, aside from holes (_).

How do you do this?

Also, you mentioned that it's possible to approximate from below; can you elaborate on how you would do that?

Reid Barton (Nov 14 2020 at 21:47):

e.g. after all the assumes, put _

Reid Barton (Nov 14 2020 at 21:48):

then only replace the _ by some other term with a hole that still type checks, like h'.2 _
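A possible progression for h1 in this style (hypothetical, following Reid's suggestion and reusing the constants and r from the snippet above; each version elaborates up to the hole, and Lean reports the hole's expected type):

theorem h1: ∀ n x, r n x → r (Succ n) (f x) :=
    λ n x h s h', _                  -- goal: s (Succ n) (f x)

-- refine the hole while staying type-correct:
--  λ n x h s h', h'.2 n x _        -- goal: s n x
-- and finally fill it in:
--  λ n x h s h', h'.2 n x (h s h')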

Reid Barton (Nov 14 2020 at 21:55):

Rui Liu said:

Also, you mentioned that it's possible to approximate from below; can you elaborate on how you would do that?

Something like: take the union of all relations s such that

  • if s 0 x, then x = a
  • if s (n+1) y, then there exists x with s n x and y = f x

Rui Liu (Nov 14 2020 at 22:03):

Reid Barton said:

Rui Liu said:

Also, you mentioned that it's possible to approximate from below; can you elaborate on how you would do that?

Something like: take the union of all relations s such that

  • if s 0 x, then x = a
  • if s (n+1) y, then there exists x with s n x and y = f x

Sorry, but how do you take a union of relations in type theory?

Reid Barton (Nov 14 2020 at 22:06):

\exists
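Spelled out (a sketch, not code from the thread; the name r_below is hypothetical), the "from below" relation would be an existential over all such approximating relations, using the same constants as in the earlier snippets:

def r_below : Nat → A → Prop :=
    λ (n: Nat) (x: A),
    ∃ s: Nat → A → Prop,
      ((∀ y, s Zero y → y = a) ∧
       (∀ m y, s (Succ m) y → ∃ z, s m z ∧ y = f z)) ∧
      s n x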

Mario Carneiro (Nov 14 2020 at 22:16):

I don't think this proof works, because to show the relation is not trivial you have to first prove the theorem

Rui Liu (Nov 15 2020 at 15:42):

Thanks for the help from everyone! I finally got this theorem formalized using @Reid Barton's idea! It is much more involved than the informal proof, since I had to work out all the details needed to derive even a simple property, and then find the correct way to invoke them in type theory. This made me realize how much implicit formula manipulation happens in my head without me noticing it... Also, several levels deep into the proof, I no longer see a good way to name the proof terms and keep track of them... The important and unimportant steps are all listed together, which doesn't give a good high-level overview. I'd like to try tactic mode at some point; I don't know whether it will solve some of these problems.

Here's the proof (there is also a reference for the proof at http://math.iisc.ernet.in/~gadgil/BasicAnalysis/blog/2016/01/08/natural-numbers-axioms/) for anyone interested:

open classical

constant Nat: Type
constant Zero: Nat
constant Succ: Nat → Nat
constant AxiomNat1: ∀ n:Nat, Succ n ≠ Zero
constant AxiomNat2: ∀ n:Nat, ∀ m:Nat, Succ n = Succ m → n = m
constant AxiomNat3: ∀ {P: Nat → Prop}, (P Zero ∧ ∀ n:Nat, P n → P (Succ n)) → ∀ n:Nat, P n

def Unique: Π {S: Type}, Π P: S → Prop, Prop := λ (S: Type)(P: S → Prop), ∃ x:S, P x ∧ ∀ y:S, P y → y = x
def UniqueIntro: ∀ {A: Type}, ∀ {P: A → Prop}, ∀ (a: A), ∀ (h0: P a), ∀ (h1: ∀ b: A, P b → b = a), Unique P :=
    λ (A: Type) (P: A → Prop) (a: A) (h0: P a) (h1: ∀ b: A, P b → b = a), exists.intro a (and.intro h0 h1)
constant UniqueObj: Π {S: Type}, Π {P: S → Prop}, Unique P → S
constant UniqueProp: Π {S: Type}, Π {P: S → Prop}, Π (h: Unique P), P (UniqueObj h)
def UniqueEq: Π {S: Type}, Π {P: S → Prop}, Π {a b: S}, Unique P → P a → P b → a = b :=
    assume S: Type,
    assume P: S → Prop,
    assume a b: S,
    assume h: Unique P,
    assume ha: P a,
    assume hb: P b,
    exists.elim h (
        assume x: S,
        assume h: P x ∧ ∀ (y : S), P y → y = x,
        let ea: a = x := and.elim_right h a ha in
        let eb: b = x := and.elim_right h b hb in
        (eq.trans ea (eq.symm eb))
    )

theorem NotAnd: ∀ {P Q: Prop}, ¬ (P ∧ Q) → (¬ P ∨ ¬ Q) :=
    assume P Q: Prop,
    assume h: ¬ (P ∧ Q),
    by_cases (
        assume p: P,
        by_cases (
            assume q: Q,
            absurd (and.intro p q) h
        ) (
            assume q: ¬ Q,
            or.intro_right _ q
        )
    ) (
        assume p: ¬ P,
        or.intro_left _ p
    )

theorem NotOr: ∀ {P Q: Prop}, ¬ (P ∨ Q) → (¬ P ∧ ¬ Q) :=
    assume P Q: Prop,
    assume h: ¬ (P ∨ Q),
    let np: ¬P := (assume p: P, h (or.intro_left _ p)) in
    let nq: ¬Q := (assume q: Q, h (or.intro_right _ q)) in
    and.intro np nq

theorem NotPOrQ: ∀ {P Q: Prop}, ¬ P ∨ Q → (P → Q) :=
    assume P Q: Prop,
    assume h: ¬P ∨ Q,
    assume p: P,
    or.elim h (
        assume notP: ¬P,
        absurd p notP
    ) (
        assume q: Q,
        q
    )

theorem NeSymm: ∀ {A:Type}, ∀ {a b:A}, a ≠ b → b ≠ a :=
    assume A:Type,
    assume a: A,
    assume b: A,
    assume h: a ≠ b,
    assume hh: b = a,
    h (eq.symm hh)

theorem NeNot: ∀ {A:Type}, ∀ {a b:A}, ¬ a ≠ b → a = b :=
    assume A: Type,
    assume a b: A,
    assume h: ¬ a ≠ b,
    by_contradiction (
        assume nh: a ≠ b,
        h nh
    )

theorem Recursion: ∀ A:Type, Πa:A, Πf:(A → A), ∃ g: Nat → A, (g Zero = a ∧ ∀ n:Nat, g (Succ n) = f (g n)) :=
assume A: Type,
assume a: A,
assume f: A → A,
let r: Nat → A → Prop :=
    λ (n: Nat) (x:A),
    ∀ s: Nat → A → Prop, (s Zero a ∧ ∀ (n: Nat) (x: A), s n x → s (Succ n) (f x)) → s n x
in
let r0: r Zero a :=
    assume s: Nat → A → Prop,
    assume h: s Zero a ∧ ∀ n x, s n x → s (Succ n) (f x),
    and.elim_left h
in
let r1: ∀ n x, r n x → r (Succ n) (f x) :=
    assume n: Nat,
    assume x: A,
    assume h: r n x,
    assume s: Nat → A → Prop,
    assume h': s Zero a ∧ ∀ (n: Nat) (x: A), s n x → s (Succ n) (f x),
    and.elim_right h' n x (h s h')
in
let unique_zero: Unique (λ x: A, r Zero x) :=
    let unique: ∀ b:A, r Zero b → b = a :=
        assume b:A,
        assume h: r Zero b,
        by_contradiction(
            assume b_ne_a: b ≠ a,
            let r' : Nat → A → Prop :=
                λ (n: Nat) (x: A),
                r n x ∧ (Zero ≠ n ∨ b ≠ x)
            in
            let r'0 : r' Zero a :=
                and.intro r0 (or.intro_right _ b_ne_a)
            in
            let r'1 : ∀ (n:Nat)(x:A), r' n x → r' (Succ n) (f x) :=
                assume n: Nat,
                assume x: A,
                assume r'_n_x: r' n x,
                let r_n_x: r n x := and.elim_left r'_n_x in
                let r_succ_n_f_x: r (Succ n) (f x) := r1 n x r_n_x in
                and.intro r_succ_n_f_x (or.intro_left (b ≠ f x) (NeSymm (AxiomNat1 n)))
            in
            let r'_zero_b : r' Zero b :=
                h r' (and.intro r'0 r'1)
            in
            or.elim (and.elim_right r'_zero_b)
                (assume f: Zero ≠ Zero, f (eq.refl Zero))
                (assume f: b ≠ b, f (eq.refl b))
        )
    in
    UniqueIntro a r0 unique
in
let unique_succ: ∀ n:Nat, Unique (λ x:A, r n x) → Unique (λ x:A, r (Succ n) x) :=
    assume n:Nat,
    assume h: Unique (λ x:A, r n x),
    let x: A := UniqueObj h in
    let r_n_x: r n x := UniqueProp h in
    let r_succ_n_f_x: r (Succ n) (f x) := r1 n x r_n_x in
    let unique: ∀ b:A, r (Succ n) b → b = (f x) :=
        assume b:A,
        assume r_succ_n_b: r (Succ n) b,
        by_contradiction(
            assume b_ne_x: b ≠ (f x),
            let r' : Nat → A → Prop :=
                λ (n': Nat) (x': A),
                r n' x' ∧ ((Succ n) ≠ n' ∨ b ≠ x')
            in
            let r'0 : r' Zero a :=
                and.intro r0 (or.intro_left _ (AxiomNat1 n))
            in
            let r'1 : ∀ (n':Nat)(x':A), r' n' x' → r' (Succ n') (f x') :=
                assume n': Nat,
                assume x': A,
                assume r'_n'_x': r' n' x',
                let r_n'_x': r n' x' := and.elim_left r'_n'_x' in
                let r_succ_n'_f_x': r (Succ n') (f x') := r1 n' x' r_n'_x' in
                let r'_succ_n'_f_x': r' (Succ n') (f x') := by_contradiction(
                    assume hh: ¬ (r' (Succ n') (f x')),
                    let hh': (¬ ((Succ n) ≠ (Succ n')) ∧ ¬ (b ≠ (f x'))) := NotOr (NotPOrQ (NotAnd hh) r_succ_n'_f_x') in
                    let eq1: Succ n = Succ n' := NeNot (and.elim_left hh') in
                    let eq2: b = f x' := NeNot (and.elim_right hh') in
                    let eq3: n = n' := AxiomNat2 n n' (eq1) in
                    let r_n_x': r n x' := eq.subst (eq.symm eq3) r_n'_x' in
                    let eq4: x = x' := UniqueEq h r_n_x r_n_x' in
                    let eq5: f x = f x' := congr_arg f eq4 in
                    b_ne_x (eq.trans eq2 (eq.symm eq5))
                ) in
                r'_succ_n'_f_x'
            in
            let r'_false : r' (Succ n) b :=
                r_succ_n_b r' (and.intro r'0 r'1)
            in
            or.elim (and.elim_right r'_false)
                (assume f: Succ n ≠ Succ n, f (eq.refl (Succ n)))
                (assume f: b ≠ b, f (eq.refl b))
        )
    in
    UniqueIntro (f x) r_succ_n_f_x unique
in
let unique_all: ∀ n:Nat, Unique (λ x:A, r n x) :=
    AxiomNat3 (and.intro unique_zero unique_succ)
in
let g: Π n:Nat, A := λ n:Nat, UniqueObj (unique_all n) in
let h0: g Zero = a :=
    let h: r Zero (g Zero) := UniqueProp unique_zero in
    UniqueEq unique_zero h r0
in
let h1: ∀ (n:Nat), g (Succ n) = f (g n) :=
    assume n:Nat,
    let h0: r (Succ n) (g (Succ n)) := UniqueProp (unique_all (Succ n)) in
    let h1: r (Succ n) (f (g n)) := r1 n (g n) (UniqueProp (unique_all n)) in
    UniqueEq (unique_all (Succ n)) h0 h1
in
exists.intro g (and.intro h0 h1)

Kevin Buzzard (Nov 15 2020 at 17:34):

You definitely want to try either proper term mode or tactic mode!

Rui Liu (Nov 15 2020 at 18:08):

@Kevin Buzzard What's proper term mode?

Reid Barton (Nov 15 2020 at 18:21):

I guess Kevin means a far more concise version of term mode. Using things like ⟨a, b⟩ instead of and.intro a b, h.2 instead of and.elim_right h, λ n x h s h', instead of five lines of assume.

Reid Barton (Nov 15 2020 at 18:22):

This proof looks like some kind of compiler intermediate language where there are only three constructs left: assume, let, and function application.

Kevin Buzzard (Nov 15 2020 at 18:23):

theorem h1: ∀ n x, r n x → r (Succ n) (f x) :=
λ _ _ h _ h', h'.2 _ _ $ h _ $ h'

Kevin Buzzard (Nov 15 2020 at 18:26):

theorem NeNot: ∀ {A:Type}, ∀ {a b:A}, ¬ a ≠ b → a = b :=
begin
  finish
end

Kevin Buzzard (Nov 15 2020 at 18:26):

theorem NeSymm: ∀ {A:Type}, ∀ {a b:A}, a ≠ b → b ≠ a :=
begin
  library_search
end

Kevin Buzzard (Nov 15 2020 at 18:27):

theorem NotPOrQ: ∀ {P Q: Prop}, ¬ P ∨ Q → (P → Q) :=
begin
  tauto
end

Kevin Buzzard (Nov 15 2020 at 18:28):

(the latter three only work with import tactic at the top of your file, and working in a project with mathlib as a dependency)

Reid Barton (Nov 15 2020 at 18:28):

I guess maybe part of the exercise was to use almost nothing even from the core library.

Kevin Buzzard (Nov 15 2020 at 18:30):

theorem NotPOrQ {P Q: Prop} (hn : ¬ P ∨ Q) (hp : P) : Q :=
begin
  cases hn,
  { contradiction },
  { assumption }
end

Reid Barton (Nov 15 2020 at 18:30):

Actually I think starting out with this style can be good if you have the stomach for it, because then as you learn more (syntax, library functions, tactics) you can understand how they work at this level.

Kevin Buzzard (Nov 15 2020 at 18:31):

Clearly Jeremy thinks the same way, or he wouldn't have written TPIL like that.

Adam Topaz (Nov 15 2020 at 18:50):

Rui Liu said:

Kevin Buzzard What's proper term mode?

Just like in math, "proper" = "compact" :)

Rui Liu (Nov 15 2020 at 19:01):

@Kevin Buzzard That's short :sweat: What's a good resource for learning more about tactics and other utilities? I know this one: https://leanprover.github.io/theorem_proving_in_lean/tactics.html ; any more?

I guess maybe part of the exercise was to use almost nothing even from the core library.

@Reid Barton That was indeed the intention. I started to learn about type theory, then got interested in how to formalize maths in type theory. So I was deliberately not using any tools.

Btw, how do you manage proof-term variables? I was struggling to give them sensible names. Or is it that when you're using more advanced tactic modes, you usually don't need to write proof variables?

Kevin Buzzard (Nov 15 2020 at 19:42):

More tactics here: https://leanprover-community.github.io/mathlib_docs/tactics.html . But many of these tactics need the maths library mathlib.

Mario Carneiro (Nov 15 2020 at 22:43):

Rui Liu said:

Btw, how do you manage proof-term variables? I was struggling to give them sensible names. Or is it that when you're using more advanced tactic modes, you usually don't need to write proof variables?

You can inline the use of a variable if it appears only once. That way you only have to focus on the "interesting" subgoals. In tactic mode, if you use have without a name it gets the default name this, which is also useful if your proof is somewhat linear and you just keep shadowing the name
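A toy illustration of that style (hypothetical example, not from the thread):

example (P Q R : Prop) (hpq : P → Q) (hqr : Q → R) (hp : P) : R :=
begin
  have := hpq hp,    -- unnamed, so it gets the default name `this`
  have := hqr this,  -- shadows the previous `this`
  exact this,
end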

Mario Carneiro (Nov 15 2020 at 23:35):

@Rui Liu Here's a conversion of your proof to tactic style (keeping the original proof structure, i.e. no fancy tactics like finish or even rw). I have also tried to use mathlib style elsewhere.

import tactic

constant Nat : Type
constant Zero : Nat
constant Succ : Nat → Nat
constant AxiomNat1 : ∀ n, Succ n ≠ Zero
constant AxiomNat2 {n m} : Succ n = Succ m → n = m
constant AxiomNat3 {P : Nat → Prop} : P Zero ∧ (∀ n, P n → P (Succ n)) → ∀ n, P n

def Unique {S : Type} (P : S → Prop) : Prop := ∃ x, P x ∧ ∀ y, P y → y = x
theorem UniqueIntro {A : Type} {P : A → Prop}
  (a : A) (h0 : P a) (h1 : ∀ b : A, P b → b = a) : Unique P := ⟨a, h0, h1⟩
constant UniqueObj {S : Type} {P : S → Prop} : Unique P → S
constant UniqueProp {S : Type} {P : S → Prop} (h : Unique P) : P (UniqueObj h)
theorem UniqueEq {S : Type} {P : S → Prop} {a b : S}
  (h : Unique P) (ha : P a) (hb : P b) : a = b :=
let ⟨x, h1, h2⟩ := h in (h2 _ ha).trans (h2 _ hb).symm

theorem NotAnd {P Q : Prop} : ¬ (P ∧ Q) → (¬ P ∨ ¬ Q) := not_and_distrib.1

theorem NotOr {P Q : Prop} : ¬ (P ∨ Q) → (¬ P ∧ ¬ Q) := not_or_distrib.1

theorem NotPOrQ {P Q : Prop} : ¬ P ∨ Q → (P → Q) := or.neg_resolve_left

theorem NeSymm {A : Type} {a b : A} : a ≠ b → b ≠ a := ne.symm

theorem NeNot {A : Type} {a b : A} : ¬ a ≠ b → a = b := not_not.1

theorem Recursion {A : Type} (a : A) (f : A → A) :
  ∃ g : Nat → A, g Zero = a ∧ ∀ n : Nat, g (Succ n) = f (g n) :=
begin
  let r : Nat → A → Prop := λ n x,
    ∀ s : Nat → A → Prop,
      s Zero a ∧ (∀ (n : Nat) (x : A), s n x → s (Succ n) (f x)) → s n x,
  have r0 : r Zero a := λ s h, h.1,
  have r1 : ∀ {n x}, r n x → r (Succ n) (f x) := λ n x h s h', h'.2 _ _ (h s h'),
  suffices hu : ∀ n : Nat, Unique (λ x : A, r n x),
  { let g : Π n : Nat, A := λ n, UniqueObj (hu n),
    have pg : ∀ n, r n (g n) := λ n, UniqueProp (hu n),
    refine ⟨g, UniqueEq (hu Zero) (pg Zero) r0, λ n, _⟩,
    exact UniqueEq (hu (Succ n)) (pg (Succ n)) (r1 (pg n)) },
  refine AxiomNat3 ⟨_, λ n h, _⟩,
  { refine ⟨a, r0, λ b h, _⟩,
    apply by_contradiction (λ b_ne_a, _),
    let r' : Nat → A → Prop := λ n x, r n x ∧ (Zero ≠ n ∨ b ≠ x),
    suffices : r' Zero b,
    { obtain f | f := this.2; exact f rfl },
    refine h r' ⟨_, _⟩,
    { exact ⟨r0, or.inr b_ne_a⟩ },
    { rintro n x ⟨h1, h2⟩,
      exact ⟨r1 h1, or.inl (AxiomNat1 n).symm⟩ } },
  { let x : A := UniqueObj h,
    have px : r n x := UniqueProp h,
    refine UniqueIntro (f x) (r1 px) (λ b pb, _),
    apply by_contradiction (λ b_ne_x, _),
    let r' : Nat → A → Prop := λ n' x', r n' x' ∧ (Succ n ≠ n' ∨ b ≠ x'),
    suffices : r' (Succ n) b,
    { obtain f | f := this.2; exact f rfl },
    refine pb r' ⟨_, _⟩,
    { exact ⟨r0, or.inl (AxiomNat1 n)⟩ },
    { rintro n' x' ⟨px', h2⟩,
      apply by_contradiction (λ hh, b_ne_x _),
      have := (not_and_distrib.1 hh).neg_resolve_left (r1 px'),
      obtain ⟨h3, h4⟩ := not_or_distrib.1 this,
      cases AxiomNat2 (not_not.1 h3),
      cases not_not.1 h4,
      congr,
      obtain rfl := UniqueEq h px px',
      refl } }
end

Yakov Pechersky (Nov 15 2020 at 23:37):

(Isn't obtain rfl := ... a fancy tactic?)

Mario Carneiro (Nov 15 2020 at 23:37):

as in it keeps to axiomatically basic tactics

Mario Carneiro (Nov 15 2020 at 23:38):

obtain rfl is a way to write subst h, a.k.a eq.rec
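For example (a hypothetical illustration; obtain comes from mathlib's import tactic):

example (a b : nat) (h : a = b) (p : a = a) : b = b :=
begin
  obtain rfl := h,  -- substitutes one variable for the other, like `subst h`
  exact p,
end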

Mario Carneiro (Nov 15 2020 at 23:39):

but I think proper use of obtain or cases is important for writing tactic proofs

Johan Commelin (Nov 16 2020 at 07:03):

I think obtain is relatively new (compared to intro), but I wouldn't say it's "fancy".

Eric Wieser (Nov 16 2020 at 13:04):

If rw is being considered fancy I'd probably consider obtain fancy too

Kevin Buzzard (Nov 16 2020 at 13:45):

obtain is just syntax sugar for cases, right? rw is a non-trivial extension of eq.rec.

Reid Barton (Nov 16 2020 at 13:54):

I think obtain+rfl is actually subst which is somehow different from cases, but I still wouldn't count it as fancy

Rui Liu (Nov 16 2020 at 19:30):

@Mario Carneiro Thank you for the proof! Will try using this style some time!

Rui Liu (Nov 18 2020 at 00:08):

@Mario Carneiro Is there a reason to prefer have over let? I saw several places in your proof where you used have instead of let

Mario Carneiro (Nov 18 2020 at 00:08):

Use have for proofs and let only if you want to unfold the definition in subsequent reasoning

Mario Carneiro (Nov 18 2020 at 00:09):

it is similar to the difference between def and theorem

Mario Carneiro (Nov 18 2020 at 00:10):

Really you don't need let at all in the proof, you could just construct and then destruct an existential statement

Rui Liu (Nov 18 2020 at 00:12):

as far as I understand, let is a more general version of have, since in the places where we want to unfold the body only let works? Any reason for not always using let?

what's the difference between def and theorem?

Really you don't need let at all in the proof, you could just construct and then destruct an existential statement

Are you talking about this particular proof? or something in general?

Mario Carneiro (Nov 18 2020 at 00:14):

in this particular proof

Mario Carneiro (Nov 18 2020 at 00:16):

Using let clutters up the context with := ... because it has to retain the definition, and ultimately it boils down to a substitution anyway so it's not really using the cut rule in the way have does

Mario Carneiro (Nov 18 2020 at 00:16):

let x := e in a becomes a[e/x], while have x := e, a becomes (λ x, a) e
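A toy example (hypothetical, not from the thread) where the difference is visible:

-- with `let`, n unfolds to 2 definitionally, so rfl closes the goal
example : ∃ n : nat, n = 2 :=
let n : nat := 2 in ⟨n, rfl⟩

-- the analogous `have` forgets the definition: all that remains of n is
-- its type, so rfl can no longer prove n = 2
-- example : ∃ n : nat, n = 2 :=
-- have n : nat, from 2,
-- ⟨n, rfl⟩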

Rui Liu (Nov 18 2020 at 00:24):

so the reason for preferring have is that let is not needed?

Rui Liu (Nov 18 2020 at 00:26):

oh I see, so by "clutters up the context" you mean it will create a lot more entries in the context, which makes it harder to look at the hypotheses

Mario Carneiro (Nov 18 2020 at 00:41):

This is less true of the widget view because it hides := ... in the context, unlike the plain text view, but I'm worried about an explosion of type checking cost in some cases

Mario Carneiro (Nov 18 2020 at 00:41):

generally, it is useful information for lean to know that a definition will not be used later except via its type


Last updated: Dec 20 2023 at 11:08 UTC