Zulip Chat Archive
Stream: Program verification
Topic: Elimination rules
Hanno Becker (Apr 18 2024 at 11:10):
Hi :wave: Coming from SW verification in Isabelle/HOL, I wonder if there's support for elimination rules in Lean, by which I mean rules which consume a hypothesis by destructing it in a particular way.
As a concrete example, I'd like a tactic which repeatedly finds and splits _all_ disjunctions in the current list of hypotheses, without me having to name them explicitly. In Isabelle, for example, `elim disjE` would do that, where `disjE: ?P ∨ ?Q ⟹ (?P ⟹ ?R) ⟹ (?Q ⟹ ?R) ⟹ ?R`.
How would this best be implemented in Lean? The combination of `apply` + `assumption` is somewhat close to a single-step elimination, but it does not remove the hypothesis to be destructed and is thus not repeatable.
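For reference, the corresponding rule in Lean syntax would be something like (a sketch; the names are just for illustration):

```lean
theorem disjE {P Q R : Prop} (h : P ∨ Q) (hP : P → R) (hQ : Q → R) : R :=
  Or.elim h hP hQ
```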
Tomaz Mascarenhas (Apr 18 2024 at 12:31):
You can use the `clear` tactic to remove hypotheses. In general, I think you would have to write your own tactic that does what you intend, as I don't think we have a tactic specific to that. Did you check the metaprogramming book? https://github.com/leanprover-community/lean4-metaprogramming-book
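For illustration, a minimal use of `clear` (the hypothesis names here are just for the example):

```lean
example {P Q : Prop} (h : P ∨ Q) (hp : P → Q) : True := by
  clear h hp  -- both hypotheses are removed from the local context
  trivial
```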
Shreyas Srinivas (Apr 18 2024 at 13:39):
I don't know how exactly `elim` works, but for examples like the one you have given, `tauto` handles them easily.
Hanno Becker (Apr 18 2024 at 13:50):
@Shreyas Srinivas Can you elaborate? Note that it is not about proving the elimination rule itself, but about applying it in the way described.
Hanno Becker (Apr 18 2024 at 13:50):
@Tomaz Gomes Thank you! I will have a look at the `clear` tactic and also double-check the metaprogramming book, which I did look at, but I could have missed it.
Shreyas Srinivas (Apr 18 2024 at 13:52):
Here are two examples; the second one is a bit contrived:
```lean
import Mathlib.Tactic

example {p q r : Prop} : p ∨ q → (p → r) → (q → r) → r := by
  tauto

example {p q r s : Prop} : p ∨ q ∨ s → q ∨ r ∨ s → (p → r) → (q → r) → (s → r) → r := by
  tauto
```
Shreyas Srinivas (Apr 18 2024 at 13:53):
If you go to the top right of the code panel, you should be able to open this in the lean playground and check for yourself.
Shreyas Srinivas (Apr 18 2024 at 13:53):
About applying the elimination rule in the way described at the level of a single step, I would use the `cases` or even `rcases` tactic.
Hanno Becker (Apr 18 2024 at 14:00):
@Shreyas Srinivas Thank you, but proving those theorems is not the problem, it's about using them as elimination rules. As mentioned, one aspect is that one should not have to name hypotheses explicitly, which `cases` would require you to do.
Shreyas Srinivas (Apr 18 2024 at 14:11):
You mean something like this?
```lean
import Mathlib.Tactic

example {p q r : Prop} : p ∨ q → (p → r) → (q → r) → r := by
  apply Or.elim
```
Tomaz Mascarenhas (Apr 18 2024 at 14:14):
I think he means that he wants a general tactic that would take a proof of a theorem like this and use it to prove a more general case, i.e., about a disjunction with an arbitrary number of propositions.
Shreyas Srinivas (Apr 18 2024 at 14:20):
I am confused. On the one hand, one can `apply Or.elim` to get a single step of the elimination rule. On the other end of the spectrum, `tauto` does all the eliminations automatically and iteratively. Maybe you are looking for something in the middle of the spectrum.
Shreyas Srinivas (Apr 18 2024 at 14:20):
Maybe something that works solely with disjunctions for example
Shreyas Srinivas (Apr 18 2024 at 14:22):
there is also pattern destructuring of the form illustrated below, but this requires appropriately introducing the necessary names:
```lean
import Mathlib.Tactic

example {p q r s : Prop} : p ∨ (q ∧ s) → (p → r) → (q → r) → r := by
  rintro (hp | ⟨hq, _⟩) hpr hqr <;> solve_by_elim
```
Shreyas Srinivas (Apr 18 2024 at 14:25):
Actually naming is not necessary in this example. Destructuring namelessly suffices:
```lean
import Mathlib.Tactic

example {p q r s : Prop} : p ∨ (q ∧ s) → (p → r) → (q → r) → r := by
  rintro (_ | ⟨_, _⟩) _ _ <;> solve_by_elim
```
Hanno Becker (Apr 18 2024 at 14:41):
`tauto` does not do eliminations; it may only be helpful in proving some of the elimination rules themselves (if they are propositional in nature). Here's an example application that hopefully clarifies what I'm looking for: Given the goal `P ∨ Q → T ∨ W → S → U`, I want to be able to do `elim disjE` and -- up to `intro`s -- get four new goals `P → T → S → U`, `Q → T → S → U`, `P → W → S → U`, `Q → W → S → U`. Another example would be: _given_ (proving this is again not the issue) `nat_boundE: i < Suc n → (i = n → R) → (i < n → R) → R`, and given a goal with hypothesis `i < Suc (Suc (Suc (Suc 0)))` somewhere, I want to be able to call `elim nat_boundE` and immediately see subgoals emerge for `i = 0, 1, 2, 3`.
Hanno Becker (Apr 18 2024 at 14:43):
I'll have a stab at following @Tomaz Gomes's suggestion of implementing this as a combination of the core functions underlying `apply`, `assumption` and `clear`.
Joachim Breitner (Apr 18 2024 at 14:45):
It sounds a bit like you want a tactic that does “as long as there is an assumption `h` of inductive type `T`, run `cases h`”, where `T` could be disjunction, but possibly others as well.
Hanno Becker (Apr 18 2024 at 14:46):
@Joachim Breitner Yes, that would already be useful, though there are elimination rules that are not of this kind, e.g. the last example.
Joachim Breitner (Apr 18 2024 at 14:48):
I haven't seen this style of elimination-rule theorems a lot in Lean, besides genuine pattern matchers and recursors. The `cases` tactic supports `cases foo using bar`, where `bar` is an arbitrary elimination theorem. Not sure how well it copes when the conclusion is `R` and not `R x y` (applied to some targets), though.
Shreyas Srinivas (Apr 18 2024 at 15:00):
> Hanno Becker said:
> Here's an example application that hopefully clarifies what I'm looking for: Given the goal `P ∨ Q → T ∨ W → S → U`, I want to be able to do `elim disjE` and -- up to `intro`s -- get four new goals `P → T → S → U`, `Q → T → S → U`, `P → W → S → U`, `Q → W → S → U`. Another example would be: _given_ (proving this is again not the issue) `nat_boundE: i < Suc n → (i = n → R) → (i < n → R) → R`, and given a goal with hypothesis `i < Suc (Suc (Suc (Suc 0)))` somewhere, I want to be able to call `elim nat_boundE` and immediately see subgoals emerge for `i = 0, 1, 2, 3`.

Thanks. I think these examples help me understand what you are looking for. I don't think there is a tactic that does this. It requires repeatedly recursing over all hypotheses, applying `rcases` or `interval_cases` (for the `Nat` example), and bringing them to a point where you can `solve_by_elim` on all goals.
Kyle Miller (Apr 18 2024 at 17:30):
Mathlib has a tactic for this. Here's recursively doing cases on all disjunctions and conjunctions:
```lean
casesm* _ ∨ _, _ ∧ _
```
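For example (a small sketch, assuming `import Mathlib.Tactic`; the hypothesis names are just for illustration):

```lean
import Mathlib.Tactic

example {A B C D E : Prop} (hab : A ∨ B) (hcd : C ∧ D) : E := by
  casesm* _ ∨ _, _ ∧ _
  -- two goals now, each with the conjuncts of `hcd` as separate hypotheses
  all_goals sorry
```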
Kyle Miller (Apr 18 2024 at 17:33):
There's a variant where you can do it by type name rather than by patterns:
```lean
cases_type* Or And
```
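On the same sketch as above, this variant should have the same effect:

```lean
import Mathlib.Tactic

example {A B C D E : Prop} (hab : A ∨ B) (hcd : C ∧ D) : E := by
  cases_type* Or And
  all_goals sorry
```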
Hanno Becker (Apr 19 2024 at 03:34):
Thank you very much @Kyle Miller @Shreyas Srinivas, those are very helpful pointers. I will look at the tactics and how they are implemented, and experiment with a generic `elim`. Elimination rules are ubiquitous and provide a nice unified framework for destruction tactics like `interval_cases` or `casesm*`, so it may be useful to have them at our disposal in Lean as well.
Hanno Becker (Apr 19 2024 at 06:04):
Here's a first attempt. I'm completely new to Lean metaprogramming, so any comment is welcome.
```lean
import Mathlib
import Lean

open Lean Meta Elab.Tactic
open Mathlib

--
-- Almost a copy from stdlib, but remembering the fvarid
--

/-- Try to close the goal by assumption. Upon success, return the fvar id of the
matching assumption. Otherwise, return `none`. -/
def assumptionCore' (mvarId : MVarId) : MetaM (Option FVarId) :=
  mvarId.withContext do
    mvarId.checkNotAssigned `assumption
    match (← findLocalDeclWithType? (← mvarId.getType)) with
    | none => return none
    | some fvarId => mvarId.assign (mkFVar fvarId); return (some fvarId)

/-- Close goal `mvarId` using an assumption. Throw an error message if this fails. -/
def assumption' (mvarId : MVarId) : MetaM FVarId := do
  let res ← assumptionCore' mvarId
  match res with
  | none => throwTacticEx `assumption mvarId "assumption' tactic failed"
  | some fvarid => return fvarid

--
--
--

def erule (e : Expr) (mvid : MVarId) (with_intro : Bool := true) : MetaM (List MVarId) := do
  let s ← saveState
  try
    let mvids ← mvid.apply e
    match mvids with
    | [] => throwError "ill-formed elimination rule {e}"
    | main :: other =>
      -- Try to solve the main goal by assumption, remembering the fvar of the hypothesis
      let fvid ← assumption' main
      -- Remove the hypothesis from all other goals
      let other' ← other.mapM (fun mvid => do
        if (← mvid.isAssignedOrDelayedAssigned) then
          return mvid
        let mvid ← mvid.clear fvid
        if with_intro then
          let (_, mvid) ← mvid.intros
          return mvid
        else
          return mvid)
      return other'
  catch _ =>
    restoreState s
    throwError "erule_tac failed"

-- Run erule repeatedly
def elim (e : Expr) (mvid : MVarId) : MetaM (List MVarId) := do
  Meta.repeat1' (erule e) [mvid]

elab "erule" e:term : tactic => do
  let e ← elabTermForApply e
  Elab.Tactic.liftMetaTactic (erule e)

elab "elim" e:term : tactic => do
  let e ← elabTermForApply e
  Elab.Tactic.liftMetaTactic (elim e)

-- Some examples of elimination rules
--
-- Elimination rules abound, but here are some simple examples.
-- While those can be handled by existing tactics as well, the point
-- is to show that `elim` unifies various types of deconstruction
-- in a single tactic, parametrized by a suitable elimination rule.

-- 1/ Disjunction elimination
example {A B C D E : Prop} : A ∨ B → C ∨ D → E := by
  intros
  elim Or.elim /- Now we have 4 goals -/ <;> sorry

-- 2/ Conjunction elimination
theorem conjE {P Q R: Prop}: P ∧ Q → (P → Q → R) → R := by tauto

example {A B C D E} (t : A ∧ B) (t' : C ∧ D) : E := by
  intros
  elim conjE -- Separate hypotheses A, B, C, D
  sorry

-- 3/ Destructing existentials
theorem exE' {R : Prop} {α : Type} {P : α → Prop}:
    (∃x:α, P x) → (∀x:α, (P x → R)) → R := by
  intro f g
  cases f
  apply g
  assumption

example {R : Prop} (a: ∃x:Nat, x > 42 ∧ x < 100) (b: ∃y:Nat, y > 100 ∧ y < 128) : R := by
  elim exE'
  sorry

-- 4/ Destructing compound definitions
structure dummyStruct :=
  mkDummy :: (f0: Nat) (f1: Nat)

def isValidDummy (x : dummyStruct) : Prop :=
  (x.f0 * x.f1) > 42 ∧
  (∃d:Nat, d > 5 ∧ d ∣ x.f0 ∧ d ∣ x.f1)

def isValidDummyE {x : dummyStruct} (v: isValidDummy x):
    (∀d:Nat, d > 5 → d ∣ x.f0 → d ∣ x.f1 → x.f0 * x.f1 > 42 → R) → R := by
  simp [isValidDummy] at v
  erule conjE
  tauto

example {P : Prop} {x y z : dummyStruct} (vx : isValidDummy x) (vy : isValidDummy y) : isValidDummy z := by
  elim isValidDummyE -- Breaks up both validity assumptions, but leaves the goal intact
  sorry

-- 5/ Case-splitting on natural numbers
theorem bound_nat_succ {i n : Nat} (lt: i < n) : i = n-1 ∨ i < n-1 := by omega

theorem bound_nat_succE {R : Prop} {i n : Nat} (lt: i < n) (t0 : i = n-1 → R) (t1 : i < n-1 → R) : R := by
  cases (bound_nat_succ lt)
  apply t0
  assumption
  apply t1
  assumption

-- Syntax-directed version of bound_nat_succE
theorem bound_nat_succE' {R : Prop} {i n : Nat} (lt: i < Nat.succ n) (t0 : i = n → R) (t1 : i < n → R) : R := by
  apply (bound_nat_succE lt) <;> simp <;> assumption

open Nat

example {P : Prop} {i : Nat} (lt: i < succ (succ (succ (succ 0)))) : P := by
  elim bound_nat_succE' /- Now we have 4 goals for i=0,1,2,3 -/ <;> sorry
```
Shreyas Srinivas (Apr 19 2024 at 06:06):
You could try replacing the `apply` + `assumption` sequences using `solve_by_elim`.
Hanno Becker (Apr 19 2024 at 07:30):
@Shreyas Srinivas I'm not sure this works, as `solve_by_elim` either solves the current goal or fails?
Kyle Miller (Apr 19 2024 at 07:32):
By the way, there are a number of `elim` theorems already, like for instance `Or.elim`:
```lean
theorem disjE {P Q R: Prop} (h : P ∨ Q) (hP : P → R) (hQ : Q → R) : R := Or.elim h hP hQ
```
Kyle Miller (Apr 19 2024 at 07:33):
(and that can be proved using the recursor that's defined for `Or` directly, with `Or.recOn h hP hQ`)
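i.e., spelled out (same statement, proved via the recursor):

```lean
theorem disjE' {P Q R : Prop} (h : P ∨ Q) (hP : P → R) (hQ : Q → R) : R :=
  Or.recOn h hP hQ
```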
Hanno Becker (Apr 19 2024 at 08:09):
Is this something the stdlib might benefit from? The prototype above is surely incomplete, but if there's interest in having a general elimination-rule tactic in Lean, I could try to make time to open a PR, and things could be discussed further there.
Shreyas Srinivas (Apr 19 2024 at 08:23):
About `solve_by_elim`: putting a `<;> try solve_by_elim` might just close any number of trivial subgoals.
About `Std`: I like the idea in principle; at least for verification this is very useful. In practice, the Lean FRO is substantially restructuring `Std` (I'll try to find the link to the announcement), so there is a non-trivial chance that such a PR might languish for a while without attention.
Shreyas Srinivas (Apr 19 2024 at 08:27):
The announcement message about Std changes
Hanno Becker (Apr 19 2024 at 08:30):
Thank you Shreyas, that's good to know
Shreyas Srinivas (Apr 19 2024 at 08:31):
A point about your examples: single-step elimination of propositional operators is not going to be a good way to explain the tactic. The basic issue is illustrated by the confusion I had in my messages above: `cases` or `apply elim_rule` already does that.
Shreyas Srinivas (Apr 19 2024 at 08:32):
Also, once the goals are in a nice shape, tactics like `simp_all` and `solve_by_elim` will work perfectly well without hypothesis names. So that's not necessarily the key difference either.
EDIT: The key differentiator here is the recursive traversal of all hypotheses and destructuring them in every possible way. The examples need to illustrate this aspect.
Hanno Becker (Apr 19 2024 at 08:34):
I'm still not sure about `solve_by_elim` -- this does seem to strive to solve goals entirely, while `elim ...` is just a proof step (usually an early one). Can you show how to solve one of the examples (not the elimination rules themselves) using `solve_by_elim`?
Shreyas Srinivas (Apr 19 2024 at 08:38):
I often use something like `rcases x <;> try simp_all` or `rcases x <;> try aesop` to get past a lot of steps.
> Shreyas Srinivas said:
> Also, once the goals are in a nice shape, tactics like simp_all and solve_by_elim will work perfectly well without hypothesis names

About `solve_by_elim`, I am not on my machine now, but my first try would be to replace the proof of example 5 by `elim bound_nat_succE' <;> intros <;> try solve_by_elim`.
Shreyas Srinivas (Apr 19 2024 at 08:38):
(The last example)
Hanno Becker (Apr 19 2024 at 08:40):
@Kim Morrison @Joachim Breitner Would a generic `elim` be something worth considering for either [the new] Std or Boost?
Shreyas Srinivas (Apr 19 2024 at 09:24):
EDIT: Some examples weren't solved.
Shreyas Srinivas (Apr 19 2024 at 09:24):
The tactic combination is: `<;> intros <;> try (first | assumption | solve_by_elim)`
Joachim Breitner (Apr 19 2024 at 09:47):
I'm personally hesitant to comment on which tactics should be in core or std, simply because I have not done enough serious proving/verification to make a judgement call. I generally see a risk that if we create tactics as scratches need to be itched, we'll end up with a huge zoo of inconsistent, overlapping, special-case tactics that make it very hard to learn and master Lean. That's also why I am holding back my own "let's do this or that kind of tactic" impulses.
In this particular case I have a gut feeling that a generic `elim` may be motivated by a different approach to proof automation than is common in Lean. It feels natural to me (having had Isabelle experience), which is why I don't trust my own opinion here. It makes me wonder what I am missing that such a tactic doesn't already exist -- did the existing heavy Lean users really never think of it, or do they approach the problem differently and thus there is a smaller need for this?
So I'm inclined to recommend developing possible new tactics outside first (as a library or in Mathlib) to see how useful they are in general, and only later worry about upstreaming.
Joachim Breitner (Apr 19 2024 at 09:48):
Oh, and about `solve_by_elim`, I think someone recently mentioned that it is very badly named, and has very little to do with “elimination rules”.
Shreyas Srinivas (Apr 19 2024 at 09:48):
I think it should be called `solve_by_applies` ;)
> Joachim Breitner said:
> Oh, and about `solve_by_elim`, I think someone recently mentioned that it is very badly named, and has very little to do with “elimination rules”.
Hanno Becker (Apr 19 2024 at 10:03):
@Joachim Breitner Thank you for your thoughtful reply, I think we are on the same page. It would be good to understand if indeed there is a different approach to proof automation altogether in Lean, or whether there just hasn't been as much need for it due to the emphasis on mathematics proofs rather than software verification (which tends to have a much smaller ratio of ideas vs. boilerplate reasoning than mathematics).
Shreyas Srinivas (Apr 19 2024 at 10:08):
@Hanno Becker: The elimination of hidden variable names seems to affect the ability of closing tactics to solve some of the goals. Is it essential for `elim` to clear the hypotheses? EDIT: Sorry, that didn't turn out to be the case for the examples you have given.
Hanno Becker (Apr 19 2024 at 10:22):
@Shreyas Srinivas
> Is it essential for `elim` to clear the hypotheses?

Yes, otherwise it would spin forever.
Mario Carneiro (Apr 19 2024 at 10:56):
I think it would be better to present this as a problem rather than a specific solution to that problem, because it won't be clear whether a new tactic is needed without that. Regarding the examples, here's my best attempt at automated and idiomatic versions of them:
```lean
import Mathlib.Tactic.IntervalCases
import Lean

open Lean Meta Elab.Tactic
open Mathlib

-- 1/ Disjunction elimination
example {A B C D E : Prop} : A ∨ B → C ∨ D → E := by
  intros
  cases_type* Or /- Now we have 4 goals -/ <;> sorry

-- more idiomatic
example {A B C D E : Prop} : A ∨ B → C ∨ D → E := by
  rintro (ha | hb) (hc | hd) /- Now we have 4 goals -/ <;> sorry

-- 2/ Conjunction elimination
theorem conjE {P Q R: Prop}: P ∧ Q → (P → Q → R) → R := by tauto

example {A B C D E} (t : A ∧ B) (t' : C ∧ D) : E := by
  intros
  cases_type* And /- Now we have 4 hypotheses -/
  sorry

-- more idiomatic
example {A B C D E} : A ∧ B → C ∧ D → E := by
  intro ⟨ha, hb⟩ ⟨hc, hd⟩ /- Now we have 4 hypotheses -/
  sorry

-- 3/ Destructing existentials
example {R : Prop} (a: ∃x:Nat, x > 42 ∧ x < 100) (b: ∃y:Nat, y > 100 ∧ y < 128) : R := by
  casesm* ∃ _, _, _ ∧ _
  sorry

-- more idiomatic
example {R : Prop} (a: ∃x:Nat, x > 42 ∧ x < 100) (b: ∃y:Nat, y > 100 ∧ y < 128) : R := by
  let ⟨x, hx1, hx2⟩ := a
  let ⟨y, hy1, hy2⟩ := b
  sorry

-- 4/ Destructing compound definitions
structure dummyStruct :=
  mkDummy :: (f0: Nat) (f1: Nat)

def isValidDummy (x : dummyStruct) : Prop :=
  (x.f0 * x.f1) > 42 ∧
  (∃d:Nat, d > 5 ∧ d ∣ x.f0 ∧ d ∣ x.f1)

example {P : Prop} {x y z : dummyStruct} (vx : isValidDummy x) (vy : isValidDummy y) : isValidDummy z := by
  casesm* isValidDummy _, ∃ _, _, _ ∧ _
  sorry

-- more idiomatic
example {P : Prop} {x y z : dummyStruct} (vx : isValidDummy x) (vy : isValidDummy y) : isValidDummy z := by
  let ⟨hx1, dx, hx2, hx3, hx4⟩ := vx
  let ⟨hy1, dy, hy2, hy3, hy4⟩ := vy
  sorry

-- 5/ Case-splitting on natural numbers
theorem bound_nat_succ {i n : Nat} (lt: i < n) : i = n-1 ∨ i < n-1 := by omega

-- Syntax-directed version of bound_nat_succ
theorem bound_nat_succ' {i n : Nat} (lt: i < Nat.succ n) : i = n ∨ i < n := by omega

open Nat

example {P : Prop} {i : Nat} (lt: i < 4) : P := by
  repeat' (rename _<_ => lt; have := bound_nat_succ' lt; cases this <;> clear lt) <;> sorry

-- more idiomatic
example {P : Prop} {i : Nat} (lt: i < 4) : P := by
  interval_cases i /- Now we have 4 goals for i=0,1,2,3 -/ <;> sorry
```
Mario Carneiro (Apr 19 2024 at 11:03):
Of the examples, the last one most clearly requires something new (especially if we generalize it to something other than splitting natural numbers, because we already have a tactic for that). It is possible to automate this with existing tactics, but there are various unsavory aspects of it and I wouldn't expect it to be used in practice.
The biggest issue IMO with this `elim` tactic is that it doesn't give you a way to name the hypotheses that come out, which makes it not chain very well with anything except for high-automation tactics.
As a stylistic matter, it is much more common to see elimination rules presented not in continuation-passing style as you are showing here, but rather as an inductive type (`Or` and `Exists` in some combination), relying on the builtin case-splitting tools for these.
There was an idea for a `clear_after`-style annotation to go on variable uses to say "clear this after using it in the current expression", which would allow for writing things like `obtain _ | lt := bound_nat_succ' (clear_after lt)`. But it's not really clear how to implement this, I think, because the tactic consuming the expression (here `obtain`) has to opt in somehow to handle the clearing job and define what "after the expression has been used" means.
Shreyas Srinivas (Apr 19 2024 at 11:17):
Mario: I get the impression that `elim` only really stands apart from the `cases` family of tactics when there are large numbers of case distinctions to be made. Is my impression correct?
Joachim Breitner (Apr 19 2024 at 11:18):
In these examples, the “more idiomatic” style doesn't help if you want the tactic to handle an unknown, arbitrary number of cases/nesting depths etc., e.g. when writing a reusable tactic (either ad hoc for, say, `induction <;> …`, or a “real” tactic (macro) composed of smaller tactics that you use in many different proofs).
Hanno Becker (Apr 19 2024 at 11:18):
@Mario Carneiro Thank you very much.
> The biggest issue IMO with this `elim` tactic is that it doesn't give you a way to name the hypotheses that come out, which makes it not chain very well with anything except for high-automation tactics.

Yes, I think this would be by design: in high-automation tactics, names of hypotheses usually don't matter (Isabelle doesn't even have named hypotheses).
Since `elim` repeats elimination and (in Isabelle at least) can be given a list of elimination rules to apply, it also would not be clear how to name the hypotheses in the first place.
However, at least the single-step version `erule` would benefit from (a) optionally allowing one to specify the names of the hypotheses that are introduced, and (b) optionally allowing the destruction of a specific hypothesis.
> As a stylistic matter, it is much more common to see elimination rules presented not in continuation-passing style as you are showing here, but rather as an inductive type (`Or` and `Exists` in some combination), relying on the builtin case-splitting tools for these.

This starts to get cumbersome when the elimination is not destructing a single inductive data type. In my experience, the pattern of example 4 (destructing compound definitions) is quite common: you have some sort of well-formedness property on datatypes which needs unpacking, and that unpacking may take a few steps before the hypotheses are in the shape you want. The `elim` approach captures this neatly without having to write any new tactics. However, whether proving the elimination rule is better than defining a macro for (say) `casesm* isValidDummy _, ∃ _, _, _ ∧ _` is perhaps a matter of taste.
One advantage of varying theorems rather than tactics, though, is that you can extend `elim` to work on a set of theorems (ideally captured via some named theorem list / attribute, so that they can be extended) to routinely apply a set of elimination rules to your hypotheses. If every elimination rule is its own tactic / macro, this is not as easy.
Mario Carneiro (Apr 19 2024 at 11:22):
> Yes, I think this would be by design: in high-automation tactics, names of hypotheses usually don't matter (Isabelle doesn't even have named hypotheses).

Well, I think this relates to what Joachim said about this being a tool for proofs in a different style than that used by Lean.
Hanno Becker (Apr 19 2024 at 11:24):
> Well, I think this relates to what Joachim said about this being a tool for proofs in a different style than that used by Lean.

Yes, agreed. But is that because Lean's focus has been mostly on math so far? As the scope widens to include [more] program verification, the proof style(s) will likely broaden as well to meet the different demands.
Mario Carneiro (Apr 19 2024 at 11:25):
Your example 4 seems to contain an unnecessary elim lemma, which just restates the definition in this different style. When you rewrite it to and/exists you end up with exactly the same thing as the original definition, so there is no need to have it in the first place. But if it was doing something interesting, you could still write it using `And` and `Exists`. For example, docs#Nat.le.dest is roughly of this form.
Mario Carneiro (Apr 19 2024 at 11:26):
> Hanno Becker said:
> > Well, I think this relates to what Joachim said about this being a tool for proofs in a different style than that used by Lean.
>
> Yes, agreed. But is that because Lean's focus has been mostly on math so far? As the scope widens to include [more] program verification, the proof style(s) will likely broaden as well to meet the different demands.

Maybe? I've done a fair number of program verification proofs and haven't really felt the need to forgo hypothesis names.
Hanno Becker (Apr 19 2024 at 11:28):
> which just restates the definition in this different style

Yes, but that's the point of elimination rules -- they are never difficult/interesting, but restate a definition or lemma in a style that's most suitable for automation.
Mario Carneiro (Apr 19 2024 at 11:32):
What I mean is: there is a mechanical translation you can do to elimination rules in the style you have been giving, which removes the parameter `R`, replaces multiple hypotheses with `∨`, and multiple arguments with `∃` or `∧`. This is the idiomatic style for writing elimination lemmas in Lean, and if you treat such lemmas as elimination rules you will find many more of them in Mathlib than if you require the lemmas to be in this backward style.
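For illustration, here is a sketch of that translation on the nat-bound example from earlier in the thread (the theorem names here are made up for the example):

```lean
import Mathlib.Tactic

-- CPS / elimination-rule form: an extra parameter R and one continuation per case.
theorem boundE_cps {R : Prop} {i n : Nat} (h : i < n + 1)
    (t0 : i = n → R) (t1 : i < n → R) : R := by
  have hor : i = n ∨ i < n := by omega
  exact hor.elim t0 t1

-- Idiomatic Lean form: drop R and replace the two continuations by a disjunction.
theorem boundE_idiomatic {i n : Nat} (h : i < n + 1) : i = n ∨ i < n := by
  omega
```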
Mario Carneiro (Apr 19 2024 at 11:32):
When you apply that transformation to `isValidDummyE`, you get precisely the definition of `isValidDummy`.
Joachim Breitner (Apr 19 2024 at 11:33):
Translating this into Isabelle-speak: in Lean, `drule` is more idiomatic than `erule` :-)
(If I remember correctly)
Mario Carneiro (Apr 19 2024 at 11:33):
(okay, the argument order is different but I assume that's not relevant?)
Hanno Becker (Apr 19 2024 at 11:34):
Yes, but then you still have to destruct the `∃` and `Or` afterwards -- the point of the elimination rule is to capture this as a single step.
Mario Carneiro (Apr 19 2024 at 11:34):
That's what your `elim` would do.
Mario Carneiro (Apr 19 2024 at 11:34):
or `cases` / `obtain` / etc.
Mario Carneiro (Apr 19 2024 at 11:36):
If I want to apply `le.dest` as an elimination rule, I use something like
```lean
obtain ⟨k, rfl⟩ := h.dest
```
Mario Carneiro (Apr 19 2024 at 11:36):
rather than (the equivalent of)
```lean
refine h.dest fun k => ?_
```
Mario Carneiro (Apr 19 2024 at 11:37):
which is what you would have to do if `le.dest` was written in CPS.
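Put together as a self-contained example (a sketch; `Nat.le.dest` is the Mathlib lemma referenced above, stating that `m ≤ n` gives `∃ k, m + k = n`, and `Nat.le.dest h` is the explicit spelling of `h.dest`):

```lean
import Mathlib.Tactic

example {m n : Nat} (h : m ≤ n) : m ≤ n + 1 := by
  obtain ⟨k, rfl⟩ := Nat.le.dest h  -- destructs the ∃ and substitutes n := m + k
  omega
```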
Shreyas Srinivas (Apr 19 2024 at 12:31):
Here's the version with `solve_by_elim` that works. I removed all the `elim` occurrences because in at least one case the `elim` version didn't work, which I am guessing has something to do with cleared hypotheses.
The `solve_by_elim` version.
Hanno Becker (Apr 19 2024 at 12:55):
Thanks all for the feedback and the alternatives! It looks like existing tactics are good enough for the proofs people have been wanting to do so far. I can only report from my own experience that when proofs scale up to hundreds of hypotheses and thousands of proof steps, automation is crucial, and `elim` can be a very useful framework for uniformly applying a variety of destruction operations in that context. It also helps keep the number of custom tactics down, which seems to align with the sentiment expressed by @Joachim Breitner above that a large zoo of special-case tactics makes Lean more difficult to learn and master.
Shreyas Srinivas (Apr 19 2024 at 12:58):
> Hanno Becker said:
> Thanks all for the feedback and the alternatives! It looks like existing tactics are good enough for the proofs people have been wanting to do so far. I can only report from my own experience that when proofs scale up to hundreds of hypotheses and thousands of proof steps, automation is crucial, and `elim` can be a very useful framework for uniformly applying a variety of destruction operations in that context. It also helps keep the number of custom tactics down, which seems to align with the sentiment expressed by Joachim Breitner above that a large zoo of special-case tactics makes Lean more difficult to learn and master.

Agreed in principle. I just think that for Lean users, the key to explaining the importance of a tactic is to provide an example that demonstrates the need for the tactic beyond a doubt. That means coming up with an example that is tedious with the cases tactics we have, but a breeze for `elim`. The current examples are useful test cases, but to demonstrate the utility of `elim` over the others, the example needs to be much more complex.
Shreyas Srinivas (Apr 19 2024 at 12:59):
We had a similar difficulty understanding the utility of small scale reflection in Lean SSR.
Hanno Becker (Apr 19 2024 at 13:06):
@Shreyas Srinivas The "issue" is that an elimination rule is hardly ever complicated, so for any instance of `elim thm` it is most likely that one can counter "But tactic sequence X, Y, Z does the same and is good enough?" The value lies in the uniformity of the approach, not in any individual instance: the "X, Y, Z" will likely vary with the example (cf. `cases`, `obtain`, `casesm*`, `repeat'`, `interval_cases`, `cases_type*` considered in the alternatives above), but with `elim`, all that varies is the theorem you feed it, not the tactic itself.
Shreyas Srinivas (Apr 19 2024 at 15:17):
I agree on the proof-discipline aspect, and FWIW I'd like to see `elim` in whatever version of Std comes out. I am just pointing out that to convince the community, two things need to be communicated properly:
- In the form of examples, exactly when and how we ought to use `elim` to benefit from it over the existing array of cases tactics.
- How `elim` works in conjunction with, or composes with, other tactics, and the effect of clearing hypotheses on this.
Shreyas Srinivas (Apr 19 2024 at 15:19):
I think the second aspect is not as simple as "let's do what Isabelle does", because many a proof will ultimately have to use other tactics in combination with `elim`, and these are Lean tactics.
Last updated: May 02 2025 at 03:31 UTC