Zulip Chat Archive
Stream: general
Topic: tactic question
Neil Strickland (Jun 14 2019 at 13:19):
If I have h : i = j ∨ k = l, I might do rcases h with rfl | rfl. If I have h : i = j ∨ k = l ∨ p = q, I might do rcases h with rfl | rfl | rfl. Now suppose I want to handle a large number of goals in parallel, each of which has an h as before, but with varying numbers of equations or'd together. What is a tidy way to apply rcases with the right number of rfl's in each goal?
Patrick Massot (Jun 14 2019 at 13:31):
Indeed it would be nice to have a more automatic rcases
Kevin Kappelmann (Jun 14 2019 at 13:44):
Maybe use the repeat combinator?
Neil Strickland (Jun 14 2019 at 13:50):
I tried repeat { rcases h with rfl | _ } but that did not work. Perhaps some other variant would work, but I could not find one.
Johan Commelin (Jun 14 2019 at 13:51):
You might have to explicitly call the second argument h, instead of _.
Johan Commelin (Jun 14 2019 at 13:51):
Otherwise rcases might generate the wrong name.
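A minimal sketch of the suggested pattern (the example statement and names are only illustrative):
import tactic

example (i j k l p q : ℕ) (h : i = j ∨ k = l ∨ p = q) : true :=
begin
  -- naming the remaining disjunct h again lets repeat find it on the
  -- next iteration, whatever the number of equations
  repeat { rcases h with rfl | h },
  all_goals { trivial },
end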
Kevin Kappelmann (Jun 14 2019 at 14:04):
I am a bit confused, by the way, about the with rfl part. I thought you'd need to write something like
repeat {cases h}; rfl
Neil Strickland (Jun 14 2019 at 14:05):
@Johan Commelin Yes, that was the problem, thanks
Neil Strickland (Jun 14 2019 at 14:09):
@Kevin Kappelmann If h : i = j then rcases h with rfl will "specialise to the case where h is rfl : i = i" and so will change j to i in all hypotheses and in the goal. Similarly, if h : i = j ∨ i = k then rcases h with rfl | rfl will give two new goals, one with all j's changed to i, and one with all k's changed to i.
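For instance, a small illustration of this behaviour (names are only illustrative):
import tactic

example (i j k : ℕ) (h : i = j ∨ i = k) : j = i ∨ k = i :=
begin
  rcases h with rfl | rfl,
  { left, refl },   -- the first equation has been substituted away
  { right, refl },  -- the second equation has been substituted away
end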
Kevin Kappelmann (Jun 14 2019 at 14:22):
Ohhh, alright - thanks for explaining! :)
Wojciech Nawrocki (Jun 14 2019 at 14:26):
Hello! This is a question about tactics also - is ring the only tactic for resolving equalities in an algebraic structure? Would something similar for other structures be considered useful, or is simp good enough?
Patrick Massot (Jun 14 2019 at 14:32):
There is also abel, and any other one would be very useful.
Sebastien Gouezel (Jun 14 2019 at 15:03):
Is there any tactic yet that could do
example {α : Type*} {s t u : set α} : s ∩ (t ∩ u) = (s ∩ t) ∩ (s ∩ u) := by { ext x, simp, split; finish }
This proof is quick to come up with, but slow. I could also come up with the right sequence of rewrites, which would be much faster, but my time is too precious for this :)
Simon Hudon (Jun 14 2019 at 16:45):
Does casesm help at all? https://github.com/leanprover-community/lean/blob/master/library/init/meta/interactive.lean#L773
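For example, a hedged sketch (it destructs the disjunctions, but leaves any substitution to be done separately):
example (i j k l : ℕ) (h : i = j ∨ k = l) : true :=
begin
  -- casesm* _ ∨ _ repeatedly case-splits every hypothesis matching _ ∨ _
  casesm* _ ∨ _,
  all_goals { trivial },
end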
Wojciech Nawrocki (Jun 14 2019 at 17:17):
Regarding ring: is it more or less accurate that ring runs faster than ring2, but ring2 generates much smaller proof terms thanks to reflection and is therefore better for large expressions?
Mario Carneiro (Jun 16 2019 at 22:56):
Generally speaking, the "smaller proof term" heuristic for complexity breaks down in the presence of "heavy rfls"
Wojciech Nawrocki (Jun 17 2019 at 23:28):
What do you mean by heavy rfls? Is it that the normalizing procedure over the reflected expression has to take many steps and so does the kernel to decide judgmental eq., or something else?
Mario Carneiro (Jun 17 2019 at 23:30):
Definitional equality is a large and complicated judgment. Normally we don't stress it too much, we just unfold a definition here or a beta reduction there. By "heavy rfl" I mean something with many steps of definitional reduction, comparable to the size of the entire rest of the proof term or even much much larger
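A small illustration (the numbers are arbitrary):
-- the proof term is a single rfl, but checking it forces the kernel to
-- evaluate 2 ^ 10 and the numeral 1024 down to a common normal form,
-- so the typing derivation is much larger than the term itself
example : 2 ^ 10 = 1024 := rfl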
Wojciech Nawrocki (Jun 17 2019 at 23:32):
Okay, I think that's what I meant. So when you have that, you end up with a very small proof term which also happens to require a really large computation to typecheck, right?
Mario Carneiro (Jun 17 2019 at 23:32):
right
Mario Carneiro (Jun 17 2019 at 23:32):
and doing that computation in the kernel is not necessarily the best idea
Wojciech Nawrocki (Jun 17 2019 at 23:33):
Okay, I think when saying ".. better for large expressions" I should've really said "better in terms of proof size", but not necessarily better in terms of lean --make time.
Mario Carneiro (Jun 17 2019 at 23:33):
Most of the time, the length of the typing derivation is proportional to the size of the proof term, but heavy rfls break that comparison
Mario Carneiro (Jun 17 2019 at 23:34):
In such cases it really is more accurate to measure the length of the typing derivation
Mario Carneiro (Jun 17 2019 at 23:35):
To me, "proof size" means size of typing derivation, because that correlates to wall time
Mario Carneiro (Jun 17 2019 at 23:36):
it is far too easy to write "short proofs" of every statement imaginable by doing a proof search in the kernel
Wojciech Nawrocki (Jun 17 2019 at 23:36):
By typing derivation you mean the time it takes for the kernel to typecheck the term, but not elaboration/inference etc time?
Mario Carneiro (Jun 17 2019 at 23:36):
yes
Mario Carneiro (Jun 17 2019 at 23:37):
the other stuff is engineering challenges, and also it's a one time cost, at least in principle. The kernel typechecking is mandatory for correctness
Wojciech Nawrocki (Jun 17 2019 at 23:40):
Right, but if a reflection tactic generates heavy rfls which take less time to check than an equivalent sequence of eq.subst or some such would, then it would still be preferable?
Mario Carneiro (Jun 17 2019 at 23:40):
yes
Mario Carneiro (Jun 17 2019 at 23:41):
That's why I wrote ring and ring2
Mario Carneiro (Jun 17 2019 at 23:41):
because it wasn't obvious which method is faster in practice
Mario Carneiro (Jun 17 2019 at 23:42):
but the kernel is not very optimized, in order to keep code size down, and it has to unfold many things because it doesn't exactly know where to go
Mario Carneiro (Jun 17 2019 at 23:42):
with a proof term you can expose the kernel to only those things it needs to know
Mario Carneiro (Jun 17 2019 at 23:44):
basically, you have a lot more control with a proof term than with a rfl proof
Wojciech Nawrocki (Jun 17 2019 at 23:50):
With a rfl proof, could you not write your decision procedure in a way that makes it easy to reduce for the kernel?
Mario Carneiro (Jun 17 2019 at 23:55):
yes you can, and it's pretty important that you do
Mario Carneiro (Jun 17 2019 at 23:55):
but there is still the matter of the O(1) constant
Mario Carneiro (Jun 17 2019 at 23:56):
and the only way to really find that out is to profile it
Mario Carneiro (Jun 17 2019 at 23:57):
I think even just using the equation compiler to define, say, functions on lists is bad for the kernel
Mario Carneiro (Jun 17 2019 at 23:57):
not that bad, but measurable
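A sketch of the kind of difference being described (the definitions are illustrative, not from the discussion):
-- len₁ goes through the equation compiler (and hence the brec_on
-- machinery), while len₂ is a bare structural recursion; both reduce
-- in the kernel, but the second gives it less to unfold
def len₁ : list ℕ → ℕ
| []       := 0
| (x :: l) := len₁ l + 1

def len₂ (l : list ℕ) : ℕ :=
list.rec_on l 0 (λ x l ih, ih + 1)

example : len₁ [1, 2, 3] = 3 := rfl
example : len₂ [1, 2, 3] = 3 := rfl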
Wojciech Nawrocki (Jun 18 2019 at 00:18):
I see
Wojciech Nawrocki (Jun 19 2019 at 14:37):
@Mario Carneiro Is there any documentation/paper explaining what the purpose of the cache in abel is?
Mario Carneiro (Jun 19 2019 at 23:22):
It's a bunch of info I want to inject into the proof term at various points without recomputing all the time. So I need the base type, the level of that type for constants, the zero constant because that comes up a lot, and the instance of either add_comm_monoid or add_comm_group (whichever it was able to find). This data can be calculated just from the type, which we find as soon as we start entering the term, and that means we can avoid almost all uses of mk_app, which tends to be unpredictably slow sometimes (see also the recent bug in https://leanprover.zulipchat.com/#narrow/stream/116395-maths/topic/witt.20vectors ).
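Roughly this kind of record, presumably (the field names are illustrative, not the actual structure in tactic.abel):
-- a hypothetical cache of the data described above, computed once from
-- the type and then threaded through the proof construction
meta structure my_abel_cache :=
(α    : expr)   -- the base type
(univ : level)  -- its universe level, for building constants
(zero : expr)   -- the cached (0 : α)
(inst : expr)   -- the add_comm_monoid / add_comm_group instance that was found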
Wojciech Nawrocki (Jun 19 2019 at 23:48):
Thanks!
Wojciech Nawrocki (Jun 20 2019 at 15:55):
Is there a better notation for doing something on failure of a tactic match than (do `(some_expr) <- t, next_t) <|> on_fail_t? It gets ugly quickly with multiple nested matches and failure handlers. Ideally something local like do (`(some_expr) <- t) except on_fail_t, next_t
Wojciech Nawrocki (Jun 20 2019 at 17:35):
Found it, it's just
do `(some_expr) <- t | on_fail_t,
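For reference, a small sketch of that syntax in use (the tactic and names are made up for illustration):
open tactic

-- if the target does not match _ = _, the alternative after | runs
-- instead of the whole do-block failing on the pattern match
meta def trace_eq_sides : tactic unit :=
do `(%%a = %%b) ← target | trace "target is not an equation",
   trace a,
   trace b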
Wojciech Nawrocki (Jul 16 2019 at 15:55):
@Mario Carneiro is there any particular reason for using a custom tree instead of rbnode in ring2? (The pos_num indexing stuff could be added if rbnode doesn't have it already.)
Mario Carneiro (Jul 16 2019 at 17:58):
The red-black tree implementation in core is broken, so I try to use it as little as possible. Also it's important that the kernel compute with it well, so extraneous rebalancing code would not be good. All the rbnode handling happens in the VM; the kernel only sees tree.
Wojciech Nawrocki (Jul 16 2019 at 18:02):
Oh I see, in what way is it broken? I think w.r.t. rebalancing it would be fine, because rebalancing only happens on tree modifications, but neither normalization in of_csexpr, nor eval/cseval, modifies the tree.
Mario Carneiro (Jul 16 2019 at 18:03):
It's still a more complicated recursive function since there are more cases
Mario Carneiro (Jul 16 2019 at 18:04):
You can't erase from an rbtree
Mario Carneiro (Jul 16 2019 at 18:04):
because the red black invariant is wrong
Wojciech Nawrocki (Jul 16 2019 at 18:45):
?! Do you mean it doesn't maintain the invariant, or it maintains the wrong invariant, or ..?
Mario Carneiro (Jul 16 2019 at 19:44):
It maintains the wrong invariant. The actual red black invariant is not defined; instead it does the lazy thing and says "whatever you can get to by applying insert to empty" which is obviously sufficient to get insert and empty but basically nothing else
Wojciech Nawrocki (Jul 16 2019 at 20:21):
So it is not defined, but are you also saying that "being a red-black tree" is actually dynamically broken, e.g. a red_node appears directly as the child of a red_node or some such? Or just that it doesn't maintain a proof of the right "being red-black" proposition (since well_formed is defined as "either a leaf or the result of insert on a well-formed tree" rather than "meets all the rbtree requirements")?
Mario Carneiro (Jul 16 2019 at 20:54):
The red-black property is maintained, but this is not proven
Mario Carneiro (Jul 16 2019 at 20:55):
The current definition of well_formed is not equivalent to the red black property, but implies it
Mario Carneiro (Jul 16 2019 at 20:56):
But because of the over-strict well formedness predicate, you can't define anything on these red black trees other than insert and empty
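In other words, a predicate shaped roughly like this toy sketch (not the actual core definition, just the pattern):
-- a toy binary tree with an "insert-reachability" predicate: it proves
-- well-formedness for empty and insert, but gives no handle on anything
-- else (erase, balancing, membership, ...)
inductive tree' : Type
| empty : tree'
| node  : tree' → ℕ → tree' → tree'

def insert' : tree' → ℕ → tree'
| tree'.empty        x := tree'.node tree'.empty x tree'.empty
| (tree'.node l y r) x :=
  if x < y then tree'.node (insert' l x) y r else tree'.node l y (insert' r x)

inductive reachable : tree' → Prop
| empty : reachable tree'.empty
| insert {t : tree'} (x : ℕ) : reachable t → reachable (insert' t x)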
Patrick Massot (Jul 17 2019 at 09:15):
I don't really remember what an rb-tree is, but I remember this came up last summer. See https://leanprover.zulipchat.com/#narrow/stream/113488-general/topic/Ed's.20question.20barrage and https://leanprover.zulipchat.com/#narrow/stream/113488-general/topic/using_well_founded
Wojciech Nawrocki (Jul 18 2019 at 12:30):
I see what you mean now Mario, thanks a bunch for explaining! In that case it does make sense to use a custom tree.
Thanks Patrick for the links - it does look like those discussions were partly about the same problem - I should have searched harder, sorry!
Wojciech Nawrocki (Jul 27 2019 at 20:22):
While a term and its reduced form are equivalent in the type theory, they are not so syntactically. This matters if I want to compare exprs syntactically in a tactic. My question: is there a builtin/way to reduce an expr to normal form?
Andrew Ashworth (Jul 27 2019 at 20:45):
whnf
Wojciech Nawrocki (Jul 27 2019 at 21:45):
Nice, there are also a bunch of others like head_beta next to it. Thanks!
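A tiny sketch of what that looks like (the expression is illustrative):
open tactic

-- whnf reduces just enough to expose the head, so the beta-redex below
-- becomes the bare constant nat.zero
run_cmd do
  e  ← to_expr ``((λ n : ℕ, n) nat.zero),
  e' ← whnf e,
  trace e,   -- the unreduced application
  trace e'   -- nat.zero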
Johan Commelin (Aug 20 2020 at 06:29):
Suppose that I have a goal state, how do I get a list of all hypotheses that are mentioned in the target?
Johan Commelin (Aug 20 2020 at 06:29):
#xy : I want a tactic that reverts all those hyps (and whatever depends on them), and then runs exact dec_trivial.
Mario Carneiro (Aug 20 2020 at 06:31):
I found this in tactic.clear:
hyps.mmap' (λ h, do
  dep ← kdepends_on tgt h,
  when dep $ fail $
    format!"Cannot clear hypothesis {h} since the target depends on it."),
n ← revert_lst hyps,
Johan Commelin (Aug 20 2020 at 06:32):
Ok, let me see if I can make this work.
Johan Commelin (Aug 20 2020 at 06:55):
import data.nat.basic

namespace tactic
namespace interactive
open tactic

meta def «dec_trivial» : tactic unit :=
do tgt ← target,
   ctx ← local_context,
   l ← ctx.mfilter (kdepends_on tgt),
   n ← revert_lst l,
   exact_dec_trivial

end interactive
end tactic

example (t : ℕ) (h : t < 2) : t = 0 ∨ t = 1 :=
begin
  dec_trivial,
end
Johan Commelin (Aug 20 2020 at 06:55):
Bingo
Johan Commelin (Aug 20 2020 at 07:07):
@Joseph Myers This should make your fin 3 problems go away.
Johan Commelin (Aug 20 2020 at 07:09):
It means that you can now write things like
obtain ⟨hu, hs, he⟩ :
  (finset.univ : finset (fin 3)) = {i₁, i₂, i₃} ∧
  (finset.univ \ {i₂, i₃} : finset (fin 3)) = {i₁} ∧
  finset.univ.erase i₁ = {i₂, i₃},
{ dec_trivial, },
Johan Commelin (Aug 20 2020 at 07:10):
Ooh, it's not good enough yet
Johan Commelin (Aug 20 2020 at 07:10):
It doesn't see through let statements.
Johan Commelin (Aug 20 2020 at 07:13):
This doesn't work yet
example (n : ℕ) (h : n < 3) : true :=
begin
  let k : ℕ := n - 1,
  have : ∀ i < k, i = 0 ∨ i = 1,
    by dec_trivial,
end
Johan Commelin (Aug 20 2020 at 07:19):
It reverts k, but the dependency checking isn't recursive :sad:
Johan Commelin (Aug 20 2020 at 07:19):
I guess I can fix that.
Johan Commelin (Aug 20 2020 at 07:25):
Is this ok?
meta def revert_target_deps : tactic unit :=
do tgt ← target,
   ctx ← local_context,
   l ← ctx.mfilter (kdepends_on tgt),
   n ← revert_lst l,
   if l = [] then skip else revert_target_deps
Johan Commelin (Aug 20 2020 at 07:30):
@Joseph Myers I've now gotten proofs like
/-- The orthocenter lies in the altitudes. -/
lemma orthocenter_mem_altitude (t : triangle ℝ P) {i₁ : fin 3} :
  t.orthocenter ∈ t.altitude i₁ :=
begin
  let s₂₃ : finset (fin 3) := finset.univ.erase i₁,
  obtain ⟨i₂, h₂⟩ :=
    finset.card_pos.1 (show 0 < finset.card s₂₃, by dec_trivial),
  let s₃ : finset (fin 3) := s₂₃.erase i₂,
  obtain ⟨i₃, h₃⟩ :=
    finset.card_pos.1 (show 0 < finset.card s₃, by dec_trivial),
  obtain ⟨h₁₂, h₂₃, h₁₃⟩ : i₁ ≠ i₂ ∧ i₂ ≠ i₃ ∧ i₁ ≠ i₃, by dec_trivial,
  rw [orthocenter_eq_monge_point, t.altitude_eq_monge_plane h₁₂ h₁₃ h₂₃],
  exact t.monge_point_mem_monge_plane h₂₃
end
Johan Commelin (Aug 20 2020 at 07:34):
Golfed:
/-- The orthocenter lies in the altitudes. -/
lemma orthocenter_mem_altitude (t : triangle ℝ P) {i₁ : fin 3} :
  t.orthocenter ∈ t.altitude i₁ :=
begin
  obtain ⟨i₂, i₃, h₁₂, h₂₃, h₁₃⟩ :
    ∃ (i₂ i₃ : fin 3), i₁ ≠ i₂ ∧ i₂ ≠ i₃ ∧ i₁ ≠ i₃, by dec_trivial,
  rw [orthocenter_eq_monge_point, t.altitude_eq_monge_plane h₁₂ h₁₃ h₂₃],
  exact t.monge_point_mem_monge_plane h₂₃
end
Yakov Pechersky (Aug 20 2020 at 14:12):
Often, when doing cases for fin 2, let's say, I've done:
{ rcases i with _|_|i,
  { sorry }, /- proof for case 0 -/
  { sorry }, /- proof for case 1 -/
  { exact absurd (nat.lt_of_succ_lt_succ (nat.lt_of_succ_lt_succ i_is_lt))
      (not_lt_of_le (nat.zero_le _)) } } /- proof for case 2+ -/
Yakov Pechersky (Aug 20 2020 at 14:12):
Would this tactic help with that?
Johan Commelin (Aug 20 2020 at 14:13):
You could rewrite it a bit, and then it would help.
Johan Commelin (Aug 20 2020 at 14:13):
obtain rfl | rfl : i = 0 ∨ i = 1, by dec_trivial,
Johan Commelin (Aug 20 2020 at 14:13):
But there is also fin_cases for such problems.
Yakov Pechersky (Aug 20 2020 at 14:14):
Ah, right! I always forget about fin_cases because it doesn't work for arbitrary fin (n + 2).
Johan Commelin (Aug 20 2020 at 14:14):
This tactic also solves a slew of other problems that have nothing to do with fin n.
Johan Commelin (Aug 20 2020 at 14:15):
Neither does this one.
Last updated: Dec 20 2023 at 11:08 UTC