Zulip Chat Archive
Stream: metaprogramming / tactics
Topic: New tactic: `positivity`
Heather Macbeth (Jul 26 2022 at 21:53):
In Providence last week, @Mario Carneiro helped me write my first mathlib tactic, and it was merged this morning. docs#tactic.interactive.positivity solves goals of the form 0 < x and 0 ≤ x when the proof is "clear from the syntax of x": that is, a sequence of applications of lemmas like add_pos, mul_nonneg and abs_nonneg. So, e.g., from the test file,
example {a b : ℚ} (ha : 3 < a) (hb : 4 ≤ b) : 0 < 3 + a * b / 7 + b + 7 + 14 := by positivity
or from my follow-up golfing PR #15701,
example {α : Type*} [linear_ordered_field α] (x : α) : 0 < 2 * (1 + |x|) := by positivity
example (x y z : ℝ) (hx : x > 0) (hy : y > 0) (hz : z > 0) :
((x^3 - 1)^2*x^2*(y^2 + z^2))/((x^5+y^2+z^2)*(x^3*(x^2+y^2+z^2))) ≥ 0 :=
by positivity
Heather Macbeth (Jul 26 2022 at 21:54):
I hope this tactic will be useful in bigger inequality calculations, where annoying positivity side goals turn up over and over again. Please test it! Looking forward to feedback!
Eric Rodriguez (Jul 26 2022 at 22:03):
can it handle goals of the type x < 0, where 0 < -x (modulo neg_neg) would be solved by positivity? that would be a really neat addition to a really neat tactic :)
Mario Carneiro (Jul 26 2022 at 22:07):
that's... not positivity?
Eric Rodriguez (Jul 26 2022 at 22:12):
sure, but it's not too far of an extension. if you want to call it negativity, that's fair
Kevin Buzzard (Jul 26 2022 at 22:44):
Aah, the desire by users for mission creep has begun early on this one!
Eric Rodriguez (Jul 26 2022 at 23:00):
neg_neg_iff.mp (by positivity) doesn't have the same slickness to it, and it should be like 3 lines. (plus, is it even docs#neg_neg_iff? it definitely isn't neg_neg, anyways)
Heather Macbeth (Jul 27 2022 at 02:48):
@Eric Rodriguez Dealing with negativity is a reasonable idea, but in my opinion what makes it impracticable here is that you might prove (say) x * y < 0 by showing that 0 < x and y < 0, or by any other permutation of signs. So there's branching. Whereas for positivity, the naive greedy algorithm works surprisingly often.
I would even claim that mathematicians are trained to reorganize paper proofs to favour getting positivity (rather than negativity) side goals, for this same reason.
Sebastien Gouezel (Jul 29 2022 at 10:22):
Is there an easy way to tweak positivity so that it also solves the following goal:
lemma foo (a b : ℝ) (ha : 0 ≤ a) (hb : a < b) : 0 < b :=
by positivity
(which doesn't work currently)?
Or maybe it's too bad from a computational point of view as it could lead to too much branching?
Jireh Loreaux (Jul 29 2022 at 14:01):
A few ideas:
- Maybe you could limit the branching by passing an integer which limits the depth of the transitivity goals (e.g., the syntax could be with 2), and the default is no transitivity.
- I guess the other problem might be: how do you know which intermediate value (in the example, a) to compare? Perhaps the user could supply the intermediate term, which would also limit branching. In this case, the user might even be able to pass a list of intermediate values.
Yes, there is significant mission creep!
Sebastien Gouezel (Jul 29 2022 at 14:23):
My idea was rather to cycle through the context. Suppose you want to check positivity of b. Then you cycle through the context, looking for inequalities foo < b or foo ≤ b. And when you find one, apply positivity recursively to check whether foo is positive or nonnegative. I agree that the depth should be limited to 1 or 2 (for instance to avoid cycles when you have both a ≤ b and b ≤ a in the context, and also to keep the complexity reasonable).
Heather Macbeth (Jul 29 2022 at 14:26):
Indeed, this is the obvious limitation of the tactic in its current form. I discussed it with @Mario Carneiro last week and he seemed to think that building a full digraph of the information in the order hypotheses was not completely intractable. @Jeremy Avigad also mentioned Polya (cc @Rob Lewis) for this kind of problem, although I don't completely understand its scope.
Heather Macbeth (Jul 29 2022 at 14:27):
Another possibility (but slow) is to run linarith as the "base case" of the tactic.
Sebastien Gouezel (Jul 29 2022 at 15:30):
Heather Macbeth said:
Another possibility (but slow) is to run linarith as the "base case" of the tactic.
For me, one of the main interests of positivity is that it is pretty quick, so calling linarith doesn't look like a good idea here.
Jeremy Avigad (Jul 29 2022 at 19:48):
Polya (https://link.springer.com/article/10.1007/s10817-015-9356-y) was a much more complicated (and hence slower) thing. The idea was to reason forward and iteratively derive more and more facts about expressions of interest. Here "expressions of interest" meant more or less all subterms of expressions in the hypotheses and conclusion.
In the case of a positivity tactic, one could make a list of all expressions of interest and iteratively derive new comparisons with 0 with rules like "if a is positive and b is nonnegative, then a + b is positive," "if a is positive and a <= b, then b is positive," "if a is negative then -a is positive", "if a is positive and b is nonpositive then a - b is positive," "abs a" is nonnegative, "a^2 is nonnegative", etc. You keep going until you reach your goal or you run out of rules to apply.
Polya was actually designed to handle arbitrary real inequalities, so more generally it would look for comparisons between any pair of expressions of interest. Linear arithmetic was called repeatedly as a subroutine. Rob and I had a Python implementation that worked pretty well and Rob wrote a proof-producing version in Lean 3 as part of his dissertation, but I think it was too hard to make the latter efficient.
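For concreteness, here is a hand-worked instance of two of those rules chained together, using existing mathlib lemmas (the hypothesis names are made up for the example, and this only illustrates the forward reasoning, not Polya itself):
import data.real.basic

example (a b c : ℝ) (ha : 0 < a) (hb : 0 ≤ b) (hc : a + b ≤ c) : 0 < c :=
lt_of_lt_of_le (add_pos_of_pos_of_nonneg ha hb) hc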
Heather Macbeth (Jul 29 2022 at 23:41):
Jeremy Avigad said:
Linear arithmetic was called repeatedly as a subroutine.
I guess, as Sébastien commented, this rules out using an exact analogue of the Polya idea for the purposes of positivity. (The dream is that positivity would be a default discharger for field_simp, mono, apply_rules, etc. -- so it has to be fast.) But this idea about "expressions of interest" is interesting and perhaps reusable.
Jeremy Avigad said:
The idea was to reason forward and iteratively derive more and more facts about expressions of interest. Here "expressions of interest" meant more or less all subterms of expressions in the hypotheses and conclusion.
Heather Macbeth (Jul 29 2022 at 23:43):
@Mario Carneiro Could you refresh my memory about what you were saying in Providence, about what is feasible in terms of the web of transitivity implications among hypotheses?
Mario Carneiro (Jul 30 2022 at 00:29):
The idea is to populate the context with 0 <= x or 0 < x for all applicable x which appear in inequalities in the hypotheses. So it's doing forward reasoning: if you know 0 <= a and you find h : a < b then add 0 < b, then continue until saturation. Then do the rest of the proof as normal
Mario Carneiro (Jul 30 2022 at 00:30):
this doesn't get stuck in loops because if you already know a positivity fact about x then you only deduce a new one if it's better than what you already have
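Spelled out by hand on a small example, the forward-reasoning step described above might look like this (an untested sketch; the goal and hypothesis names are invented for illustration):
import data.real.basic
import tactic.positivity

example (a b c : ℝ) (ha : 0 ≤ a) (hab : a < b) (hc : 0 ≤ c) : 0 < b * (c + 1) :=
begin
  -- the saturation step: from 0 ≤ a and a < b, record 0 < b
  have hb : 0 < b := lt_of_le_of_lt ha hab,
  -- with 0 < b in the context, positivity can finish as usual
  positivity,
end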
Moritz Doll (Aug 02 2022 at 19:14):
It seems that positivity does not try to use finset.prod_nonneg, i.e., this fails:
import algebra.big_operators.order
import tactic.positivity
variables {ι R : Type*} [ordered_comm_semiring R] {f : ι → R} {s : finset ι}
example (h0 : ∀ (i : ι), i ∈ s → 0 ≤ f i) : 0 ≤ s.prod (λ (i : ι), f i) :=
begin
positivity,
end
But I want to say that positivity is a huge quality-of-life improvement: I am working on the Schwartz space, and it would be so annoying to find all the proofs of nonnegativity myself (they are usually not hard, but it is just distracting).
Heather Macbeth (Aug 02 2022 at 19:33):
@Moritz Doll Thanks for the idea! finset.prod_nonneg should be an easy addition, similar to docs#tactic.positivity_sqrt or docs#tactic.positivity_mul. Do you want me to add it, or would you like to try yourself?
Heather Macbeth (Aug 02 2022 at 19:36):
Actually, maybe it's not so easy, because it doesn't reduce to another goal doable by positivity (i.e. ∀ (i : ι), i ∈ s → 0 ≤ f i is not itself a goal doable by positivity).
Heather Macbeth (Aug 02 2022 at 19:37):
Maybe it's possible if we throw in a fin_cases on i ∈ s.
Moritz Doll (Aug 02 2022 at 19:37):
my work-around was (finset.prod_nonneg (λ _ _, by positivity)), I have no idea whether this is easy to do in tactics
Heather Macbeth (Aug 02 2022 at 19:41):
Moritz Doll said:
my work-around was (finset.prod_nonneg (λ _ _, by positivity)), I have no idea whether this is easy to do in tactics
I'm surprised this works! What about a situation where by positivity works for all the goals but for a different reason each time?
E.g., say s is range n, i.e. {0, 1, ..., n - 1}, and f 0 = 0, f 1 = x, f 2 = x + x, f 3 = x + x + x, ...? Say we have hx : 0 ≤ x as a hypothesis. Then by positivity works to show that each f i is nonnegative, but the term proof is different each time (hx for f 1, add_nonneg hx hx for f 2, add_nonneg (add_nonneg hx hx) hx for f 3, etc.)
Heather Macbeth (Aug 02 2022 at 19:42):
But it seems plausible that this would work for any numeric n:
(finset.prod_nonneg (λ i hi, by { fin_cases i; positivity }))
Moritz Doll (Aug 02 2022 at 19:43):
it does not work in the example above, but it does in my real-world problem
Heather Macbeth (Aug 02 2022 at 19:43):
It'd be interesting to have that real-world problem simplified to a test case, if you get the time.
Moritz Doll (Aug 02 2022 at 19:46):
import algebra.big_operators.order
import tactic.positivity
import analysis.normed_space.basic
variables {ι R : Type*} [normed_ring R] {f : ι → R} {s : finset ι}
example : 0 ≤ s.prod (λ (i : ι), ∥f i∥) :=
begin
positivity,
end
example : 0 ≤ s.prod (λ (i : ι), ∥f i∥) :=
begin
refine (finset.prod_nonneg (λ _ _, by positivity)),
end
positivity does not do anything fancy here of course
Moritz Doll (Aug 02 2022 at 19:47):
but it is so tedious in actual proofs to golf it to
example : 0 ≤ s.prod (λ (i : ι), ∥f i∥) :=
(finset.prod_nonneg (λ _ _, norm_nonneg _))
Heather Macbeth (Aug 02 2022 at 19:50):
I think maybe the naively-implemented finset.prod_nonneg tactic extension would work here. I can try it, or if you're interested in trying it I can help you :)
Mario Carneiro (Aug 02 2022 at 21:42):
I think that Moritz's solution is the correct one for preserving the concept of "syntactically obvious nonnegativity" @Heather Macbeth; you wouldn't expect such a proof to go by cases on all the values in the set. The only exception to this would be if the new hypothesis i \in s implies a nonnegativity assumption like 0 <= i that is then used to prove the goal is nonnegative; although positivity does pick up hypotheses, it doesn't do any fancy reduction on them besides that norm_num stuff for 3 <= i
Heather Macbeth (Aug 02 2022 at 22:15):
Mario Carneiro said:
The only exception to this would be if the new hypothesis i \in s implies a nonnegativity assumption like 0 <= i that is then used to prove the goal is nonnegative; although positivity does pick up hypotheses, it doesn't do any fancy reduction on them besides that norm_num stuff for 3 <= i
I've already seen cases where positivity fails because one has a goal or hypothesis which is defeq to something relevant to positivity, but not syntactically equal:
https://github.com/leanprover-community/mathlib/pull/15701#discussion_r935073747
The hypothetical situation where i \in s dsimps to e.g. 7 ≤ i would be another such. But I agree this is not in scope for positivity.
Mario Carneiro (Aug 02 2022 at 22:25):
One option would be to give positivity a discharger config option which would let people put some simp call there
Heather Macbeth (Aug 02 2022 at 22:26):
Mario Carneiro said:
One option would be to give positivity a discharger config option which would let people put some simp call there
Following on from our previous discussion, probably linarith would be a popular discharger, if the option were available.
Heather Macbeth (Aug 02 2022 at 23:36):
Mario Carneiro said:
One option would be to give positivity a discharger config option which would let people put some simp call there
@Mario Carneiro I'm not sure this one is a classic "discharger". The simp call would happen in the middle of the tactic, not at the end.
Mario Carneiro (Aug 02 2022 at 23:37):
yes, that's what dischargers are usually for
Mario Carneiro (Aug 02 2022 at 23:37):
if you can do it before calling the tactic, you wouldn't need the discharger
Heather Macbeth (Aug 02 2022 at 23:37):
Here are a couple of examples of what it would need to do: one adapted from the discussion on #15701 I referenced, one adapted from Moritz' situation.
import algebra.group_with_zero.power
import algebra.big_operators.order
import data.fin.interval
import tactic.positivity
open_locale big_operators
example : 0 < ∏ (x : fin 10) in finset.Ioi 2, (x:ℕ) ^ 3 :=
begin
apply finset.prod_pos,
intros i hi,
simp [finset.mem_range, fin.lt_iff_coe_lt_coe] at hi,
positivity,
end
example {α : Type*} [linear_ordered_field α] {x ε : α} (hx : 0 < 2 * (1 + |x|))
(ε_pos : (λ (ε : α), 0 < ε) ε) :
0 < ε / (2 * (1 + |x|)) :=
begin
simp at ε_pos,
positivity,
end
Mario Carneiro (Aug 02 2022 at 23:38):
but this finsum example requires doing the simplification on a hypothesis that hasn't been introduced yet before calling positivity
Mario Carneiro (Aug 02 2022 at 23:39):
I'm imagining that you would use simp [finset.mem_range, fin.lt_iff_coe_lt_coe] at * as the discharger here
Mario Carneiro (Aug 02 2022 at 23:40):
and it would be called whenever something new is added to the context (so once at the start and once after the finsum theorem application)
Heather Macbeth (Aug 02 2022 at 23:47):
That's different, then, from what I was imagining with having linarith as a "discharger": for that one (which I think is more commonly useful?) you would add a linarith call at this line:
https://github.com/leanprover-community/mathlib/blob/777082da584ee55c55d6f2da1ec428004d9e0999/src/tactic/positivity.lean#L145
Mario Carneiro (Aug 03 2022 at 00:00):
yeah I noticed that as well. linarith wants to operate on the goal, so it would make sense to run that on every subterm
Mario Carneiro (Aug 03 2022 at 00:01):
but gosh that sounds expensive
Heather Macbeth (Aug 03 2022 at 00:03):
What might make sense is to have a self-replacing tactic, positivity!, which tries running linarith on every subterm where it gets stuck, and then reports "Try this:
have : *** := by linarith,
have : *** := by linarith,
positivity,
Mario Carneiro (Aug 03 2022 at 00:04):
also, you could do like linarith itself and have positivity [a, b, c] be shorthand for have := a, have := b, have := c, positivity
Mario Carneiro (Aug 03 2022 at 00:05):
eh actually that's awkward since it would have to be positivity [show ***, by linarith, ...]
Heather Macbeth (Aug 03 2022 at 00:07):
mono actually uses the same syntax for a different purpose: you use it to ask for things, and they are then left as goals later on.
Heather Macbeth (Aug 03 2022 at 00:08):
So the mono-inspired syntax for
have : a < b := by linarith,
have : c < d := by linarith,
positivity
would be
positivity with [a < b, c < d]; linarith
where linarith is proving a < b and c < d.
Heather Macbeth (Aug 03 2022 at 00:09):
(Although I'm not sure I like this mono syntax, precisely because it conflicts with the linarith syntax!)
Mario Carneiro (Aug 03 2022 at 00:14):
yeah, I would write that as positivity [(_ : a < b), (_ : c < d)] instead
Heather Macbeth (Aug 03 2022 at 00:17):
At that point, it's not much less wordy than
have : a < b := _, have : c < d := _, positivity
Heather Macbeth (Aug 03 2022 at 00:20):
Which is to say: I rarely use linarith [***]; in practice I just do have := ***, linarith, which lets me check that the term I'm constructing is well-formed. I expect that in the same way I would rarely use the syntax positivity [***] if it were implemented.
Heather Macbeth (Aug 03 2022 at 00:20):
But, other people's workflow might be different from mine!
Mario Carneiro (Aug 03 2022 at 00:40):
does that actually work? The goals probably come out in the wrong order that way
Heather Macbeth (Aug 03 2022 at 00:43):
To make this more concrete: We're talking about more ergonomic syntax for
import tactic.positivity
import tactic.linarith
example {a b c : ℚ} (h1 : 0 < 2 * a) (h2 : 0 < b) (h3 : b < c) :
0 < a * b * c :=
begin
have : 0 < a := by linarith,
have : 0 < c := by linarith,
positivity
end
Right?
Mario Carneiro (Aug 03 2022 at 00:46):
I mean with the underscores
Heather Macbeth (Aug 03 2022 at 00:46):
This does work (although I don't necessarily think it's an improvement):
have : 0 < a := _,
have : 0 < c := _,
positivity,
all_goals { linarith }
Mario Carneiro (Aug 03 2022 at 00:46):
I would expect that positivity [(_ : a < b), (_ : c < d)] would close the main goal and yield the subgoals a < b and c < d, in that order
Heather Macbeth (Aug 03 2022 at 00:50):
This seems like a reasonable behaviour to me, but maybe we should wait to hear from people who expect they would use that syntax variant if it were implemented.
Moritz Doll (Aug 03 2022 at 18:39):
Heather Macbeth said:
Moritz Doll Thanks for the idea! finset.prod_nonneg should be an easy addition, similar to docs#tactic.positivity_sqrt or docs#tactic.positivity_mul. Do you want me to add it, or would you like to try yourself?
I am very tempted to look into tactic-writing more seriously, but I think I have to be responsible and not overload myself with Lean projects. If you don't get around doing it, then I might look into it in a few weeks.
Moritz Doll (Aug 06 2022 at 02:51):
What is the reason that positivity implements these meta definitions instead of using attribute flags the way continuity or ext do?
Mario Carneiro (Aug 06 2022 at 02:53):
because it's not just throwing a list of lemmas at the goal, it has custom methods for how to combine < or <= subgoals
Mario Carneiro (Aug 06 2022 at 02:54):
in that respect it's a bit more architecturally similar to norm_num, which also uses custom tactics in its extension point
Kevin Buzzard (Aug 06 2022 at 09:25):
Note that every new tactic or extension of tactics written in Lean 3 makes the port to lean 4 harder.
Moritz Doll (Aug 06 2022 at 09:53):
To be honest, I think there should be a deadline for new metaprogramming in mathlib, and it should probably have passed a few weeks ago.
Mario Carneiro (Aug 06 2022 at 09:54):
That's silly. We can port tactics 20 times faster than we can come up with interesting new ones
Mario Carneiro (Aug 06 2022 at 09:55):
I'd rather not stand in the way of people with good ideas
Patrick Massot (Aug 06 2022 at 09:55):
What happened to all the tactics that were ported in Providence?
Mario Carneiro (Aug 06 2022 at 09:56):
about 70% of them got merged and the other 30% still need review or have outstanding issues
Mario Carneiro (Aug 06 2022 at 09:56):
as always, reviews are great
Mario Carneiro (Aug 06 2022 at 09:57):
there haven't been too many submitted since then, though
Patrick Massot (Aug 06 2022 at 09:58):
Maybe it would be nice to update the list of targets. I guess there are no remaining easy ones, but you could flag the easiest medium ones.
Kevin Buzzard (Aug 06 2022 at 10:21):
Maybe medium is the new easy? You can rebrand!
Kevin Buzzard (Aug 06 2022 at 10:23):
Re: porting tactics. I definitely trust Mario's judgement on this over my own, so I am happy to retract my original claim (I guess it's literally true, but he's suggesting it's not true enough to be relevant).
Moritz Doll (Sep 11 2022 at 04:37):
Some more extensions of positivity that I would like to see:
(a) has_abs.abs: should be very similar to the norm. I don't have any applications for that at the moment, so I would leave that for someone else.
(b) rpow: this I might actually try to do myself, but if someone else wants to do it I would be really happy, too.
Yaël Dillies (Sep 11 2022 at 10:47):
(a) docs#tactic.positivity_abs
(b) I had already started an extension for the missing exponentiations, so I will finish it
Eric Wieser (Sep 11 2022 at 12:22):
Mario Carneiro said:
because it's not just throwing a list of lemmas at the goal, it has custom methods for how to combine < or <= subgoals
Is the custom method ever more than "pick between tagged lemmas that have arguments of the right type" could achieve?
Yaël Dillies (Sep 11 2022 at 12:44):
That sure would make it much more user-friendly. The core tactic does very much look like norm_num, but the extensions are no more than lemma tagging in essence.
Yaël Dillies (Sep 11 2022 at 12:46):
The only advantage I can see with this custom approach is that we get to decide in what order lemmas are tried and when to weaken 0 < a to 0 ≤ a (this to avoid branching). Not sure whether that noticeably improves performance.
Yaël Dillies (Sep 11 2022 at 12:53):
#16462 for real powers
Jireh Loreaux (Sep 11 2022 at 13:54):
note: some things don't work for strict positivity on ℝ≥0 because we don't have a linear_ordered_semifield instance, so this should be added to some generic TODO list.
Yaël Dillies (Sep 11 2022 at 13:56):
docs#nnreal.canonically_linear_ordered_semifield :grinning:
Yaël Dillies (Sep 11 2022 at 13:56):
I did some homework over the holidays.
Jireh Loreaux (Sep 11 2022 at 14:08):
Nice, I had just checked this a week or so ago and filed it away in the back of my head as something to do! Is this a new feature akin to Yaël search? Yaël mindread?
Yaël Dillies (Sep 11 2022 at 14:10):
That actually dates from #16307, 10 days ago.
Yaël Dillies (Sep 11 2022 at 14:11):
Yaël comes with many functions and capabilities, but most importantly can be extended using a comprehensive metaprogramming framework.
Mario Carneiro (Sep 12 2022 at 01:33):
Eric Wieser said:
Mario Carneiro said:
because it's not just throwing a list of lemmas at the goal, it has custom methods for how to combine < or <= subgoals
Is the custom method ever more than "pick between tagged lemmas that have arguments of the right type" could achieve?
Yes, in particular they will generally encode a particular order of evaluation which minimizes the need to search for proofs that are either useless or will be subsumed by a stronger result. Maybe more important than the performance is that it allows the user to predict when the tactic will fail and what to do if it does fail, which I consider a very important property of tactics, one that is lost in the more kitchen-sink-style tactics.
Yaël Dillies (Sep 12 2022 at 15:45):
Here's a PR that aims at slightly improving the capabilities of positivity: #16483
Yaël Dillies (Sep 12 2022 at 15:48):
On its own, it is a light postprocessing step to handle goals of the form a ≠ 0 and 0 ≠ a by first trying to prove 0 < a. But my greater hope is to fully handle nonzeroness assumptions because
- They are often side goals of order lemmas
- They fit in the same framework as nonnegativity and positivity assumptions
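For a first-order idea of the reduction (my own illustration, not code from the PR), the postprocessing corresponds to lemmas like ne_of_gt and ne_of_lt:
import data.real.basic

example (a : ℝ) (ha : 0 < a) : a ≠ 0 := ne_of_gt ha
example (a : ℝ) (ha : 0 < a) : 0 ≠ a := ne_of_lt ha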
Yaël Dillies (Sep 16 2022 at 14:20):
And here's the full PR: #16529 (+431, -149)
Yaël Dillies (Sep 16 2022 at 14:20):
Now, all those tests work out of the box.
Yaël Dillies (Sep 24 2022 at 17:41):
Thanks again for this tactic. Now that my positivity extensions went through, I could update SRL to use it, and it makes proofs much more readable.
Heather Macbeth (Sep 24 2022 at 19:20):
Nice! These are exactly the kinds of use cases I had in mind.
Heather Macbeth (Sep 24 2022 at 19:30):
@Yaël Dillies Would you be interested in trying out my work-in-progress tactic ineq_congr on parts of this argument? It should deal with some of the mul_le_mul_of_nonneg_left and similar.
Heather Macbeth (Sep 24 2022 at 19:38):
This is supposed to be a tactic which turns a goal state
a b c x y : ℝ
h : 3 < a
⊢ c + (a + b ^ 2) * x ≤ c + (a + b ^ 2) * y
to the goal state
a b c x y : ℝ
h : 3 < a
⊢ x ≤ y
(so it's kind of a cousin of tactic#mono)
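Done by hand, that reduction is roughly an application of add_le_add_left and mul_le_mul_of_nonneg_left, with positivity discharging the nonnegativity side condition; an untested sketch (not the tactic's actual output):
import data.real.basic
import tactic.positivity

example (a b c x y : ℝ) (h : 3 < a) (hxy : x ≤ y) :
  c + (a + b ^ 2) * x ≤ c + (a + b ^ 2) * y :=
begin
  -- 0 ≤ a + b ^ 2, using 3 < a and the even power
  have hab : 0 ≤ a + b ^ 2 := by positivity,
  exact add_le_add_left (mul_le_mul_of_nonneg_left hxy hab) c,
end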
Yaël Dillies (Sep 24 2022 at 20:09):
Absolutely! I could feel that my uses of positivity were what you expected, so I was hoping your fancy new tactic would help further.
Heather Macbeth (Sep 24 2022 at 20:10):
Great. Looking forward to your experiments!
Yaël Dillies (Sep 24 2022 at 20:11):
My hope a year ago was to get SRL merged rather quickly (what a dream...). Now I see it's a perfect test case for automation, especially those two ugly files that pile up calculations.
Yaël Dillies (Sep 24 2022 at 20:12):
So I am now seeking to shorten the proof significantly by writing more automation (hence the many positivity extensions I PRed recently).
Heather Macbeth (Sep 24 2022 at 20:12):
What does the Isabelle version look like? Do they have some of this automation already?
Yaël Dillies (Sep 24 2022 at 21:07):
It's an unfair comparison because the version of SRL they formalised only needs a watered down version of those calculations.
Yaël Dillies (Sep 24 2022 at 21:17):
This Isabelle lemma roughly corresponds to our two files chunk and increment.
Yaël Dillies (Sep 24 2022 at 21:28):
By the way, is ineq_congr meant to be "fancy"? That is, will it leave a possibly changing goal and thus should only be followed by other "fancy" tactics like simp and linarith?
Heather Macbeth (Sep 24 2022 at 21:36):
I don't follow, sorry, what do you mean by "a possibly changing goal"?
Jireh Loreaux (Sep 25 2022 at 00:20):
I think Yaël means that "fancy" tactics have output which might change with different versions of mathlib (like simp with a changing simp set), and we generally recommend that such tactics should only be followed by other fancy tactics. Whereas refine would be a "simple" tactic, and a tactic sequence like simp, refine would be prone to breakage.
Yaël Dillies (Sep 25 2022 at 15:23):
I don't know what instantiating mvars is, sorry!
Mario Carneiro (Sep 25 2022 at 15:39):
Yes, that's almost certainly an instantiated metavariable issue.
Metavariables are the ?m_1 terms that show up inside expressions. They represent parts of the proof that have not been completed yet. When you do something to determine the value of a metavariable, it is "assigned", but this doesn't go looking everywhere for occurrences of that metavariable to replace them with the resulting value, both because it would be too expensive and because it's not even possible in a functional language like Lean: the expr you have is immutable even in the face of changes to the assignments. Instead, these assignments are just stored in some state on the side (it's in the tactic_state), and the pretty printer will print the value of an assigned metavariable instead of ?m_1 if it sees one, so it provides the appearance of having replaced the value everywhere.
There is a function instantiate_mvars which, given an expression, will replace all assigned metavars with their assignments. The recurring bug that you find in tactics that forget to call this function is that if you pattern match on an expression hoping that it is of the form x = y, but it is actually of the form ?m_1 where ?m_1 := x = y, then the match will fail and the user will get a confusing error message saying it's not an equality when it clearly is. The solution is to call instantiate_mvars before the match.
You might think we could try to preserve the property that all goals have instantiated mvars, but it's too computationally expensive to do so all the time (it requires a traversal of the expression), so instead we just do it at whnf and when matching expressions.
Damiano Testa (Sep 25 2022 at 15:44):
Mario's answer clarified some doubts that I had. If you want to see how Floris's suggestions fixed a similar bug that I had, see here. In particular, if you match on a target, you probably want to instantiate_mvars first (or, if I understand correctly what Mario says, maybe simply whnf it first).
Damiano Testa (Sep 25 2022 at 15:55):
If I were to guess, the issue is
Try adding t ← instantiate_mvars t, in between those two lines. Maybe this fixes your issue.
Mario Carneiro (Sep 25 2022 at 15:57):
Yes, that looks like the right place. (Or the golfed version t <- target >>= instantiate_mvars,)
Damiano Testa (Sep 25 2022 at 15:57):
Edit: this is incorrect, see Mario's comment below.
(You might get away with simply t ← whnf t, if I understood correctly what Mario said.)
Mario Carneiro (Sep 25 2022 at 15:58):
no, whnf wouldn't be the right thing to do here since it would unfold things like 0 <= t to nat.le 0 t, which would make the match fail
Heather Macbeth (Sep 25 2022 at 16:16):
@Yaël Dillies Is the word "possible" missing from this line?
Yaël Dillies (Sep 25 2022 at 16:17):
That's "possible", indeed! :grinning:
Yaël Dillies (Sep 25 2022 at 16:19):
Mario Carneiro said:
Yes, that looks like the right place. (Or the golfed version t <- target >>= instantiate_mvars,)
Does that mean that we basically never use target but rather always target >>= instantiate_mvars? Does this exist as a def, then?
Mario Carneiro (Sep 25 2022 at 16:20):
no, it's only if you want to do matching on the target that you would do this
Mario Carneiro (Sep 25 2022 at 16:20):
and it's not just the target that you might want to match, the same issue can happen with the types of hypotheses in the local context
Yaël Dillies (Sep 25 2022 at 16:21):
Hmm, that happens quite a lot in the core positivity tactics, right?
Mario Carneiro (Sep 25 2022 at 16:21):
yes, but you only have to do it once since instantiate_mvars recursively traverses the term
Yaël Dillies (Sep 25 2022 at 16:22):
If you call it on the target, it doesn't traverse the hypotheses, though?
Mario Carneiro (Sep 25 2022 at 16:23):
instantiate_mvars just works on an expr
Mario Carneiro (Sep 25 2022 at 16:23):
it doesn't change the goal state
Yaël Dillies (Sep 25 2022 at 16:25):
I thought as much
Damiano Testa (Sep 26 2022 at 06:54):
I opened a PR to fix this issue: #16647.
Damiano Testa (Sep 26 2022 at 06:55):
With the current master version of positivity, this happens:
import tactic.positivity
-- works
example : 0 ≤ 0 :=
by refine le_trans (by positivity) le_rfl
-- fails
example : 0 ≤ 0 :=
by apply le_trans (by positivity) le_rfl
Both work after the PR. EDIT: Actually, in this compacted form, the apply still does not work.
@Yaël Dillies: I hope that this fixes your case as well! Let me know if it does not.
Damiano Testa (Sep 26 2022 at 07:05):
Ok, this is what actually happens after the current PR:
-- works after the PR, does not work before the PR
example : 0 ≤ 0 :=
begin
apply le_trans _ le_rfl,
positivity,
end
-- fails, before and after the PR
example : 0 ≤ 0 :=
by apply le_trans (by positivity) le_rfl
Damiano Testa (Sep 26 2022 at 07:09):
Is there a way to make the last example work, or is this precisely a place where the difference between apply and refine should show up?
Yaël Dillies (Sep 26 2022 at 07:40):
Can you try whether rw does any better?
Damiano Testa (Sep 26 2022 at 07:42):
Do you mean rw (by <something involving positivity>)?
Damiano Testa (Sep 26 2022 at 07:45):
In fact, if you have an example of something that currently does not work, I am happy to try it with the fix in the PR.
Yaël Dillies (Sep 26 2022 at 08:00):
No, I mean use rw instead of positivity in the example that still fails, to see whether this is an inherent limitation due to apply or whether positivity is still botched.
Yaël Dillies (Sep 26 2022 at 08:01):
Or any other tactic that we know to not have problems with metavariables
Damiano Testa (Sep 26 2022 at 08:04):
Ok, I understand now what you had in mind.
This is the response:
-- fails
example : 0 ≤ 0 :=
by apply le_trans (by rw ← add_zero 0) le_rfl
-- works
example : 0 ≤ 0 :=
begin
apply le_trans _ le_rfl,
rw ← add_zero 0,
end
So I am guessing that the inlined version with apply ... (by positivity) ... is not meant to work.
Yaël Dillies (Oct 05 2022 at 15:59):
Just a heads up that positivity now reasons on assumptions of type a ≠ 0 as well, thanks to #16529. This is very useful when a function is positive on both positive and negative inputs, like abs.
Last updated: Dec 20 2023 at 11:08 UTC