Zulip Chat Archive
Stream: general
Topic: -T50000 challenge
Kevin Buzzard (May 16 2019 at 23:17):
I just tried compiling mathlib with the timeout flag set to -T50000 and it doesn't compile -- there are some deterministic timeouts.
Why do this? Well, when making the perfectoid project there were times when we were really battling against timeouts; for example, one proof there should have been a one-line term-mode proof, but Lean would time out at -T100000 (the default timeout value), so we had to tread really carefully to make it compile. At the time I thought that we were pushing Lean to the edge -- but since Mario responded in the timeout thread and showed us some more tricks, I realise now that actually we were simply not getting Lean to unify efficiently. @Mario Carneiro I don't know whether a judicious : _ fixes that proof, but definitely a judicious bunch of lets makes the proof compile instantly -- I've made this happen myself.
Compiling mathlib with -T50000 might show up some more instances where people have written code which takes several seconds to compile instead of several tenths of a second because of similar issues. The first issue I ran into is
mathlib-community-master/src/category_theory/equivalence.lean:124:33: error: (deterministic) timeout
mathlib-community-master/src/category_theory/equivalence.lean:124:30: error: (deterministic) timeout
mathlib-community-master/src/category_theory/equivalence.lean:124:30: error: (deterministic) timeout
mathlib-community-master/src/category_theory/equivalence.lean:123:2: error: (deterministic) timeout
mathlib-community-master/src/category_theory/equivalence.lean:122:23: error: (deterministic) timeout
with code
def adjointify_η : 𝟭 C ≅ F ⋙ G :=
η ≪≫ iso_whisker_left F ((left_unitor G).symm ≪≫ iso_whisker_right ε.symm G)
  ≪≫ iso_whisker_right η.symm (F ⋙ G)
Is there some way of making Lean unify this more quickly?
NB the easiest way to play along at home is to open mathlib in VS Code, go to File -> Preferences -> Settings, search for "Lean", change the time limit setting from 100000 to 50000, and then add and remove a character in equivalence.lean to make it recompile.
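For reference, the same check can be run from a shell in a mathlib checkout, using the flag syntax that appears later in this thread (the file path is just an example):
lean --make -T50000 src/category_theory/equivalence.lean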
Floris van Doorn (May 16 2019 at 23:32):
Yeah, I can imagine that that definition is a stress test on Lean. So for functors the associator (F ⋙ G) ⋙ H ≅ F ⋙ (G ⋙ H) is true by reflexivity, and the unitors 1 ⋙ F ≅ F are true by reflexivity as long as F is of the form ⟨_, _, _, _⟩ (usually this is the case if F is anything other than a variable). In the category theory library we can use/abuse this by not explicitly writing down these associators and unitors, but at the cost of making Lean do (a lot of) extra work. We maybe should decide that we want to avoid applying these rules silently. However, this would change the definitions of the natural isomorphisms we are defining, which might make future proofs more finicky. See also the discussion below this comment:
https://github.com/leanprover-community/mathlib/pull/1018#discussion_r283978240
In this particular case I think there are two silent associators and one silent left unitor (and 1 non-silent one). We should probably include them explicitly. @Scott Morrison
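A minimal illustration of the reflexivity point above (a sketch in Lean 3 mathlib syntax; the exact import path has moved around between mathlib versions):
import category_theory.functor_category

open category_theory

universes v u

variables {C : Type u} [category.{v} C] {D : Type u} [category.{v} D]
variables {E : Type u} [category.{v} E] {B : Type u} [category.{v} B]

-- the associator for functor composition holds definitionally, so the kernel
-- accepts rfl, at the cost of unfolding both sides to raw structures
example (F : C ⥤ D) (G : D ⥤ E) (H : E ⥤ B) :
  (F ⋙ G) ⋙ H = F ⋙ (G ⋙ H) := rfl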
Scott Morrison (May 16 2019 at 23:34):
I agree this is a problem, and perhaps it indicates we should go back to your original construction for adjointify, which explicitly provided the components, at the expense of needing to check naturality.
Scott Morrison (May 16 2019 at 23:34):
Actually, what we really should do is both.
Scott Morrison (May 16 2019 at 23:35):
Define adjointify so that it is obviously natural, but insert unitors and associators so that elaboration is fast,
Scott Morrison (May 16 2019 at 23:35):
and then give a simp lemma that says the components are what you want them to be.
Kevin Buzzard (May 16 2019 at 23:36):
Out of interest, when we compiled the perfectoid project at -T50000 there were problems in four places. Two were fixed with Mario's : _ trick, one was a use which we changed to existsi, and one was an embedding f which had a potential overload and was fixed by changing it to _root_.embedding f. In all four cases elaboration time dropped dramatically from "takes several seconds to compile" to "compiles instantly".
Kevin Buzzard (May 16 2019 at 23:37):
But is this associator thing yet another kind of problem?
Floris van Doorn (May 16 2019 at 23:37):
Actually, what we really should do is both.
I wanted to also suggest that. That is basically what I did in the Lean 2 HoTT library:
https://github.com/leanprover/lean2/blob/master/hott/algebra/category/functor/equivalence.hlean#L82-L102
I defined the new η first on components, and to show naturality I showed that it was equal to the map of a natural transformation, defined fully explicitly.
Kevin Buzzard (May 16 2019 at 23:39):
What I feel I learnt from this experiment was that long elaboration times are not always inevitable -- sometimes persuading Lean to elaborate the same thing slightly differently can have a massively beneficial effect and perhaps even teach us something.
Floris van Doorn (May 16 2019 at 23:43):
But is this associator thing yet another kind of problem?
Yes, but we're basically abusing the Lean type checker. We have an expression F ⋙ ((G ⋙ F) ⋙ G) which we need to match with (F ⋙ G) ⋙ (F ⋙ G). We should really rewrite one expression to the other (using natural isomorphisms), but we prove it "by reflexivity". This is true, but only if Lean unfolds all occurrences of ⋙ (and likely gets a huge term in the process).
It's roughly similar to proving 1000 + 1000 = 2000 by refl instead of norm_num.
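Concretely, both of the following go through, but the first forces raw reduction of the numerals while the second builds a small proof term (a hedged illustration; the import is one way to get norm_num):
import tactic.norm_num

example : 1000 + 1000 = 2000 := rfl          -- true, by brute-force unfolding
example : 1000 + 1000 = 2000 := by norm_num  -- builds an arithmetic certificate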
Kevin Buzzard (May 17 2019 at 07:04):
Oh that's a very clear explanation Floris, thanks!
Kevin Buzzard (May 17 2019 at 07:05):
The next one is this, from src/data/complex/exponential.lean:456:0:
lemma cos_add : cos (x + y) = cos x * cos y - sin x * sin y :=
begin
  rw [← domain.mul_left_inj (@two_ne_zero' ℂ _ _ _),
      ← domain.mul_left_inj (@two_ne_zero' ℂ _ _ _)],
  simp only [mul_add, add_mul, mul_sub, sub_mul, exp_add, div_mul_div,
    div_add_div_same, mul_assoc, (div_div_eq_div_mul _ _ _).symm,
    mul_div_cancel' _ (@two_ne_zero' ℂ _ _ _), sin, cos],
  apply complex.ext; simp [mul_add, add_mul, exp_add]; ring
end
Times out at -T50000.
Mario Carneiro (May 17 2019 at 08:17):
This one is a bit perplexing. Consider the following proof of sin_add:
lemma two_sin : 2 * sin x = (exp (-x * I) - exp (x * I)) * I :=
mul_div_cancel' _ two_ne_zero'

lemma two_cos : 2 * cos x = exp (x * I) + exp (-x * I) :=
mul_div_cancel' _ two_ne_zero'

lemma sin_add : sin (x + y) = sin x * cos y + cos x * sin y :=
begin
  rw [← domain.mul_left_inj (@two_ne_zero' ℂ _ _ _), two_sin, neg_add, add_mul,
      exp_add, add_mul, exp_add, eq_comm, mul_add, ← mul_assoc, two_sin,
      mul_left_comm, two_sin, ← domain.mul_left_inj (@two_ne_zero' ℂ _ _ _),
      mul_add, mul_left_comm, two_cos, ← mul_assoc, two_cos],
  ring
end
The rw part of the proof is instantaneous (less than 150ms if you replace ring by sorry). The ring tactic is also fast (a fraction of a second). The kernel is also quickly able to check the proof. Yet the elaborator spends a whopping 30s on... something... coming from the ring tactic, despite the fact that ring produces fully elaborated proof terms.
Mario Carneiro (May 17 2019 at 08:23):
Bizarrely, this takes a long time...
lemma sin_add : sin (x + y) = sin x * cos y + cos x * sin y :=
begin
  rw [← domain.mul_left_inj (@two_ne_zero' ℂ _ _ _), two_sin, neg_add, add_mul,
      exp_add, add_mul, exp_add, eq_comm, mul_add, ← mul_assoc, two_sin,
      mul_left_comm, two_sin, ← domain.mul_left_inj (@two_ne_zero' ℂ _ _ _),
      mul_add, mul_left_comm, two_cos, ← mul_assoc, two_cos],
  generalize : exp (x * I) = a,
  generalize : exp (-x * I) = b,
  generalize : exp (y * I) = c,
  generalize : exp (-y * I) = d,
  ring
end
because generalize takes 6s
Kevin Buzzard (May 17 2019 at 08:35):
I'm glad that this slightly frivolous challenge is throwing up interesting questions!
Rob Lewis (May 17 2019 at 11:32):
The rw part of the proof is instantaneous (less than 150ms if you replace ring by sorry). The ring tactic is also fast (a fraction of a second). The kernel is also quickly able to check the proof. Yet the elaborator spends a whopping 30s on... something... coming from the ring tactic, despite the fact that ring produces fully elaborated proof terms.
This is really strange. There's something going on in the is_def_eq call in add_atom. The profiler says ring is spending 6 seconds there. But each call is a fraction of a millisecond and there are only a couple dozen calls.
Rob Lewis (May 17 2019 at 11:33):
I used this to profile:
meta def time_defeq (e e' : expr) : tactic unit :=
do pe ← pp e, pe' ← pp e',
   timeit ("defeq\n" ++ to_string pe ++ "\n" ++ to_string pe' ++ "\n")
     (sleep 1 >> is_def_eq e e')
ring spends 6500 ms in time_defeq, 18 in pp, and 1 in timeit.
Rob Lewis (May 17 2019 at 11:35):
Hmm, but this profiling is wrong: increasing the sleep time doesn't change the reported time of timeit?
Keeley Hoek (May 17 2019 at 11:40):
ah, timeit doesn't time tactics
Keeley Hoek (May 17 2019 at 11:40):
I think you're timing how long it takes to build the monad or something
Keeley Hoek (May 17 2019 at 11:40):
(I think)
Keeley Hoek (May 17 2019 at 11:41):
Isn't there a tactic version?
Rob Lewis (May 17 2019 at 11:41):
You're right, it's timetac, not timeit. Getting my timers mixed up.
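For completeness, here is the profiling helper above with timetac in place of timeit (a sketch, assuming open tactic as in the original snippet; check the exact timetac signature in your mathlib version):
meta def time_defeq (e e' : expr) : tactic unit :=
do pe ← pp e, pe' ← pp e',
   -- timetac actually runs and times the tactic, unlike timeit, which only
   -- times the construction of the tactic value
   timetac ("defeq\n" ++ to_string pe ++ "\n" ++ to_string pe') (is_def_eq e e')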
Rob Lewis (May 17 2019 at 11:44):
Would it be safe to set complex.exp to be irreducible at the end of complex/exponential.lean?
Rob Lewis (May 17 2019 at 11:44):
This is the culprit. is_def_eq is unfolding that and getting lost.
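The patch under discussion is a one-line attribute (standard Lean 3 syntax; whether it is safe to apply globally is exactly the question above):
attribute [irreducible] complex.exp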
Mario Carneiro (May 17 2019 at 11:44):
yeah, that makes sense
Mario Carneiro (May 17 2019 at 11:45):
We should probably back off on is_def_eq as well, though
Mario Carneiro (May 17 2019 at 11:45):
use some reducibility setting
Rob Lewis (May 17 2019 at 11:48):
I'm not sure if syntactic eq is enough there; it might be. But I guess a weaker is_def_eq probably is.
Mario Carneiro (May 17 2019 at 11:57):
it is enough for this goal, but of course not for some goals
Rob Lewis (May 17 2019 at 11:57):
Hmm, no, this goes back to the discussion we had a few months ago about ring and linarith handling defeq atoms. Weakening that check means linarith fails to prove id x ≥ x.
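For concreteness, the goal at stake looks like this (a sketch; the imports are one possible way to get ℝ and linarith):
import data.real.basic tactic.linarith

-- this only succeeds because linarith compares atoms up to definitional
-- equality, letting it identify `id x` with `x`
example (x : ℝ) : id x ≥ x := by linarith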
Mario Carneiro (May 17 2019 at 11:58):
but I definitely want to avoid a cost for unfolding defeq when it's not even necessary. assumption has the same problem
Mario Carneiro (May 17 2019 at 11:59):
it's triggering because it wants to find out if exp x =?= exp (-x) and other variations
Rob Lewis (May 17 2019 at 12:00):
Instead of using is_def_eq less, maybe we should be marking more things irreducible.
Mario Carneiro (May 17 2019 at 12:00):
This is what reducibility settings are for
Mario Carneiro (May 17 2019 at 12:00):
like how rw tries refl, but not really hard, in case it's not true
Mario Carneiro (May 17 2019 at 12:02):
I guess that's not enough to get id x >= x though
Mario Carneiro (May 17 2019 at 12:03):
because id is semireducible
Mario Carneiro (May 17 2019 at 12:04):
how about giving linarith and ring a reducibility config option? I think the default should be to unfold reducible
Rob Lewis (May 17 2019 at 12:05):
Exactly. Same thing in the example here: https://leanprover.zulipchat.com/#narrow/stream/144837-PR-reviews/topic/.23878.20.20remove.20dependencies . Matching up to defeq lets ring and linarith unify two different coercions, which is sensible to do. But I'm pretty sure those coercions are semireducible.
Rob Lewis (May 17 2019 at 12:05):
That would work.
Rob Lewis (May 17 2019 at 12:06):
Linarith already has a bunch of config options. I worry this one would be pretty obscure.
Rob Lewis (May 17 2019 at 12:07):
On the other hand, they were both using syntactic eq for a while with no issues. It wouldn't be a big loss to go half a step back and add a config option.
Rob Lewis (May 17 2019 at 12:08):
Regardless, I think we should also be a bit more aggressive about marking things irreducible.
Kevin Buzzard (May 17 2019 at 15:46):
So this one has actually now been fixed by the latest PR! Nice!
The next one is, I think, one of @Reid Barton's:
instance stone_cech.t2_space : t2_space (stone_cech α) :=
begin
  rw t2_iff_ultrafilter,
  rintros g ⟨x⟩ ⟨y⟩ u gx gy,
  apply quotient.sound,
  intros γ tγ h₁ h₂ f hf,
  resetI,
  change stone_cech_extend hf ⟦x⟧ = stone_cech_extend hf ⟦y⟧,
  refine tendsto_nhds_unique u.1 _ _,
  { exact stone_cech_extend hf },
  all_goals
  { refine le_trans (map_mono _) ((continuous_stone_cech_extend hf).tendsto _),
    assumption }
end
Times out at -T50000. It's in src/topology/stone_cech.lean, line 253.
Reid Barton (May 17 2019 at 15:47):
I think Patrick sent me a faster version of this proof at some point, let me check whether it got merged...
Reid Barton (May 17 2019 at 15:50):
Indeed this is still the old version
Kevin Buzzard (May 17 2019 at 15:53):
Did Patrick figure out why it was slow?
Kevin Buzzard (May 17 2019 at 15:55):
I think it's kind of important that we understand the various reasons that code ends up running slowly, because if we're not careful then slow code can turn into code which doesn't compile.
Reid Barton (May 17 2019 at 15:55):
I don't know.
Rob Lewis (May 17 2019 at 15:57):
The problem is solving the very last goal g ≤ nhds ⟦y⟧, the second branch of the all_goals. It succeeds with the assumption gy : g ≤ nhds (quot.mk setoid.r y), but first it tries gx : g ≤ nhds (quot.mk setoid.r x) and takes ages to fail.
Rob Lewis (May 17 2019 at 15:59):
Did we not have a variant of assumption that takes a reducibility setting?
Rob Lewis (May 17 2019 at 16:04):
Oh, this is a little more subtle: the goal and gy don't unify with only reducible.
Kevin Buzzard (May 17 2019 at 16:05):
The other issue is that when I'm writing code, if something turns out super-slow then it makes it very painful to continue writing after that point. One thing I've learnt from this exercise is that sometimes, if you understand what's going on, you can actually fix things up so that the code compiles quickly and the pain goes away.
Rob Lewis (May 17 2019 at 16:06):
The easy patch is
instance stone_cech.t2_space : t2_space (stone_cech α) :=
begin
  rw t2_iff_ultrafilter,
  rintros g ⟨x⟩ ⟨y⟩ u gx gy,
  apply quotient.sound,
  intros γ tγ h₁ h₂ f hf,
  resetI,
  change stone_cech_extend hf ⟦x⟧ = stone_cech_extend hf ⟦y⟧,
  refine tendsto_nhds_unique u.1 _ _,
  { exact stone_cech_extend hf },
  all_goals
  { refine le_trans (map_mono _) ((continuous_stone_cech_extend hf).tendsto _) },
  { exact gx },
  { exact gy }
end
Rob Lewis (May 17 2019 at 16:06):
I'm not sure what can be made irreducible to speed up the failure to unify gx.
Rob Lewis (May 17 2019 at 16:08):
attribute [irreducible] nhds works, but I suspect that will break things.
Reid Barton (May 17 2019 at 16:22):
I made a PR with Patrick's proof: https://github.com/leanprover-community/mathlib/pull/1042
Rob Lewis (May 17 2019 at 16:33):
It looks like making nhds irreducible at the end of topology/order.lean is safe, and this salvages the original proof.
Rob Lewis (May 17 2019 at 16:33):
Meaning, the original proof is basically instant.
Rob Lewis (May 17 2019 at 16:34):
This is probably a good abstraction barrier too. I assumed the barrier was broken in more places, but it seems like not.
Rob Lewis (May 17 2019 at 16:34):
Is Patrick's change preferable to the old one if the old one is equally fast?
Kevin Buzzard (May 17 2019 at 17:22):
The next one is in data/polynomial.lean and it's card_nth_roots.
lemma card_nth_roots {α : Type*} [integral_domain α] (n : ℕ) (a : α) :
  (nth_roots n a).card ≤ n :=
by letI := classical.prop_decidable; exact
if hn : n = 0
then if h : (X : polynomial α) ^ n - C a = 0
  then by simp only [nat.zero_le, nth_roots, roots, h, dif_pos rfl, card_empty]
  else with_bot.coe_le_coe.1 (le_trans (card_roots h)
    (by rw [hn, pow_zero, ← @C_1 α _ _, ← is_ring_hom.map_sub (@C α _ _)];
      exact degree_C_le))
else by rw [← with_bot.coe_le_coe, ← degree_X_pow_sub_C (nat.pos_of_ne_zero hn) a];
  exact card_roots (X_pow_sub_C_ne_zero (nat.pos_of_ne_zero hn) a)
It's a whopping 1770 lines in, and given the amount that Kenny complained about timings for polynomials it's a miracle that Chris managed to make it so long without going insane ;-) Is there an official or unofficial max length for mathlib files? For some reason I thought we were unkeen on going over 1000 lines.
Kevin Buzzard (May 17 2019 at 17:25):
ha ha this lemma is true because the zero polynomial is defined to have no roots :D
Kevin Buzzard (May 17 2019 at 17:27):
How do people actually debug these things? I throw them out and then Rob or Mario says something like "eew, this specific thing is happening". I have no idea how to check what is going on at all, other than just setting -T50000 in VS Code and then randomly inserting lines of the form sorry end #exit until I find the problematic line, and even that tells me much less than the things the CS people come up with within minutes.
Rob Lewis (May 17 2019 at 17:33):
I mean, that's basically it. Sometimes certain lines look suspicious and you start looking there. assumption and convert are likely candidates for this particular kind of problem.
Rob Lewis (May 17 2019 at 17:33):
set_option profiler true can show if a particular tactic is misbehaving.
Kevin Buzzard (May 17 2019 at 17:34):
Aah, I'd forgotten about the profiler.
I am very familiar with some set_option things -- the ones I use relatively regularly, like pp.all true, or the implicit or proofs variants, or the one that turns on simp-lemma tracing -- but the ones I've only used a couple of times I forget about; there should be a list of options ordered by usefulness rather than alphabetically ;-)
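For reference, the options alluded to above (Lean 3 names; I believe the simp-lemma tracing one is trace.simplify.rewrite, but treat this list as a sketch):
set_option profiler true
set_option pp.all true
set_option pp.implicit true
set_option pp.proofs true
set_option trace.simplify.rewrite true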
Kevin Buzzard (May 17 2019 at 17:41):
Well, the #exit strategy seems to indicate that it's the rw [hn, pow_zero, ← @C_1 α _ _, ← is_ring_hom.map_sub (@C α _ _)] line
Rob Lewis (May 17 2019 at 17:42):
Yep, and in particular, the map_sub.
Kevin Buzzard (May 17 2019 at 17:47):
And the profiler informs us
polynomial.lean:1771:6: information parsing took 27.7ms
polynomial.lean:1771:6: information type checking of card_nth_roots took 0.0746ms
polynomial.lean:1771:6: information decl post-processing of card_nth_roots took 0.00477ms
and then there's a very long pause and it adds
polynomial.lean:1771:6: information elaboration of card_nth_roots took 12.8s
Kevin Buzzard (May 17 2019 at 17:47):
(and that's with sorry end)) else sorry #exit just after the rewrite)
Chris Hughes (May 17 2019 at 17:47):
I think this is another reason in favour of bundled homs.
Sebastien Gouezel (May 17 2019 at 17:48):
In the same direction, I would be happy to make continuous irreducible (as it is a Pi type, it is complicated to unify).
Mario Carneiro (May 17 2019 at 17:48):
I keep convincing myself this has already happened
Kevin Buzzard (May 17 2019 at 17:48):
11363ms 100.0% _private.3200700535.rw_goal._lambda_4
11354ms  99.9% tactic.to_expr
11354ms  99.9% tactic.interactive.to_expr'
    9ms   0.1% tactic.rewrite_target
    8ms   0.1% tactic.rewrite_core
    8ms   0.1% tactic.rewrite
    1ms   0.0% tactic.assert
if this helps
Mario Carneiro (May 17 2019 at 17:49):
@Sebastien Gouezel Due to a bug in Lean, marking continuous as irreducible won't fix the issue with apply
Sebastien Gouezel (May 17 2019 at 17:51):
Even in Lean 3.5c? :)
Mario Carneiro (May 17 2019 at 17:51):
I haven't looked into it in detail, but I think it should not be hard to fix
Kevin Buzzard (May 17 2019 at 17:53):
is_ring_hom.map_sub : ∀ {α β : Type u_1} [_inst_1 : ring α] [_inst_2 : ring β]
  (f : α → β) [_inst_3 : is_ring_hom f] {x y : α}, f (x - y) = f x - f y
Chris is saying that is_ring_hom should be bundled?
Kevin Buzzard (May 17 2019 at 17:54):
Is that one of those PR's where you break 20 files and then fix them all and then PR it and pray that it gets accepted quickly?
Kevin Buzzard (May 17 2019 at 17:54):
It should do some nice damage to the perfectoid project too ;-)
Kevin Buzzard (May 17 2019 at 18:24):
class is_linear_map (α : Type u) {β : Type v} {γ : Type w}
  [ring α] [add_comm_group β] [add_comm_group γ] [module α β] [module α γ]
  (f : β → γ) : Prop :=
(add : ∀ x y, f (x + y) = f x + f y)
(smul : ∀ (c : α) x, f (c • x) = c • f x)
That doesn't look bundled to me. What is a bundled example of a property of maps?
Johan Commelin (May 17 2019 at 18:25):
@Kevin Buzzard There is also linear_map
Johan Commelin (May 17 2019 at 18:25):
is_linear_map is kind of deprecated
Kevin Buzzard (May 17 2019 at 18:25):
structure linear_map (α : Type u) (β : Type v) (γ : Type w)
  [ring α] [add_comm_group β] [add_comm_group γ] [module α β] [module α γ] :=
(to_fun : β → γ)
(add : ∀ x y, to_fun (x + y) = to_fun x + to_fun y)
(smul : ∀ (c : α) x, to_fun (c • x) = c • to_fun x)
Aah yes -- immediately afterwards, in fact! So we have both? Will one be removed one day? Or are they both useful?
Kevin Buzzard (May 17 2019 at 18:26):
is_linear_map is kind of deprecated
I see.
Kevin Buzzard (May 17 2019 at 18:28):
Well, there are 499 instances of is_ring_hom in mathlib...
Kevin Buzzard (May 17 2019 at 18:29):
Do you think we can fix this with a regular expression?
Johan Commelin (May 17 2019 at 18:30):
There were more issues. I think Chris spent some time looking into this
Kevin Buzzard (May 17 2019 at 18:30):
More seriously, is it the sort of thing that can be done half-heartedly? Someone makes ring_hom, and then every now and then, when someone is bored, they change a few problematic is_ring_homs to ring_homs? Or doesn't life work like that?
Kevin Buzzard (May 17 2019 at 18:43):
I guess more generally, is it now believed by the community that is_ring_hom should be deprecated and replaced by a bundled ring_hom, and the only reason it hasn't been done is that it is a non-trivial thing to do which will involve changing 47 files? If so, is there a roadmap for getting it done? Can one start by just adding the definition and trying to change one file? Or does this then involve changing the 5 files it depends on, and then this breaks all the remaining 40-odd files?
Kevin Buzzard (May 17 2019 at 18:44):
Does it have to be done quickly? If there is a ring_hom branch, then is it the case that if it's left for a week then it inevitably has merge conflicts?
Kevin Buzzard (May 17 2019 at 18:45):
Or actually is the jury still out on whether it should be done at all?
Kevin Buzzard (May 17 2019 at 18:45):
Sorry to ask such basic questions; I have 0 experience maintaining large projects.
Johan Commelin (May 17 2019 at 18:46):
See also #717
Kevin Buzzard (May 17 2019 at 18:49):
Oh, so field and ring and group homs all go hand in hand. I'm sorry, I never look at the PRs for mathlib, I only ever scan the ones that get merged.
Kevin Buzzard (May 17 2019 at 18:49):
Thanks for the link!
Kevin Buzzard (May 17 2019 at 18:50):
I can now see Kenny's logic -- do field homs first, because that will be the smallest number of changes.
Johan Commelin (May 17 2019 at 18:51):
and monoid homs and add_monoid homs and add_group homs and mul_homs and add_homs and .....
Johan Commelin (May 17 2019 at 18:52):
We need https://www.xkcd.com/208/
Kevin Buzzard (May 17 2019 at 19:24):
OK, I opened an issue for this one: #1044. That way hopefully I can understand better what is going on here.
Kevin Buzzard (May 17 2019 at 19:38):
OK, next one is one of the jewels in the crown of mathlib -- prod_filter_range_p_mul_q_div_two_eq_prod_product.
Kevin Buzzard (May 17 2019 at 19:39):
It's pretty long:
lemma prod_filter_range_p_mul_q_div_two_eq_prod_product :
  ((range ((p * q) / 2).succ).filter (coprime (p * q))).prod
    (λ x, if (x : zmodp q hq).1 ≤ q / 2
      then ((x : zmodp p hp), (x : zmodp q hq))
      else -((x : zmodp p hp), (x : zmodp q hq))) =
  (((range p).erase 0).product ((range (q / 2).succ).erase 0)).prod
    (λ x, ((x.1 : zmodp p hp), (x.2 : zmodp q hq))) :=
have hpqpnat : (((⟨p * q, mul_pos hp.pos hq.pos⟩ : ℕ+) : ℕ) : ℤ) = (p * q : ℤ), by simp,
have hpqpnat' : ((⟨p * q, mul_pos hp.pos hq.pos⟩ : ℕ+) : ℕ) = p * q, by simp,
have hpq1 : ((⟨p * q, mul_pos hp.pos hq.pos⟩ : ℕ+) : ℕ) % 2 = 1,
  from nat.odd_mul_odd hp1 hq1,
have hpq1' : p * q > 1, from one_lt_mul hp.pos hq.gt_one,
have hhq0 : ∀ a : ℕ, coprime q a → a ≠ 0,
  from λ a, imp_not_comm.1 $ by simp [hq.coprime_iff_not_dvd] {contextual := tt},
have hpq0 : 0 < p * q / 2,
  from nat.div_pos (succ_le_of_lt $ one_lt_mul hp.pos hq.gt_one) dec_trivial,
have hinj : ∀ a₁ a₂ : ℕ,
  a₁ ∈ (range (p * q / 2).succ).filter (coprime (p * q)) →
  a₂ ∈ (range (p * q / 2).succ).filter (coprime (p * q)) →
  (if (a₁ : zmodp q hq).1 ≤ q / 2
    then ((a₁ : zmodp p hp).1, (a₁ : zmodp q hq).1)
    else ((-a₁ : zmodp p hp).1, (-a₁ : zmodp q hq).1)) =
  (if (a₂ : zmodp q hq).1 ≤ q / 2
    then ((a₂ : zmodp p hp).1, (a₂ : zmodp q hq).1)
    else ((-a₂ : zmodp p hp).1, (-a₂ : zmodp q hq).1)) →
  a₁ = a₂,
  from λ a b ha hb h,
  have ha' : a ≤ (p * q) / 2 ∧ coprime (p * q) a, by simpa [lt_succ_iff] using ha,
  have hapq' : a < ((⟨p * q, mul_pos hp.pos hq.pos⟩ : ℕ+) : ℕ) :=
    lt_of_le_of_lt ha'.1 (div_lt_self (mul_pos hp.pos hq.pos) dec_trivial),
  have hb' : b ≤ (p * q) / 2 ∧ coprime (p * q) b,
    by simpa [lt_succ_iff, coprime_mul_iff_left] using hb,
  have hbpq' : b < ((⟨p * q, mul_pos hp.pos hq.pos⟩ : ℕ+) : ℕ) :=
    lt_of_le_of_lt hb'.1 (div_lt_self (mul_pos hp.pos hq.pos) dec_trivial),
  have val_inj : ∀ {p : ℕ} (hp : nat.prime p) (x y : zmodp p hp), x.val = y.val ↔ x = y,
    from λ _ _ _ _, ⟨fin.eq_of_veq, fin.veq_of_eq⟩,
  have hbpq0 : (b : zmod (⟨p * q, mul_pos hp.pos hq.pos⟩)) ≠ 0,
    by rw [ne.def, zmod.eq_zero_iff_dvd_nat];
      exact λ h, not_coprime_of_dvd_of_dvd hpq1' (dvd_refl (p * q)) h hb'.2,
  have habneg : ¬((a : zmodp p hp) = -b ∧ (a : zmodp q hq) = -b),
  begin
    rw [← int.cast_coe_nat a, ← int.cast_coe_nat b, ← int.cast_coe_nat a,
      ← int.cast_coe_nat b, ← int.cast_neg, ← int.cast_neg, zmodp.eq_iff_modeq_int,
      zmodp.eq_iff_modeq_int,
      @int.modeq.modeq_and_modeq_iff_modeq_mul _ _ p q ((coprime_primes hp hq).2 hpq),
      ← hpqpnat, ← zmod.eq_iff_modeq_int, int.cast_coe_nat, int.cast_neg,
      int.cast_coe_nat],
    assume h,
    rw [← hpqpnat', ← zmod.val_cast_of_lt hbpq', zmod.le_div_two_iff_lt_neg hpq1 hbpq0,
      ← h, zmod.val_cast_of_lt hapq', ← not_le] at hb',
    exact hb'.1 ha'.1,
  end,
  have habneg' : ¬((-a : zmodp p hp) = b ∧ (-a : zmodp q hq) = b),
    by rwa [← neg_inj', neg_neg, ← @neg_inj' _ _ (-a : zmodp q hq), neg_neg],
  suffices (a : zmodp p hp) = b ∧ (a : zmodp q hq) = b,
    by rw [← mod_eq_of_lt hapq', ← mod_eq_of_lt hbpq'];
      rwa [zmodp.eq_iff_modeq_nat, zmodp.eq_iff_modeq_nat,
        nat.modeq.modeq_and_modeq_iff_modeq_mul ((coprime_primes hp hq).2 hpq)] at this,
  by split_ifs at h; simp * at *,
have hmem : ∀ a : ℕ, a ∈ (range (p * q / 2).succ).filter (coprime (p * q)) →
  (if (a : zmodp q hq).1 ≤ q / 2
    then ((a : zmodp p hp).1, (a : zmodp q hq).1)
    else ((-a : zmodp p hp).1, (-a : zmodp q hq).1)) ∈
  ((range p).erase 0).product ((range (succ (q / 2))).erase 0),
  from λ x,
  have hxp : ∀ {p : ℕ} (hp : nat.prime p), (x : zmodp p hp).val = 0 ↔ p ∣ x,
    from λ p hp, by rw [zmodp.val_cast_nat, nat.dvd_iff_mod_eq_zero],
  have hxpneg : ∀ {p : ℕ} (hp : nat.prime p), (-x : zmodp p hp).val = 0 ↔ p ∣ x,
    from λ p hp, by rw [← int.cast_coe_nat x, ← int.cast_neg, ← int.coe_nat_inj',
      zmodp.coe_val_cast_int, int.coe_nat_zero, ← int.dvd_iff_mod_eq_zero, dvd_neg,
      int.coe_nat_dvd],
  have hxplt : (x : zmodp p hp).val < p := (x : zmodp p hp).2,
  have hxpltneg : (-x : zmodp p hp).val < p := (-x : zmodp p hp).2,
  have hneglt : ¬(x : zmodp q hq).val ≤ q / 2 → (x : zmodp q hq) ≠ 0 →
    (-x : zmodp q hq).val ≤ q / 2,
    from λ hx₁ hx0, by rwa [zmodp.le_div_two_iff_lt_neg hq hq1 hx0, not_lt] at hx₁,
  by split_ifs;
    simp [zmodp.eq_zero_iff_dvd_nat hq, (x : zmodp p hp).2, coprime_mul_iff_left,
      lt_succ_iff, h, *, hp.coprime_iff_not_dvd, hq.coprime_iff_not_dvd,
      (x : zmodp p hp).2, (-x : zmodp p hp).2] {contextual := tt},
prod_bij
  (λ x _, if (x : zmodp q hq).1 ≤ (q / 2)
    then ((x : zmodp p hp).val, (x : zmodp q hq).val)
    else ((-x : zmodp p hp).val, (-x : zmodp q hq).val))
  hmem
  (λ a ha, by split_ifs; simp [*, prod.ext_iff] at *)
  hinj
  (surj_on_of_inj_on_of_card_le _ hmem hinj
    (@nat.le_of_add_le_add_right (q / 2 + (p / 2).succ) _ _
      (calc card (finset.product (erase (range p) 0) (erase (range (succ (q / 2))) 0))
            + (q / 2 + (p / 2).succ)
          = (p * q) / 2 + 1 :
        by rw [card_product, card_erase_of_mem (mem_range.2 hp.pos),
          card_erase_of_mem (mem_range.2 (succ_pos _)), card_range, card_range,
          pred_succ, ← add_assoc, ← succ_mul, succ_pred_eq_of_pos hp.pos,
          odd_mul_odd_div_two hp1 hq1, add_succ]
      ... = card (range (p * q / 2).succ) : by rw card_range
      ... = card ((range (p * q / 2).succ).filter (coprime (p * q)) ∪
            ((range (p * q / 2).succ).filter (λ x, ¬coprime p x)).erase 0 ∪
            (range (p * q / 2).succ).filter (λ x, ¬coprime q x)) :
        congr_arg card (by simp [finset.ext, coprime_mul_iff_left]; tauto)
      ... ≤ card ((range (p * q / 2).succ).filter (coprime (p * q))) +
            card (((range (p * q / 2).succ).filter (λ x, ¬coprime p x)).erase 0) +
            card ((range (p * q / 2).succ).filter (λ x, ¬coprime q x)) :
        le_trans (card_union_le _ _) (add_le_add_right (card_union_le _ _) _)
      ... = _ :
        by rw [card_erase_of_mem, card_range_p_mul_q_filter_not_coprime hp hq hp1 hq1 hpq,
          mul_comm p q, card_range_p_mul_q_filter_not_coprime hq hp hq1 hp1 hpq.symm,
          pred_succ, add_assoc];
        simp [range_succ, hp.coprime_iff_not_dvd, hpq0])))
Floris van Doorn (May 17 2019 at 19:39):
I opened a PR to fix the first T50000-challenge issue: https://github.com/leanprover-community/mathlib/pull/1045
Kevin Buzzard (May 17 2019 at 19:40):
It's in data/zmod/quadratic_reciprocity.lean, line 291.
Kevin Buzzard (May 17 2019 at 19:41):
@Floris van Doorn I wasn't sure whether you thought it was worth fixing. I can see the appeal of proving something by rfl.
Kevin Buzzard (May 17 2019 at 19:41):
This number theory one is a key lemma in the proof of quadratic reciprocity.
Mario Carneiro (May 17 2019 at 19:42):
Of course some proofs take a long time because they are actually big proofs
Kevin Buzzard (May 17 2019 at 19:42):
Right!
Kevin Buzzard (May 17 2019 at 19:42):
That of course could be what's going on here.
Kevin Buzzard (May 17 2019 at 19:42):
Hmm, it's really high time I left the office. I'll maybe look at this one on the subway.
Mario Carneiro (May 17 2019 at 19:42):
One option is to break it up into subparts
Kevin Buzzard (May 17 2019 at 19:43):
@Chris Hughes would that be feasible?
Kevin Buzzard (May 17 2019 at 19:45):
I guess another issue is that for several of the other timeouts, it has been manifestly clear that changing mathlib to fix the timeout was a good idea. Here that's not so clear to me, if it's simply one long proof that nobody is interested in the subparts of.
Kevin Buzzard (May 17 2019 at 19:45):
The proof of quadratic reciprocity that we have is the standard counting proof, basically looking at the lattice points in Z^2 with 0 <= x < p/2 and 0 <= y < q/2 and then counting them in a slightly funny way
Kevin Buzzard (May 17 2019 at 19:46):
It might really be the case that these lemmas are of no use anywhere else.
Kevin Buzzard (May 17 2019 at 19:47):
I saw these lemmas when I was still at high school and I don't think I've ever seen them in any other context, and I'm a professional number theorist.
Johan Commelin (May 17 2019 at 19:59):
Well... speed-ups are always nice (-;
Johan Commelin (May 17 2019 at 19:59):
This file is probably compiled quite often... every time something in the root of the hierarchy changes
Mario Carneiro (May 17 2019 at 20:27):
If you break a theorem into two parts it doesn't make it go faster
Mario Carneiro (May 17 2019 at 20:28):
although you will be able to avoid the timeout
Mario Carneiro (May 17 2019 at 20:28):
However, another reason to break up long proofs is if you want to step through it or modify it... it's the same issue with long files
Johan Commelin (May 17 2019 at 20:29):
... and the same issue with long papers...
Kevin Buzzard (May 17 2019 at 20:29):
elaboration of prod_filter_range_p_mul_q_div_two_eq_prod_product took 22.6s
Johan Commelin (May 17 2019 at 20:30):
That's still faster than I was when I first read that proof (-;
Kevin Buzzard (May 17 2019 at 20:31):
There's only a few to go.
Kevin Buzzard (May 17 2019 at 20:31):
[PS Chris' proof is basically all in term mode, so one would have to think about how to break it up]
Kevin Buzzard (May 17 2019 at 20:33):
The next one is the open mapping theorem. Again this might just be because it's a long proof.
Kevin Buzzard (May 17 2019 at 20:34):
It's nearly 150 lines -- exists_preimage_norm_le in analysis/normed_space/banach.lean, line 29.
Kevin Buzzard (May 17 2019 at 20:36):
I guess it's one variant of the open mapping theorem. Sebastien says that the proof falls naturally into two halves but it's still formalised as one big proof.
Kevin Buzzard (May 17 2019 at 20:38):
At least this one is in tactic mode but again it's difficult for me to tell whether there's something bad in the proof or whether it's just long, and it's also difficult for me to tell whether it's worth breaking up the proof into smaller chunks. @Sebastien Gouezel do you have an opinion on this? Your proof takes a long time to elaborate, apparently. Is this just because it's long?
Kevin Buzzard (May 17 2019 at 20:39):
I'm amazed you could write any code after it :D
Kevin Buzzard (May 17 2019 at 20:40):
elaboration of exists_preimage_norm_le took 18.9s. That's a long time to wait after each keypress ;-)
Kevin Buzzard (May 17 2019 at 20:43):
Sebastien is also responsible for GH_dist_le_of_approx_subsets in topology/metric_space/gromov_hausdorff.lean, which is the next timeout.
Kevin Buzzard (May 17 2019 at 20:43):
That one is only about 67 lines long -- so there might be something funny going on in that one.
Sebastien Gouezel (May 17 2019 at 20:46):
You can write such a proof interactively by sorrying big chunks of it, and then putting back everything together once every bit compiles. And you can also buy a faster computer :)
Sebastien Gouezel (May 17 2019 at 20:49):
I am sure this can be made faster, but I don't know where the bottlenecks are. A tutorial on optimization, based on this one example of exists_preimage_norm_le, would be much appreciated. In particular, I can't tell if typeclass inference takes a lot of time, or unification, or it's just that everything is nontrivial.
Sebastien Gouezel (May 17 2019 at 20:58):
For instance, in the middle of the proof, there is a calc block
∥f (d⁻¹ • x) - y∥ = ∥d⁻¹ • f x - (d⁻¹ * d) • y∥ : by rwa [lin.smul, inv_mul_cancel, one_smul]
... = ∥d⁻¹ • (f x - d • y)∥ : by rw [mul_smul, smul_sub]
... = ∥d∥⁻¹ * ∥f x - d • y∥ : by rw [norm_smul, norm_inv]
The first line takes 235ms, the second one 167ms and the third one 3ms. I don't understand why there would be a factor of 100 between the first and the third line!
Kevin Buzzard (May 17 2019 at 22:31):
And we finish where we started, with some category theory. Explicitly, the polynomial ring being adjoint to the forgetful functor, on line 42 of category_theory/instances/CommRing/adjunctions.lean.
noncomputable def adj : polynomial_ring ⊣ (forget : CommRing ⥤ Type u) :=
adjunction.mk_of_hom_equiv
{ hom_equiv := λ α R,
  { to_fun := λ f, f ∘ X,
    inv_fun := λ f, ⟨eval₂ int.cast f, by apply_instance⟩,
    left_inv := λ f, subtype.ext.mpr $ funext $ λ p,
    begin
      have H0 := λ n, (congr (int.eq_cast' (f.val ∘ C)) (rfl : n = n)).symm,
      have H1 := λ p₁ p₂, (@is_ring_hom.map_add _ _ _ _ f.val f.2 p₁ p₂).symm,
      have H2 := λ p₁ p₂, (@is_ring_hom.map_mul _ _ _ _ f.val f.2 p₁ p₂).symm,
      apply mv_polynomial.induction_on p; intros;
      simp only [*, eval₂_add, eval₂_mul, eval₂_C, eval₂_X, eq_self_iff_true,
        function.comp_app, hom_coe_app'] at *
    end,
    right_inv := by tidy },
  hom_equiv_naturality_left_symm' := λ X' X Y f g, subtype.ext.mpr $ funext $ λ p,
  begin
    apply mv_polynomial.induction_on p; intros;
    simp only [*, eval₂_mul, eval₂_add, eval₂_C, eval₂_X, comp_val,
      equiv.coe_fn_symm_mk, hom_coe_app, polynomial_ring_map_val,
      eq_self_iff_true, function.comp_app, add_right_inj, types_comp] at *
  end }
Kevin Buzzard (May 17 2019 at 22:39):
I wonder if it's the two simp lines which are making this one hurt.
Kevin Buzzard (May 17 2019 at 22:42):
That intros; before the first simp only -- after that there are three goals. Has the simp statement been carefully crafted to close all of them? [That's what it does.]
Kevin Buzzard (May 17 2019 at 22:48):
Maybe this is the point of simp anyway? Maybe it's supposed to take a lot of time, but we only have to compile once, so who cares?
Kevin Buzzard (May 17 2019 at 22:51):
That is a pretty compact proof really. Automation is doing a bunch of diagram-chasing here.
Scott Morrison (May 18 2019 at 01:30):
I think it would be clearer if we ran three separate simp onlys here, to discharge the three goals. Combining them all into one longer list seems unnecessarily golf-y.
Scott Morrison (May 19 2019 at 00:05):
Okay, #1049 fulfills the -T50000 challenge for category_theory/instances/CommRing/adjunctions.lean.
Scott Morrison (May 19 2019 at 00:05):
Feel like running it all again with a lower threshold? :-)
Scott Morrison (May 19 2019 at 00:06):
Given that pretty much every instance that went over 50000 came from at least somewhat smelly code, perhaps we should review this regularly.
Scott Morrison (May 19 2019 at 00:07):
Re #1049, @Johan Commelin, could you have a look at this and see if you're happy? I think you wrote the original file.
matt rice (May 19 2019 at 07:38):
Hmm, there is this function https://github.com/leanprover-community/lean/blob/master/src/util/memory.cpp#L127-L140
I wonder if it might be possible to use it to build a top-5 list of components by old_peak - get_peak_rss() delta, or something.
Kevin Buzzard (May 19 2019 at 07:41):
Feel like running it all again with a lower threshold? :-)
Heh, I tried compiling the version of mathlib from just before I started this thread, and a version from yesterday, and the newer version seemed to take slightly longer to compile! So I'm not sure I've done any good with this project anyway :-/
Scott Morrison (May 19 2019 at 09:02):
Progress...?
Mario Carneiro (May 19 2019 at 09:04):
it could just be new material. We're always fighting the tide of general library growth
Sebastien Gouezel (May 19 2019 at 10:23):
I could decrease the compilation time of GH_dist_le_of_approx_subsets from 50s to 5s on my computer by replacing two linarith calls by explicit computations. See #1052.
Sebastien Gouezel (May 19 2019 at 10:24):
I tried first to avoid problems by telling linarith which equations it should look at, but I don't understand the syntax. Consider for instance
example (a : ℝ) (b : ℝ) (ha : a > 0) (hb : 0 ≤ b) : 0 ≤ a/2 + b :=
begin
  linarith
end
works, but if I instead use linarith ha hb it fails, while the docstring says linarith h1 h2 h3 will only use hypotheses h1, h2, h3.
Mario Carneiro (May 19 2019 at 10:34):
From my experience ring and linarith aren't usually bottlenecks; it's usually the simping they do in the preprocessing stage
Keeley Hoek (May 19 2019 at 10:36):
I think Sebastien said that it got bad after the is_def_eq changes
Keeley Hoek (May 19 2019 at 10:36):
What about ring and ring!?
Mario Carneiro (May 19 2019 at 10:36):
I like that
Keeley Hoek (May 19 2019 at 10:37):
That way, when you're angry that linarith can't solve id x = x, you can take it out on Lean when you press !
Keeley Hoek (May 19 2019 at 10:38):
:D
Rob Lewis (May 19 2019 at 10:47):
The ! notation is better than another config option. In both ring and linarith, it will take a little bit of work to propagate that flag from the tactic call to the place where it's actually used. But it's a refactor that's worth doing.
Rob Lewis (May 19 2019 at 10:48):
More generally, though: if this change to linarith causes this kind of slowdown, there's probably something missing an irreducible attribute.
Rob Lewis (May 19 2019 at 10:56):
As for restricting linarith to only certain hypotheses: iirc, it won't use the goal if you do this. It will just try to find a contradiction with ha, hb. apply le_of_not_gt, intro hc, linarith ha hb hc would work. I remember noticing this a while back and thought I changed it, but maybe I never got around to it.
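Spelled out on the example from earlier in the thread, that workaround looks like this (a sketch using the 2019-era linarith argument syntax quoted above):
example (a : ℝ) (b : ℝ) (ha : a > 0) (hb : 0 ≤ b) : 0 ≤ a/2 + b :=
begin
  apply le_of_not_gt,   -- reduce the goal to refuting a strict inequality
  intro hc,             -- hc : 0 > a/2 + b
  linarith ha hb hc     -- the three named hypotheses are now contradictory
end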
Mario Carneiro (May 19 2019 at 11:00):
What about using linarith h1 h2 |- for this?
Rob Lewis (May 19 2019 at 11:03):
That would be ideal. It takes slightly more work to parse because it's no longer a list of names.
Rob Lewis (May 19 2019 at 11:10):
By the way, any idea what's up with Travis here? https://github.com/leanprover-community/mathlib/pull/1043 I'm certain the original PR compiled, and I don't see why Mario's changes should change that.
Mario Carneiro (May 19 2019 at 11:12):
you could just use the loc parser
Mario Carneiro (May 19 2019 at 11:13):
It timed out. I restarted the build
Rob Lewis (May 19 2019 at 11:18):
you could just use the loc parser
That takes an at, right? linarith at h1 h2 |- sounds strange. But yes, that can be adapted.
Rob Lewis (May 19 2019 at 11:39):
There's something entirely different going on in the GH_dist_le_of_approx_subsets proof. Even if you clear everything irrelevant from the context, linarith (with no arguments) still fails. With the same goal and context in a separate proof right above, it succeeds.
Sebastien Gouezel (May 19 2019 at 11:49):
attribute [irreducible] Hausdorff_dist makes it instantaneous. I will add this and keep the linarith calls, instead of the current version of #1052.
Sebastien Gouezel (May 19 2019 at 12:05):
Done. This is more satisfactory, as the proofs with linarith are clearly better, and moreover this was just a symptom of a problem that would have shown up again later on.
Rob Lewis (May 19 2019 at 12:08):
As for the other issue here: the type of δ0 is actually a metavariable. The refine call at the very first line never instantiates it. We'll have to add calls to instantiate_mvars in linarith. I think we saw something similar in ring a while back.
Sebastien Gouezel (May 19 2019 at 12:12):
Where is the other issue? The original proof (with several calls to linarith) works fine just by making Hausdorff_dist irreducible and no other change.
Sebastien Gouezel (May 19 2019 at 12:13):
In particular, linarith is able to use δ0 just fine.
Rob Lewis (May 19 2019 at 12:15):
I started looking at your changes to see what needed to be irreducible. The first thing I tried was clearing all the irrelevant stuff:
have I5 : Hausdorff_dist (Fl '' s) (Fr '' (range Φ)) ≤ ε2/2 + δ,
{ refine Hausdorff_dist_le_of_mem_dist _ _ _,
  { clear I4 I3 I2 I1 Il Ir Fr Fl _inst_7 this sne hs hs' H h Dxs _inst hxs xα
      xs Φ s ε1 ε3 _inst_6 _inst_5 _inst_4 _inst_3 _inst_2 _inst_1 β α,
    linarith }
This fails, even though it works elsewhere with the exact same context and goal.
Rob Lewis (May 19 2019 at 12:16):
It works if you revert ε2 δ, intros, before linarith, and printing the raw format of the type of δ0 shows it's a metavar.
Rob Lewis (May 19 2019 at 12:17):
But that's curious, that linarith manages to use it in the improved proof. It must get instantiated somewhere.
Rob Lewis (May 19 2019 at 12:29):
Yep. At the linarith call in the line refine Hausdorff_dist_le_of_mem_dist (by linarith) _ _, the type of δ0 has been instantiated. If you change this to refine Hausdorff_dist_le_of_mem_dist _ _ _, linarith fails on the first goal, because the type of δ0 is still an mvar.
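A sketch of the kind of fix being suggested for linarith: before reading a hypothesis's type, substitute any assigned metavariables (hyp_type_instantiated is a hypothetical helper name; infer_type and instantiate_mvars are real Lean 3 tactic-monad functions):
meta def hyp_type_instantiated (h : expr) : tactic expr :=
do t ← tactic.infer_type h,
   -- replace assigned metavariables, such as the one in the type of δ0 above
   tactic.instantiate_mvars t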
Rob Lewis (Jul 09 2019 at 09:33):
I'm running the -T50000 test again for the fun of it. category_theory/adjunction/limits.lean fails in a number of places (and takes 60 seconds to compile on my fast desktop). This is presumably due to tidy, but it doesn't seem healthy. Similar issues in algebraic_geometry/presheafed_space (as noted at https://leanprover.zulipchat.com/#narrow/stream/116395-maths/topic/presheafed.20space.20category), which takes a whopping 125 sec, mainly on line 64.
Rob Lewis (Jul 09 2019 at 09:38):
Also trouble with card_nth_roots at data/polynomial.lean:1782, but this one isn't related to tidy.
Rob Lewis (Jul 09 2019 at 09:50):
I see the issue with card_nth_roots came up last time we ran this test, and I thought we fixed it. There's an easy enough workaround, although I'm not sure why it works.
Rob Lewis (Jul 09 2019 at 09:52):
algebra/CommRing/adjunctions.lean, which Scott fixed last time, is failing again. It's not horribly slow, so this might be just over the edge.
Rob Lewis (Jul 09 2019 at 10:03):
field_theory/splitting_field.lean:91, exists_multiset_of_splits goes over, but there's no obvious culprit that I see.
Johan Commelin (Jul 09 2019 at 10:52):
Did the recent reordering of tidy tactics make this worse?
Johan Commelin (Jul 09 2019 at 10:52):
I think ext was moved a bit down in the list.
Rob Lewis (Jul 09 2019 at 11:37):
That's possible.
Rob Lewis (Jul 09 2019 at 11:38):
The issue in splitting_field is type class search for is_ring_hom.id. Adding local attribute [instance, priority 100000] is_ring_hom.id fixes that file.
Rob Lewis (Jul 09 2019 at 11:41):
I have no idea how to fix the category theory stuff, but something has to be done. It's not fair for such a huge percentage of mathlib's compile time to be spent on those two files.
Rob Lewis (Jul 09 2019 at 13:08):
To echo Bernie Sanders for a second: the top .18% of mathlib takes 13.5% of the build time. The assoc' field of category_of_PresheafedSpaces alone is taking around a minute. The proof that tidy finds still takes around a minute to check, and it's pretty innocent looking: intros W X Y Z f g h, dsimp at *, simp at *, refl. There's a bit of unnecessary work done (intros, simp, refl should be fine), but that simp call is crazy. I wonder what the simp set is here -- it changes (f ≫ g) ≫ h = f ≫ g ≫ h to something that looks much messier, but gets closed by refl. Replacing it with the output of squeeze_simp gives a different result, but it's closed by split; refl, thankfully very quickly.
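For anyone following along, squeeze_simp replays a simp call and prints the simp only [...] invocation it used, which you can then paste back in; a hedged usage sketch (assuming your mathlib version's squeeze_simp accepts a location, as I believe it does):
-- instead of: simp at *,
squeeze_simp at *,   -- prints something like: simp only [...] at *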
Johan Commelin (Jul 09 2019 at 15:16):
I think Scott won't have much time for Lean the next two weeks... so we might need to fix things ourselves.
Rob Lewis (Jul 09 2019 at 15:29):
I have a commit on my office computer that shaves two minutes off of presheafed_space. But I had to leave early and didn't push it yet, and no way am I working on this on my laptop. It's a patch, not a real fix -- just getting the scripts from tidy and fixing the worst parts.
Johan Commelin (Jul 09 2019 at 16:26):
@Rob Lewis How much % is 2 minutes?
Rob Lewis (Jul 09 2019 at 16:43):
I think all of mathlib took 26 min.
Johan Commelin (Jul 09 2019 at 16:49):
Ok, that is quite a significant speedup
Kevin Buzzard (Mar 28 2020 at 20:14):
So I thought I'd try the -T50000 challenge again. This is the game where you compile mathlib but only let Lean take half as long as the default amount of time before it times out. The first problem we run into is quite interesting. I should say right now that the below would have taken me much longer to put together had it not been for the wonderful new tooling. The last time the file src/set_theory/pgame.lean changed was on Thursday (two days ago) in #2222. After the change:
git checkout 30146a027b6f3e207f95571c0802d647f613b38a
leanproject get-cache
rm -f src/set_theory/pgame.olean
date; lean --make src/set_theory/pgame.lean ; date
pgame.lean takes 11 seconds to compile. Note: this is after the change was made to the file on Thursday.
Now fast forward to master today:
git checkout master
leanproject get-cache
rm -f src/set_theory/pgame.olean
date; lean --make src/set_theory/pgame.lean ; date
and the same pgame.lean file now takes more than five minutes to compile (and times out at -T50000). What is going on here?
Scott Morrison (Mar 28 2020 at 21:37):
(deleted)
Scott Morrison (Mar 28 2020 at 22:09):
In any case, I think I'd better investigate... Both solve_by_elim and pgame have my fingerprints on them, so this is probably somehow my fault. :-)
Kevin Buzzard (Mar 28 2020 at 22:54):
Maybe ad53e0b7d90aa6a3a8fbcab9f9bb2af769f82b19 is the first bad commit
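For the record, hunting for a first bad commit like this can be automated with git bisect; a hedged sketch along the lines of the commands above (the timeout threshold is an arbitrary choice, and a non-zero exit code, such as 124 from timeout, marks a commit as bad):
git bisect start master 30146a027b6f3e207f95571c0802d647f613b38a
git bisect run sh -c 'leanproject get-cache && rm -f src/set_theory/pgame.olean && timeout 120 lean --make src/set_theory/pgame.lean'
git bisect reset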
Scott Morrison (Mar 28 2020 at 23:50):
and in any case, I think this shows that we really need to introduce a -T limit in CI.
Mario Carneiro (Mar 28 2020 at 23:54):
I wonder if it would be possible to hack lean so that it reports the smallest -T value that would have compiled the input successfully? Then we can keep an eye on that number for regressions without blocking the build necessarily
Mario Carneiro (Mar 28 2020 at 23:55):
and we can have some current target, and set -T to something like 20% more than that so that anything that goes too far is rejected immediately
Bryan Gin-ge Chen (Mar 29 2020 at 00:08):
If the profiler were more accurate (lean#58) we could revive the build time bot.
Scott Morrison (Mar 29 2020 at 02:37):
@Kevin Buzzard, fixed in #2275. This file now compiles with -T6000 again.
Kevin Buzzard (Mar 29 2020 at 09:20):
Great work Scott! Signed, the build time bot
Scott Morrison (Mar 29 2020 at 11:49):
I fixed a bunch more in #2281. We're almost there!
Bryan Gin-ge Chen (May 27 2020 at 08:52):
#2276 adds -T100000 to the build step in our CI. This is the limit that both vscode-lean and emacs lean-mode use. Do we all agree it's a good idea?
There's also a checklist there of the 7 defs and theorems that currently don't compile with -T50000 (as well as one giant proof which used to fail). Anyone want to take a shot at them (in another PR)?
Scott Morrison (May 27 2020 at 09:41):
I think we should include -T100000 in the CI. I'm trying to fix the newly introduced -T50000 problems in category theory now, but it would be great if these could be caught in the PR process.
Scott Morrison (May 27 2020 at 10:03):
Okay, I don't think I have much enthusiasm for fixing the remaining -T50000 problems under category_theory. I did one, as #2840, but the only fix in the other two cases is going to be writing out boring proofs that tidy is handling just fine.
Scott Morrison (May 27 2020 at 10:03):
My vote is that we declare -T100000 the goalposts.
Bryan Gin-ge Chen (May 27 2020 at 12:01):
OK, I just put #2276 on the queue. This will probably lead to a (trivial) merge conflict in #2840 due to the FIXME comments.
Bhavik Mehta (Dec 11 2020 at 18:03):
I believe https://github.com/leanprover-community/mathlib/blob/a372dfce1f660bf6590ff29a32f9aa2a141a81ad/src/category_theory/limits/shapes/binary_products.lean#L696 is fixed, but I'm not really sure how to check to be sure; doing lean --make src/category_theory/limits/shapes/binary_products.lean -T50000 works for me, though
Bryan Gin-ge Chen (Dec 11 2020 at 19:38):
Great! Yeah, I don't know that we want to enforce -T50000, though it might be interesting to open up a draft PR again and check the failures.
I think there are some other slow proofs that are probably higher priority, e.g. #4832.
Eric Wieser (Dec 15 2020 at 20:28):
I don't know what the associated T value is, but I got some speedups in #5383