Zulip Chat Archive

Stream: general

Topic: mathlib repo GitHub actions queue


Bryan Gin-ge Chen (Apr 07 2021 at 16:00):

With our recent growth :tada: we're starting to run into limits on the number of concurrent GitHub actions workflows. In light of this, I've started canceling workflows on "outdated" commits in some branches. Note that even after these cancellations we still have a bunch of queued workflows.

I don't know that we need to do too much right now to address this, but maybe something to keep in mind.

Eric Rodriguez (Apr 07 2021 at 16:04):

Is there any way to tell GHA that we don't want them to run just yet (per PR)? For example, on my WIP PR there's absolutely zero need for a workflow right now, but it'd still be taking up a slot.

Adam Topaz (Apr 07 2021 at 16:09):

Maybe there's a way to set it up so the CI runs on the latest commit only when the awaiting-review tag is added?

Bryan Gin-ge Chen (Apr 07 2021 at 16:22):

It might be possible to adjust the scripts somehow, but it might be easiest now just to manually cancel redundant or unnecessary workflows (only if you want, this is totally optional!). If you click on the yellow circle next to a commit and then on "Details" next to any of the items that start with "continuous integration / ...", you will get to a page that will let you cancel the main (longest-running) workflow for that commit.

Bryan Gin-ge Chen (Apr 21 2021 at 14:38):

I've canceled a few jobs again on outdated commits just now, from the branches mul_vec_smul_assoc and structure_sheaf_basic_opens_iso.

edit: and fin_succ_succ_above_succ

Bryan Gin-ge Chen (Apr 21 2021 at 20:06):

Hmm, this is somewhat concerning: this bors batch just timed out and failed, not (just) because the CI time was too long, but I think also because some of its jobs were queued for a while before they started. The total time allowed is 6 hours and currently CI runs for around 4.5, so if the queue is stuck for a while this becomes more of a possibility.

I don't know of a way to tell GitHub actions to run certain jobs with a higher priority.

Johan Commelin (Apr 21 2021 at 20:13):

I guess that technically we could make a staging1 and a staging2, where the regular build gets a PR batch from staging1 into staging2, and then after passing leanchecker you move from staging2 to master.

Bryan Gin-ge Chen (Apr 21 2021 at 20:15):

Can you elaborate? I'm not sure I understand the idea.

Bryan Gin-ge Chen (Apr 21 2021 at 20:28):

Ugh, the current staging job has been queued for half an hour, so it might also end up timing out as well.

Johan Commelin (Apr 21 2021 at 20:40):

@Bryan Gin-ge Chen I was thinking, if we type bors merge, then bors puts the PR on a queue that will end up being merged into a staging branch (maybe premaster?). For this, it only needs to run half of CI.
And then regularly, we run leanchecker on premaster, and merge it into master.

Johan Commelin (Apr 21 2021 at 20:41):

That would split CI into two pieces.

Johan Commelin (Apr 21 2021 at 20:41):

We would still check that master passes leanchecker. But maybe leanchecker doesn't need to be run on every PR and/or bors staging queue

Bryan Gin-ge Chen (Apr 21 2021 at 20:54):

Hmm, I think this could help a bit, but I'm not sure how we would manage premaster vs master. Also, I think running leanchecker on all PRs is something that has been valuable for us. I haven't been following all the details, but haven't there been some leanchecker failures recently that were caught in PRs?

Sebastien Gouezel (Apr 21 2021 at 21:10):

leanchecker is extremely slow since the nsmul and gsmul refactoring, for no good reason we can see, so disabling it until we understand what is going on might be reasonable.

Sebastien Gouezel (Apr 21 2021 at 21:12):

Would it help if we decided that everyone should have his branches on his own mathlib fork (with CI active on the fork)? This would only work out if oleans could also be uploaded for the forks, but it would split the workload a lot (and it could even scale if there are more contributors). CI on mathlib would then be only for bors PRs to master.

Eric Wieser (Apr 21 2021 at 21:15):

Does bors rebuild the same commit after a timeout, or split the batch?

Eric Wieser (Apr 21 2021 at 21:16):

Because if the former, our olean cache should at least save us from a second timeout

Bryan Gin-ge Chen (Apr 21 2021 at 21:16):

It split the batch.

Mario Carneiro (Apr 22 2021 at 05:45):

Sebastien Gouezel said:

leanchecker is extremely slow since the nsmul and gsmul refactoring, for no good reason we can see, so disabling it until we understand what is going on might be reasonable.

This comment is somewhat worrying to me. If leanchecker doesn't like the refactor, then maybe the refactor was a bad idea and/or forgot to mark something with the right reducibility setting and so on. I would rather back out the refactor than stop checking proofs with an external verifier

Bryan Gin-ge Chen (Apr 22 2021 at 06:04):

Also concerning: the "Build mathlib" job in the current staging batch took almost 4h17m (the previous longest build I saw was around 3h47m). Since it takes around an hour to run tests + lint, we don't have much more time available before we hit bors's 6 hour limit.

Mario Carneiro (Apr 22 2021 at 06:07):

Will the -T50000 challenge help here?

Mario Carneiro (Apr 22 2021 at 06:07):

or is the time budget going elsewhere

Bryan Gin-ge Chen (Apr 22 2021 at 06:08):

I'm not sure. Maybe looking at what pops up in the #CI thread will help.

Kevin Buzzard (Apr 22 2021 at 06:15):

I'm staring at a stupid int module diamond in the liquid project and I know full well that the gsmul refactor was the right thing to do

Kevin Buzzard (Apr 22 2021 at 06:15):

All I did was take the dual of a finite free Z-module.

Mario Carneiro (Apr 22 2021 at 06:21):

I'm looking again at the nsmul PR and I don't see anything that has specifically been indicated as taking a while. Part of the issue with diagnosing this is that the information we have is incredibly unfocused: we know that overall mathlib build time is rising but not which part of mathlib is contributing to the rise

Mario Carneiro (Apr 22 2021 at 06:23):

Obviously if you ask me leanchecker taking an hour is a ridiculous figure indicating that something has gone horribly wrong, it's doing the analogue of what metamath does in seconds

Mario Carneiro (Apr 22 2021 at 06:23):

anything more than 10 minutes seems crazy

Johan Commelin (Apr 22 2021 at 06:31):

Does it help if we generate logs of what exactly leanchecker is spending its time on? (Gabriel showed us how to do that.)
I can generate such logs for several commits (say 10, or so) from the last month.

Kevin Buzzard (Apr 22 2021 at 06:58):

Can this problem be solved by throwing money at it, by the way? (ie at GitHub or whoever is doing the thing that takes 4.5 hours)

Scott Morrison (Apr 22 2021 at 07:04):

@Bryan Gin-ge Chen, do you still have your(?) visualisations of the time cost of various parts of mathlib?

Johan Commelin (Apr 22 2021 at 07:07):

This already takes longer than I expected

time lean --recursive --export=mathlib.txt src/

real    3m40.672s
user    3m32.014s
sys     0m7.948s

Bryan Gin-ge Chen (Apr 22 2021 at 07:21):

Scott Morrison said:

Bryan Gin-ge Chen, do you still have your(?) visualisations of the time cost of various parts of mathlib?

I have this notebook which gets its data from https://tqft.net/lean/mathlib/?C=M;O=D (some of the logs there are broken, it seems).

Scott Morrison (Apr 22 2021 at 07:23):

Ah, I see, once I stopped my build robot we lost that.

Bryan Gin-ge Chen (Apr 22 2021 at 07:23):

I could probably adapt it to parse the HTML from the #CI build bot, but it might be easier just to have that bot spit out a JSON file with the timings.

Johan Commelin (Apr 22 2021 at 07:41):

I'm now running leanchecker +leanprover-community/lean:3.29.0 -v mathlib.txt 2>&1 | ts -i -m %.s on latest master.

Johan Commelin (Apr 22 2021 at 07:50):

Is leanchecker something that can be easily parallelized?

Mario Carneiro (Apr 22 2021 at 08:08):

It can be, but we shouldn't assume external typecheckers are threaded

Johan Commelin (Apr 22 2021 at 08:15):

Sure, we shouldn't. But it still helps if in the current situation we can divide the time by 16 or 32 just by parallelizing.

Sebastien Gouezel (Apr 22 2021 at 08:15):

leanchecker timings for the commits just before #7084 and just after could be enough to see where the increase is located.

Johan Commelin (Apr 22 2021 at 08:16):

And it seems to me that in principle this is something that can be parallelized to an extreme degree

Johan Commelin (Apr 22 2021 at 08:16):

Sebastien Gouezel said:

leanchecker timings for the commits just before #7084 and just after could be enough to see where the increase is located.

I'll do those next.

Sebastien Gouezel (Apr 22 2021 at 08:16):

Thanks!

Johan Commelin (Apr 22 2021 at 08:16):

It's still running the one on current master.

Johan Commelin (Apr 22 2021 at 08:17):

But I guess I'll just make 2 more copies of mathlib. Since it's using only a single thread anyway :sweat_smile:

Johan Commelin (Apr 22 2021 at 09:21):

Running 3 processes of leanchecker next to each other just ate up 32GB of memory :sad:

Johan Commelin (Apr 22 2021 at 09:21):

Is that to be expected? Or does this hint at a memory leak?

Johan Commelin (Apr 22 2021 at 09:31):

Sorry, I take that back. It was 3 copies of single-threaded lean --recursive --export=mathlib.txt src/

Johan Commelin (Apr 22 2021 at 09:31):

Still, I'm surprised that would take up so much RAM

Johan Commelin (Apr 22 2021 at 10:05):

How many lines of output does leanchecker have, roughly?

Johan Commelin (Apr 22 2021 at 10:06):

I'm creating roughly 30.000 lines of output in roughly 6 minutes, at the moment.

Johan Commelin (Apr 22 2021 at 10:06):

Line 26000 stands out:

6.202454 26000 nat.inv_pos_of_nat

Johan Commelin (Apr 22 2021 at 10:07):

(My terminal is printing every 1000th line, and everything is being written to a file.)

Johan Commelin (Apr 22 2021 at 10:07):

But apparently nat.inv_pos_of_nat is taking 6.2 leanchecker seconds.

Johan Commelin (Apr 22 2021 at 10:08):

The typical line is

0.000029 27000 quotient_group.quotient.div_inv_monoid._proof_3

Jannis Limperg (Apr 22 2021 at 10:32):

Bryan Gin-ge Chen said:

I could probably adapt it to parse the HTML from the #CI build bot, but it might be easier just to have that bot spit out a JSON file with the timings.

What do you need? Exporting file-level timings should be no problem since I have them in the database anyway.

Bryan Gin-ge Chen (Apr 22 2021 at 10:42):

File-level timings for each commit in JSON format would be easiest for me to work with. Here's an example of what Scott was generating before (the 3 numbers are real, user and sys times from this script).

Sebastien Gouezel (Apr 22 2021 at 10:46):

Johan Commelin said:

But apparently nat.inv_pos_of_nat is taking 6.2 leanchecker seconds.

You are really talking about docs#nat.inv_pos_of_nat? That takes 22.1ms to elaborate on my computer? Looks crazy!

Mario Carneiro (Apr 22 2021 at 10:46):

No I can see why

Mario Carneiro (Apr 22 2021 at 10:50):

lemma inv_pos_of_nat {n : ℕ} : 0 < ((n : α) + 1)⁻¹ :=
inv_pos.2 $ add_pos_of_nonneg_of_pos n.cast_nonneg zero_lt_one

The issue is in n.cast_nonneg, which proves

nat.cast_nonneg : ∀ {α : Type u_1} [_inst_1 : ordered_semiring α] (n : ℕ), 0 ≤ ↑n

when the current context is about a linear_ordered_field. This involves proving that the two nat.cast expressions are the same, which requires unfolding nat.cast until it uses projections, as well as unfolding all of the typeclass constructors in this stack:

    (@coe.{1 u_1+1} nat α
       (@coe_to_lift.{1 u_1+1} nat α
          (@nat.cast_coe.{u_1} α
             (@mul_zero_class.to_has_zero.{u_1} α
                (@mul_zero_one_class.to_mul_zero_class.{u_1} α
                   (@monoid_with_zero.to_mul_zero_one_class.{u_1} α
                      (@semiring.to_monoid_with_zero.{u_1} α
                         (@ring.to_semiring.{u_1} α
                            (@ordered_ring.to_ring.{u_1} α
                               (@linear_ordered_ring.to_ordered_ring.{u_1} α
                                  (@linear_ordered_comm_ring.to_linear_ordered_ring.{u_1} α

That is, the kernel is going to have to look at linear_ordered_comm_ring.mk with its 40 fields, and linear_ordered_ring.mk with its 39 fields and so on

Sebastien Gouezel (Apr 22 2021 at 10:51):

So you're saying it's bad because of old style structures?

Mario Carneiro (Apr 22 2021 at 10:51):

leo was always making a big deal about "packing and unpacking" and I think this is it

Sebastien Gouezel (Apr 22 2021 at 10:59):

I wonder if we shouldn't make our algebraic spine flatter. I mean, have:

  • monoid extending has_mul and has_one
  • add_monoid extending has_add and has_zero
  • semiring extending monoid and add_monoid
  • ring extending semiring
  • field extending ring

and all the other ones built on these ones. The fact that a semiring is a monoid with zero would be registered with an instance, not direct inheritance. In this way, when you need to access the basic fields the path is much shorter than what we have currently. (And also, no need for old style structures since there are no common fields in the spine I described.)
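A minimal Lean 3 sketch of the shape of such a spine, with hypothetical flat_ names (to avoid clashing with the real classes) and each class cut down to a handful of fields; this is only an illustration of the idea, not how mathlib is currently set up:

class flat_monoid (α : Type*) extends has_mul α, has_one α :=
(mul_assoc : ∀ a b c : α, a * b * c = a * (b * c))
(one_mul : ∀ a : α, 1 * a = a)
(mul_one : ∀ a : α, a * 1 = a)

class flat_add_monoid (α : Type*) extends has_add α, has_zero α :=
(add_assoc : ∀ a b c : α, a + b + c = a + (b + c))
(zero_add : ∀ a : α, 0 + a = a)
(add_zero : ∀ a : α, a + 0 = a)

-- no old_structure_cmd needed: the two parents share no fields
class flat_semiring (α : Type*) extends flat_monoid α, flat_add_monoid α :=
(left_distrib : ∀ a b c : α, a * (b + c) = a * b + a * c)
(right_distrib : ∀ a b c : α, (a + b) * c = a * c + b * c)
(zero_mul : ∀ a : α, 0 * a = 0)
(mul_zero : ∀ a : α, a * 0 = 0)

-- "monoid with zero" sits off the spine ...
class flat_monoid_with_zero (α : Type*) extends flat_monoid α, has_zero α :=
(zero_mul : ∀ a : α, 0 * a = 0)
(mul_zero : ∀ a : α, a * 0 = 0)

-- ... and is reached from the spine by an instance, not by `extends`
-- (the two parent instances, then the two axioms):
instance flat_semiring.to_flat_monoid_with_zero (α : Type*) [flat_semiring α] :
  flat_monoid_with_zero α :=
⟨infer_instance, infer_instance, flat_semiring.zero_mul, flat_semiring.mul_zero⟩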

Mario Carneiro (Apr 22 2021 at 11:00):

Well part of the issue is that typeclass inference is finding these inconveniently long paths, even when short paths exist

Sebastien Gouezel (Apr 22 2021 at 11:00):

(with group extending monoid and add_group extending add_monoid, and instances from ring to add_group)

Mario Carneiro (Apr 22 2021 at 11:01):

old structures are optimized by having lots of "shortcut" instances, but shortcut instances don't work with lean 3 typeclass inference

Anne Baanen (Apr 22 2021 at 11:02):

(I expect this wouldn't help too much for the reasons Mario describes, but if we do it, we would also need to insert add_sub_monoid and div_inv_monoid at the appropriate places.)

Mario Carneiro (Apr 22 2021 at 11:02):

If typeclass inference was able to perform path contraction on the paths it finds that would help a lot

Mario Carneiro (Apr 22 2021 at 11:03):

for example, we have a way to register "whenever you find a path A.to_B (B.to_C x) use A.to_C x instead"

Mario Carneiro (Apr 22 2021 at 11:04):

even if A.to_C is not part of normal typeclass inference

Johan Commelin (Apr 22 2021 at 11:08):

Johan Commelin said:

How many lines of output does leanchecker have, roughly?

wc -l mathlib_f2984d51615166e96632e8efdfe7875575ad76ec.time
142221 mathlib_f2984d51615166e96632e8efdfe7875575ad76ec.time

Johan Commelin (Apr 22 2021 at 11:09):

https://gist.github.com/jcommelin/d1f5e95708ce61abb827216fb6463afc <- today's mathlib master

Mario Carneiro (Apr 22 2021 at 11:09):

Another reason why this particular example is bad is that nat.cast is not a projection, and so it is triggering a different heuristic. Normally when you have something like A.proj (B.to_A (D.to_B x)) = A.proj (C.to_A (D.to_C x)) it will do:

A.proj (B.to_A (D.to_B x)) =?= A.proj (C.to_A (D.to_C x))
B.proj (D.to_B x) =?= A.proj (C.to_A (D.to_C x))
D.proj x =?= A.proj (C.to_A (D.to_C x))
D.proj x =?= C.proj (D.to_C x)
D.proj x =?= D.proj x

However, when it is a defined function instead of a projection, it does this:

A.f (B.to_A (D.to_B x)) =?= A.f (C.to_A (D.to_C x))
B.to_A (D.to_B x) =?= C.to_A (D.to_C x)
A.mk (D.to_B x).proj1 (D.to_B x).proj2 ... =?= A.mk (D.to_C x).proj1 (D.to_C x).proj2 ...

and those proofs can get huge

Johan Commelin (Apr 22 2021 at 11:10):

I'm now starting the run on the pre_7084 commit.

Eric Wieser (Apr 22 2021 at 11:11):

Sebastien Gouezel said:

The fact that a semiring is a monoid with zero would be registered with an instance, not direct inheritance.

Is there any difference? For old_structure_cmd, isn't extend just a shorthand for "write this instance for me"?

Johan Commelin (Apr 22 2021 at 11:21):

cat mathlib_f2984d51615166e96632e8efdfe7875575ad76ec.time | grep "^[^0]" | wc -l
851

There are 851 declarations that take more than 1s on my machine (today's master).

Sebastien Gouezel (Apr 22 2021 at 11:29):

Anne Baanen said:

(I expect this wouldn't help too much for the reasons Mario describes, but if we do it, we would also need to insert add_sub_monoid and div_inv_monoid at the appropriate places.)

My point is exactly to not insert these at the appropriate places. Then, when Lean has to check that two fields of semirings are the same (in the bad situation that it doesn't use good heuristics and really wants to unfold everything), the shorter the path the better.

Of course, one would have instances from semiring to div_inv_monoid, to be able to use theorems on div_inv_monoids when one has a ring, but it wouldn't be baked into the definition of semirings, to have shorter paths.

Eric Wieser (Apr 22 2021 at 11:30):

Isn't "path shortness" determined by instance priority / ordering?

Eric Wieser (Apr 22 2021 at 11:30):

I don't think it distinguishes between "extends vs instance"

Eric Wieser (Apr 22 2021 at 11:31):

Other than the fact that the extends instances come first

Eric Wieser (Apr 22 2021 at 11:31):

(which I think means they actually have _lower_ priority in the case of a tie?)

Mario Carneiro (Apr 22 2021 at 11:32):

I think we might find a way out of this by setting priorities very carefully

Mario Carneiro (Apr 22 2021 at 11:33):

for example, the more levels of the hierarchy are jumped, the higher the priority
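A toy Lean 3 sketch of that kind of prioritized shortcut, with hypothetical classes in a throwaway namespace (an illustration of the idea only, not something mathlib does today):

namespace shortcut_demo

class C (α : Type*) := (c : α)
class B (α : Type*) extends C α
class A (α : Type*) extends B α

-- Without a shortcut, inference reaches `C α` from `[A α]` through the
-- two-step term `B.to_C (A.to_B h)`.  Registering the composite jump
-- directly, with a priority above the default 1000, makes the one-step
-- path the one that gets tried (and written into terms) first.
@[priority 1100] instance A.to_C' (α : Type*) [h : A α] : C α :=
@B.to_C α (@A.to_B α h)

end shortcut_demo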

Mario Carneiro (Apr 22 2021 at 11:35):

We still have the problem of shortcut instances though if we throw in all the composites. It is clear to me that having all pairwise composites is unambiguously the right solution to how to express the terms here so that they are both visually smaller and also less work for lean, but typeclass inference is really stupid about exploring exponentially many paths in the presence of a transitively closed graph of coercions

Mario Carneiro (Apr 22 2021 at 11:37):

(We can also be smarter about it and get away with n log n composites and still gain most of the benefits, for example with a simple spine like sebastien suggested. But total number of definitions in the environment has never been something we have attempted to optimize for, and I don't think there have been performance issues as a result, so I don't see any reason not to just compute the whole graph.)

Johan Commelin (Apr 22 2021 at 11:44):

I made R compute some stats for the run on today's master:

     Min.   1st Qu.    Median      Mean   3rd Qu.      Max.
  0.00001   0.00002   0.00002   0.02096   0.00002 122.06022

Johan Commelin (Apr 22 2021 at 11:44):

sd: 0.510882

Mario Carneiro (Apr 22 2021 at 11:46):

Another piece of information that should be relatively easy to harvest is the number of proof steps in the theorem, i.e. lines in the export format. By dividing time by lines we can get a better sense for those theorems that have heavy refls in them

Mario Carneiro (Apr 22 2021 at 11:47):

What are the keys on the data you have Johan? Are they by theorem name?

Sebastien Gouezel (Apr 22 2021 at 11:54):

Can you tell us more about the Max? 2 minutes to check one declaration?

Johan Commelin (Apr 22 2021 at 11:55):

@Sebastien Gouezel I'm working on it. It's the first time I'm using R in 10 years...

Johan Commelin (Apr 22 2021 at 11:55):

Johan Commelin said:

https://gist.github.com/jcommelin/d1f5e95708ce61abb827216fb6463afc <- today's mathlib master

Here is the data

Johan Commelin (Apr 22 2021 at 11:55):

@Mario Carneiro :up:

Mario Carneiro (Apr 22 2021 at 11:57):

122.060217    76322 finset.sdiff_eq_self_iff_disjoint

Mario Carneiro (Apr 22 2021 at 11:58):

lemma sdiff_eq_self_iff_disjoint {s t : finset α} : s \ t = s ↔ disjoint s t :=
by rw [sdiff_eq_self, subset_empty, disjoint_iff_inter_eq_empty]

not what I would have expected

Kevin Buzzard (Apr 22 2021 at 12:02):

Great, we have a system which takes two minutes to verify something which doesn't need a proof :-)

Mario Carneiro (Apr 22 2021 at 12:03):

Are we sure the data isn't skewed somehow? This one looks pretty suspicious

Mario Carneiro (Apr 22 2021 at 12:04):

like maybe it is attributing work done on the previous theorem, or the thread got paged out at the wrong moment

Mario Carneiro (Apr 22 2021 at 12:06):

0.000014    76309 uniform_continuous₂.uniform_continuous
0.000017    76310 Hausdorffification.of
0.000018    76311 finset.subset.antisymm_iff
0.000016    76312 finset.inter_subset_inter
0.000018    76313 finset.sdiff_inter_self
0.000018    76314 finset.superset.trans
0.000016    76315 sdiff_inf_self_left
0.000015    76316 finset.sdiff_inter_self_left
0.000014    76317 superset.equations._eqn_1
0.000018    76318 sdiff_le_sdiff
0.000015    76319 finset.sdiff_subset_sdiff
0.000019    76320 finset.sdiff_empty
0.000018    76321 finset.sdiff_eq_self
122.060217    76322 finset.sdiff_eq_self_iff_disjoint
0.000099    76323 compact_space_Icc.equations._eqn_1
0.000021    76324 equiv.prod_congr_left._proof_1
0.000018    76325 equiv.prod_congr_left._proof_2
0.000015    76326 equiv.prod_congr_left
0.000015    76327 fintype.exists_univ_list
0.000020    76328 equiv.perm.prod_extend_right._proof_1
0.000018    76329 equiv.perm.prod_extend_right._proof_2
0.000018    76330 equiv.perm.prod_extend_right
0.000016    76331 equiv.perm.prod_extend_right_apply_ne

I can see that it's not working on just one area, it has finset.basic and equiv.basic in there mixed with compact_space_Icc from topology.algebra.ordered

Mario Carneiro (Apr 22 2021 at 12:07):

So maybe it is actually multithreaded and the 122 seconds were spent on compact_space_Icc or something else

Yakov Pechersky (Apr 22 2021 at 12:08):

Or the GC hit

Mario Carneiro (Apr 22 2021 at 12:08):

This is C++, is there a GC?

Mario Carneiro (Apr 22 2021 at 12:08):

I think exprs are reference counted in lean

Johan Commelin (Apr 22 2021 at 12:08):

It really doesn't look to be multithreaded. But there could be other things happening.

Mario Carneiro (Apr 22 2021 at 12:09):

If it's not multithreaded then why the weird order? Is the export file like that?

Johan Commelin (Apr 22 2021 at 12:09):

Ooh, yes, it might be the export file. But I find that file unreadable.

Mario Carneiro (Apr 22 2021 at 12:10):

sigh. proof export isn't supposed to be this inscrutable

Johan Commelin (Apr 22 2021 at 12:11):

Here is a gist with 50 lines of context around sdiff_eq_self_iff_disjoint:
https://gist.github.com/5e86ede6ccf08595ef70978385678f00

Johan Commelin (Apr 22 2021 at 12:15):

The bunch of decls that took more than 10s on my machine:

21235   34.55023                                       bounded_continuous_function.mk_of_compact
32831   19.57642                                                     AddCommGroup.large_category
45873   58.12832                                                     X_in_terms_of_W._main._pack
46114   12.56731     category_theory.monoidal.Mon_functor_category_equivalence.functor._proof_41
47135   21.54670                                     euclidean_domain.gcd._main.equations._eqn_1
47328   14.08090                                                      iterated_deriv_within_univ
47517   13.20958                                                  continuous_linear_map.coe_fst'
48630   11.19849                                                             nat.primes.coe_pnat
49481   21.34276                                                          mvpfunctor.inhabited_M
49883   10.11784                                                      cfilter.to_filter._proof_3
50917   10.36216                                     mul_action.injective_of_quotient_stabilizer
51014   15.61487                                   intermediate_field.findim_fixed_field_eq_card
55342   19.34196                                                     path.trans.equations._eqn_1
56753   17.47420                                                      AddCommMon.add_comm_monoid
56968   12.00058                         category_theory.triangulated.triangle_rotation_unit_iso
60132   22.02939     AddCommGroup.category_theory.forget₂.category_theory.creates_limit._proof_2
64847   29.19659                                  function.injective.ordered_comm_ring._proof_24
67914   14.51483                        category_theory.with_terminal.is_iso_of_from_star._aux_2
72856   21.21235                                                                     rat.abs_def
76322  122.06022                                               finset.sdiff_eq_self_iff_disjoint
99483   16.29542                      category_theory.monad.forget_creates_colimits.new_cocone_ι
105394  14.19519                                         order_dual.semilattice_inf_bot._proof_3
106988  36.30792 Module.category_theory.monoidal_category.tensor_unit.comm_ring.equations._eqn_1
108881  11.93040                         category_theory.limits.prod.diag_map_fst_snd_comp_assoc
111950  11.38775   category_theory.limits.has_binary_biproduct.has_colimit_pair.equations._eqn_1
114456  55.18284                                            pos_num.ldiff._main.equations._eqn_3
121654  10.49787                               with_zero.canonically_ordered_add_monoid._proof_9
142182  16.71473            category_theory.functor.left_adjoint_of_equivalence.equations._eqn_1

Mario Carneiro (Apr 22 2021 at 12:16):

I think the file really is that disorganized

Johan Commelin (Apr 22 2021 at 12:17):

The 122s is probably a hiccup. Although the server load is currently negligible. But maybe 37 people decided to look up my personal homepage at exactly that time...

Mario Carneiro (Apr 22 2021 at 12:18):

I see two #DEFs in the file around your snippet, one to <something>.of and one to something.antisymm_iff. If I had to guess based on the other file I would say Hausdorffification.of and finset.subset.antisymm_iff

Johan Commelin (Apr 22 2021 at 12:18):

The data from just before #7084: https://gist.github.com/8a34e58f355df8c79349630e4f4dbe8b

Mario Carneiro (Apr 22 2021 at 12:19):

which means that two completely unrelated files are just mashed together here, meaning that the single threaded leanchecker is jumping around like a multithreaded checker would

Johan Commelin (Apr 22 2021 at 12:20):

Aha:

19933   33.04352   bounded_continuous_function.metric_space._proof_4
44039   57.23696 measure_theory.simple_func.semilattice_inf._proof_5
65492   13.15512                              nat.of_digits_one_cons
73812  121.33800                    equiv.perm.sign_prod_congr_right
103809  14.16039                            int.lor.equations._eqn_4
111199  53.86009                              is_local_extr_on.inter
138539  15.00112          times_cont_diff_within_at.equations._eqn_1

Johan Commelin (Apr 22 2021 at 12:20):

That looks suspicious. Again a >120s but totally different decl.

Johan Commelin (Apr 22 2021 at 12:21):

So I guess we need to run the export single-threadedly?

Mario Carneiro (Apr 22 2021 at 12:21):

However, it also means that the critical heavy refl proof step could be associated to any proof currently in progress, not necessarily the next #DEF

Mario Carneiro (Apr 22 2021 at 12:21):

That would be one way to fix it

Johan Commelin (Apr 22 2021 at 12:22):

Should I try: lean -j1 --recursive --export=mathlib.txt src/?

Mario Carneiro (Apr 22 2021 at 12:23):

Ideally there would be a way to construct this file in a multithreaded way without having five files getting mashed up. But all the IDs are sequential?

Johan Commelin (Apr 22 2021 at 12:24):

It starts like this:

1 #NS 0 u
2 #NS 0 eq
3 #NS 0 α
1 #UP 1
0 #ES 1
4 #NS 0 a
1 #EV 0
5 #NS 0 ᾰ
2 #EV 1
3 #ES 0
4 #EP #BD 5 2 3
5 #EP #BD 4 1 4
6 #EP #BI 3 0 5
6 #NS 2 refl
7 #EC 2 1
8 #EA 7 2
9 #EA 8 1
10 #EA 9 1
11 #EP #BD 4 1 10
12 #EP #BI 3 0 11
#IND 2 2 6 1 6 12 1
#QUOT
7 #NS 0 comm_group_with_zero
8 #NS 7 coe_norm_unit
9 #NS 0 decidable_eq
10 #NS 0 decidable_rel
11 #NS 0 not
12 #NS 0 false
#IND 0 12 3 0
13 #EP #BD 4 3 3
14 #EC 12
15 #EP #BD 5 1 14
16 #EL #BD 4 3 15
#DEF 11 13 16
13 #NS 0 decidable
14 #NS 0 p
2 #US 0
17 #ES 2
18 #EP #BD 14 3 17
15 #NS 13 is_false

Mario Carneiro (Apr 22 2021 at 12:25):

seems reasonable

Johan Commelin (Apr 22 2021 at 12:25):

If you say so

Mario Carneiro (Apr 22 2021 at 12:25):

it's building eq

Mario Carneiro (Apr 22 2021 at 12:25):

this is init.core

Johan Commelin (Apr 22 2021 at 12:26):

I'm now running it with -j1. Let's see if the result will differ significantly.

Mario Carneiro (Apr 22 2021 at 12:26):

we probably won't see any multithreading effects this early in the file

Johan Commelin (Apr 22 2021 at 12:29):

Snap... why am I seeing two processes with -j1 in top??

Johan Commelin (Apr 22 2021 at 12:30):

One must be a child of the other. They are using exactly the same amount of CPU and RAM. I don't understand linux

Sebastian Ullrich (Apr 22 2021 at 12:32):

Use htop :) (and then F5 (Tree))

Johan Commelin (Apr 22 2021 at 12:33):

Then it puts 45 processes with 0.0% CPU at the top of the list...

Johan Commelin (Apr 22 2021 at 12:34):

Aha, I found it, and indeed one is a child of the other.

Kevin Buzzard (Apr 22 2021 at 12:38):

Wooah check out htop! It's like top but upgraded to the 1990s!

Sebastien Gouezel (Apr 22 2021 at 12:45):

Mario Carneiro said:

Another reason why this particular example is bad is that nat.cast is not a projection, and so it is triggering a different heuristic. Normally when you have something like A.proj (B.to_A (D.to_B x)) = A.proj (C.to_A (D.to_C x)) it will do:

A.proj (B.to_A (D.to_B x)) =?= A.proj (C.to_A (D.to_C x))
B.proj (D.to_B x) =?= A.proj (C.to_A (D.to_C x))
D.proj x =?= A.proj (C.to_A (D.to_C x))
D.proj x =?= C.proj (D.to_C x)
D.proj x =?= D.proj x

However, when it is a defined function instead of a projection, it does this:

A.f (B.to_A (D.to_B x)) =?= A.f (C.to_A (D.to_C x))
B.to_A (D.to_B x) =?= C.to_A (D.to_C x)
A.mk (D.to_B x).proj1 (D.to_B x).proj2 ... =?= A.mk (D.to_C x).proj1 (D.to_C x).proj2 ...

and those proofs can get huge

Still, how come main Lean has no problem checking this in milliseconds, while it takes seconds to leanchecker? Are they using different heuristics to check rfl?

Sebastien Gouezel (Apr 22 2021 at 12:49):

Eric Wieser said:

Sebastien Gouezel said:

The fact that a semiring is a monoid with zero would be registered with an instance, not direct inheritance.

Is there any difference? For old_structure_cmd, isn't extend just a shorthand for "write this instance for me"?

I don't really understand the inner workings. But what I can say for sure is that, when optimizing for the nsmul and gsmul refactors, I have seen a bunch of proofs where the 1 in a comm_ring was written by composing the whole direct inheritance path through ring, semiring, monoid_with_zero, monoid, has_one. I guess just removing monoid_with_zero from this path would give some gain. I'll try to do some experiments in this direction.

Mario Carneiro (Apr 22 2021 at 12:51):

Still, how come main Lean has no problem checking this in milliseconds, while it takes seconds to leanchecker? Are they using different heuristics to check rfl?

I think so. The elaborator and kernel use separate reducibility hints: @[irreducible] and @[reducible] act on the elaborator, while the kernel uses definition height and opaque and abbrev

Mario Carneiro (Apr 22 2021 at 12:53):

There isn't much front end support for changing the kernel reducibility settings to something other than the default, so it may well be the case that lean knows a bunch of heuristics that we've taught it and the kernel is floundering without those hints

Johan Commelin (Apr 22 2021 at 13:01):

export with -j1 is running for > 35 minutes now... :smiley:

Johan Commelin (Apr 22 2021 at 13:10):

diff mathlib.txt mathlib.txt.bu | wc -l
1638141

Mario Carneiro (Apr 22 2021 at 13:11):

how many lines are they?

Mario Carneiro (Apr 22 2021 at 13:12):

I imagine the diff is going to be "almost everything" because of the intermingling

Johan Commelin (Apr 22 2021 at 13:13):

They are about ~ 20.000.000 lines

Mario Carneiro (Apr 22 2021 at 13:13):

both?

Johan Commelin (Apr 22 2021 at 13:13):

1sec

Mario Carneiro (Apr 22 2021 at 13:13):

I would guess that they end up at exactly the same number of lines, unless lean nondeterminism is worse than I thought

Johan Commelin (Apr 22 2021 at 13:14):

  20268388 ../mathlib_post_7084/mathlib.txt     -- -j1
  20268430 ../mathlib_post_7084/mathlib.txt.bu  -- regular

Mario Carneiro (Apr 22 2021 at 13:14):

heh

Mario Carneiro (Apr 22 2021 at 13:15):

Actually this unsequenced output would cause problems with determinism if we ever wanted to cache those files

Mario Carneiro (Apr 22 2021 at 13:15):

because I would bet that even running "regular" twice in a row will result in completely different files

Johan Commelin (Apr 22 2021 at 13:15):

0.391000     3605 rat.num_denom_cases_on._main

Is this surprising?

Mario Carneiro (Apr 22 2021 at 13:16):

That's an autogenerated theorem?

Johan Commelin (Apr 22 2021 at 13:16):

I should probably check that -j1 is deterministic. But first I'm running leanchecker

Mario Carneiro (Apr 22 2021 at 13:16):

well I guess it contains some user stuff, let me check

Mario Carneiro (Apr 22 2021 at 13:16):

@[elab_as_eliminator] def {u} num_denom_cases_on {C : ℚ → Sort u}
   : ∀ (a : ℚ) (H : ∀ n d, 0 < d → (int.nat_abs n).coprime d → C (n /. d)), C a
| ⟨n, d, h, c⟩ H := by rw num_denom'; exact H n d h c

Mario Carneiro (Apr 22 2021 at 13:17):

why is it always the 1 line definitions

Mario Carneiro (Apr 22 2021 at 13:18):

no this one should be fast

Mario Carneiro (Apr 22 2021 at 13:18):

there are no hard unification problems here

Johan Commelin (Apr 22 2021 at 13:18):

So maybe there is still something that creates noise?

Mario Carneiro (Apr 22 2021 at 13:19):

what's around it in the list

Sebastien Gouezel (Apr 22 2021 at 13:20):

Mario Carneiro said:

Still, how come main Lean has no problem checking this in milliseconds, while it takes seconds to leanchecker? Are they using different heuristics to check rfl?

I think so. The elaborator and kernel use separate reducibility hints: @[irreducible] and @[reducible] act on the elaborator, while the kernel uses definition height and opaque and abbrev

I thought that the kernel was always checking the proof of a lemma once it was done, even in interactive mode, so this would mean the elaborator is feeding to the kernel a proof that is easy enough to check. Do you mean that, in main Lean, it checks some decorated proof with additional hints that are not stored in the olean and therefore not available to leanchecker? Or just that leanchecker does not use these hints? (Or am I wrong to believe that the kernel checks every proof even in interactive mode?)

Mario Carneiro (Apr 22 2021 at 13:22):

I thought that the kernel was always checking the proof of a lemma once it was done, even in interactive mode, so this would mean the elaborator is feeding to the kernel a proof that is easy enough to check.

Yes, it is done, but proofs are not provided for defeq tasks, so a proof using a heavy refl is hard for the elaborator and hard for the kernel, and even after it is diced and pickled and given to the external checker it's still hard

Johan Commelin (Apr 22 2021 at 13:23):

Mario Carneiro said:

what's around it in the list

I can only see this when it's done. The current script saves everything to a file, but prints all intermediate rows that contain 000.

Mario Carneiro (Apr 22 2021 at 13:23):

Do you mean that, in main Lean, it checks some decorated proof with additional hints that are not stored in the olean and therefore not available to leanchecker? Or just that leanchecker does not use these hints?

Reducibility hints (like @[irreducible]) are definitely in the olean (pretty much everything has to be in the olean because this is the only way data from one lean file gets to another), but they may not be in the export file

Mario Carneiro (Apr 22 2021 at 13:24):

Looking at https://github.com/leanprover-community/lean/blob/master/doc/export_format.md I don't see anything that suggests that it is possible to apply any attributes to anything

Mario Carneiro (Apr 22 2021 at 13:26):

this export format is super bare-bones, it needs some love

Johan Commelin (Apr 22 2021 at 13:28):

Here is something I don't understand. Random part of the intermediate output:

0.000018    34000 set.Iic.lattice._proof_6
0.000014    35000 associates.bounded_lattice._proof_5
0.000018    36000 computability.encoding
0.000016    37000 padic_norm_e.add_eq_max_of_ne'
0.000015    38000 CommGroup.comm_group.to_group.category_theory.bundled_hom.parent_projection
0.000014    39000 equiv.d_array_equiv_fin._match_1
0.000018    40000 tactic.abel.normalize_mode
0.000015    41000 fintype.fintype_prod_right

Johan Commelin (Apr 22 2021 at 13:28):

This suggests that it should zip through 1000 decls in about 0.015 seconds. But in practice it takes much longer.

Mario Carneiro (Apr 22 2021 at 13:28):

that still looks pretty random

Johan Commelin (Apr 22 2021 at 13:29):

@Mario Carneiro There's 999 unprinted decls between every 2 rows.

Mario Carneiro (Apr 22 2021 at 13:30):

There is some deviation in the samples, right? You go through fast definitions a lot faster than slow definitions, so if you pick a random definition it's probably fast

Johan Commelin (Apr 22 2021 at 13:30):

I see. So my sample size is just way too small.

Mario Carneiro (Apr 22 2021 at 13:31):

well it's more like there is a consistent bias with this sampling method, since you aren't sampling by time

Johan Commelin (Apr 22 2021 at 13:32):

I'm picking the 145 rows whose row number ends in "000". The chance that one of those is slow is apparently quite low.

Mario Carneiro (Apr 22 2021 at 13:33):

if you were to print the definition that lean is working on once a second, you would get much more slow definitions

Johan Commelin (Apr 22 2021 at 13:34):

Aha, so I should pipe tee to watch instead of to grep "000 "

Johan Commelin (Apr 22 2021 at 13:45):

Bingo, a slow proof with index 0 mod 1000:

2.419336    80000 preorder_hom.complete_lattice._proof_8

Mario Carneiro (Apr 22 2021 at 13:52):

src#preorder_hom.complete_lattice

Johan Commelin (Apr 22 2021 at 13:53):

Now which one is _proof_8?

Mario Carneiro (Apr 22 2021 at 13:53):

theorem preorder_hom.complete_lattice._proof_8 : ∀ {α : Type u_1} [_inst_1 : preorder α] {β : Type u_2} [_inst_4 : complete_lattice β] (a b : α →ₘ β),
  a ⊓ b ≤ a :=
λ {α : Type u_1} [_inst_1 : preorder α] {β : Type u_2} [_inst_4 : complete_lattice β], lattice.inf_le_left

Mario Carneiro (Apr 22 2021 at 13:54):

I guess this is coming from .. (_ : lattice (α →ₘ β))

Johan Commelin (Apr 22 2021 at 13:54):

Ouch, I just noticed in htop that leanchecker is still hopping between different threads. Even though there is only one thread active at a given time. Not sure if that is an issue.

Mario Carneiro (Apr 22 2021 at 13:55):

you mean different cores? I think the OS will normally do that

Johan Commelin (Apr 22 2021 at 14:03):

leanchecker finished on post_7084 in about 50 minutes. https://gist.github.com/c54d4dda4b22db228c3e91ce5d9ae804

Johan Commelin (Apr 22 2021 at 14:04):

> summary(x[,1]);
     Min.   1st Qu.    Median      Mean   3rd Qu.      Max.
  0.00001   0.00002   0.00002   0.02079   0.00002 121.66725
> sd(x[,1]);
[1] 0.5177284

Johan Commelin (Apr 22 2021 at 14:05):

20611   34.59937                             bounded_continuous_function.metric_space._proof_8
32460   14.75494                                    SemiRing.has_forget_to_AddCommMon._proof_1
32550   15.53428           Ring.category_theory.forget₂.category_theory.creates_limit._proof_3
45565   57.96984                                    Module.forget₂_AddCommGroup_is_equivalence
45868   10.62882 category_theory.monoidal.Mon_functor_category_equivalence.inverse_map_hom_app
46825   21.59894                                      additive.add_left_cancel_monoid._proof_4
47124   13.88599                                               times_cont_diff_bump.support_eq
47211   14.46456                             local_homeomorph.mdifferentiable.mfderiv._proof_3
48034   11.88654                                                          polynomial.roots_one
48975   20.88203      CommGroup.category_theory.forget₂.category_theory.creates_limit._proof_7
49289   10.00095                                            rbtree.fold._main.equations._eqn_1
50327   11.78787                                                     polynomial.smul_eval_smul
50424   14.23475                                    intermediate_field.to_subalgebra_injective
54832   18.30227                                                             hash_map.cases_on
56143   18.56595                                                     subring.to_submonoid_mono
56455   17.70191                                                        quotient_group.coe_inv
59635   23.99585                                                           AddGroup.limit_cone
63901   11.02492                                                       order.ideal.top_carrier
64361   22.64404                          function.injective.linear_ordered_comm_ring._proof_5
67257   16.05250                          inner_product_space.of_core.to_normed_space._proof_1
72118   20.85422                                        category_theory.limits.is_colimit.of_π
75663  121.66725                                                           sdiff_inf_self_left
98402   20.69904                                                        power_series.coeff_map
104327  14.18050                                       order_dual.semilattice_inf_bot._proof_4
105831  48.03556                                       hyperreal.infinite_pos_mul_infinite_neg
107723  12.05554                                                pi_tensor_product.lift.unique'
110800  11.50262                      add_equiv.add_equiv_of_unique_of_unique.equations._eqn_1
113312  54.91094              formal_multilinear_series.comp_partial_sum_target_tendsto_at_top
120427  10.48020       has_finite_inter.finite_inter_closure_has_finite_inter.equations._eqn_1
140717  16.62972                                            ordnode.find_max'.equations._eqn_1

Johan Commelin (Apr 22 2021 at 14:05):

What's the best thing to try next?

Johan Commelin (Apr 22 2021 at 14:05):

Shall I run the -j1 export again? To see if it is deterministic?

Mario Carneiro (Apr 22 2021 at 14:14):

@[simp] lemma sdiff_inf_self_left : y \ (y ⊓ x) = y \ x := by rw [inf_comm, sdiff_inf_self_right]

what is it with one line functions, that's the fifth one line bad guy

Mario Carneiro (Apr 22 2021 at 14:14):

What's the vicinity of sdiff_inf_self_left? That should tell us if it is respecting file divisions

Mario Carneiro (Apr 22 2021 at 14:15):

0.000018    75656 is_associative.drec_on
0.000017    75657 uniform_continuous₂.uniform_continuous
0.000017    75658 Hausdorffification.of
0.000016    75659 finset.subset.antisymm_iff
0.000017    75660 finset.inter_subset_inter
0.000017    75661 finset.sdiff_inter_self
0.000016    75662 finset.superset.trans
121.667253    75663 sdiff_inf_self_left
0.000101    75664 finset.sdiff_inter_self_left
0.000024    75665 superset.equations._eqn_1
0.000015    75666 sdiff_le_sdiff
0.000014    75667 finset.sdiff_subset_sdiff
0.000014    75668 finset.sdiff_empty
0.000014    75669 finset.sdiff_eq_self
0.000014    75670 finset.sdiff_eq_self_iff_disjoint
0.000014    75671 compact_space_Icc.equations._eqn_1
0.000014    75672 equiv.prod_congr_left._proof_1
0.000020    75673 equiv.prod_congr_left._proof_2
0.000018    75674 equiv.prod_congr_left
0.000016    75675 fintype.exists_univ_list

Oh hey I remember this place

Mario Carneiro (Apr 22 2021 at 14:16):

You can see sdiff_eq_self_iff_disjoint, the previous bad guy, is only 6 definitions later

Mario Carneiro (Apr 22 2021 at 14:17):

so clearly the culprit is somewhere around here and these one liners are taking the blame

Mario Carneiro (Apr 22 2021 at 14:17):

but the files are still mixed up :(

Kevin Buzzard (Apr 22 2021 at 14:22):

So in one run finset.sdiff_eq_self_iff_disjoint took 2 minutes, and in another run it was sdiff_inf_self_left?

Kevin Buzzard (Apr 22 2021 at 14:23):

Now we need @Floris van Doorn to come along and say something like "this is because there is an @ character at position 66 on line 132 in an unrelated Lean file" or something.

Johan Commelin (Apr 22 2021 at 14:25):

Kevin Buzzard said:

So in one run finset.sdiff_eq_self_iff_disjoint took 2 minutes, and in another run it was sdiff_inf_self_left?

That might have been because of multithreading in the previous runs.

Johan Commelin (Apr 22 2021 at 14:25):

Mario Carneiro said:

but the files are still mixed up :(

I have no idea how to fix this.

Johan Commelin (Apr 22 2021 at 14:46):

diff mathlib.txt mathlib.txt.1 | wc -l
0

Johan Commelin (Apr 22 2021 at 14:46):

So it seems like with -j1 the results are deterministic.

Johan Commelin (Apr 22 2021 at 14:47):

Now I'm regenerating the pre_7084 using -j1.

Johan Commelin (Apr 22 2021 at 15:00):

So there is a difference between running the export without -j or with -j. But the parameter to -j doesn't matter. :oops:
Because I ran the export script on pre_7084 with -j14 before. And now I ran it with -j1. But the diff is 0 on the two outputs.

Johan Commelin (Apr 22 2021 at 15:02):

In particular, here are the stats for pre_7084:
https://gist.github.com/779d29ce1724ce82ed8588caf5a0490d

> summary(x[,1]);
     Min.   1st Qu.    Median      Mean   3rd Qu.      Max.
  0.00001   0.00002   0.00002   0.01072   0.00002 121.33800
> sd(x[,1]);
[1] 0.4194959
>
> y <- subset(x, t > 10.0);
> print(nrow(y));
[1] 7
> print(y[,c("t", "d")]);
               t                                                   d
19933   33.04352   bounded_continuous_function.metric_space._proof_4
44039   57.23696 measure_theory.simple_func.semilattice_inf._proof_5
65492   13.15512                              nat.of_digits_one_cons
73812  121.33800                    equiv.perm.sign_prod_congr_right
103809  14.16039                            int.lor.equations._eqn_4
111199  53.86009                              is_local_extr_on.inter
138539  15.00112          times_cont_diff_within_at.equations._eqn_1

Johan Commelin (Apr 22 2021 at 15:02):

And as a reminder, the summary stats for post_7084:

> summary(x[,1]);
     Min.   1st Qu.    Median      Mean   3rd Qu.      Max.
  0.00001   0.00002   0.00002   0.02079   0.00002 121.66725
> sd(x[,1]);
[1] 0.5177284

Johan Commelin (Apr 22 2021 at 15:03):

What's weird: the 121s is on a totally different decl.

Johan Commelin (Apr 22 2021 at 15:03):

The mean time almost doubled, but the box-plot doesn't change.

Johan Commelin (Apr 22 2021 at 15:04):

So the fast 75% remains the same. But the slow 25% just got way slower, basically.

Johan Commelin (Apr 22 2021 at 15:07):

Johan Commelin said:

What's weird: the 121s is on a totally different decl.

I'm not sure what to do with this. It seems that I'm not generating reliable data.

Johan Commelin (Apr 22 2021 at 15:09):

Standard deviation bumped from 0.41 to 0.51.

Eric Wieser (Apr 22 2021 at 15:09):

Can you plot some histograms?

Johan Commelin (Apr 22 2021 at 15:10):

I'm really bad at working with such datasets. No idea how to do that.

Johan Commelin (Apr 22 2021 at 15:11):

Here's the data
pre: https://gist.github.com/779d29ce1724ce82ed8588caf5a0490d
post: https://gist.github.com/c54d4dda4b22db228c3e91ce5d9ae804

Rémy Degenne (Apr 22 2021 at 15:19):

It's not only the 120s getting allocated to different lemmas. The 30s ones change as well. For example, in the "pre" gist, bounded_continuous_function.metric_space._proof_4 is 33s and the other _proof_i are fast; in the "post" gist, bounded_continuous_function.metric_space._proof_8 is 34s and the other are fast.

Johan Commelin (Apr 22 2021 at 15:29):

No idea how to fix that.

Rémy Degenne (Apr 22 2021 at 15:29):

and I looked at a few declarations with 0.1xxx seconds in one gist: the 15 or so examples I looked at are fast in the other file, and I don't see a slower declaration around the same line. This works both ways (slow in pre is fast in post, and the other way around).

Bryan Gin-ge Chen (Apr 22 2021 at 15:38):

Since I'm seeing a lot of sdiff above, it makes me wonder if I screwed something up badly in #6775, which added an intermediate "generalized Boolean algebra" class.

Mario Carneiro (Apr 22 2021 at 15:44):

histograms.png

Mario Carneiro (Apr 22 2021 at 15:48):

Here's a histogram of pre (yellow) and post (blue). The vast majority are in the first plot, which has all the 0.0001 second definitions. Those are cut out in the second plot, where the bulk are around 0.5-4 seconds, and the shift in the mean is visible. Excluding those and switching to a log scale is the third plot, and there are a lot more blue ones falling in this class as well
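For reference, a rough R sketch of how an overlaid histogram like this can be produced from the .time files Johan has been generating (the file names below are made up, and it assumes each line has exactly three whitespace-separated fields: time, index, declaration name):

# hypothetical file names; columns: elapsed time, line index, declaration name
pre  <- read.table("mathlib_pre_7084.time",  col.names = c("t", "n", "d"))
post <- read.table("mathlib_post_7084.time", col.names = c("t", "n", "d"))

# drop the ~1e-5 s bulk and compare the slow tail on a log scale
breaks <- seq(-2, 3, by = 0.25)
hist(log10(pre$t[pre$t > 0.01]), breaks = breaks, col = rgb(1, 0.8, 0, 0.5),
     xlab = "log10(seconds per declaration)", main = "leanchecker, slow declarations")
hist(log10(post$t[post$t > 0.01]), breaks = breaks, col = rgb(0, 0, 1, 0.3), add = TRUE)
legend("topright", c("pre #7084", "post #7084"),
       fill = c(rgb(1, 0.8, 0, 0.5), rgb(0, 0, 1, 0.3)))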

Eric Wieser (Apr 22 2021 at 15:52):

Dammit, you beat me to it

Mario Carneiro (Apr 22 2021 at 15:56):

This doesn't cover the order in the file though. Maybe it's possible to find a pattern by looking at how the peaks shift around

Mario Carneiro (Apr 22 2021 at 15:56):

but really, lean needs to be fixed so that it isn't producing such a crappy data set

Eric Wieser (Apr 22 2021 at 15:59):

A similar graph with everything on a log plot
histograms.png

Rémy Degenne (Apr 22 2021 at 16:04):

Funny how there is a clearly separated mode for slow declarations. It looks like the plot one would expect if there was only one main cause for slowness, and the declaration is slow iff it involves that cause.

Johan Commelin (Apr 22 2021 at 16:08):

Mario Carneiro said:

but really, lean needs to be fixed so that it isn't producing such a crappy data set

hmmz... do we have any idea where to start looking for such a fix? To me that sounds like a needle in a 230k-line C++ haystack.

Eric Wieser (Apr 22 2021 at 16:08):

Here's one showing how the sequential time breaks down:
duration.png

No obvious pattern suggesting the system is running some task messing with our timing

Johan Commelin (Apr 22 2021 at 16:10):

Just for completeness: export files were generated using lean -j1 --recursive --export=mathlib.txt src/
These were then fed to leanchecker using

leanchecker +leanprover-community/lean:3.29.0 -v mathlib.txt 2>&1 \
  | nl -w8 -s' ' \
  | ts -i -m %.s \
  | tee mathlib.time \
  | grep "000 "

Eric Wieser (Apr 22 2021 at 16:19):

It's possible to export just up to a given file, right?

Eric Wieser (Apr 22 2021 at 16:19):

Perhaps we should try exporting a file that contains a 60s / 120s lemma, and see if it persists

Mario Carneiro (Apr 22 2021 at 16:20):

Rémy Degenne said:

Funny how there is a clearly separated mode for slow declarations. It looks like the plot one would expect if there was only one main cause for slowness, and the declaration is slow iff it involves that cause.

My bet is that this cause is unpacking an algebraic structure

Mario Carneiro (Apr 22 2021 at 16:21):

well every export file will contain init.core and a whole bunch of prelude files no matter what

Mario Carneiro (Apr 22 2021 at 16:22):

unless you find a way to make an interesting lemma from scratch using prelude

Johan Commelin (Apr 22 2021 at 18:45):

@Gabriel Ebner @Floris van Doorn do you have any clue what's going on with leanchecker here?

Yaël Dillies (Apr 22 2021 at 19:35):

Is it useful/doable to do this?

Take all instances, the dependency graph between them, assign each of them a number so that an instance of number n only depends on instances of number < n (the maximal definition depth should do it, unless we also want this number to be unique to serve as an identifier), compute the transitive closure of this graph, save it in a file. Then, every time Lean needs to come up with an instance I given that it has instances I_1, ..., I_n at its disposal, it fetches the number of each of I, I_1, ..., I_n (say d, d_1, ..., d_n) and for each I_i checks whether one of the instances whose number is n is actually the one we want. If we decide upon unique identifiers (in which case we could simply take it to be the order in which the compiler processes the instances), then the check is even simpler: you just have to look up one value.

Thus, Lean avoids getting in a rabbit hole by following the route of a too weak instance. But maybe Lean sometimes automatically generates high instances from lower ones? In that case, my idea doesn't quite work, unless we also ask first for Lean to come up with all instances it can, and slide down the dependencies from them.

Johan Commelin (Apr 22 2021 at 20:11):

What I know is that the algorithm for typeclass search is completely overhauled in Lean 4, and it's now performing much better and solves a lot of issues that were present in Lean 3.

Johan Commelin (Apr 22 2021 at 20:12):

I don't think it makes much sense to do a complete overhaul of the Lean 3 algorithm. It's better to see if we can just speed up leanchecker somehow, and focus the rest of our energy on mathport.

Sebastien Gouezel (Apr 22 2021 at 20:18):

My preference would be to disable leanchecker for now because of its shortcomings, but I understand Mario is strongly against this. This means we shouldn't merge #7255, though, because it again adds 30 minutes to the leanchecker time, and then most bors builds wouldn't finish.

Bryan Gin-ge Chen (Apr 22 2021 at 20:19):

It'd be OK if the mathlib build times were <4h as they were before, but it seems that they also increased by about half an hour recently.

Sebastien Gouezel (Apr 22 2021 at 20:21):

When PRs only touch leaves of the hierarchy, everything is fine, but we have a lot of PRs touching basic files (at least one in each batch, say), so most of the time things are rebuilt from scratch.

Eric Wieser (Apr 22 2021 at 20:31):

Can we switch to one of the lean-checker alternatives?

Eric Wieser (Apr 22 2021 at 20:31):

Are they faster?

Oliver Nash (Apr 22 2021 at 21:04):

Does Leanchecker consume the olean files? Could we feed it these as they are generated and so parallelise it with the build?

Bryan Gin-ge Chen (Apr 22 2021 at 21:11):

No, leanchecker works off a text file which is generated by lean --export.

Scott Morrison (Apr 22 2021 at 22:34):

Hausdorffification.of certainly sounds suspicious to me. :-) Anything with ification in the name is surely doing some rfl.

Johan Commelin (Apr 23 2021 at 06:23):

I think there are several things that we should think about:

  1. Can we make lean + leanchecker generate accurate timing data? There is good reason to believe that the timing data that I generated yesterday is crappy.
  2. Are there ways in which we can speed up lean and/or leanchecker without changing mathlib's .lean files?
  3. Are there ways in which we can speed up by rewriting some proofs in mathlib?
  4. Are there faster alternatives to leanchecker?
  5. How bad would it be to disable leanchecker until we have fixed (1)-(3)?

Scott Morrison (Apr 23 2021 at 07:06):

Re: 4) I've been trying to run trepplein, running ./target/universal/stage/bin/trepplein -s ../mathlib/mathlib.txt , so far to no avail. It starts up, rapidly consumes 8gb of ram, and then sits there silently producing no output. (My amateur diagnosis is that mathlib is too big for it... :-)

Unfortunately trepplein doesn't seem to have a mode in which it just prints the names of the declarations as it processes them: you can either print the whole type signature (way too verbose!) or nothing.

Scott Morrison (Apr 23 2021 at 07:19):

I'm running [tc](https://github.com/dselsam/tc) now.

Sebastien Gouezel (Apr 23 2021 at 07:21):

What about disabling leanchecker on the PRs, but running it once a day and letting it create an issue if it sees a problem? Like the action we have every day on LFTCM2020 trying to merge mathlib master.

Johan Commelin (Apr 23 2021 at 07:25):

I think that is similar to my premaster proposal. And I think it could be a good intermediate solution.

Yaël Dillies (Apr 23 2021 at 07:57):

Sebastien Gouezel said:

When PRs only touch leaves of the hierarchy, everything is fine, but we have a lot of PRs touching basic files (at least one in each batch, say), so most of the time things are rebuilt from scratch.

Can we thus act smart and make Bors pack together the PRs that touch basic files?

Johan Commelin (Apr 23 2021 at 07:59):

That doesn't seem very easy. We would have to manually assign priorities based on how "basic" we think PRs are. And if you bors merge a PR with high priority, it will cancel the running batch if it has lower priority.

Johan Commelin (Apr 23 2021 at 08:00):

Priorities are the only way to influence how bors puts things in different batches, as far as I can see.

Scott Morrison (Apr 23 2021 at 08:30):

Actually, trepplein is awesome:

$ time ./target/universal/stage/bin/trepplein -s ../mathlib/mathlib.txt
-- successfully checked 145490 declarations

real    13m0.611s
user    14m10.181s
sys 0m10.675s

Scott Morrison (Apr 23 2021 at 08:31):

On the other hand tc seems to be really struggling with the material in ring_quot...

Johan Commelin (Apr 23 2021 at 08:31):

Is there some hidden option to make it output some timing data?

Scott Morrison (Apr 23 2021 at 08:32):

I don't think so. It's written in Scala, which I used to speak, so I guess I can try to add this. Gabriel will be more efficient. :-)

Oliver Nash (Apr 23 2021 at 08:36):

If we do switch to trepplein, would it be worth digging up an old issue that leanchecker caught and demonstrating that trepplein would also flag it?

Yaël Dillies (Apr 23 2021 at 09:09):

Johan Commelin said:

Priorities are the only way to influence how bors puts things in different batches, as far as I can see.

Hmm... That sounds like we could open a feature request.

Johan Commelin (Apr 23 2021 at 09:19):

If you give the feature request a high priority it might end up in a batch that they start working on early :rofl:

Sebastien Gouezel (Apr 23 2021 at 10:15):

Scott Morrison said:

On the other hand tc seems to be really struggling with the material in ring_quot...

Is this before or after #7120 has been merged?

Scott Morrison (Apr 23 2021 at 10:39):

Interesting, tc eventually got past that, but now fails:

DEF(27959): .category_theory.sieve.generate
IND(654): .category_theory.presieve.singleton[.v,.u]
IDeclError (TypeCheckError (TypeExpected (Pi: (Local <.#_system.5>) -> (App: '.set' @ [(App: '.quiver.hom' @ [(Local <.#_system.5>),(App: '.category_theory.category_struct.to_quiver' @ [(Local <.#_system.5>),(App: '.category_theory.category.to_category_struct' @ [(Local <.#_system.5>),(Local <.#_system.6>)])]),#0,(Local <.#_system.7>)])]))) "ensureSort")

Johan Commelin (Apr 23 2021 at 13:14):

Oliver Nash said:

If we do switch to trepplein would it be worth digging up an old issue that leanchecker caught and demonstrating that trepplein would also flag it?

Does anyone know where we can quickly find such issues?

Johan Commelin (Apr 23 2021 at 13:14):

https://github.com/leanprover-community/mathlib/issues?q=is%3Aissue+leanchecker is empty

Scott Morrison (Apr 23 2021 at 13:33):

Was it https://leanprover.zulipchat.com/#narrow/stream/113488-general/topic/leanchecker.20timeout

Scott Morrison (Apr 23 2021 at 13:34):

or https://leanprover.zulipchat.com/#narrow/stream/144837-PR-reviews/topic/.236375/near/228434456

Scott Morrison (Apr 23 2021 at 13:34):

or https://leanprover.zulipchat.com/#narrow/stream/144837-PR-reviews/topic/.236572.20crashes.20leanchecker

Johan Commelin (Apr 23 2021 at 13:35):

From the topmost post in that last one:

commit c8892e44 is the offending one.

Johan Commelin (Apr 23 2021 at 13:36):

That gives a pointer to a commit that we can try trepplein on. But we'll first need to see if that commit still exists somewhere.

Johan Commelin (Apr 23 2021 at 13:37):

https://github.com/leanprover-community/mathlib/pull/6572/commits/c8892e44fe95f8fe8b7892aa83b6ce0466fba8f5
GitHub still seems to have it.

Eric Wieser (Apr 23 2021 at 13:37):

That last one wasn't actually a bug in leanchecker at all, though; it was a bug in the exported output.

Johan Commelin (Apr 23 2021 at 13:37):

ooh :sad:

Eric Wieser (Apr 23 2021 at 13:37):

So it should be caught by any checker.

Johan Commelin (Apr 23 2021 at 13:38):

aah, so it's actually a good test case

Eric Wieser (Apr 23 2021 at 13:38):

In fact, the commit isn't enough to reproduce it - you need the corrupt olean files that were uploaded by that build

Johan Commelin (Apr 23 2021 at 13:38):

Right, so we need to use an old version of lean

Johan Commelin (Apr 23 2021 at 13:38):

Hopefully elan can find it for us.

Bryan Gin-ge Chen (Apr 23 2021 at 13:49):

If I'm not mistaken, I think the issue where Lean would accept bad oleans due to a hash collision should now be fixed since we now run lean --make src/ twice in succession in CI.

Bryan Gin-ge Chen (Apr 23 2021 at 13:51):

Well, not fixed, but sufficiently worked around!

Eric Wieser (Apr 23 2021 at 13:54):

How so?

Eric Wieser (Apr 23 2021 at 13:55):

I guess we could double check by amending that commit with an updated CI script

Bhavik Mehta (Apr 24 2021 at 16:55):

It seems like https://github.com/leanprover-community/mathlib/runs/2427448873 is taking over an hour to install elan - I don't claim to understand how the CI works but this seems wrong to me :| Is there something I can do about this?

Markus Himmel (Apr 24 2021 at 17:02):

This is probably just a display bug; see https://leanprover.zulipchat.com/#narrow/stream/113488-general/topic/CI.20hanging.20on.20trying.20to.20find.20oleans.20cache

Oliver Nash (Apr 24 2021 at 17:02):

I think that build is actually working on the "leanpkg build" step and a callback to update the view we're seeing has just been lost somewhere in the ether.

Ben Toner (Apr 24 2021 at 19:01):

Hi, I'm Ben! I'm new, so I'm not sure this is helpful, but I modified leanchecker so that it outputs timing data. Code here.
The command to run it is the same as before: leanchecker -v export.out > leanchecker.txt 2>&1
In another terminal, you can run watch -n 10 sort --general-numeric-sort --reverse leanchecker.txt to get a live view of what's slow.
Output (for mathlib at f15887a18). leanchecker_sorted.txt leanchecker.txt

Top 20:

96.884151 s: lie_module.derived_series_le_lower_central_series
40.613059 s: has_fpower_series_at.comp
38.592329 s: cauchy_seq_finset_of_geometric_bound
13.360009 s: Algebra.forget₂_Ring_preserves_limits_aux.equations._eqn_1
12.736091 s: Algebra.forget₂_Ring_preserves_limits_aux
11.605639 s: mv_power_series.X_pow_dvd_iff
10.258607 s: AddCommGroup.category_theory.forget₂.category_theory.creates_limit._proof_7
9.939601 s: to_euclidean.equations._eqn_1
9.142622 s: CommGroup.category_theory.forget₂.category_theory.creates_limit._proof_7
8.893232 s: smooth_bump_covering.exists_immersion_findim
8.811549 s: AddCommMon.category_theory.forget₂.category_theory.creates_limit._proof_6
8.442576 s: CommMon.category_theory.forget₂.category_theory.creates_limit._proof_6
8.258426 s: to_euclidean
7.064644 s: bounded_continuous_function.add_comm_group._proof_8
6.992482 s: bounded_continuous_function.add_comm_group._proof_9
6.919239 s: bounded_continuous_function.add_comm_group._proof_10
6.351864 s: CommRing.category_theory.forget₂.category_theory.creates_limit._proof_3
6.115897 s: Top.presheaf.app_bijective_of_stalk_functor_map_bijective
6.065005 s: SemiRing.forget₂_AddCommMon_preserves_limits_aux.equations._eqn_1
5.467336 s: SemiRing.forget₂_AddCommMon_preserves_limits_aux

Oliver Nash (Apr 24 2021 at 19:07):

:scream: that’s my lemma at the top. I’ll take a look when I’m at my laptop later this evening. Thanks @Ben Toner

Johan Commelin (Apr 24 2021 at 19:16):

@Ben Toner That's great! Would you mind running this on the following two commits:

  • 3ec54df501359e925d42f84aaf703e853fe6ba1c (just before #7084)
  • 63801552184f2a519c95a45561300547837915cc (just after #7084)

Johan Commelin (Apr 24 2021 at 19:17):

Also, this seems like a very useful feature for leanchecker, so Gabriel probably wouldn't mind seeing this PR'd to the community repo.

Ben Toner (Apr 24 2021 at 19:23):

@Johan Commelin Sure - it's running now. Pull request is here.

Johan Commelin (Apr 24 2021 at 19:30):

I'm looking forward to seeing the histograms generated with the new data!

Ben Toner (Apr 24 2021 at 19:34):

Do you want me to make separate histograms for definitions, inductive definitions and axioms?

Johan Commelin (Apr 24 2021 at 19:38):

Well, right now we want to understand why the whole thing got so much slower with PR #7084.

Johan Commelin (Apr 24 2021 at 19:39):

So we want to get a feeling for which declarations got slower between those two commits.

Johan Commelin (Apr 24 2021 at 19:39):

Being able to split by type of decl could in principle be useful, but for me it doesn't seem an urgent feature.

Ben Toner (Apr 24 2021 at 19:55):

91.168317 s: lie_module.derived_series_le_lower_central_series
41.849018 s: has_fpower_series_at.comp
38.994233 s: cauchy_seq_finset_of_geometric_bound
10.968149 s: mv_power_series.X_pow_dvd_iff
6.603513 s: bounded_continuous_function.add_comm_group._proof_5
6.602266 s: bounded_continuous_function.add_comm_group._proof_4
6.484212 s: bounded_continuous_function.add_comm_group._proof_6
4.963744 s: Algebra.forget₂_Ring_preserves_limits_aux.equations._eqn_1
4.878469 s: Algebra.forget₂_Ring_preserves_limits_aux
3.275507 s: category_theory.monoidal.Mon_functor_category_equivalence._proof_1
3.260439 s: bounded_continuous_function.add_comm_group._proof_1
2.623107 s: int.bitwise_and
2.496212 s: ring.fractional_ideal.eq_one_div_of_mul_eq_one
2.455020 s: right_inverse_eq
2.418527 s: AddCommGroup.category_theory.forget₂.category_theory.creates_limit._proof_7
2.327386 s: int.bitwise_xor
2.305817 s: CommGroup.category_theory.forget₂.category_theory.creates_limit._proof_7
1.926131 s: category_theory.monoidal.CommMon_functor_category_equivalence.counit_iso._proof_8
1.770720 s: smooth_bump_covering.exists_immersion_findim
1.710779 s: to_euclidean

Kevin Buzzard (Apr 24 2021 at 20:34):

The 20th slowest thing used to be 1.7 seconds and now it's 5.4 seconds.

Ben Toner (Apr 24 2021 at 20:54):

99.348087 s: lie_module.derived_series_le_lower_central_series
41.325001 s: has_fpower_series_at.comp
38.674662 s: cauchy_seq_finset_of_geometric_bound
18.162586 s: Algebra.forget₂_Ring_preserves_limits_aux
17.136728 s: Algebra.forget₂_Ring_preserves_limits_aux.equations._eqn_1
11.631498 s: mv_power_series.X_pow_dvd_iff
11.470218 s: to_euclidean
11.415529 s: smooth_bump_covering.exists_immersion_findim
10.795835 s: to_euclidean.equations._eqn_1
10.485176 s: CommGroup.category_theory.forget₂.category_theory.creates_limit._proof_7
10.044581 s: AddCommGroup.category_theory.forget₂.category_theory.creates_limit._proof_7
8.402438 s: AddCommMon.category_theory.forget₂.category_theory.creates_limit._proof_6
8.347118 s: SemiRing.limit_π_ring_hom.equations._eqn_1
8.283634 s: SemiRing.limit_π_ring_hom
7.914338 s: bounded_continuous_function.add_comm_group._proof_9
7.818749 s: CommMon.category_theory.forget₂.category_theory.creates_limit._proof_6
7.751567 s: bounded_continuous_function.add_comm_group._proof_10
7.586451 s: bounded_continuous_function.add_comm_group._proof_8
6.178683 s: SemiRing.forget₂_AddCommMon_preserves_limits_aux
5.924244 s: SemiRing.forget₂_AddCommMon_preserves_limits_aux.equations._eqn_1

Ben Toner (Apr 24 2021 at 20:57):

Differences from 3ec54df5 to 63801552 (with small relative or absolute differences filtered out): difference.txt
Top 20:

13.284117 s: Algebra.forget₂_Ring_preserves_limits_aux
12.172984 s: Algebra.forget₂_Ring_preserves_limits_aux.equations._eqn_1
9.759439 s: to_euclidean
9.644809 s: smooth_bump_covering.exists_immersion_findim
9.346480 s: to_euclidean.equations._eqn_1
8.179359 s: CommGroup.category_theory.forget₂.category_theory.creates_limit._proof_7
7.952717 s: AddCommMon.category_theory.forget₂.category_theory.creates_limit._proof_6
7.876578 s: SemiRing.limit_π_ring_hom.equations._eqn_1
7.872092 s: SemiRing.limit_π_ring_hom
7.626054 s: AddCommGroup.category_theory.forget₂.category_theory.creates_limit._proof_7
7.372604 s: CommMon.category_theory.forget₂.category_theory.creates_limit._proof_6
5.510193 s: SemiRing.forget₂_AddCommMon_preserves_limits_aux
5.177408 s: SemiRing.forget₂_AddCommMon_preserves_limits_aux.equations._eqn_1
4.339591 s: AddCommMon.category_theory.forget₂.category_theory.creates_limit
4.259126 s: AddCommGroup.category_theory.forget₂.category_theory.creates_limit
4.167444 s: CommMon.category_theory.forget₂.category_theory.creates_limit
4.081465 s: CommGroup.category_theory.forget₂.category_theory.creates_limit.equations._eqn_1
4.003845 s: CommGroup.category_theory.forget₂.category_theory.creates_limit
3.984336 s: AddCommMon.category_theory.forget₂.category_theory.creates_limit.equations._eqn_1
3.955816 s: euclidean.exists_pos_lt_subset_ball

Ben Toner (Apr 24 2021 at 21:07):

(Script to calculate differences is here.)
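
For readers without the linked script, a minimal Python sketch of the same idea; the file names and filtering thresholds are assumptions, and the "<seconds> s: <name>" line format is taken from the listings above:

# Sketch: compare two leanchecker timing files and print the biggest regressions.
def load(path):
    times = {}
    with open(path) as f:
        for line in f:
            secs, sep, name = line.partition(" s: ")
            if sep:  # skip lines that aren't timing entries
                times[name.strip()] = float(secs)
    return times

before = load("leanchecker_before.txt")  # e.g. at 3ec54df5
after = load("leanchecker_after.txt")    # e.g. at 63801552

diffs = []
for name, t_after in after.items():
    delta = t_after - before.get(name, 0.0)
    # Drop small absolute or relative changes; the thresholds here are arbitrary.
    if delta > 0.5 and delta > 0.2 * before.get(name, 0.0):
        diffs.append((delta, name))

for delta, name in sorted(diffs, reverse=True)[:20]:
    print(f"{delta:.6f} s: {name}")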

Kevin Buzzard (Apr 24 2021 at 22:02):

Thank you so much!

Kevin Buzzard (Apr 24 2021 at 22:42):

The totals seem to indicate that post-#7084, leanchecker should only take a couple of minutes longer.

Kevin Buzzard (Apr 24 2021 at 22:49):

The by convert trick decreases elaboration time of Algebra.forget₂_Ring_preserves_limits_aux from 2.64 seconds to 0.7 seconds.

In contrast, to_euclidean times out if you insert by convert :-)

Johan Commelin (Apr 26 2021 at 04:06):

@Ben Toner Thanks a lot for generating this data!

Johan Commelin (Apr 26 2021 at 04:07):

@Eric Wieser could you please generate some histograms again, to see what the pictures look like now?
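
Absent Eric's actual plotting code, a small Python/matplotlib sketch of how such a histogram could be drawn from one of the timing files above; the file name and bin count are assumptions:

# Sketch: histogram of per-declaration leanchecker times (log-scale counts).
import matplotlib.pyplot as plt

times = []
with open("leanchecker.txt") as f:
    for line in f:
        secs, sep, _ = line.partition(" s: ")
        if sep:
            times.append(float(secs))

plt.hist(times, bins=100)
plt.yscale("log")  # most declarations are fast; a handful are very slow
plt.xlabel("checking time per declaration (s)")
plt.ylabel("number of declarations")
plt.savefig("leanchecker_hist.png")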

Scott Morrison (Apr 26 2021 at 05:23):

It would be really great if we had this data regularly. Having a regularly updated "high score" chart to direct our attention to some combination of the slow stuff, and the newly-slow stuff, would be super helpful.

Johan Commelin (Apr 26 2021 at 06:04):

I just kicked #7359 onto the queue, so the top spot on the leaderboard should be changing a couple of hours from now (-;

Johan Commelin (Apr 26 2021 at 21:00):

@Gabriel Ebner @Rob Lewis do you think we should try to take more action here?

Scott Morrison (Apr 26 2021 at 23:12):

Could someone double check my finding that trepplein is way faster on mathlib than leanchecker?

Eric Rodriguez (Apr 26 2021 at 23:55):

whilst I was googling stuff to install, I found this in Rust; it may also be a contender?

Eric Rodriguez (Apr 27 2021 at 00:34):

okay, I was trying to check but got:

Exception in thread "main" java.lang.IllegalArgumentException: expected
, got

        at trepplein.LinesParser.consume(parser.scala:45)
        at trepplein.LinesParser.lines(parser.scala:50)
        at trepplein.TextExportParser$.$anonfun$parseStream$5(parser.scala:229)
        at scala.collection.immutable.LazyList$.$anonfun$flatMapImpl$1(LazyList.scala:1049)
        at scala.collection.immutable.LazyList.scala$collection$immutable$LazyList$$state$lzycompute(LazyList.scala:259)
        at scala.collection.immutable.LazyList.scala$collection$immutable$LazyList$$state(LazyList.scala:252)
        at scala.collection.immutable.LazyList.isEmpty(LazyList.scala:269)
        at scala.collection.immutable.LazyList$.$anonfun$collectImpl$1(LazyList.scala:1031)
        at scala.collection.immutable.LazyList.scala$collection$immutable$LazyList$$state$lzycompute(LazyList.scala:259)
        at scala.collection.immutable.LazyList.scala$collection$immutable$LazyList$$state(LazyList.scala:252)
        at scala.collection.immutable.LazyList.isEmpty(LazyList.scala:269)
        at trepplein.main$.main(main.scala:140)
        at trepplein.main.main(main.scala)

Eric Rodriguez (Apr 27 2021 at 00:35):

(input: ./trepplein -s export.out)

Bryan Gin-ge Chen (Apr 27 2021 at 00:42):

I've used nanoda / nanoda-lib before; it's much quicker than leanchecker though I didn't do any timing. This was also before the recent changes that made leanchecker much slower.

Jannis Limperg (Apr 27 2021 at 02:39):

Scott Morrison said:

Could someone double check my finding that trepplein is way faster on mathlib than leanchecker?

Can confirm. On my system with recent master:

  • leanchecker: 34m
  • trepplein -s: 14m
  • trepplein (16 threads): 3m
  • tc: consumes >25G memory and dies

Mario Carneiro (Apr 27 2021 at 08:59):

@Jannis Limperg Could you test nanoda as well?

Eric Rodriguez (Apr 27 2021 at 09:37):

Eric Rodriguez said:

okay, was trying to check but got:

Exception in thread "main" java.lang.IllegalArgumentException: expected
, got

        at trepplein.LinesParser.consume(parser.scala:45)
        at trepplein.LinesParser.lines(parser.scala:50)
        at trepplein.TextExportParser$.$anonfun$parseStream$5(parser.scala:229)
        at scala.collection.immutable.LazyList$.$anonfun$flatMapImpl$1(LazyList.scala:1049)
        at scala.collection.immutable.LazyList.scala$collection$immutable$LazyList$$state$lzycompute(LazyList.scala:259)
        at scala.collection.immutable.LazyList.scala$collection$immutable$LazyList$$state(LazyList.scala:252)
        at scala.collection.immutable.LazyList.isEmpty(LazyList.scala:269)
        at scala.collection.immutable.LazyList$.$anonfun$collectImpl$1(LazyList.scala:1031)
        at scala.collection.immutable.LazyList.scala$collection$immutable$LazyList$$state$lzycompute(LazyList.scala:259)
        at scala.collection.immutable.LazyList.scala$collection$immutable$LazyList$$state(LazyList.scala:252)
        at scala.collection.immutable.LazyList.isEmpty(LazyList.scala:269)
        at trepplein.main$.main(main.scala:140)
        at trepplein.main.main(main.scala)

So this is actually because of Windows: leanchecker makes CRLF files, whilst it turns out that \n in Scala is platform-agnostic; strip out the CRLFs and it's totally fine.
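
For anyone hitting the same thing on Windows, a tiny Python sketch of the workaround; the file name is a placeholder:

# Sketch: rewrite a CRLF export file with plain LF endings so trepplein's parser accepts it.
with open("export.out", "rb") as f:
    data = f.read()
with open("export.out", "wb") as f:
    f.write(data.replace(b"\r\n", b"\n"))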

Eric Rodriguez (Apr 27 2021 at 09:38):

Timings so far (measured with PowerShell's Measure-Command, on a Ryzen 3600X):

leanchecker: 1.4407... hours
trepplein : 4.1545... minutes
trepplein -s : 16.712... minutes
nanoda (12 threads) : stack overflow
nanoda (ST): running

Jannis Limperg (Apr 27 2021 at 12:26):

I've edited my comment with nanoda timings. nanoda wins overall, but not by that much compared to trepplein. It doesn't overflow for me like it does for Eric. Might be Windows vs Linux coming into play.

Johan Commelin (Apr 27 2021 at 12:42):

Cool, so we could even run trepplein and nanoda sequentially, and we wouldn't even notice :rofl:

Eric Rodriguez (Apr 27 2021 at 13:24):

yep seems so; I can run em on WSL if people want any more timings, but I think the findings are pretty definitive :b

Yaël Dillies (May 16 2021 at 11:16):

I notice the queue is heavily cluttered at the moment. Stuff that was sent 2 hours ago is still not being processed. Those two actions have each been running for 5 hours: https://github.com/leanprover-community/mathlib/actions/runs/846421065 https://github.com/leanprover-community/mathlib/actions/runs/846475716
Anything to be worried about?

Julian Berman (May 16 2021 at 11:39):

GH Actions is partially down this morning, so this may be related: https://www.githubstatus.com/

Floris van Doorn (May 16 2021 at 18:41):

I just restarted some jobs that were still in the queue. The queue is now empty again.

Eric Rodriguez (May 16 2021 at 18:43):

there's a few that failed with an error about continuous issues (e.g. #7623, #7363); an empty commit will fix them up - just wondering if there's a better way to do this?

Gabriel Ebner (May 16 2021 at 18:48):

You can go to the actions page and click "re-run jobs".


Last updated: Dec 20 2023 at 11:08 UTC