Zulip Chat Archive

Stream: lean4

Topic: Profiler time bug


Jovan Gerbscheid (Apr 01 2024 at 22:42):

I've been having some trouble with the profiler not giving the correct time. In particular, it seems like time spent on type class synthesis is not counted towards the total time. Minimal example:

import Mathlib
variable (A B : Set Nat)
set_option profiler true
set_option profiler.threshold 1

/-
typeclass inference of CoeT took 2.52ms
typeclass inference of CompleteLattice took 1.84ms
typeclass inference of CompleteLattice took 2.15ms
typeclass inference of Fintype took 346ms
elaboration took 1.6ms

-/
#synth Fintype (A ∩ B)

The elaboration supposedly only took 1.6ms, but the type class inference itself took 346ms!?

Kevin Buzzard (Apr 01 2024 at 23:17):

Does set_option trace.profiler true help?

Also, instead of asking for time in seconds, you could try count_heartbeats in to see how many heartbeats are being taken.
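As a minimal sketch, both suggestions applied to the snippet above would look something like this (count_heartbeats in is Mathlib's command; the #synth may still fail to find an instance, the point is only to see where the time and heartbeats go):

import Mathlib
variable (A B : Set Nat)

set_option trace.profiler true in
#synth Fintype (A ∩ B)

count_heartbeats in
#synth Fintype (A ∩ B)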

Jovan Gerbscheid (Apr 02 2024 at 01:52):

Yes, I didn't know about trace.profiler, and it seems to give the correct time.

Jovan Gerbscheid (Apr 03 2024 at 15:01):

I assume what is going on is that Lean.profileit only attributes the time to the innermost call of the function, rather than also to all enclosing calls. That would explain why type class inference steals the time from the main elaboration. Is this intended?
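To illustrate the hypothesis with a minimal sketch (using profileitM with made-up category names): with set_option profiler true, time spent in the inner region would be reported under "inner" only, and not added to the "outer" figure.

import Lean
open Lean

def outerWork : CoreM Unit := do
  profileitM Exception "outer" (← getOptions) do
    profileitM Exception "inner" (← getOptions) do
      -- imagine an expensive computation here, e.g. type class synthesis;
      -- its time would show up under "inner" but not under "outer"
      pure ()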

Sebastian Ullrich (Apr 03 2024 at 15:06):

yes

Jovan Gerbscheid (Apr 03 2024 at 15:41):

In that case, should this be stated as a warning on the profiler option? Because I had always been using set_option profiler true to measure the amount of time taken by a command. But this isn't what it does...

Jovan Gerbscheid (Apr 03 2024 at 15:44):

It now says "activate nested traces with execution time over threshold", but I don't remember actually having read this.

Jovan Gerbscheid (Apr 03 2024 at 15:48):

Or is there a different option that actually measures the amount of time taken by a command? I currently use profileitM Exception "rw??" (← getOptions) do in my library rewrite tactic, but since it involves quite a bit of type class synthesis, I don't think this is the right function for profiling.

Sebastian Ullrich (Apr 03 2024 at 15:49):

You already said that trace.profiler works for you?

Jovan Gerbscheid (Apr 03 2024 at 15:50):

Ah, sorry yes.

Jovan Gerbscheid (Apr 03 2024 at 15:52):

But what about profiling library search? Do you agree that it should display the total time?

Sebastian Ullrich (Apr 03 2024 at 15:54):

Use withTraceNode then to opt into trace.profiler
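As a rough sketch (the trace class name librarySearch and the search body are placeholders): wrapping the whole search in one trace node makes trace.profiler report its total wall-clock time, including any nested type class synthesis.

import Lean
open Lean Meta

initialize registerTraceClass `librarySearch

def libraryRewriteSearch : MetaM Unit :=
  withTraceNode `librarySearch (fun _ => return m!"library rewrite search") do
    -- the actual search would go here; with set_option trace.profiler true the
    -- node and its total time show up in the trace
    pure ()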

Sebastian Ullrich (Apr 03 2024 at 15:57):

profiler is what the speedcenter uses but it likely will not be extended going forward in favor of trace.profiler - #3801

Jovan Gerbscheid (Apr 03 2024 at 16:28):

In the case of interactive library search, the profiling messages don't appear as usual messages, because the Lean function is called from the widget; instead they appear as lines in the output, which is a nice alternative. This doesn't happen with withTraceNode. But I now have a testing command that runs the library search straight from Lean, so this is not a problem anymore.

Jovan Gerbscheid (Apr 03 2024 at 16:29):

Sebastian Ullrich said:

profiler is what the speedcenter uses but it likely will not be extended going forward in favor of trace.profiler - #3801

It links to the mathlib PR instead of the Lean PR

Jireh Loreaux (Apr 03 2024 at 19:27):

lean4#3801

Sebastian Ullrich (Apr 03 2024 at 19:38):

I fear I will keep making this mistake until we decide to default four-digit references to lean4 :joy:


Last updated: May 02 2025 at 03:31 UTC