Zulip Chat Archive
Stream: lean4
Topic: Profiling Tactics
Tomaz Gomes (Feb 02 2023 at 23:43):
Hi! So, I am trying to profile and compare the performance of some tactics that I wrote. For this purpose, I wrote another small tactic that just reports the current time, and I invoke it before and after the tactic that I want to profile. Something like:
```lean
import Lean
open Lean Elab Tactic

syntax (name := reportTime) "reportTime" : tactic

@[tactic reportTime] def evalReport : Tactic := fun _ => do
  let time ← IO.monoMsNow
  logInfo s!"{time}ms"

example : True := by
  reportTime
  myTactic1
  reportTime
  myTactic2
  reportTime
```
My question is whether this is an appropriate way to measure the efficiency of each tactic. I'm wondering if there is some sort of preprocessing before the `reportTime` tactic runs that I would be missing with this approach. I'd also welcome suggestions for other ways to profile my tactics.

I have also been wanting to use lean's `--profile` flag to obtain more information, but I couldn't find an explanation of what each reported time means exactly. For instance, the difference between "elaboration" and "parsing" is not clear to me, nor what the time labeled "initialization" refers to.

Thanks!
Jannis Limperg (Feb 03 2023 at 13:58):
I believe your approach should accurately measure the time taken to elaborate each tactic. It excludes parsing since the whole `by` block is parsed before `reportTime` is run. But the parsing time should be negligible.
To explain the different phases:
- Parsing goes from a `.lean` file to `Syntax`, which is a concrete syntax tree.
- Elaboration goes from `Syntax` to `Expr`, which is an abstract syntax tree. Elaboration of commands etc. adds declaration objects to the environment. Elaboration of tactics ultimately constructs a proof term, i.e. an `Expr`.
- Initialisation is a special (and, to me, slightly mysterious) thing that runs before anything else. It should not be relevant to you unless you use environment extensions.
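As a small illustration of the `Syntax` → `Expr` step described above, one can quote a piece of concrete syntax and elaborate it explicitly (a sketch, assuming a recent Lean 4 where `elabTerm` lives in `Lean.Elab.Term`):

```lean
import Lean
open Lean Elab Term

-- Elaborate a quoted piece of concrete syntax (`Syntax`) into an `Expr`.
#eval show TermElabM Unit from do
  let stx ← `(1 + 1)
  let e ← elabTerm stx none
  logInfo m!"{stx} elaborates to {e}"
```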
Jakob von Raumer (Aug 10 2023 at 13:48):
What's the best tool to profile tactics right now? Best would be something that's machine-independent. We make heavy use of `aesop` and need some threshold above which we inline proofs.
Jannis Limperg (Aug 10 2023 at 14:08):
I believe heartbeats are the only machine-independent performance metric we have. Can you just wrap each Aesop call into a pair of docs4#getNumHeartbeats and set a limit on the difference?
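Such a wrapper might look like the following sketch (the name `countHeartbeats` is mine; `IO.getNumHeartbeats` is in core Lean 4, and Mathlib has since grown similar utilities):

```lean
import Lean
open Lean Elab Tactic

-- Hypothetical `countHeartbeats` wrapper: reports the number of
-- heartbeats consumed by the wrapped tactic sequence.
syntax (name := countHeartbeats) "countHeartbeats " tacticSeq : tactic

@[tactic countHeartbeats] def evalCountHeartbeats : Tactic := fun stx => do
  let before ← IO.getNumHeartbeats
  evalTactic stx[1]
  let after ← IO.getNumHeartbeats
  logInfo s!"used {after - before} heartbeats"
```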
Last updated: Dec 20 2023 at 11:08 UTC