Zulip Chat Archive
Stream: new members
Topic: Performance difference between #eval and rfl
David Xu (May 03 2025 at 17:49):
import Mathlib.Tactic
#eval (20 : ℕ).divisors -- {1, 2, 4, 5, 10, 20} : Finset ℕ
#eval (20 : ℕ).divisors.card -- 6
#eval (if (4 : ℕ).divisors.card = 3 then 1 else 0) -- 1, since 4 has exactly 3 divisors: {1, 2, 4}
#eval Finset.Icc 1 10 -- {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
#eval (∑ a ∈ Finset.Icc 1 1000, if a.divisors.card = 3 then 1 else 0) -- 11
example : (∑ a ∈ Finset.Icc 1 1000, if a.divisors.card = 3 then 1 else 0) = 11 := by
rfl -- this line is slow
I was experimenting with the MiniF2F solutions by DeepSeek. Among the 439 files, valid/mathd_numbertheory_221.lean is the only one I cannot compile on my laptop. It runs rfl on a goal like the one above, which is a brute-force proof that there are 11 integers in 1..1000 with exactly 3 divisors. (It seems this happens iff the number is the square of a prime.)
I think the #eval and the rfl are logically doing the same computation, but in practice #eval finishes instantly on my laptop, while rfl eats up all 16 GB of RAM on my laptop and still has not terminated after several minutes.
If I understand correctly, #eval compiles the code before running it, while with rfl the computation is done by the Lean kernel's type checker, so some performance difference is expected, but I did not expect it to be this severe.
Do you know what data might be taking up the 16 GB of RAM in the rfl proof?
Aaron Liu (May 03 2025 at 17:55):
#eval and rfl are not doing the same computation, because the #eval goes through the compiler, while the rfl goes through the kernel. If you want to prove this with the same computation that #eval does, you can do native_decide (or decide +native), but beware that you are now trusting the compiler too, which can be fed lies.
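For concreteness, here is a sketch of that alternative on the goal above (assuming the same Mathlib import; this snippet is not from the original thread):
example : (∑ a ∈ Finset.Icc 1 1000, if a.divisors.card = 3 then 1 else 0) = 11 := by
  native_decide -- evaluated by the compiler, like #eval, so it should close quickly
The trade-off is exactly the one described above: the compiled evaluator is now part of the trusted code base.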
Aaron Liu (May 03 2025 at 17:56):
I suspect computing the divisors is what is taking so long
Aaron Liu (May 03 2025 at 17:58):
Divisors is actually pretty fast, but #reduce Finset.Icc 1 1000 reaches the recursion limit
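Roughly what those two experiments might look like (assumed commands, not a transcript of what was actually run):
#reduce (20 : ℕ).divisors -- small divisor computations reduce without trouble
#reduce Finset.Icc 1 1000 -- fails with "maximum recursion depth has been reached"
The limit being hit here is the elaborator's maxRecDepth option; #reduce does not go through the kernel at all.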
Aaron Liu (May 03 2025 at 18:00):
Everything's pretty fast! What could be taking so long???
Aaron Liu (May 03 2025 at 18:05):
Setting trace.profiler reveals that kernel typechecking is what is taking so long, and I can't peek inside due to lack of trace options, so I'm stuck now...
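Presumably something along these lines (the exact setup is not shown in the thread):
set_option trace.profiler true in
example : (∑ a ∈ Finset.Icc 1 1000, if a.divisors.card = 3 then 1 else 0) = 11 := by
  rfl -- per the message above, the trace attributes essentially all of the time to kernel typechecking
Note that this profiles the same slow rfl, so it takes just as long to produce the trace; it only tells you where the time is going.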
Kevin Buzzard (May 04 2025 at 07:34):
We've seen that phenomenon in algebraic geometry files because of universe issues, but that's almost certainly not what's happening here. I agree that it's frustrating when this is the bottleneck. Can you remove the imports and reproduce it in core Lean only? If so, you can ask in #lean4.
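A possible starting point for such a Mathlib-free reproduction (the definitions below are made up for illustration and are not from the thread; they mirror the counting done by the original goal using only core Lean's List API):
-- count the divisors of n by trial division over 1..n
def numDivisors (n : Nat) : Nat :=
  (List.range (n + 1)).countP fun d => d != 0 && n % d == 0
-- count how many numbers up to n have exactly three divisors
def countThreeDivisors (n : Nat) : Nat :=
  ((List.range (n + 1)).filter fun a => numDivisors a == 3).length
#eval countThreeDivisors 1000 -- 11, computed instantly by the compiler
example : countThreeDivisors 1000 = 11 := by
  rfl -- the experiment: does the kernel show a similar blow-up without Mathlib?
If the core-only rfl is fine, the blow-up is specific to the Mathlib definitions involved (Finset, Multiset, the instance chain behind divisors); if it is also slow, it is worth reporting in #lean4 as suggested.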
Eric Wieser (May 04 2025 at 09:44):
Aaron Liu said:
Divisors is actually pretty fast, but #reduce Finset.Icc 1 1000 reaches the recursion limit
Are you sure it's not the printing that hits the limit?
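One way to probe that (a suggestion, not something tried in the thread): reduce the same interval but print only its cardinality, so the pretty printer has almost nothing to do.
#reduce (Finset.Icc 1 1000).card
-- if this also hits the recursion limit, the reduction itself is the problem;
-- if it prints 1000, pretty-printing the 1000-element Finset becomes the main suspect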