Zulip Chat Archive

Stream: lean4

Topic: huge expressions


Arthur Paulino (Sep 19 2025 at 15:18):

Hi!
I'm trying the following

run_cmd
  let stt ← get
  let const := stt.env.find?
    ``Std.Tactic.BVDecide.BVExpr.bitblast.blastUdiv.denote_blastDivSubtractShift_q
  let const := const.get!
  dbg_trace const.size

Where .size is a function that computes the number of nodes in the constant's expressions (type + value).

For that theorem specifically, my computer doesn't finish. But on someone else's computer, running that as a compiled program consumes 50~70 GB of RAM and outputs... checks notes... around 28 billion :explode:

What's happening here? How is this even possible, and how come my machine can type check that term in a second or two?
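
Roughly, a node counter along those lines could look like this (sketch only; exprTreeSize and constTreeSize are illustrative names, not the actual implementation):

import Lean

open Lean

/-- Sketch: count the nodes of an `Expr` viewed as a tree, ignoring sharing. -/
partial def exprTreeSize : Expr → Nat
  | .app f a         => 1 + exprTreeSize f + exprTreeSize a
  | .lam _ t b _     => 1 + exprTreeSize t + exprTreeSize b
  | .forallE _ t b _ => 1 + exprTreeSize t + exprTreeSize b
  | .letE _ t v b _  => 1 + exprTreeSize t + exprTreeSize v + exprTreeSize b
  | .mdata _ e       => 1 + exprTreeSize e
  | .proj _ _ e      => 1 + exprTreeSize e
  | _                => 1

/-- Sketch: total tree size of a constant's type plus its value, if any. -/
def constTreeSize (ci : ConstantInfo) : Nat :=
  exprTreeSize ci.type + (ci.value?.map exprTreeSize).getD 0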

Henrik Böving (Sep 19 2025 at 15:23):

The theorem has lots of common subterms that are being shared; Expr.size counts the nodes of the tree rather than of the DAG view of the term. If you disregard term sharing, exponential overhead is to be expected for sufficiently messed-up terms.
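
For a small illustration of why the tree view explodes: build f (e n) (e n) repeatedly out of the same subterm. The DAG stays tiny while a naive tree count grows exponentially (doubling and treeCount are made-up names for the sketch):

import Lean

open Lean

-- e 0 = x, e (n+1) = f (e n) (e n); both arguments point to the *same* object,
-- so the DAG has O(n) nodes while the tree view has O(2^n) nodes.
def doubling (f x : Expr) : Nat → Expr
  | 0     => x
  | n + 1 => let e := doubling f x n; mkApp2 f e e

-- Naive tree-node count over applications, ignoring sharing
-- (all other constructors are counted as leaves).
partial def treeCount : Expr → Nat
  | .app g a => 1 + treeCount g + treeCount a
  | _        => 1

#eval treeCount (doubling (mkConst `f) (mkConst `x) 20)
-- 4194301 tree nodes, from only ~40 distinct Expr objects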

Arthur Paulino (Sep 19 2025 at 15:27):

Ah, sharing. Makes a lot of sense, thanks!

Sebastian Ullrich (Sep 19 2025 at 15:28):

See also

set_option pp.exprSizes true in
#print Std.Tactic.BVDecide.BVExpr.bitblast.blastUdiv.denote_blastDivSubtractShift_q
... := [size 2575268672/6967/6967] fun ...

Last updated: Dec 20 2025 at 21:32 UTC