Zulip Chat Archive
Stream: metaprogramming / tactics
Topic: An interval tactic for constant real inequalities
Geoffrey Irving (Jun 18 2024 at 20:40):
Here's a working interval arithmetic tactic. It uses native_decide and the interval repo is messy, but at least it illustrates what we'll have in the future:
example : (2 : ℝ) < 3 := by interval
example : exp 1 < 2.7182818284591 := by interval
example : log 200 * 20000 ≤ 106000 := by interval
example : |exp (1 + log 7) - 19.02797279921331| < 1e-14 := by interval
example : (5 : ℝ) ^ (0.18732 : ℝ) < 1.351858 := by interval
Kim Morrison (Jun 20 2024 at 01:10):
I'm curious where the cut-off is beyond which native_decide is needed.
Geoffrey Irving (Jun 20 2024 at 06:52):
Even the middle example fails with rfl instead of native_decide, unfortunately, so doing one (not super optimized) Taylor series calculation is enough.
Joachim Breitner (Sep 02 2024 at 16:28):
Did you investigate a bit why decide isn't good enough? Maybe there is some kernel-unsuitable code in there (arrays, well-founded recursion, etc.)? I noticed elsewhere that Nat.sqrt reduces 1000x faster in the kernel when implemented using fuel rather than well-founded recursion, so maybe there are similar performance footguns?
I wonder what kind of reduction engine the kernel would have to have to make your use cases work well.
Geoffrey Irving (Sep 02 2024 at 16:56):
Not yet. Is there a good description of how to profile for this purpose?
Geoffrey Irving (Sep 02 2024 at 18:59):
Also what does “fuel” mean in this context?
Geoffrey Irving (Sep 02 2024 at 19:03):
Ah, recursion via an integer depth.
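In other words (a minimal sketch with made-up names, not code from the interval repo), the function recurses structurally on an explicit Nat bound instead of using well-founded recursion on its argument:
def halveCount (fuel : Nat) (n : Nat) : Nat :=
  match fuel with
  | 0 => 0  -- out of fuel; never reached as long as fuel ≥ n
  | fuel + 1 => if n ≥ 2 then halveCount fuel (n / 2) + 1 else 0

#eval halveCount 10 10  -- 3, i.e. ⌊log₂ 10⌋, using n itself as the fuel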
Geoffrey Irving (Sep 06 2024 at 20:01):
I've been trying to work through the decide issues, but haven't gotten anywhere due to not knowing how to profile. This commit illustrates what I've been trying to do:
https://github.com/girving/interval/commit/958acee938d967a7fa101ac714c6b3c155bb45a1
For a while I thought the problem was that my definitions of Floating and Interval arithmetic had inline proofs, so that computational defs, when expanded, exposed huge proof terms. But I've refactored as much as possible so that proofs are in separate lemmas, and no luck.
My best guess now is that https://leanprover.zulipchat.com/#narrow/stream/270676-lean4/topic/Type.20checking.20time.20correlates.20with.20value.20in.20expression/near/403068076 is the problem: most of my code is built on top of UInt64, which is super fast for the compiler but looks like multiple wrapping layers around Nat to the kernel. And if the kernel gets the order of unfolding wrong, things could go very badly.
But I can't actually confirm that wild guess, as I don't know how to profile kernel evaluation.
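For reference, a rough picture of those layers (field names vary between toolchain versions, so this is only an illustration, not the exact core definitions):
-- structure UInt64 where
--   val : Fin UInt64.size            -- UInt64.size = 2 ^ 64
-- structure Fin (n : Nat) where
--   val : Nat
--   isLt : val < n
-- So kernel reduction of UInt64 arithmetic bottoms out in arbitrary-precision Nat
-- arithmetic plus proofs, while the compiler maps a UInt64 to a single machine word.
#eval ((2 : UInt64) * 3).toNat  -- 6, computed on the compiled machine-word representation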
Geoffrey Irving (Sep 06 2024 at 20:01):
Cc @Joachim Breitner
Joachim Breitner (Sep 06 2024 at 20:45):
Is there a #reduce line that you expect to be faster than it is that I could look at when I find the time?
Geoffrey Irving (Sep 06 2024 at 20:52):
The one that's blocking me currently is https://github.com/girving/interval/blob/series/Interval/Interval/Constants.lean#L120. A reduce statement inserted above there that exhibits the same behaviour is
set_option maxRecDepth 10000000000000 in
#reduce log_half == Log2.interval
which still exceeds the recursion depth. And it does so immediately, so it seems like actual recursion depth can't quite be what's happening.
Geoffrey Irving (Sep 06 2024 at 20:57):
I think this isn't too hard to minimize, actually. I'll do that now.
Joachim Breitner (Sep 06 2024 at 21:15):
Also, #reduce isn't actually using the kernel, it's using the elaborator's reduction functions. Typically they exhibit similar performance, but I expect it may differ.
Geoffrey Irving (Sep 06 2024 at 21:44):
Not fully minimized, but now it's a Mathlib-only file:
import Mathlib.Data.UInt
open Set
open scoped UInt64.CommRing
/-!
## Floating point arithmetic
-/
/-- Floating point number -/
@[unbox] structure Floating where
  n : UInt64
  s : UInt64
  deriving DecidableEq
instance : Zero Floating where zero := ⟨0, 0⟩
instance : One Floating where one := ⟨2^62, 2^63 - 62⟩
instance : Neg Floating where neg x := ⟨-x.n, x.s⟩
def Floating.blt (x y : Floating) : Bool := x.s > y.s
def Floating.ble (x y : Floating) : Bool := !(y.blt x)
instance : LT Floating where lt x y := x.blt y
instance : LE Floating where le x y := x.ble y
instance : DecidableRel (α := Floating) (· < ·) | a, b => by dsimp [LT.lt]; infer_instance
instance : DecidableRel (α := Floating) (· ≤ ·) | a, b => by dsimp [LE.le]; infer_instance
instance : Min Floating where min x y := bif x.ble y then x else y
instance : Max Floating where max x y := bif x.ble y then y else x
def Rat.log2 (x : ℚ) : ℤ := x.num.natAbs.log2 - x.den.log2
/-- Conversion from `ℚ`, taking absolute values and rounding up or down -/
def Floating.ofRat (x : ℚ) : Floating :=
  let r := x.log2
  let n := x.num.natAbs
  let p := if r ≤ 62 then (n <<< (62 - r).toNat, x.den) else (n, x.den <<< (r - 62).toNat)
  ⟨p.1 / p.2, r - 62 + 2^63⟩
/-!
## Interval arithmetic
-/
@[unbox] structure Interval where
  lo : Floating
  hi : Floating
  deriving DecidableEq
instance : Union Interval where union x y := ⟨min x.lo y.lo, max x.hi y.hi⟩
def Interval.ofRat (x : ℚ) : Interval := ⟨.ofRat x, .ofRat x⟩
def radius : ℚ := 1 / 1180591620717411303424
def mid : ℚ := -810262049469142726183466090821 / 11689610405896601564675094755497
def interval : Interval := .ofRat (mid - radius) ∪ .ofRat (mid + radius)
def log_half : Interval :=
  ⟨⟨12053589751108223786, 9223372036854775745⟩,
   ⟨12053589751108223787, 9223372036854775745⟩⟩
-- maximum recursion depth has been reached
-- use `set_option maxRecDepth <num>` to increase limit
-- use `set_option diagnostics true` to get diagnostic information
#reduce log_half == interval
Joachim Breitner (Sep 06 2024 at 23:39):
Not a full fix, but certainly one reason is (as I expected) that some functions are defined by well-founded recursion, namely Nat.log2. You can see that
#reduce Nat.log2 12053589751108223786
times out, but if you define it using structural recursion, like
def Nat.fix (h : α → Nat) {motive : α → Sort u} (F : (x : α) → ((y : α) → h y < h x → motive y) → motive x) : (x : α) → motive x :=
  let rec go : ∀ (fuel : Nat) (x : α), (h x < fuel) → motive x := Nat.rec
    (fun _ hfuel => (not_succ_le_zero _ hfuel).elim)
    (fun _ ih x hfuel => F x (fun y hy => ih y (Nat.lt_of_lt_of_le hy (Nat.le_of_lt_add_one hfuel))))
  fun x => go (h x + 1) x (Nat.lt_add_one _)

def Nat.log2' : Nat → Nat := Nat.fix (fun n => n) (fun n ih => if h : n ≥ 2 then ih (n / 2) (log2_terminates _ ‹_›) + 1 else 0)
then
#reduce Nat.log2' 12053589751108223786
works.
Related issues are https://github.com/leanprover/lean4/issues/5192 (so that the kernel doesn't even try reducing such functions, giving you a slightly easier time noticing that something is wrong) and https://github.com/leanprover/lean4/issues/5234 (about maybe letting Lean compile all suitable functions using this better-reducing Nat.fix operator). Interesting ideas, but no definite plans yet.
Joachim Breitner (Sep 06 2024 at 23:45):
But even with that log2', the problem is simply that the numbers are too large. If I minimize your example further, I hit
#reduce Nat.log2' 956588586188586504152503510242497262758674667026601
and that takes too long.
Joachim Breitner (Sep 06 2024 at 23:48):
Interestingly, that works with the kernel’s reduction function, which you can access via
elab "#kernel_reduce" t:term : command => Lean.Elab.Command.runTermElabM fun _ => do
let e ← Lean.Elab.Term.elabTermAndSynthesize t none
let e' ← Lean.ofExceptKernelException <| Lean.Kernel.whnf (← Lean.getEnv) (← Lean.getLCtx) e
Lean.logInfo m!"{e'}"
so it’s Lean.Meta.WHNF that's recursing too much.
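For example, the call above that was too slow for #reduce goes through this way (reusing the Nat.log2' from the previous message):
#kernel_reduce Nat.log2' 956588586188586504152503510242497262758674667026601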
This is an interesting application, thanks! Maybe we will have something that helps here in the future.
Geoffrey Irving (Sep 07 2024 at 07:43):
Thank you! I’ll see if I can get kernel reduction to work for the original case.
Geoffrey Irving (Sep 08 2024 at 14:57):
Fixing Nat.log2
to use fuel works in my original case! In particular, it lets me use the kernel-only version of decide. Thank you!
However, I naively tried to use the same compiled implementation for my new log2 version, and it does not work. That is, if I do:
/-- Inner loop of `Nat.fast_log2` -/
def Nat.fast_log2.go : ℕ → ℕ → ℕ :=
  Nat.rec (fun _ ↦ 0) (fun _ h n ↦ if 2 ≤ n then h (n / 2) + 1 else 0)

/-- Kernel-friendly version of `Nat.log2` -/
@[irreducible, extern "lean_nat_log2"]
def Nat.fast_log2 (n : @& ℕ) : ℕ :=
  fast_log2.go n n
then it works great inside kernel reductions. However, if I try to use it from a native_decide, I get
-- (kernel) Could not find native implementation of external declaration 'Nat.fast_log2' (symbols 'l_Nat_fast__log2___boxed' or 'l_Nat_fast__log2').
-- For declarations from `Init`, `Std`, or `Lean`, you need to set `supportInterpreter := true` in the relevant `lean_exe` statement in your `lakefile.lean`.
example : Nat.fast_log2 4 = 2 := by native_decide
What's the right syntax to re-use the same native implementation?
I'm happy to upstream Nat.fast_log2 if that would be appreciated, BTW.
Geoffrey Irving (Sep 08 2024 at 15:07):
I tried putting supportInterpreter = true in lakefile.toml in a few places, but no luck.
Joachim Breitner (Sep 08 2024 at 16:07):
As a workaround you can probably use the fuel version for the kernel, and prove it equivalent to the well-founded-recursion version with a csimp lemma; that way native_decide should use the old one.
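For illustration, here's a minimal, self-contained sketch of the csimp mechanism (made-up names, not Geoffrey's actual definitions): the compiler, and hence native_decide, replaces the left-hand side of a @[csimp] lemma with the right-hand side, while the kernel keeps reducing the original definition.
def slowDouble (n : Nat) : Nat := n + n  -- stands in for the kernel-friendly (fuel) definition
def fastDouble (n : Nat) : Nat := 2 * n  -- stands in for the existing compiled implementation

@[csimp] theorem slowDouble_eq_fastDouble : @slowDouble = @fastDouble := by
  funext n; simp only [slowDouble, fastDouble]; omega

-- `decide` unfolds `slowDouble` in the kernel; `native_decide` compiles it as `fastDouble`.
example : slowDouble 21 = 42 := by native_decide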
Geoffrey Irving (Sep 08 2024 at 16:09):
Ah nice, I already have that lemma, so I can just add the annotation. Thanks! I hadn’t heard of csimp before.
Last updated: May 02 2025 at 03:31 UTC