Zulip Chat Archive
Stream: lean4
Topic: automatic memoizing for Lean
Alex Meiburg (Jan 23 2025 at 15:36):
Python's lru_cache
is useful for turning simple (but slow) recursive definitions into efficient ones. Is there, or could there reasonably be, a similar tool for Lean? With Lean's powerful metaprogramming I hope so.
For instance,
def stupidFib (x : ℕ) : ℕ :=
  if x ≤ 1 then x else stupidFib (x-1) + stupidFib (x-2)
ends up defined by well-founded recursion on a Nat (and all of the other arguments (i.e. none of them) are equal), so this is the simplest case to handle. Any such function could generally be re-written in a form like
def stupidFib (x : ℕ) : ℕ :=
  (auxList x).getD 0 0
where auxList : ℕ → List ℕ
  | 0 => [0]
  | 1 => [1, 0]
  | Nat.succ n =>
    let l := auxList n
    (l.getD 0 0 + l.getD 1 0) :: l
which takes only linear time (instead of exponential) at the cost of linear memory. (Recognizing that it could be done with just O(1) memory instead of linear would be nice too, but I'm thinking about the general case first; I'd want to be able to handle things like
def notFib (x : ℕ) : ℕ :=
  if x ≤ 1 then x else notFib (x-1) + notFib (x/2)
.)
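(Editor's note: for a function like notFib the same table-building idea still works; instead of a list indexed by decreasing argument, one can fill an array bottom-up so every recursive call hits an already-computed entry. A minimal hand-written sketch — the name notFibMemo and this particular implementation are illustrative, not an established API:

```lean
-- Hypothetical memoized version of
--   notFib x = if x ≤ 1 then x else notFib (x-1) + notFib (x/2),
-- built bottom-up in an Array so each value is computed once.
def notFibMemo (x : Nat) : Nat := Id.run do
  let mut memo : Array Nat := #[0, 1]  -- memo[0] = 0, memo[1] = 1
  for i in [2:x+1] do
    -- both i-1 and i/2 are < i, so they are already in the table
    memo := memo.push (memo[i-1]! + memo[i/2]!)
  return memo[x]!
```

The attribute asked about below would, in effect, generate this kind of transformation automatically.)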
Is there anything like a 'decorator' or attribute that could be added to enable this translation automatically?
Kyle Miller (Jan 23 2025 at 15:38):
I remember @Joachim Breitner experimenting with this at some point.
Kyle Miller (Jan 23 2025 at 15:41):
Amusingly, the Lean kernel auto-memoizes, so there's nothing to be done if you just care about reduction. I needed to modify your definition to use structural recursion though to get it to work for large values:
def stupidFib' (x : Nat) : Nat :=
  match x with
  | 0 => 0
  | 1 => 1
  | n+2 => stupidFib' (n + 1) + stupidFib' n

#reduce stupidFib' 150
-- 9969216677189303386214405760200
Alex Meiburg (Jan 23 2025 at 15:47):
Alright, in that sense I suppose I had a bad example. :) I'm more interested in cases like notFib that aren't so easily amenable to structural recursion.
Kyle Miller (Jan 23 2025 at 15:55):
Plus, the compiled code won't be memoized. I was just taking advantage of the kernel's cache here.
Joachim Breitner (Jan 23 2025 at 16:08):
Here are my experiments from back then, but beware, I didn't really understand Lean that much at the time:
https://github.com/nomeata/lean4-memo-nat
(Demo at https://github.com/nomeata/lean4-memo-nat/blob/master/MemoNat/Demo.lean)
Asei Inoue (Jan 24 2025 at 13:46):
@Joachim Breitner
Nice!!! Why not continue development of this repository? The [memo] attribute is so attractive...
Joachim Breitner (Jan 25 2025 at 12:35):
Hmm, not sure. No urgent need on my side? Also it only works for functions taking a Nat, and it will precompute the result for every smaller parameter, not just the ones actually used. But yes, in principle such tools that generate csimp-optimizations are quite plausible.
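(Editor's note: the csimp mechanism mentioned here is an attribute in core Lean. Tagging a proof that a slow definition equals a fast one makes the compiler use the fast version wherever the slow one is called, while #reduce and proofs still see the original definition. A hand-written sketch, with the equality proof left unfinished:

```lean
def stupidFib (x : Nat) : Nat :=
  if x ≤ 1 then x else stupidFib (x-1) + stupidFib (x-2)

-- A fast, tail-recursive implementation: go n a b walks n steps
-- carrying two consecutive Fibonacci numbers a and b.
def fastFib (x : Nat) : Nat := go x 0 1
where
  go : Nat → Nat → Nat → Nat
    | 0,   a, _ => a
    | n+1, a, b => go n b (a + b)

-- With this tag, compiled code (e.g. #eval stupidFib 100) calls
-- fastFib instead of the exponential definition.
@[csimp] theorem stupidFib_eq_fastFib : stupidFib = fastFib := by
  sorry -- proof omitted in this sketch
```

A memoization tool could generate the fast definition and the csimp proof from the slow definition automatically, which is presumably what "tools that generate csimp-optimizations" means here.)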
Last updated: May 02 2025 at 03:31 UTC