Zulip Chat Archive
Stream: general
Topic: Proving termination of recursion guarded in abstraction
Easyoakland (Jun 27 2025 at 08:26):
The following code works as expected. increasing terminates immediately on every call without recursing at all; it simply constructs a new function. However, I don't know how to convince the termination checker of this.
I also don't know how to prove that LazyList is inhabited without .inhabited_proof.
Help would be appreciated.
inductive LazyList α | head : (Unit → (α × LazyList α)) → LazyList α | inhabited_proof
instance [Inhabited α]: Inhabited (LazyList α) where
default := .inhabited_proof
def LazyList.head' [Inhabited α]: LazyList α → (α × LazyList α)
| .inhabited_proof => unreachable!
| head f => f ()
partial def increasing: LazyList Nat := s 0
where s := fun n => LazyList.head fun () => (n, s (n+1))
-- example : (increasing |>.head.snd |>.head.fst) = 1 := rfl
-- #reduce increasing |>.head.snd |>.head.snd
#eval increasing
|>.head'.snd -- 0
|>.head'.snd -- 1
|>.head'.snd -- 2
|>.head'.snd -- 3
|>.head'.snd -- 4
|>.head'.snd -- 5
|>.head'.snd -- 6
|>.head'.snd -- 7
|>.head'.fst -- 8
Robin Arnez (Jun 27 2025 at 10:10):
It's because the lazy list isn't inhabited without that constructor:
inductive LazyList α
| head : (Unit → (α × LazyList α)) → LazyList α
mutual
theorem lazyList_not_inhabited' : (α × LazyList α) → False
| (_, b) => lazyList_not_inhabited b
theorem lazyList_not_inhabited : LazyList α → False
| .head f => lazyList_not_inhabited' (f ())
end
It expects you to be able to recurse by applying the function, but this is clearly non-terminating for LazyList, so you get a contradiction.
Robin Arnez (Jun 27 2025 at 10:29):
What you'd want is a coinductive type; sadly these don't exist in Lean directly, but you can hack one together (which you can't prove things about but I don't think that's your goal):
structure LazyListImpl (α : Type u) where
next : Unit → α × LazyListImpl α
private opaque LazyListRef (α : Type u) : NonemptyType.{u}
structure LazyList (α : Type u) where
val : (LazyListRef α).type
nonempty : Nonempty α
instance [Nonempty α] : Nonempty (LazyList α) :=
match (LazyListRef α).property with
| ⟨a⟩ => ⟨a, ‹_›⟩
@[inline]
private unsafe def LazyList.headImpl (f : Unit → α × LazyList α) : LazyList α :=
(unsafeCast { next := unsafeCast f : LazyListImpl α } : LazyList α)
@[implemented_by LazyList.headImpl]
opaque LazyList.head (f : Unit → α × LazyList α) : LazyList α :=
(f ()).2 -- just show that it's inhabited
@[inline]
private unsafe def LazyList.nextImpl (x : LazyList α) : α × LazyList α :=
unsafeCast ((unsafeCast x : LazyListImpl α).next ())
@[implemented_by LazyList.nextImpl]
opaque LazyList.next (x : LazyList α) : α × LazyList α :=
(Classical.choice x.nonempty, x) -- just show that it's inhabited
partial def increasing : LazyList Nat := s 0
where s := fun n => LazyList.head fun () => (n, s (n+1))
-- example : (increasing |>.head.snd |>.head.fst) = 1 := rfl
-- #reduce increasing |>.head.snd |>.head.snd
#eval increasing
|>.next.snd -- 0
|>.next.snd -- 1
|>.next.snd -- 2
|>.next.snd -- 3
|>.next.snd -- 4
|>.next.snd -- 5
|>.next.snd -- 6
|>.next.snd -- 7
|>.next.fst -- 8
Easyoakland (Jun 27 2025 at 18:38):
It's because the lazy list isn't inhabited without that constructor:
The proof using mutual structural recursion you provided is helpful for understanding. I am, however, confused about how the #eval can work without ever constructing the variant which proves the type is inhabited.
Are proofs in Lean only capable of showing things which are true for all reduction orders, even though Lean has a particular one? If that's true I can see the argument that a recursion might infinitely reduce the inside of the returned function without ever actually returning.
but you can hack one together
I'm not sure I understand the advantage of the approach you describe with the unsafe and casting. You still have a LazyList with an extra variant.
Robin Arnez (Jun 27 2025 at 20:58):
Easyoakland schrieb:
confused about how the #eval can work without ever constructing the variant which proves the type is inhabited.
The restriction that every recursion has to end is only a restriction of the type system. The compiler (and thus #eval) doesn't care whether your function terminates, in the same way that pretty much every programming language allows you to have infinite loops in your program. It would only be problematic if the type system knew the definition of increasing, because then it would cause a contradiction (infinite stack of heads). However, partial means that the type system doesn't know any more about the function than its type; for all it knows, the function could just immediately return inhabited_proof.
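For instance, something like this is accepted purely because Nat is inhabited:
partial def loop : Nat := loop
-- the logic only ever sees loop as a constant of type Nat, so no contradiction can be derived from the circular body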
Easyoakland schrieb:
Are proofs in Lean only capable of showing things which are true for all reduction orders, even though Lean has a particular one?
Functions in the type system should not be viewed as functions in the programming sense. Instead, you can imagine them as a big table that has an entry for every value. In particular Unit → α becomes meaningless then because that'd just correspond to a table with a single entry. And then we just have a rule that inductive types need to have a finite depth, so you can't have infinite LazyLists "all the way down".
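Concretely, as far as inhabitation goes, Unit → α carries exactly the same information as α itself:
example (α : Type) (f : Unit → α) : α := f ()
example (α : Type) (a : α) : Unit → α := fun _ => a
so wrapping the recursive occurrence in Unit → ... doesn't help against the finite-depth rule.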
Easyoakland schrieb:
I'm not sure I understand the advantage of the approach you describe with the unsafe and casting.
The approach here is similar to what partial does -- hide the definition from the type system. We are trying to mimic the definition
inductive LazyList α
| head : (Unit → (α × LazyList α)) → LazyList α
def LazyList.head' : LazyList α → (α × LazyList α)
| head f => f ()
unsafe def increasing : LazyList Nat := s 0
where s := fun n => LazyList.head fun () => (n, s (n+1))
here but without the unsafe on increasing. So what we do is first establish the internal type LazyListImpl, which is what the actual representation for LazyList should be. Then we use opaque to define a new type, LazyListRef, which is just some arbitrary nonempty type to please the type system. In the same sense, the definition of LazyList is only there to make the operations of LazyList consistent in the type system. The definitions LazyList.headImpl and LazyList.nextImpl are the "real" definitions of these functions, the ones we want to use for #eval. We said LazyListImpl is supposed to be the runtime representation of LazyList, so we just use an unsafe cast to cast between them. Then the real functions LazyList.head and LazyList.next are again just there to please the type system; under the hood we just use LazyList.headImpl and LazyList.nextImpl (through @[implemented_by]). This makes it possible to use LazyList without unsafe but with the same computational behavior as the unsafe definition.
Easyoakland (Jun 27 2025 at 21:56):
This makes it possible to use LazyList without unsafe but with the same computational behavior as the unsafe definition.
Right. But the increasing function is still partial in your translation. What are you gaining by hiding the real definition of the type that you can't prove things about? The runtime behavior already matches what it should, so there's no need to complicate it with unsafe.
Easyoakland (Jun 27 2025 at 22:02):
which you can't prove things about but I don't think that's your goal
Actually my goal is to prove properties of the transition function without exposing the state in the type signature. That way these LazyLists can be composed without having to thread the internal state.
I'm now trying to add a gas parameter and prove things about the function under the condition that the gas is sufficiently large. Do you have a better idea?
class Gas where
val : Nat
instance: Gas where
val := 2^64
inductive LazyList α | head : (Unit → (α × LazyList α)) → LazyList α | eol
instance [Inhabited α]: Inhabited (LazyList α) where
default := .eol
def LazyList.head' [Inhabited α]: LazyList α → (α × LazyList α)
| eol => panic! "ran out of gas for LazyList.head'"
| head f => f ()
def increasing [g: Gas]: LazyList Nat := s g.val 0
where s := fun gas => fun n => match gas with
| 0 => .eol
| gas+1 => LazyList.head fun () => (n, s gas (n+1))
def LazyList.get [Inhabited α] (self: LazyList α) (idx:Nat): α := match idx with
| 0 => self.head'.fst
| n+1 => let self := self.head'.snd; self.get n
instance [Inhabited α]: GetElem (LazyList α) Nat α (fun _ _ => True) where
getElem self idx _ := LazyList.get self idx
theorem increasing_incr (n g i v : Nat) (g_gt_n: g > n) (h: increasing (g:=⟨g⟩)[i] = v): increasing (g:=⟨g⟩)[i+1] = v + 1 := by
unfold increasing
unfold increasing.s
admit
theorem increasing_eq_idx (n g : Nat) (h: g > n): increasing (g:=⟨g⟩)[n] = n := by
admit
Robin Arnez (Jun 27 2025 at 22:03):
Easyoakland schrieb:
What are you gaining by hiding the real definition of the type that you can't prove things about?
The only difference is that you don't need to consider the case for inhabited_proof. This might be a (small but still) performance improvement.
Robin Arnez (Jun 27 2025 at 22:05):
Easyoakland schrieb:
Actually my goal is to prove properties of the transition function without exposing the state in the type signature.
Okay, then my question is: Do you just care about proofs or also computation? If you don't care about efficient runtime, just use Nat → α.
Easyoakland (Jun 27 2025 at 22:09):
Runtime matters, but it doesn't need to be terribly fast. It just needs to run a few thousand to a few hundred thousand iterations over several minutes (though the transition functions will be more complicated) for the "simulation".
The goal with this is to see if it's possible to emulate Clash's Signal type to hide the coinductive state for easier composition while enabling proofs about its behavior. The hardware synthesizer won't actually run the type; it will special-case it to extract the transition function.
just use Nat → α.
I don't follow.
Robin Arnez (Jun 27 2025 at 22:12):
The Nat → α approach would be essentially defining a LazyList by its get function
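Roughly, I mean something like this (just a sketch):
def LazyList (α : Type) := Nat → α
def LazyList.get {α : Type} (l : LazyList α) (n : Nat) : α := l n
def LazyList.tail {α : Type} (l : LazyList α) : LazyList α := fun n => l (n + 1)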
Robin Arnez (Jun 27 2025 at 22:13):
That makes the runtime for iteration quadratic though
Easyoakland (Jun 27 2025 at 22:16):
The Nat → α approach would be essentially defining a LazyList by its get function
You mean define the function recursively? Like:
def doubling : Nat → Nat
| 0 => 0
| n+1 => 2 * doubling n
That makes the runtime for iteration quadratic though
I know that Thunks exist. Are they not usable for proofs?
Robin Arnez (Jun 27 2025 at 22:19):
Yeah I mean if you try to do something like
def sumOfFirstN (x : LazyList Nat) (n : Nat) (start : Nat) : Nat :=
match n with
| 0 => 0
| k + 1 => x.get start + sumOfFirstN x k (start + 1)
and you define your LazyList using a recursive Nat → α function, you'd get quadratic runtime
Aaron Liu (Jun 27 2025 at 22:21):
Easyoakland said:
I know that Thunks exist. Are they not usable for proofs?
They are usable in proofs, but unfortunately it doesn't make the runtime faster, since you are recomputing the thunk every time you call the function with a different input.
Easyoakland (Jun 27 2025 at 22:24):
runtime for iteration quadratic
@Robin Arnez That probably will be too much of a performance overhead.
you are recomputing the thunk every time you call the function with a different input.
@Aaron Liu How are Thunks meant to be used and how does this differ?
Aaron Liu (Jun 27 2025 at 22:25):
Thunks work best when you use the same one over and over
Robin Arnez (Jun 27 2025 at 22:25):
Thunks are not cache, but if you have two references to the same thunk (as in same memory address) then you only need to evaluate once
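E.g. schematically:
def twice (t : Thunk Nat) : Nat := t.get + t.get
-- one thunk, two gets: the body is forced at most once
def twice' (f : Unit → Nat) : Nat := (Thunk.mk f).get + (Thunk.mk f).get
-- two fresh thunks: f is forced twice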
Robin Arnez (Jun 27 2025 at 22:25):
But if you define something by repetition you necessarily construct something every time
Aaron Liu (Jun 27 2025 at 22:26):
Since no information is being carried between function calls you do need to recompute it
Easyoakland (Jun 27 2025 at 22:27):
Sounds like the gas method is best then (assuming I'm not missing something where it doesn't work)?
Aaron Liu (Jun 27 2025 at 22:28):
You could pass around a cache
Easyoakland (Jun 27 2025 at 22:30):
So run everything in a monad with state of the cache and indexing function names + arguments to values?
Aaron Liu (Jun 27 2025 at 22:30):
Yeah that's basically what I'm suggesting
Aaron Liu (Jun 27 2025 at 22:31):
It worked for me for relatively small computations
Easyoakland (Jun 27 2025 at 23:08):
It looks like I've either misunderstood the Nat → α approach or it also doesn't work with Lean's termination checker.
abbrev Signal α := Nat → α
namespace Signal
def register (reset: α) (s: Signal α): Signal α := fun (i:Nat) =>
match i with
| 0 => reset
| i+1 => s i
def pure : α → Signal α := fun x _ => x
instance [HAdd α β γ]: HAdd (Signal α) (Signal β) (Signal γ) where
hAdd a b := fun i => a i + b i
instance [HMul α β γ]: HMul (Signal α) (Signal β) (Signal γ) where
hMul a b := fun i => a i * b i
#check Signal Nat
instance : OfNat (Signal Nat) n where
ofNat := Signal.pure n
partial def multiply_add (x y : Signal Nat): Signal Nat := acc
where
acc: Signal Nat := register 0 (acc + x * y)
def _root_.Std.Range.toList (r: Std.Range): List Nat := Id.run do
let mut acc := []
for x in r do
acc := List.cons x acc
acc.reverse
#eval [0:3].toList |>.map (multiply_add 1 2) -- [0, 2, 4]
Aaron Liu (Jun 27 2025 at 23:11):
The way you've written it, it's not terminating, even though it terminates on all inputs. I think you need to use some sort of corecursion.
Easyoakland (Jun 27 2025 at 23:12):
The way you've written it, it's not terminating, even though it terminates on all inputs.
What? I thought the definition of terminating is termination on all inputs.
Aaron Liu (Jun 27 2025 at 23:13):
Ok, if you unfold some stuff then you can prove it terminates, but you can't prove it terminates with the built-in termination facilities.
Easyoakland (Jun 27 2025 at 23:22):
Ah.
def multiply_add (x y : Signal2 Nat): Signal2 Nat := acc
where
acc: Signal2 Nat := fun | 0 => 0 | i+1 => ((acc i) + (x * y) i)
Well if I have to unfold register then that definitely won't work as an approach. The whole idea with the Signal and register functions is to make the Signal definition private and only allow construction with register because only accessing the previous value (like register does) is actually hardware synthesizable.
Is it possible to write a macro which desugars the non-working version into the second one for the termination checker, without making it easy for the user to use the desugared version?
Robin Arnez (Jun 27 2025 at 23:25):
Here's another definition similar to the one I defined before, except with a model implementation. There are some hacks though because the compiler gets a bit in the way.
/-!
Runtime implementation
-/
inductive LazyListImpl (α : Type u) where
| head : α → (Unit → LazyListImpl α) → LazyListImpl α
unsafe def LazyListImpl.fromGetter (f : Nat → α) (i : Nat) : LazyListImpl α :=
.head (f i) fun _ => fromGetter f (i + 1)
def LazyListImpl.get : LazyListImpl α → Nat → α
| .head x _, 0 => x
| .head _ f, k + 1 => (f ()).get k
@[irreducible]
private def SeqType (α : Type u) := Nat → α
structure LazyList (α : Type u) where
private mkInternal ::
private getInternal : SeqType α
/-!
Porting runtime implementation to `LazyList` using `unsafeCast`
-/
@[inline, always_inline]
unsafe def LazyList.mkImpl (f : Nat → α) : LazyList α :=
unsafeCast (LazyListImpl.fromGetter f 0)
@[inline, always_inline]
unsafe def LazyList.getImpl (l : LazyList α) : Nat → α :=
LazyListImpl.get (unsafeCast l)
@[inline, always_inline]
unsafe def LazyList.lazyConsImpl (head : α) (tail : Unit → LazyList α) : LazyList α :=
unsafeCast (LazyListImpl.head head (unsafeCast tail) : LazyListImpl α)
@[inline, always_inline]
unsafe def LazyList.headImpl (l : LazyList α) : α :=
(unsafeCast l : LazyListImpl α).1
@[inline, always_inline]
unsafe def LazyList.tailImpl (l : LazyList α) : LazyList α :=
unsafeCast ((unsafeCast l : LazyListImpl α).2 ())
/-!
Model implementations
-/
@[implemented_by LazyList.mkImpl, irreducible]
def LazyList.mk (f : Nat → α) : LazyList α := by
with_unfolding_all exact LazyList.mkInternal f
@[implemented_by LazyList.getImpl, irreducible]
def LazyList.get (l : LazyList α) : Nat → α := by
with_unfolding_all exact LazyList.getInternal l
@[implemented_by LazyList.lazyConsImpl]
def LazyList.lazyCons (head : α) (tail : Unit → LazyList α) : LazyList α :=
.mk fun
| 0 => head
| k + 1 => (tail ()).get k
@[implemented_by LazyList.headImpl]
def LazyList.head (x : LazyList α) : α :=
x.get 0
@[implemented_by LazyList.tailImpl]
def LazyList.tail (x : LazyList α) : LazyList α :=
.mk fun a => x.get (a + 1)
@[simp]
theorem LazyList.get_mk : get (mk f) = f := by
with_unfolding_all rfl
@[simp]
theorem LazyList.tail_mk : tail (mk f) = mk (fun a => f (a + 1)) := by
with_unfolding_all rfl
theorem LazyList.forall_iff_mk (p : LazyList α → Prop) : (∀ x, p x) ↔ (∀ x, p (.mk x)) := by
constructor
· intro h x
apply h
· with_unfolding_all
intro h ⟨x⟩
apply h
open Lean.Order in
noncomputable instance [Nonempty α] : CCPO (LazyList α) := letI : CCPO (Nat → α) := (instCCPOPi : CCPO (Nat → FlatOrder Classical.ofNonempty)); {
rel x y := x.get ⊑ y.get
rel_refl := this.rel_refl
rel_trans := this.rel_trans
rel_antisymm {x y} h h' := by
with_unfolding_all
cases x; cases y
cases this.rel_antisymm h h'
rfl
csup f := .mk (CCPO.csup (fun a => f (.mk a)))
csup_spec {x c} hc := by
with_unfolding_all
have h := this.csup_spec (c := fun a => c (LazyList.mk a)) (x := x.get) ?_
· simpa only [LazyList.get_mk, LazyList.forall_iff_mk] using h
· simpa only [chain, LazyList.forall_iff_mk] using hc
}
open Lean.Order in
@[partial_fixpoint_monotone]
theorem LazyList.monotone_lazyCons [Nonempty α] [PartialOrder γ]
(x : α) {f : γ → Unit → LazyList α} (hf : monotone f) :
monotone fun a => LazyList.lazyCons x (f a) := by
intro a b h
simp only [PartialOrder.rel]
intro i
simp only [lazyCons, get_mk]
split
· exact .refl
· apply hf
exact h
def increasing : LazyList Nat := s 0
where s := fun n => LazyList.lazyCons n fun () => s (n+1)
partial_fixpoint
#eval increasing.tail.tail.head -- 2
Robin Arnez (Jun 27 2025 at 23:25):
This one uses partial_fixpoint for termination checking
Easyoakland (Jun 27 2025 at 23:37):
I'm not familiar with partial_fixpoint. I see where it is described in the reference, but I still am not quite sure what it means.
Is there a more detailed explanation elsewhere?
Aaron Liu (Jun 27 2025 at 23:39):
Easyoakland said:
Well if I have to unfold register then that definitely won't work as an approach. The whole idea with the Signal and register functions is to make the Signal definition private and only allow construction with register because only accessing the previous value (like register does) is actually hardware synthesizable.
That's why you need the corecursion!
Robin Arnez (Jun 27 2025 at 23:42):
Easyoakland schrieb:
I'm not familiar with partial_fixpoint. I see where it is described in the reference, but I still am not quite sure what it means. Is there a more detailed explanation elsewhere?
partial_fixpoint is a recent addition that allows you to write possibly non-terminating functions that are (usually) tail-recursive. However, with the right theorems, you can extend it to other kinds of "almost-tail-recursive" functions: In this case it allows you to insert any amount of lazyCons and treat it as "almost-tail-recursion".
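The simplest shape it handles is a possibly non-terminating tail-recursive function whose result type is nonempty, e.g. something like:
def findAbove (p : Nat → Bool) (i : Nat) : Nat :=
  if p i then i else findAbove p (i + 1)
partial_fixpoint
and the monotonicity lemma for lazyCons above then extends this to recursive calls that sit directly under lazyCons.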
Easyoakland (Jun 27 2025 at 23:43):
Aaron Liu said:
That's why you need the corecursion!
I'm not sure if you mean that Lean doesn't support this kind of construct or if you mean that I need to reformulate it somehow. In the latter case I don't know what you mean.
Aaron Liu (Jun 27 2025 at 23:44):
Having corecursion will allow you to write functions like multiply_add without unfolding register
Aaron Liu (Jun 27 2025 at 23:44):
and you can prove they terminate
Easyoakland (Jun 27 2025 at 23:44):
What is "having corecursion".
Aaron Liu (Jun 27 2025 at 23:45):
It means if you define a corecursor, all these things will happen
Robin Arnez (Jun 27 2025 at 23:46):
Corecursion is basically tail-recursion with a constructor application
Aaron Liu (Jun 27 2025 at 23:46):
though it might loop around to being computationally inefficient again
Robin Arnez (Jun 27 2025 at 23:46):
I feel like the approach with partial_fixpoint works really nicely here instead of defining a corecursor though
Aaron Liu (Jun 27 2025 at 23:47):
That probably works too, but I haven't tried it yet so I wouldn't know
Easyoakland (Jun 27 2025 at 23:48):
Aaron Liu said:
It means if you define a corecursor, all these things will happen
I have a very limited understanding of corecursion. My fuzzy understanding is that it's what recursion looks like on the other side. I don't know what a corecursor is.
Easyoakland (Jun 27 2025 at 23:51):
Robin Arnez said:
I feel like the approach with partial_fixpoint works really nicely here instead of defining a corecursor though
It's going to take me a bit to understand it.
Easyoakland (Jun 28 2025 at 01:05):
Am I correct that annotating a function f with partial_fixpoint works by making the function partial and adding a proof of equality between the opaque f and the original function definition which recursively contains f?
Aaron Liu (Jun 28 2025 at 01:07):
Yes, for certain definitions of "partial". Also, it doesn't make f opaque, it only adds the @[irreducible] tag.
Easyoakland (Jun 28 2025 at 01:11):
By "makes the function partial" I meant partial def.
What's the difference between opaque and @[irreducible]?
I see in the reference
Opaque constants are defined constants that are not subject to δ-reduction in the kernel.
and
Irreducible definitions are not unfolded at all during elaboration. Definitions can be made irreducible by applying the irreducible attribute.
Which seem to be the same, since
δ (delta) reduction: Replacing occurrences of defined constants by the definition's value
Aaron Liu (Jun 28 2025 at 01:11):
It does not make it partial def
Aaron Liu (Jun 28 2025 at 01:12):
Irreducible definitions are not unfolded during elaboration (unless you force it), but they are unfolded during kernel checking
Aaron Liu (Jun 28 2025 at 01:12):
However opaque constants are not unfolded at all
Aaron Liu (Jun 28 2025 at 01:13):
The difference is that irreducible definitions are "please don't unfold me" but you can still override it, while opaque constants just can't be unfolded at all
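A quick illustration:
@[irreducible] def two : Nat := 2
example : two = 2 := by with_unfolding_all rfl  -- irreducible can still be forced open
opaque three : Nat := 3
-- example : three = 3 := by with_unfolding_all rfl  -- fails: opaque constants are never unfolded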
Easyoakland (Jun 28 2025 at 01:14):
Oh, I see. You prompted me to use #print. Now I see what's going on better. Thank you.
/-
@[irreducible] def Test.increasing.s : Nat → LazyList Nat :=
Lean.Order.fix (fun f n => LazyList.lazyCons n fun x => f (n + 1)) increasing.s._proof_6
-/
#print increasing.s
Aaron Liu (Jun 28 2025 at 01:15):
partial def f (n : Nat) : Nat := f (n + 1)
-- opaque f : Nat → Nat
#print f
Easyoakland (Jun 30 2025 at 09:24):
@Robin Arnez
I'm trying to get this to work:
def add_lists (x y : LazyList Nat): LazyList Nat := acc
where
acc: LazyList Nat := register 0 (fun () => acc + g)
partial_fixpoint
Defining addition on LazyLists to be elementwise:
instance [HAdd α β γ]: HAdd (LazyList α) (LazyList β) (LazyList γ) where
hAdd a b := .mk fun i => a.get i + b.get i
Showing register was monotone was the same as lazyCons:
def register (reset: α) (s: Unit → LazyList α): LazyList α := Test.LazyList.lazyCons reset s
open Lean.Order in
@[partial_fixpoint_monotone]
theorem LazyList.monotone_register [Nonempty α] [PartialOrder γ]
(x : α) {f : γ → Unit → LazyList α} (hf : monotone f) :
monotone fun a => register x (f a) := by
intro a b h i
simp only [register, lazyCons, get_mk]
split
· exact .refl
· apply hf
exact h
But showing that the addition is monotone has proved challenging. How exactly should I go about using partial_fixpoint on add_lists?
What I've tried so far
From what I can tell, using partial_fixpoint requires being able to show that the function is monotonic with respect to a partialOrder made by CCPO.toPartialOrder. It can't be any particular partial order, otherwise you could use .rel x y := x = y for everything. (= isn't a CCPO because there's no minimum as required by this provable theorem: CCPO.exists_minimum [inst: CCPO α] : ∃ m, ∀ x, inst.toPartialOrder.rel m x.)
I have a theorem for monotone hAdd on LazyLists.
open Lean.Order in
@[partial_fixpoint_monotone]
theorem LazyList.monotone_hadd
[Nonempty α] [Nonempty β] [Nonempty γ] [PartialOrder δ] [inst: HAdd α β γ]
(h_add_le_add: ∀ (a a' : α) (b b' : β), ((a:α) ⊑ (a':α) → (b:β) ⊑ (b':β) → a + b ⊑ a' + b'))
{f : δ → LazyList α} {g : δ → LazyList β} (hf: monotone f) (hg: monotone g) :
monotone fun x => (@instHAddLazyList _ _ _ inst |>.hAdd) (f x) (g x) := by
intro a b h i
simp only [(.+.), get_mk]
apply h_add_le_add
. apply hf
exact h
. apply hg
exact h
but this is only with the FlatOrder (Classical.ofNonempty) ordering of α β γ and corresponding LazyList.
I'd want to show that Nat.add is monotonic to be able to use addition on LazyLists with partial_fixpoint, but Nat.add doesn't satisfy h_add_le_add with FlatOrder 0 (from 0 ⊑ 2 and 1 ⊑ 1, h_add_le_add would give 0 + 1 ⊑ 2 + 1, i.e. 1 ⊑ 3, which is false in the flat order).
Nat.le does satisfy h_add_le_add, but I can't get the proof of LazyList.monotone_hadd to work with an arbitrary partialOrder for α β γ.
I also have the above theorem with a more generic CCPO LazyList instance (syntactically identical to using FlatOrder (Classical.ofNonempty)) but it requires the type argument of the LazyList to have a CCPO instance. Nat.le can't have this instance because there's no valid supremum csup for the chain of all natural numbers fun _ => True
open Lean.Order in
noncomputable instance [inst: CCPO α] : CCPO (LazyList α) := letI : CCPO (Nat → α) := instCCPOPi; {
rel x y := x.get ⊑ y.get
rel_refl := this.rel_refl
rel_trans := this.rel_trans
rel_antisymm {x y} h h' := by
cases x; cases y
cases this.rel_antisymm h h'
rfl
csup f := .mk (CCPO.csup (fun a => f (.mk a)))
csup_spec {x c} hc := by
have h := this.csup_spec (c := fun a => c (LazyList.mk a)) (x := x.get) ?_
· simpa only [LazyList.get_mk, LazyList.forall_iff_mk] using h
· simpa only [chain, LazyList.forall_iff_mk] using hc
}
open Lean.Order in
@[partial_fixpoint_monotone]
theorem LazyList.monotone_hadd'
[CCPO α] [CCPO β] [CCPO γ] [PartialOrder δ] [inst: HAdd α β γ]
(h_add_le_add: ∀ (a a' : α) (b b' : β), ((a:α) ⊑ (a':α) → (b:β) ⊑ (b':β) → a + b ⊑ a' + b'))
{f : δ → LazyList α} {g : δ → LazyList β} (hf: monotone f) (hg: monotone g) :
monotone fun x => (@instHAddLazyList _ _ _ inst |>.hAdd) (f x) (g x) := by
intro a b h i
simp only [(.+.), get_mk]
apply h_add_le_add
. apply hf
exact h
. apply hg
exact h
Aaron Liu (Jun 30 2025 at 10:07):
Try having List (Option Nat), then you can make addition monotone
Robin Arnez (Jun 30 2025 at 10:09):
Alternatively, use an accumulator:
def add_lists (x y : LazyList Nat) : LazyList Nat := acc (.mk fun _ => 0)
where
acc (added : LazyList Nat) : LazyList Nat :=
register added.head (fun () => acc (added.tail + g))
partial_fixpoint
Aaron Liu (Jun 30 2025 at 10:10):
Sorry, what's your definition of LazyList?
Robin Arnez (Jun 30 2025 at 11:26):
Robin Arnez schrieb:
Here's another definition similar to the one I defined before, except with a model implementation. There are some hacks though because the compiler gets a bit in the way.
...
I assume this one?
Easyoakland (Jun 30 2025 at 21:07):
Aaron Liu said:
Try having List (Option Nat), then you can make addition monotone
Not sure what this means.
Easyoakland (Jun 30 2025 at 21:10):
Robin Arnez said:
Alternatively, use an accumulator:
This typechecks, and it carries over to the multiply_accumulate example
def LazyList.pure (a:α): LazyList α := .mk fun _ => a
def multiply_accumulate (x y : LazyList Nat): LazyList Nat := acc (LazyList.pure 0)
where
acc (accumulated : LazyList Nat): LazyList Nat :=
register accumulated.head (fun () => acc (accumulated.tail + x * y))
partial_fixpoint
but this requires the user to have access to the tail and head functions, which means they could apply tail multiple times and produce a hardware-unsynthesizable design, which I'm trying to avoid. The goal is that in user code a LazyList can only be constructed from a constant (fun _ => c), a use of the register function (register reset next), or as an applicative functor which can only map the LazyList elementwise.
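So the intended user-facing surface would be roughly just these signatures (a sketch):
LazyList.pure : α → LazyList α                        -- constant signal
register      : α → (Unit → LazyList α) → LazyList α  -- one-step delay with reset value
Functor.map   : (α → β) → LazyList α → LazyList β     -- elementwise
Seq.seq       : LazyList (α → β) → (Unit → LazyList α) → LazyList β
with head, tail and get kept out of the user-facing namespace.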
Am I to assume that you suggest this alternative because + can't be shown monotone in a CCPO?
Easyoakland (Jun 30 2025 at 21:11):
Aaron Liu said:
Sorry, what's your definition of LazyList?
The model that @Robin Arnez described earlier.
Robin Arnez (Jun 30 2025 at 21:14):
Just asking, how are these lists supposed to be used? In particular how and when are they destructured?
Easyoakland (Jun 30 2025 at 21:16):
Robin Arnez said:
Just asking, how are these lists supposed to be used? In particular how and when are they destructured?
The way this works is that the topmost function takes LazyList inputs and returns LazyList outputs. The built-up list can only be inspected elementwise, through <*>, <$>, and some elementwise arithmetic (+, *, etc.).
Robin Arnez (Jun 30 2025 at 21:18):
Doesn't that mean that the contents of LazyList are never inspected though?
Easyoakland (Jun 30 2025 at 21:20):
Robin Arnez said:
Doesn't that mean that the contents of LazyList are never inspected though?
In what sense? The goal is to build up an increasingly complicated transition function which can be synthesized to hardware. The values of the list can be observed to make a new list.
The multiply_accumulate example is one instance of how they are intended to be used.
def multiply_accumulate (x y : LazyList Nat): LazyList Nat := acc
where
acc: LazyList Nat := register 0 (fun () => acc + x * y)
partial_fixpoint -- though this doesn't typecheck
This could be synthesized in hardware as a multiplier and adder that take two input wire buses representing x and y and feed a state register, with that register driving an output wire bus. Another hardware component can use this bus to generate its own output.
Easyoakland (Jun 30 2025 at 21:20):
(deleted)
Easyoakland (Jun 30 2025 at 21:25):
For proving properties about these lists I would imagine a namespace that would bring these constructors into scope to enable software-only code and proofs, similar to how open Classical enables non-constructive logic to be used in proofs.
Robin Arnez (Jun 30 2025 at 21:34):
Hmm there is some complication involved here because acc := (·+1) <$> acc is clearly unsound but wrapping the right-hand side in register makes it sound again.
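Concretely, the un-registered equation would demand acc.get i = acc.get i + 1 for every i, and nothing satisfies that:
example : ¬ ∃ f : Nat → Nat, ∀ i, f i = f i + 1 := by
  intro ⟨f, h⟩
  have := h 0
  omega
With register in front you instead get acc.get 0 = reset and acc.get (i + 1) = acc.get i + 1, which does have a (unique) solution.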
Robin Arnez (Jun 30 2025 at 21:35):
So there needs to be at least some distinction between these two cases
Aaron Liu (Jun 30 2025 at 21:36):
Write a corecursor, like docs#Stream'.corec
Robin Arnez (Jun 30 2025 at 21:37):
That won't suffice though because we explicitly want to have access to the value we are defining within the right-hand side and apply transformations
Robin Arnez (Jun 30 2025 at 21:38):
Corecursors only work for tail-recursive right-hand sides
Aaron Liu (Jun 30 2025 at 21:39):
You also need a stream destructor
Robin Arnez (Jun 30 2025 at 21:39):
How would that help with the multiply_accumulate example?
Easyoakland (Jun 30 2025 at 21:56):
Robin Arnez said:
Hmm there is some complication involved here because acc := (·+1) <$> acc is clearly unsound but wrapping the right-hand side in register makes it sound again.
I just tried this in Clash
invalid :: Signal System (Unsigned 16)
invalid = acc
where
acc = (+1) <$> acc
and surprisingly it just synthesized the invalid circuit. The hardware simulator (verilator) does complain that the logic is circular.
/* AUTOMATICALLY GENERATED SYSTEMVERILOG-2005 SOURCE CODE.
** GENERATED BY CLASH 1.8.2. DO NOT MODIFY.
*/
`default_nettype none
`timescale 100fs/100fs
module invalid
( // No inputs
// Outputs
output logic [15:0] result
);
logic [15:0] c$result_rec;
assign c$result_rec = c$result_rec + 16'd1;
assign result = c$result_rec;
endmodule
Easyoakland (Jun 30 2025 at 22:01):
Running the Haskell code never terminates. So, at least for my initial goal of emulating Clash, it doesn't matter if this case typechecks or not. Though, I would guess it has to be rejected by Lean since it is logically invalid.
Easyoakland (Jun 30 2025 at 22:09):
Aaron Liu said:
Write a corecursor, like docs#Stream'.corec
Ah. I didn't know of this. I'll take a look.
Easyoakland (Jul 01 2025 at 00:26):
Aaron Liu said:
Write a corecursor, like docs#Stream'.corec
This works fine for the very simple case
def increasing' : Stream' Nat := Stream'.corec (id) (fun st => st+1) 0
but it's not clear to me how this would work when incorporating other state:
instance [HAdd α β γ]: HAdd (Stream' α) (Stream' β) (Stream' γ) where
hAdd a b := fun i => a.get i + b.get i
instance [HMul α β γ]: HMul (Stream' α) (Stream' β) (Stream' γ) where
hMul a b := fun i => a.get i * b.get i
instance : Functor (Stream') where
map := Stream'.map
instance : Applicative (Stream') where
pure := Stream'.pure
seq f x := f.apply (x ())
def multiply_accumulate' (x y : Stream' Nat): Stream' Nat :=
-- let f : Stream' (Nat -> Nat -> Nat) := Stream'.corec (id) (fun st => fun x y => st + x * y)
let f := fun (x y st : Nat) => st + x * y
let fapp := f <$> x <*> y
?_
Aaron Liu said:
You also need a stream destructor
I don't know what this is and see no definition of it in Mathlib.Data.Stream.
Aaron Liu (Jul 01 2025 at 00:28):
A destructor means something like docs#Stream'.head and docs#Stream'.tail
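For Stream' (which is definitionally Nat → α) those amount to, roughly:
Stream'.head s = s 0
Stream'.tail s = fun n => s (n + 1)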
Easyoakland (Jul 01 2025 at 00:32):
Aaron Liu said:
A destructor means something like docs#Stream'.head and docs#Stream'.tail
Ah. As I said earlier, it can't be the case that the user needs to use these functions directly, because then they could call them in hardware-unsynthesizable ways (calling .tail too many times).
The Digital Circuits in CλaSH: Functional Specifications and Type-Directed Synthesis paper provides an even more subtle way to misuse these:
delayInc :: a → Signal a → Signal a
delayInc i (s :- ss) = i :- s :- delayInc i ss
Easyoakland (Jul 01 2025 at 00:37):
@Aaron Liu
The goal here is to see if it is possible in Lean to make a type like Stream where the only way to interact with it is through certain "blessed" functions. These include cons (also called register), pure, <*>, <$>, but do not include arbitrary manipulation like .tail.
It certainly seems possible if I allow manipulation to be partial (as evidenced by the original post), but the goal is to make theorems about this type.
Aaron Liu (Jul 01 2025 at 00:38):
How do you want to handle termination?
Aaron Liu (Jul 01 2025 at 00:38):
I'm assuming you want to allow recursive definitions
Easyoakland (Jul 01 2025 at 00:39):
In what sense? I've written how I'd want this to work above; am I explaining something poorly?
Aaron Liu (Jul 01 2025 at 00:41):
For now I'll recommend the corecursor (although this is purely from the theorem proving side, I don't know how they perform in execution)
Easyoakland (Jul 01 2025 at 00:41):
The software simulation side of this terminates by only running the eventual function with something akin to Stream.take, which is only accessible in the special namespace I described here.
Easyoakland said:
For proving properties about these lists I would imagine a namespace that would bring these constructors into scope to enable software-only code and proofs, similar to how open Classical enables non-constructive logic to be used in proofs.
Easyoakland (Jul 01 2025 at 00:44):
Aaron Liu said:
For now I'll recommend the corecursor (although this is purely from the theorem proving side, I don't know how they perform in execution)
Ok, how would you write the multiply accumulate function using only Stream.corec but not .head, .tail or Stream.get?
def multiply_accumulate' (x y : Stream' Nat): Stream' Nat :=
-- let f : Stream' (Nat -> Nat -> Nat) := Stream'.corec (id) (fun st => fun x y => st + x * y)
let f := fun (x y st : Nat) => st + x * y
let fapp := f <$> x <*> y
?_
Aaron Liu (Jul 01 2025 at 00:46):
What do you want this function to do?
Aaron Liu (Jul 01 2025 at 00:46):
oh I see
Aaron Liu (Jul 01 2025 at 00:47):
You also probably want a stream fold function
Easyoakland (Jul 01 2025 at 00:48):
Aaron Liu said:
What do you want this function to do?
this without the partial
def register (reset: α) (s: Stream' α): Stream' α := reset :: s
partial def multiply_accumulate'''' (x y : Stream' Nat): Stream' Nat := acc
where
acc: Stream' Nat := register 0 (acc + x * y)
Easyoakland (Jul 01 2025 at 00:51):
Aaron Liu said:
You also probably want a stream fold function
Ok... What's that? Something like this?
def Stream'.fold (s: Stream' α) (f: α → β → α) (init: α): Stream' β → α
Aaron Liu (Jul 01 2025 at 00:52):
I guess what I really mean was a stream scan function
Aaron Liu (Jul 01 2025 at 00:52):
Fold doesn't really work with infinite streams
Easyoakland (Jul 01 2025 at 00:53):
If you could write the type-signature or link to what you mean that would be helpful.
Aaron Liu (Jul 01 2025 at 00:57):
I feel like your choice to omit head and tail is leading to a lot of pain
Easyoakland (Jul 01 2025 at 01:01):
It's not really a choice; it's a requirement for this to work as a way of describing hardware. The alternative I already know will work is representing hardware as input -> StateM (output, state) instead. The problem with that approach is that state is explicit and must be threaded by the user on composition.
Easyoakland (Jul 01 2025 at 01:02):
If you allow head and tail then you get this
Aaron Liu (Jul 01 2025 at 01:05):
What do you mean StateM (output, state)? I feel like this could be a good solution
Easyoakland (Jul 01 2025 at 01:15):
Aaron Liu said:
What do you mean StateM (output, state)? I feel like this could be a good solution
Literally that. Represent coinduction as a function with explicit state input and output.
def multiply_accumulate'' (x y: Nat): StateM Nat Nat := do
let res := (← get) + x * y
set res
pure res
The reason I was interested in finding an alternative is that it's harder to make temporal guarantees when the user can just decide not to call the transition function.
If the state was inaccessible you could write a delay function that guaranteed output after n transitions.
If state is explicit like
def delay1 (a:α): StateM α α := do
let next ← get
set a
pure next
def delayN (n:Nat) [NeZero n] (a:α): StateM (Vector α n) α := do
let mut state ← get
let (res, out) := state.shiftRight a
set res
pure out
then you can't stop the user from just ... not calling the transition function.
Aaron Liu (Jul 01 2025 at 01:16):
What do you mean "transition function"
Easyoakland (Jul 01 2025 at 01:16):
A finite state machine has a transition function.
Aaron Liu (Jul 01 2025 at 01:17):
What part of the code corresponds to the transition function?
Easyoakland (Jul 01 2025 at 01:17):
delayN is a transition function for a finite state machine which delays the input by n transitions of delayN.
Easyoakland (Jul 01 2025 at 01:17):
There's a different transition function for every finite state machine.
Aaron Liu (Jul 01 2025 at 01:18):
So your problem is that the user can decide to not use your code?
Easyoakland (Jul 01 2025 at 01:20):
Something in the future I'd want to try is representing a Signal n type which represents values in time, with a guarantee that the value is useful after a delay of n. This comes up in hardware a lot because of pipelining. For example a multiplication circuit might output the result of its inputs after a 3-transition delay.
Aaron Liu (Jul 01 2025 at 01:20):
hmm
Aaron Liu (Jul 01 2025 at 01:21):
my immediate reaction is "build a monad"
Aaron Liu (Jul 01 2025 at 01:21):
but I guess it depends on what you mean by "time" and "delay"
Easyoakland (Jul 01 2025 at 01:22):
Aaron Liu said:
my immediate reaction is "build a monad"
Where? StateM is a monad. Signal can't be a monad. It has to be an Applicative Functor.
Aaron Liu (Jul 01 2025 at 01:22):
ok, build an applicative functor then
Easyoakland (Jul 01 2025 at 01:23):
Aaron Liu said:
ok, build an applicative functor then
That's what this thread is. The problem is making it work without partial annotations.
Aaron Liu (Jul 01 2025 at 01:23):
what specifically is not working
Easyoakland (Jul 01 2025 at 01:23):
without partial annotations.
Aaron Liu (Jul 01 2025 at 01:23):
explain what you did, what happened, and what you want to happen
Aaron Liu (Jul 01 2025 at 01:24):
For your multiply_accumulate function, I have a solution
Easyoakland (Jul 01 2025 at 01:26):
Do you see the first post I made in this subject/thread? That's what I did. Then @Robin Arnez helped make it non-partial using partial_fixpoint, but this didn't work for the more complicated multiply_accumulate.
@Robin Arnez didn't answer, but I think the implication was that + can't be proven monotone.
Aaron Liu (Jul 01 2025 at 01:26):
sorry if I'm not being helpful right now I just want to understand your problem
Easyoakland (Jul 01 2025 at 01:27):
Aaron Liu said:
For your multiply_accumulate function, I have a solution
I'm interested.
Aaron Liu (Jul 01 2025 at 01:32):
I think it works if you have a Stream'.comul like this (are you still using Stream'?):
import Mathlib
def Stream'.comul {α : Type*} (s : Stream' α) : Stream' (Stream' α) := fun x y => s (x + y)
Aaron Liu (Jul 01 2025 at 01:32):
but I haven't tested it thoroughly
Easyoakland (Jul 01 2025 at 01:34):
Aaron Liu said:
sorry if I'm not being helpful right now I just want to understand your problem
There's something like 4 levels of difficulty to this business of representing changing values in time while hiding state.
First is a constant value. That's easy.
Second is a value which changes depending only on itself. That's what I got stuck on, but it seems to be solved with partial_fixpoint (I have proofs about the output of increasing that work).
Third is a time-varying value dependent on other values. That's like multiply_accumulate.
Fourth is a function with multiple time-varying values which each depend on multiple other time-varying values. This would be something like a full processor.
Aaron Liu (Jul 01 2025 at 01:35):
interesting
Aaron Liu (Jul 01 2025 at 01:35):
How does it work in real life? Maybe you can derive some inspiration.
Easyoakland (Jul 01 2025 at 01:40):
Aaron Liu said:
I think it works if you have a Stream'.comul like this (are you still using Stream'?):
import Mathlib
def Stream'.comul {α : Type*} (s : Stream' α) : Stream' (Stream' α) := fun x y => s (x + y)
I don't know how this should be used. It looks equivalent to fun s => s :: Stream'.tails s but I don't see how that's helpful.
Easyoakland (Jul 01 2025 at 01:42):
Aaron Liu said:
How does it work in real life? Maybe you can derive some inspiration.
In case it wasn't clear, Clash already does this. Its Signal type is what I'm trying to emulate. The problem is that (after translating Haskell laziness to strict Lean as in the original post) Lean's termination checker is sad.
Aaron Liu (Jul 01 2025 at 01:44):
Easyoakland said:
If you could write the type-signature or link to what you mean that would be helpful.
Stream'.scanl {α β : Type*} (s : Stream' α) (f : β → α → β) (init : β) : Stream' β
I just realized this is a special case of
Stream'.mapM {α β : Type*} {m : Type _ → Type _} [Monad m] (s : Stream' α) (f : α → m β) : m (Stream' β)
Aaron Liu (Jul 01 2025 at 01:59):
actually it looks like Stream'.mapM isn't implementable, ignore what I said
Easyoakland (Jul 01 2025 at 02:00):
This? If yes, I'll go back to my pending question: how would you use this to implement multiply_accumulate without partial def?
def Stream'.scanl {α β : Type*} (s : Stream' α) (f : β → α → β) (init : β) : Stream' β :=
Nat.rec (motive := fun _ => β) init (fun n prev => f prev (s n))
Aaron Liu (Jul 01 2025 at 02:03):
finally
def multiply_accumulate := fun x y =>
scanl (x * y) (fun acc new => acc + new) 0
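Assuming scanl emits init first, this should produce 0, x₀*y₀, x₀*y₀ + x₁*y₁, ..., i.e. the same outputs as the register-based version above.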
Easyoakland (Jul 01 2025 at 02:11):
I'll have to think about it. That might work. Kind of annoying we can't write regular equations.
I think scanl can be implemented with an input bus for the first argument, a register for the current scan state which starts with the value of the third argument to scanl, and a combinatorial circuit for the second argument which takes the value of the state register and the input bus and writes the result back to the state register.
Easyoakland (Jul 01 2025 at 02:36):
For reference this is a tail-recursive version of scanl:
def Stream'.scanl'' {α β : Type*} (s : Stream' α) (f : β → α → β) (init : β) : Stream' β := fun n =>
let rec go (i : Nat) (acc : β) : β :=
if i < n then go (i+1) (f acc (s.get i))
else acc
go 0 init
and it looks like this is going to have the same issue with quadratic runtime, as the stream at n requires calculating the input stream at each value < n.
Easyoakland (Jul 01 2025 at 02:42):
Yup. This is unusably slow.
#eval
let s := multiply_accumulate''' (pure 1) (pure 2)
let s := multiply_accumulate''' (s) (pure 2)
let s := multiply_accumulate''' (s) (pure 2)
let s := multiply_accumulate''' (s) (pure 2)
s.take 100