Zulip Chat Archive
Stream: lean4
Topic: Lean as an HDL
Easyoakland (Jun 25 2025 at 05:14):
Hi,
I'm new to Zulip, so please let me know if I'm doing it wrong.
I'm interested in using Lean4 as a hardware description language (HDL) using similar techniques as CLASH. This would enable proving the correctness of designs while taking advantage of the very nice Lean tooling, types, metaprogramming, and language.
To do this I'm trying to understand the Compiler internals, and would appreciate pointers about how it fits together or where to look for documentation.
Particular things I'm looking for are how to:
- Get the abstract syntax tree (AST) or intermediate representations (IR) of functions, definitions, constants, etc...
- Get the types of objects in the AST (for determining wire bus widths and slicing in the resulting HDL)
- Normalize and perform constant propagation on this AST
- Lower match patterns to decision trees
- Specialize and inline arguments to functions (especially for higher-order functions which can't otherwise be synthesized to hardware).
- How to prevent inlining of certain functions (for making primitive operations which compile to the equivalent underlying HDL constructs). Perhaps the way `opaque` is handled is sufficient for this?
Also I saw that there's a new compiler? How does that affect things?
The rest of this post contains things I've discovered by spelunking in the code-base.
I've figured out that importModules enables importing definitions from the .olean generated by lake exe.
let env ← importModules #[{module := `Main}] {} (trustLevel := 1024)
let .some constant_info := env.find? name | panic! "uh oh"
though I'm not sure if this is how one is supposed to read olean files. The ConstantInfo seems like it has all the information I'd need, but I would like to re-use as much of Lean's compiler internals as possible instead of re-writing everything starting from ConstantInfo and Expr.
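For reference, the whole experiment looks roughly like the following (a sketch; it assumes a Lake project whose compiled Main.olean is on the search path, and a constant named test to look up — both names are just placeholders):
import Lean
open Lean

def main : IO Unit := do
  initSearchPath (← findSysroot)
  -- Import the compiled environment instead of re-elaborating sources.
  let env ← importModules #[{module := `Main}] {} (trustLevel := 1024)
  let some info := env.find? `test
    | throw <| IO.userError "constant not found"
  IO.println s!"{info.name} : {info.type.dbgToString}"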
Enabling trace.compiler.ir with set_option trace.compiler.ir true shows me (for example):
set_option trace.compiler.ir true
def test (n : Nat) : Nat := match n with
  | 0 => 1
  | n+1 => 2
[init]
def test (x_1 : obj) : obj :=
let x_2 : obj := 0;
let x_3 : u8 := Nat.decEq x_1 x_2;
case x_3 : obj of
Bool.false →
let x_4 : obj := 2;
ret x_4
Bool.true →
let x_5 : obj := 1;
ret x_5
which indicates to me that the decision tree lowering I'd need is somewhere in the compiler. This particular output seems to be coming from Lean.IR.compileAux. Are those obj untyped, or is the type information accessible but not pretty-printed? Following the call stack (Lean.IR.compile > Lean.Compiler.LCNF.PassManager.run > Lean.Compiler.LCNF.compile and Lean.Compiler.LCNF.main) I see several types that seem important, such as
- PassManager
- CoreM
- CompilerM
which I don't understand. What is this whole Pass system in LCNF? What is the correct way to construct Lean.Core.Context and Lean.Core.State for using CoreM?
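The closest I've gotten to an answer myself is something like the following (a guess; it assumes CoreM.toIO is the intended entry point and leaves the remaining Core.Context/Core.State fields at their defaults), but I don't know if it's right:
import Lean
open Lean

-- Guess: build a minimal Core.Context/Core.State around an imported environment.
def runCore (env : Environment) (act : CoreM α) : IO α := do
  let ctx : Core.Context := { fileName := "<input>", fileMap := default }
  let (res, _finalState) ← act.toIO ctx { env }
  return res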
Max Nowak 🐉 (Jun 25 2025 at 06:36):
I am not familiar with CLASH and HDLs, but what are you trying to achieve? If you're trying to formalize a language, that is, make a kind of DSL and AST, you don't need to look into how Lean generates code.
Max Nowak 🐉 (Jun 25 2025 at 06:37):
(In which case, https://leanprover-community.github.io/lean4-metaprogramming-book/ and https://lean-lang.org/doc/reference/latest/Notations-and-Macros/#language-extension might help)
Max Nowak 🐉 (Jun 25 2025 at 06:47):
Maybe you can give a very small ideal example of how you want your thing to be ultimately used by end users?
Easyoakland (Jun 25 2025 at 07:54):
The idea of Clash is that a significant subset of a pure functional language (non-recursive functions and datatypes, plus a few exceptions like vectors with type-level length, plus recursion which can be fully unfolded at compile time through inlining, specialization, and monomorphization) can be compiled into a hardware description. Clash uses Haskell. Since Lean is a pure language, it should be possible to compile a large subset of Lean into a hardware description.
For example:
given
inductive MyType (n : Nat) where
  | A : BitVec n → MyType n
  | B : Option UInt16 → MyType n
usage of MyType n would generate a wire bus of 1 + max(n, (1 + 16)) bits for the discriminant and maximum possible payload. Similarly for a function which returns a MyType n.
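To make that size rule concrete, here is the intended computation as a throwaway Lean function (myTypeWidth is a hypothetical helper, not part of any existing code):
-- 1 discriminant bit for the two constructors, plus the widest payload.
-- `Option UInt16` itself needs 1 tag bit plus 16 payload bits.
def myTypeWidth (n : Nat) : Nat := 1 + max n (1 + 16)

#eval myTypeWidth 4   -- 18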
A function gets translated into a combinational circuit with a bus of wires for its input and a bus of wires for its output.
Stateful hardware components are represented by Mealy machines, or in other words, a function of the form input -> state -> (state, output) or α -> StateM σ β.
Control flow gets compiled into muxes.
To make this work higher-order functions have to be specialized into first-order functions.
It's not a DSL, but a direct compilation of a subset of the functional language. The appeal is that it intends to make writing the hardware simpler. In this case it would also enable using Lean's ability to prove theorems about Lean code.
Easyoakland (Jun 25 2025 at 07:59):
Also, I have read the metaprogramming book, but it doesn't cover CoreM. Just MetaM and ElabM inside elab and macro, where the monad is magically run for you, which doesn't help in regard to Lean.Compiler.LCNF.compile.
Easyoakland (Jun 25 2025 at 08:05):
The way I'd hope this could be used is by making an executable or function which can take a monomorphic "top-level-entity" function defined in Lean and recursively generate the corresponding hardware description in System Verilog.
neil (Jun 25 2025 at 09:06):
It's not a DSL, but a direct compilation of a subset of the functional language. The appeal is that it intends to make writing the hardware simpler. In this case it would also enable using Lean's ability to prove theorems about Lean code.
I don't really understand why you want to be able to compile arbitrary lean code instead of defining a language within Lean, as is usually done. If you're trying to add another compilation target for lean, you're going to be exposed to changes in the compiler, which will probably be a maintenance nightmare. And you still have access to a lot of lean's language features, as long as you can "compile" them into terms of your language.
neil (Jun 25 2025 at 09:11):
looks like Clash has something to this effect in their docs as well:
Due to Clash's tight integration with GHC, updates to the GHC version that Clash uses result in changes to the Clash version. As GHC's internals change frequently, even for minor bumps, it cannot be guaranteed that these changes will not result in Clash changes.
so clash is sort of a fork of a given version of GHC; I was wondering how that worked
Easyoakland (Jun 25 2025 at 16:44):
I don't really understand why you want to be able to compile arbitrary lean code instead of defining a language within Lean, as is usually done.
On why you would want to do this instead of an embedded DSL I'll quote directly from Digital Circuits in CλaSH: Functional Specifications and Type-Directed Synthesis by Christiaan P.R. Baaij
One popular approach to alleviate the implementation burden is to create an embedded domain specific language (DSL) for circuit design, which is the approach taken by, for example, the Lava HDL [...].
One technical difficulty is that these circuit generators will, in the presence of feedback loops, generate infinite trees, which have to be folded back into a graph structure [24]. One deficit of the embedded language approach is that not all of the (desirable) features of the host-language can be used for circuit description. Most importantly, the choice-constructs (such as case-statements) of the host language cannot be used to describe choice-constructs in the eventual circuit [...].
and
Haskell has a rich set of choice-constructs, including features such as guards and pattern matching. In the style of embedding chosen by Lava, these choice-constructs can, however, not be used to model choice-structures in the circuit. This is a direct consequence of using Haskell's evaluation mechanism to construct the circuit graph: choice-constructs can be used to guide the construction of the circuit graph, but it is not possible to observe all the alternatives. The problem is that functions must operate on values of type Signal a in order to build the data-structure that will represent the circuit. The Signal type is, however, not transparent in terms of pattern-matching. That is, given a value s, of type Signal Bool, the expression case s of {True → ... ; False → ...} is invalid. That is, even though True and False are indeed the constructors of Bool, they are not the constructors of Signal Bool.
I'll also add that implementing hardware synthesis as a new backend for the language enables using the language to "simulate" the hardware design simply by compiling and running it regularly to software, which is very nice for testability.
If I used a DSL instead of Lean directly I imagine I would need to write an interpreter of the circuits and then write proofs about the interpreter of the DSL on circuit source trees, which sounds like it adds an unnecessary extra step to all the proofs.
It doesn't make sense to me to make this a DSL because the semantics match those of Lean. If I wrote a DSL it would just be a re-implementation of Lean, which is what I'm trying to avoid.
If you're trying to add another compilation target for lean [...]
Yes, adding another compilation target for Lean is probably similar to what I'm trying to do. In The Lean 4 Theorem Prover and Programming Language by Leonardo de Moura and Sebastian Ullrich, it states
The IR is a collection of Lean data structures, and users can implement support for backends other than C by writing Lean programs that import Lean.Compiler.IR.
which seemed to imply code and cultural support for adding compiler backends. Is this no longer the case?
you're going to be exposed to changes in the compiler, which will probably be a maintenance nightmare.
There has to be some level that's expected to be stable. The Expr is public API used by macros and seems to contain all the information needed. I would hope this doesn't change constantly since that would break any metaprogramming that anyone writes, and makes me question the metaprogramming utility in Lean. The question is how much of the compiler can I re-use while transforming this, to hopefully jumpstart the project, even if I have to diverge in how compilation is done over time.
Robin Arnez (Jun 25 2025 at 17:24):
You can use
import Lean
#eval return (Lean.IR.findEnvDecl (← Lean.getEnv) `Lean.IR.findEnvDecl).get!
which is probably reasonably stable but @Cameron Zwarich can probably tell you more
Easyoakland (Jun 25 2025 at 17:47):
The problem with that seems to be that types get erased to Lean.IR.IRType.object
as part of
inductive IRType where
  | float | uint8 | uint16 | uint32 | uint64 | usize
  | irrelevant | object | tobject
  | float32
  | struct (leanTypeName : Option Name) (types : Array IRType) : IRType
  | union (leanTypeName : Name) (types : Array IRType) : IRType
  deriving Inhabited, Repr
but to synthesize to hardware, the types have to be concrete and known.
Easyoakland (Jun 25 2025 at 17:48):
I would assume this is possible, otherwise I don't know how Lean plans on doing unbox optimizations.
Easyoakland (Jun 25 2025 at 18:01):
(unboxing is precisely what the hardware synthesis requires)
Robin Arnez (Jun 25 2025 at 18:09):
Easyoakland said:
otherwise I don't know how Lean plans on doing unbox optimizations
The struct and union types already exist and they will probably then get used for unbox optimization (more or less significantly changing the compiler and backend). But no, the IR doesn't contain that; maybe the mono compilation stage would be better suited?
import Lean
open Lean.Compiler.LCNF in
#eval do ppDecl' (← getMonoDecl? ``Option.get!).get!
Easyoakland (Jun 25 2025 at 18:20):
That particular code panics with PANIC at Option.get! Init.Data.Option.BasicAux:21:14: value is none.
Trying ``_root_.main after defining a main function similarly panics, so I'm not sure what state it's reading from.
Easyoakland (Jun 25 2025 at 18:35):
For reference this also panics on the Option.get!
open Lean Lean.Compiler.LCNF in
def main : IO Unit := do
  initSearchPath (← findSysroot)
  let env ← importModules #[{module := `Main}] {} (trustLevel := 1024)
  let f : CoreM _ := do ppDecl' (← getMonoDecl? ``Option.get!).get!
  let core_state ← ST.mkRef {env}
  let a := ReaderT.run (f) {fileName := default, fileMap := default} <| core_state
  match (← a.toIO') with
  | Except.ok v => println! s!"ToString.toString v = {ToString.toString v}"
  | .error e => println! "uh oh"
  pure ()
(and surprisingly still prints the match statement? I guess panics don't abort the process?)
Robin Arnez (Jun 25 2025 at 20:11):
Easyoakland said:
That particular code panics with
PANIC at Option.get! Init.Data.Option.BasicAux:21:14: value is none. Trying ``_root_.main after defining a main function similarly panics, so I'm not sure what state it's reading from.
You need to use the latest nightly with the new compiler
Robin Arnez (Jun 25 2025 at 20:11):
It's reading new compiler state
Cameron Zwarich (Jun 25 2025 at 20:45):
Thanks for reaching out, @Easyoakland! I am, for lack of a better description, the current code owner of the Lean compiler. I also happen to have some industrial experience with RTL and tooling for HDLs, and am interested in seeing people apply Lean to HW verification.
The main problem I see with a Clash-like approach for Lean is that it's not obvious what your Signal equivalent should be. Lean does support encodings of coinductive predicates, but it doesn't support coinductive types with computational rules.
If you're looking to reuse pieces of the existing compiler, then you definitely want to look more at LCNF (the mostly pure middle-end) as opposed to IR (the impure backend, highly coupled with Lean's runtime system). Unlike GHC's Core, which preserves types throughout the compiler, LCNF is an ultimately untyped representation that attempts to preserve types as much as possible as hints for specialization, inlining, etc. This works because Lean uses a mostly uniform representation of values, but any HDL would need to use a highly non-uniform representation.
Easyoakland (Jun 25 2025 at 21:43):
I am, for lack of a better description, the current code owner of the Lean compiler. I also happen to have some industrial experience with RTL and tooling for HDLs, and am interested in seeing people apply Lean to HW verification.
Happy to hear!
The main problem I see with a Clash-like approach for Lean is that it's not obvious what your
Signal equivalent should be.
The first draft of Clash (as I understand it) was based on Mealy machines instead of on lazy infinite lists in the Signal type. I suspect having state be explicit will be useful for proofs anyway (for example, proving a predicate on state after n cycles/transitions), so I don't think it's a big loss.
Something like
structure Mealy (σ α β) where
  transition : α → StateM σ β
  reset : σ
for a single clock domain, and something like a tuple/HList of concrete Mealy for entities with multiple clock domains, where each machine will have a corresponding clock domain.
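As a toy instance of that shape (just a sketch, not a final API):
-- A free-running counter: ignores its input, outputs the current count,
-- and bumps the state each cycle.
def counter : Mealy (BitVec 8) Unit (BitVec 8) where
  transition := fun _ => do
    let s ← get
    set (s + 1)
    pure s
  reset := 0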
Lean does support encodings of coinductive predicates, but it doesn't support coinductive types with computational rules.
I don't know what you mean by coinductive predicates or coinductive types with computational rules in Lean, but the above Mealy machine should encode the coinductive computational behavior in the transition function.
If you're looking to reuse pieces of the existing compiler, then you definitely want to look more at
LCNF (the mostly pure middle-end) as opposed to IR (the impure backend, highly coupled with Lean's runtime system).
This is very helpful to know.
Unlike GHC's Core, which preserves types throughout the compiler, LCNF is an ultimately untyped representation that attempts to preserve types as much as possible as hints for specialization, inlining, etc. This works because Lean uses a mostly uniform representation of values, but any HDL would need to use a highly non-uniform representation.
Sounds like I will have to work on a separate representation.
Can you point me to where/how the compiler de-sugars match statements? Even if the representations turn out different, re-using the same ideas will be helpful.
Cameron Zwarich (Jun 25 2025 at 21:52):
I agree that using Mealy machines should work.
Can you point me to where/how the compiler de-sugars match statements? Even if the representations turn out different, re-using the same ideas will be helpful.
You probably want to see all of the code called from here: https://github.com/leanprover/lean4/blob/25b1b46572db22b300471ff0668d15ca3091e10a/src/Lean/Compiler/LCNF/ToLCNF.lean#L699-L700
Elliot Potts (Jun 26 2025 at 09:05):
As a recent clash user, I am watching with interest. That is all :slight_smile:
neil (Jun 28 2025 at 04:51):
@Easyoakland I thought about it a bit after making that post and realized that I did agree with you, ultimately, but thank you for sharing all that reasoning.
Easyoakland (Jul 02 2025 at 23:07):
Cameron Zwarich said:
Lean does support encodings of coinductive predicates, but it doesn't support coinductive types with computational rules.
Could you expand on this? What does it mean that Lean supports encoding coinductive predicates but not with computational rules?
Mario Carneiro (Jul 02 2025 at 23:26):
You can write a type which is isomorphic to a coinductive type, for example src#Stream'.Seq is morally a coinductive type (you can see the definition one would like to write just above it) and it has the corecursor, destructor, and a bisimulation principle like you would have from a real coinductive type. But the definition itself doesn't look at all like a coinductive type, it's encoded using streams of options. The main thing that you do not get from this kind of construction is the computational rules, e.g. destruct (corec f b) = omap (corec f) (f b), which would be definitional equalities in a system which had coinductive types built in. The best we can do is have them as simp lemmas.
Easyoakland (Jul 02 2025 at 23:43):
I was informed about src#Stream' here. The problem with it was quadratic runtime.
This still allows a model that has both a computational behavior and proof of that computational behavior? I thought @Cameron Zwarich had meant something else.
How does QPF relate? Is it also a way of getting "corecursor, destructor, and a bisimulation principle [...]. But [...] doesn't look at all like a coinductive type"?
Mario Carneiro (Jul 03 2025 at 00:25):
Right, you don't want to use the model construction as the actual implementation, it has the wrong order of evaluation and results in worse asymptotics
Mario Carneiro (Jul 03 2025 at 00:26):
the definitional equality I mentioned is the actual way you want this to compute
Mario Carneiro (Jul 03 2025 at 00:26):
and that requires special support from the compiler to recognize that the definition of corec in terms of the model should not be taken at face value and compiled to code
Mario Carneiro (Jul 03 2025 at 00:27):
and yes, QPF is basically doing this for general coinductive types, and it suffers from the same issues as Stream'
Mario Carneiro (Jul 03 2025 at 00:28):
it's basically just some metaprogram that will actually implement a coinductive keyword which automates model constructions in the style of Stream', but because it's not part of the compiler it can't fix the computation issue
Mario Carneiro (Jul 03 2025 at 00:35):
There is an alternative definition which has the right computational behavior but lives in the wrong universe:
def Stream' (β : Type) : Type 1 /- ! -/ :=
  (α : Type) × (α → β × α) × α

def Stream'.corec (f : α → β × α) (a : α) : Stream' β := ⟨α, f, a⟩

def Stream'.destruct (s : Stream' β) : β × Stream' β :=
  let ⟨α, f, a⟩ := s
  let (b, a) := f a
  (b, ⟨α, f, a⟩)

theorem Stream'.destruct_corec (f : α → β × α) (a : α) :
    (corec f a).destruct = let (b, a) := f a; (b, corec f a) := rfl
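For example, a quick sanity check with this encoding (a toy stream of the naturals):
-- `destruct` steps the stream one element at a time.
def nats : Stream' Nat := Stream'.corec (fun n => (n, n + 1)) 0

#eval nats.destruct.1              -- 0
#eval nats.destruct.2.destruct.1   -- 1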
Mario Carneiro (Jul 03 2025 at 00:36):
it also doesn't have the right universe polymorphism; corec should allow the type α to be in Sort u but it only works for α : Type
Mario Carneiro (Jul 03 2025 at 00:39):
So ideally the compiler should be able to swap out the model definition of corec with this definition. This can almost be done already today using implemented_by, but it has an issue where it will delete some projections that are necessary to maintain the separation between the model implementation and the computational implementation. There was a fix for it (lean4#2292) but it never got merged and the new compiler did not fix the issue.
Mario Carneiro (Jul 03 2025 at 00:40):
This has been blocking coinductives for years :sob:
Cameron Zwarich (Jul 03 2025 at 01:58):
@Mario Carneiro I didn't realize that this is all that is blocking a performant implementation of coinductive types, so thanks for explaining that.
Mario Carneiro (Jul 03 2025 at 01:59):
well, this and a ton of non-core work to actually write a good coinductive package
Easyoakland (Jul 03 2025 at 02:03):
For this particular project (HDL) I think only a performant Stream type would be needed to replace the manual state machines, so I would be interested if that gets unblocked.
Mario Carneiro (Jul 03 2025 at 02:04):
depending on your application the above existential type implementation might actually work for you
Easyoakland (Jul 03 2025 at 02:06):
Mario Carneiro said:
depending on your application the above existential type implementation might actually work for you
Because it's only a problem if I need universe polymorphism?
Mario Carneiro (Jul 03 2025 at 02:06):
well it makes the type have a weirdly large universe, which means you can't use it in IO and some other things
Mario Carneiro (Jul 03 2025 at 02:07):
so it may or may not be a problem
Easyoakland (Jul 28 2025 at 05:57):
Cameron Zwarich said:
You probably want to see all of the code called from here: https://github.com/leanprover/lean4/blob/25b1b46572db22b300471ff0668d15ca3091e10a/src/Lean/Compiler/LCNF/ToLCNF.lean#L699-L700
It looks like ToLCNF is just about collecting Expr into ANF format to perform code optimization.
I don't (at least for now) need to do code optimization, beyond partial evaluation, so I don't think this really matters to me.
Currently I'm compiling directly from Expr to a netlist while simultaneously normalizing/reducing the term using whnf.
The way I'm handling matching right now is by unfolding until it hits a recursor and then compiling the recursors themselves. I had to write my own whnfImp because Lean.Meta.whnf doesn't unfold auxiliary matches. For that I'm using Lean.Compiler.LCNF.inlineMatchers. Is there a whnf which doesn't stop at auxiliary match statements?
Why are auxiliary match statements treated so specially by the compiler (in that they are unfolded as seldom as possible), and why are many recursors not supported directly by the Lean code generator?
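For reference, the shape of my workaround is roughly the following (a sketch built on isMatcherAppCore? and unfoldDefinition? rather than my actual inlineMatchers-based code, so treat the details as approximate):
import Lean
open Lean Meta

-- Sketch: like `whnf`, but when the result is stuck on an auxiliary matcher
-- application, delta-unfold the matcher and keep reducing.
partial def whnfThroughMatchers (e : Expr) : MetaM Expr := do
  let e ← whnf e
  if (isMatcherAppCore? (← getEnv) e).isSome then
    if let some e' ← unfoldDefinition? e then
      return (← whnfThroughMatchers e')
  return e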
My current progress on this project is here. It seems to handle combinational logic correctly for the most part, such as all of the following functions.
If you have any feedback on the code as it stands, e.g., am I doing things weird or re-implementing logic which already exists, I would appreciate it.
I believe the current issue I'm facing is that Acc.rec isn't reduced by whnf. I'm guessing this is related to reduction getting stuck as described in Theorem Proving in Lean 4.
When does Acc.rec get used? Is it part of compiling functions that are terminating by well-foundedness?
I encountered it while trying to compile len_manual_mono in
def len_manual (x : Vector α n) : Fin (n+1) := x.elimAsList fun l hl => match l with
  | .nil => ⟨0, by omega⟩
  | .cons a b =>
    have : n - 1 + 1 + 1 = n + 1 := by
      have : n > 0 := by exact?
      omega
    1 + (len_manual (n := n-1) (.mk b.toArray (by exact?)) |>.castSucc |> cast (by rw [this]))

def len_manual_mono : Fin 3 := len_manual #v[0,1]
where #eval len_manual_mono is 2 as expected but it doesn't reduce to 2, and instead gets stuck at a really long term whose head is Acc.rec.
- When are recursors not reducible? True.rec, False.rec, and Nat.rec all reduce while Acc.rec and Eq.rec don't.
- Theorem Proving in Lean 4 says "But in nontrivial examples, eliminating cast changes the type of the term, which might make an ambient expression type incorrect". What nontrivial examples? Is the issue of type correctness still present if proofs are erased with lcProof? Or in other words, is the type-correctness failure only in proof terms, or does it also affect inhabitants of Types such as inhabitants of Nat or Fin?
- Is the issue of type correctness related to whatever "subject reduction" is?
- How should I modify whnf to reduce these kinds of stuck recursors? I'm guessing I need some kind of whnf-eval hybrid?
Notification Bot (Jul 28 2025 at 06:05):
A message was moved here from #lean4 > Understanding compiler internals to use Lean as HDL by Easyoakland.
Easyoakland (Jul 28 2025 at 06:06):
I tried to rename the topic, and it looks like Zulip just made a new one instead and moved my message there. Here's a link to the new topic and message continuing the thread: #lean4 > Lean as an HDL
Edit: Looks like the previous topic messages were moved here.
Notification Bot (Jul 28 2025 at 13:49):
44 messages were moved here from #lean4 > Understanding compiler internals to use Lean as HDL by Mario Carneiro.
Easyoakland (Aug 04 2025 at 03:42):
I believe the current issue I'm facing is that Acc.rec isn't reduced by whnf. I'm guessing this is related to reduction getting stuck as described in Theorem Proving in Lean 4.
I just noticed that recursive functions (even those defined by well-founded recursion) have an auxiliary ._unsafe_rec which recurses by name and which toLCNF uses. That fixes the Acc.rec issue since Acc.rec only shows up in the model definition.
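(You can check for the twin definition directly; a sketch, assuming the len_manual from above is in the environment:)
import Lean
-- Does the well-founded recursive function have an `_unsafe_rec` twin?
#eval show Lean.CoreM Bool from do
  return (← Lean.getEnv).contains ((`len_manual).str "_unsafe_rec")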
Easyoakland (Aug 04 2025 at 03:48):
I do still think I'm going to need to modify whnf to be more like eval, but the path for doing so is clearer to me now.
Nina Chloé Kassi (Sep 28 2025 at 17:31):
Hello,
I recently tried a new HDL and the HDL <-> theorem prover thing came back into my mind.
Here is the thing: Most hardware description is done in Verilog/SV/VHDL.
While this thread went in the opposite direction, my idea is that Verilog/SV/VHDL code could be translated into lean4 to verify the circuits.
The advantage of this would be that already-existing codebases could be verified using Lean 4.
Easyoakland (Sep 28 2025 at 18:38):
Nina Chloé Kassi said:
While this thread went in the opposite direction, my idea is that Verilog/SV/VHDL code could be translated into lean4 to verify the circuits.
I don't know how you would describe the circuits in Lean 4 without having effectively a full hardware description language in Lean anyway.
Shreyas Srinivas (Sep 28 2025 at 20:21):
a full HDL in this case would mean a lot of bells and whistles that a minimal HDL can avoid by just providing a way to write RTL.
Shreyas Srinivas (Sep 28 2025 at 20:21):
Fwiw, I am not sure why you wouldn't want to try the Lava approach. Write a monad that builds the circuit expression and prove things about it.
Easyoakland (Sep 28 2025 at 21:00):
Shreyas Srinivas said:
Fwiw, I am not sure why you wouldn't want to try the Lava approach. Write a monad that builds the circuit expression and prove things about it.
In this scheme BitVec.and, for example, translates to hardware as an and, and in addition can be reasoned about using all the tooling which already exists in Lean for dealing with bitvectors since it is just a regular function to the Lean type system. If you wanted to do the Lava approach I think you'd need to add a bunch of axioms that the circuit is equivalent to the model. If you have to write the logical model equivalence anyway, it seems reasonable to go all the way and use the native operations directly so that the translation is two-way instead of one-way. That way writing the hardware feels nicer (syntax matches regular Lean), and there's not a weird translation layer to bootstrap the logic that shows up in proofs.
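As a toy illustration (my sketch; gate is a made-up name):
-- The AND of two 4-bit buses is just the ordinary library function, so the
-- same term is the hardware description, a simulable program, and a proof
-- subject.
def gate (a b : BitVec 4) : BitVec 4 := a &&& b

#eval gate 0b1100#4 0b1010#4   -- 0x8#4

example (a b : BitVec 4) : gate a b = gate b a :=
  BitVec.and_comm a b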
This method also enables simulating the circuit directly in Lean, without having to write a custom interpreter for the circuit graph generated by a lava-like approach.
Additionally, while the ability to verify hardware designs is one of the reasons I'm working on this, I also just want to make an HDL which is nice to use and has good tooling (which I get almost automatically by using Lean's front-end).
Easyoakland (Sep 28 2025 at 21:05):
If you're interested, while the compiler is still very much in progress, something like this (a flip-flop which outputs the next element of the Fibonacci series on each clock):
def cyclic_fibonacci_series : Mealy (BitVec n) :=
  (Mealy.pure ()).scan (reset := (0, 1))
    (fun () ((prev2 : BitVec n), (prev1 : BitVec n)) =>
      let next := prev2 + prev1
      (prev2, (prev1, next)))

def cyclic_fibonacci_series_mono := cyclic_fibonacci_series (n := 18)
already works and translates to (the admittedly not pretty)
module cyclic_fibonacci_series_mono (
   input logic var [1-1:0] clk
  ,input logic var [1-1:0] rst
  ,output logic var [18-1:0] o
);
  logic var [36-1:0] registerState___Compiler_ToSystemVerilog__hyg_15339;
  logic var [36-1:0] newRegisterState___Compiler_ToSystemVerilog__hyg_15340;
  logic var [18-1:0] registerOutput___Compiler_ToSystemVerilog__hyg_15341;
  logic var [18-1:0] Prod_mk_field_0___Compiler_ToSystemVerilog__hyg_15361;
  assign Prod_mk_field_0___Compiler_ToSystemVerilog__hyg_15361 = registerState___Compiler_ToSystemVerilog__hyg_15339[18-1:0];
  logic var [18-1:0] Prod_mk_field_1___Compiler_ToSystemVerilog__hyg_15370;
  assign Prod_mk_field_1___Compiler_ToSystemVerilog__hyg_15370 = registerState___Compiler_ToSystemVerilog__hyg_15339[36-1:18];
  logic var [18-1:0] Prod_mk_field_0___Compiler_ToSystemVerilog__hyg_15376;
  assign Prod_mk_field_0___Compiler_ToSystemVerilog__hyg_15376 = registerState___Compiler_ToSystemVerilog__hyg_15339[18-1:0];
  logic var [18-1:0] Prod_mk_field_1___Compiler_ToSystemVerilog__hyg_15381;
  assign Prod_mk_field_1___Compiler_ToSystemVerilog__hyg_15381 = registerState___Compiler_ToSystemVerilog__hyg_15339[36-1:18];
  assign {registerOutput___Compiler_ToSystemVerilog__hyg_15341, newRegisterState___Compiler_ToSystemVerilog__hyg_15340} = {Prod_mk_field_0___Compiler_ToSystemVerilog__hyg_15361, {Prod_mk_field_1___Compiler_ToSystemVerilog__hyg_15370, (Prod_mk_field_0___Compiler_ToSystemVerilog__hyg_15376 + Prod_mk_field_1___Compiler_ToSystemVerilog__hyg_15381)}};
  always_ff @(posedge clk)
    case (rst)
      0: registerState___Compiler_ToSystemVerilog__hyg_15339 <= {18'0, 18'1};
      default: registerState___Compiler_ToSystemVerilog__hyg_15339 <= newRegisterState___Compiler_ToSystemVerilog__hyg_15340;
    endcase
  assign o = registerOutput___Compiler_ToSystemVerilog__hyg_15341;
endmodule
Easyoakland (Sep 28 2025 at 21:13):
Shreyas Srinivas said:
a full HDL in this case would mean a lot of bells and whistles that a minimal HDL can avoid by just providing a way to write RTL.
That's what I meant. With this approach, the bells and whistles can be implemented in just library code written in Lean. The only special support you need is the primitives and the ability to compose them.
Easyoakland (Sep 28 2025 at 21:23):
And while I'm posting stuff which works, this is neat. (There's no special support in the compiler for delay):
def delay1 [reset : Inhabited α] (s : Mealy α) : Mealy α := s.scan (fun v st => (st, v))

-- Ok, this is cool and It Just Works™.
def delayN (n : Nat) [reset : Inhabited α] (s : Mealy α) : Mealy α := Id.run do
  let mut s := s
  for _ in [0:n] do
    s := delay1 s
  pure s

def delay4 [reset : Inhabited α] (s : Mealy α) : Mealy α := delayN 4 s

def delay4_mono := delay4 (α := BitVec 3) (cyclic_fibonacci_series)
compiled output
Shreyas Srinivas (Sep 28 2025 at 21:53):
Right, this is a high-level synchronous HDL (like Clash, ofc). Suddenly your design choices make sense.
Shreyas Srinivas (Sep 28 2025 at 21:56):
I am not writing an HDL per se, but the skeleton of one (still private for now) which is shallowly embedded, to verify much weirder circuit properties. I haven't figured out how to translate this into provably correct Verilog, because HDLs and their synthesis tools behave in rather unpleasant ways at this level unless you hand-control every piece of the RTL.
Shreyas Srinivas (Sep 28 2025 at 21:59):
This means you need to avoid the high-level combinator stuff getting translated into arbitrary numbers of flip-flops. The FPGA HDLs that PL people develop usually don't have this issue. So it is kind of safe to generate registers where required and let the layout tools handle the optimisations.
Last updated: Dec 20 2025 at 21:32 UTC