Zulip Chat Archive
Stream: general
Topic: what is lean.js
TongKe Xue (Nov 27 2025 at 22:29):
Can someone please tell me what https://github.com/leanprover/lean4/blob/master/doc/make/emscripten.md is meant for ?
- I have followed the (regular) build instructions via
nix develop
cmake --preset release
make -C build/release -j56
make -C build/release -j200
and everything went fine
- I am under the impression that lean4 does not compile to wasm32 yet (i.e. this is why lean4web / the lean game server uses an x86_64 server instead of running entirely in Chrome/wasm32)
question: what is this lean.js, and what are the emscripten build instructions for?
Thanks!
Markus Himmel (Nov 28 2025 at 05:13):
Lean had a wasm32 build until v4.15.0 (see for example the list of assets at https://github.com/leanprover/lean4/releases/tag/v4.15.0), but we ran into an Emscripten limitation which would have taken a lot of engineering effort to overcome, and we decided to drop the WebAssembly target for the time being. So the current status is that the WebAssembly target is still in the code, but in a broken state. Whether it will come back depends on whether there is a use case for a WebAssembly target that is important enough to justify the significant investment to bring it back and keep it running (this is a very high bar).
TongKe Xue (Nov 28 2025 at 05:38):
Thank you for the informative historical insight. Could you expand a bit more on
but we ran into an Emscripten limitation which would have taken a lot of engineering effort to overcome, and we decided to drop the WebAssembly target for the time being
My background is: (1) I just recently got the lean4 GitHub code to build locally (on x86_64), (2) I have some familiarity with wasm32, not via Emscripten, but via Rust.
I'm curious: is the issue the 4 GB memory limit? That appears to be resolved in https://spidermonkey.dev/blog/2025/01/15/is-memory64-actually-worth-using.html (never tested this myself).
Digging around github issues, I found https://github.com/leanprover/lean4/pull/2599 , where the claim seems to be "wasm32 makes CI tests really slow"
I'm curious if you could share what the Emscripten limitation is (or point me to the GitHub issue). Thanks!
Markus Himmel (Nov 28 2025 at 07:07):
The problem was that the build used EXPORT_ALL and we exceeded the limit of 10000 exported symbols (not sure whether this is an emscripten or wasm-ld limitation)
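For anyone wanting to check a build against this limit: a wasm binary's export list can be inspected from JS without instantiating it, via WebAssembly.Module.exports. A minimal sketch (the 8-byte buffer below is just the smallest valid wasm module, standing in for a real artifact like lean.wasm; with an actual EXPORT_ALL build you would read the .wasm file instead):

```javascript
// Count the exports of a wasm binary without instantiating it.
// The bytes below are the minimal empty wasm module: magic + version.
const minimalModule = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, // magic "\0asm"
  0x01, 0x00, 0x00, 0x00, // binary format version 1
]);

// Compiling (not instantiating) is enough to read the export section.
const mod = new WebAssembly.Module(minimalModule);
const exportCount = WebAssembly.Module.exports(mod).length;
console.log(`exports: ${exportCount}`); // 0 for this empty module
// A build with EXPORT_ALL would report a count in the hundreds of
// thousands here, which is what trips the engine-internal limit at
// instantiation time.
```

On a real build you would replace the buffer with e.g. `fs.readFileSync("lean.wasm")`.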
TongKe Xue (Nov 28 2025 at 08:10):
- I currently cannot reproduce the "10,000 export limit" error yet
doc/make/emscripten.md says:
mkdir -p build/emscripten
cd build/emscripten
emconfigure cmake ../../src -DCMAKE_BUILD_TYPE=Emscripten
make lean_js_js lean_js_wasm
however, running the emconfigure cmake line, I get
emconfigure cmake ../../src -DCMAKE_BUILD_TYPE=Emscripten
error: use `emcmake` rather than `emconfigure` for cmake projects
changing to emcmake, I get
emcmake cmake ../../src -DCMAKE_BUILD_TYPE=Emscripten
configure: cmake ../../src -DCMAKE_BUILD_TYPE=Emscripten -DCMAKE_TOOLCHAIN_FILE=/nix/store/202rpslcjhwax183ahrf54xf46kvp1li-emscripten-4.0.10/share/emscripten/cmake/Modules/Platform/Emscripten.cmake -DCMAKE_CROSSCOMPILING_EMULATOR=/nix/store/ijsy113yzy0mpcr1sf0773nz9v0r7hff-nodejs-22.17.1/bin/node
CMake Error at CMakeLists.txt:8 (message):
STAGE must be set; use the CMakeLists.txt in the root folder
changing to
emcmake cmake ../../src -DCMAKE_BUILD_TYPE=Emscripten -DSTAGE=0
I get an error about not finding mimalloc.h (still sorting this out)
Setting aside those issues: do we actually need EXPORT_ALL? Intuitively, I feel there is a very small number of lean4 wasm fns we need to expose to JS. The exported fns need to be able to call non-exported fns, but it is not clear to me at all why we need to export all fns.
This shows my lack of understanding of lean4 internals: could you enlighten me on why we need to export all lean4 fns? Intuitively, it seems we only need to export a few for the repl, type checker, lsp, ...
Thanks!
Markus Himmel (Nov 28 2025 at 08:19):
Well, a Lean program can do import Lean and then it has access to literally the entire Lean codebase, so all of that needs to be available.
But like I said, this problem isn't insurmountable. It's just not that high on the priority list.
Max Nowak 🐉 (Nov 28 2025 at 08:22):
A naive solution would be to only export the necessary symbols, right? Do you have links at hand to prior discussion on those limitations? If not, I'll find it myself. I might spend a weekend playing around with this.
TongKe Xue (Nov 28 2025 at 08:29):
Again, this is due to my lack of understanding of Lean internals/fundamentals:
Markus Himmel said:
Well, a Lean program can do import Lean and then it has access to literally the entire Lean codebase, so all of that needs to be available. But like I said, this problem isn't insurmountable. It's just not that high on the priority list.
My understanding is that import Lean needs to expose those symbols to the Lean REPL/Editor. It is not clear to me why this implies the same symbols need to be exposed to the Wasm/JS FFI Boundary.
My current mental model (which is probably wrong -- and I would appreciate knowing where I'm wrong) would be this analogy: Suppose we wanted to get Emacs + ELisp to run in Chrome/wasm32:
- We need to export a few emacs/elisp fns to Wasm/JS boundary.
- We do not need to expose every elisp fn to the Wasm/JS boundary.
- And being able to call an elisp fn inside a scratch buffer does NOT imply we can also call the elisp fn directly from JS.
My (current) mental model is that this argument would also apply to the import Lean case above: being able to refer to a theorem / fun / def in the Lean editor/repl does NOT imply that we need to expose it to the wasm/js FFI boundary.
Could you let me know where I'm wrong (or point me at the docs I can read up on this) -- it's clear I'm misunderstanding something about the way Lean Editor/Repl <-> wasm32 interaction works, but it's not clear to me where my misunderstanding is.
Thanks!
Markus Himmel (Nov 28 2025 at 08:45):
I guess I was thinking about what would happen if you #eval something, i.e., how the interpreter finds the native implementation of a function defined in core. But the truth is that I don't know the details of how this works, both on the "traditional" platforms and on WebAssembly, so it's very possible that there isn't any problem.
It would definitely be great to see a working build of a recent Lean that doesn't need the export_all.
Sebastian Ullrich (Nov 28 2025 at 08:46):
The interpreter will use exported symbols to jump into native code. It doesn't need an exported symbol for every single Lean function if you don't call it or can live with the interpretation overhead, but now you're in the business of designing a new heuristic for what to export to it and what not.
TongKe Xue (Nov 28 2025 at 09:02):
The interpreter will use exported symbols to jump into native code.
I'm clearly misunderstanding something here. Let's see what I'm doing wrong. So my background is that I have a Rust/wasm32 app running in Chrome. It's about 100k loc of Rust, and I think I expose a total of 2 functions. func1 = after wasm is loaded, it calls an init fn; func2 = sometimes from the JS dev console, I want to send the Rust code a msg. Literally 2 functions exposed from wasm to JS. (No exposed fns for keyboard/mouse/postmessage callbacks; Rust creates an anon closure on the fly and registers it with JS land).
So now let's talk about something like https://live.lean-lang.org/ . Right now, it has a websocket to some x86_64 server running "lake serve bunch of options" from https://github.com/leanprover-community/lean4web/blob/main/server/bubblewrap.sh#L45
Now, let's imagine an alternative world, where instead of
- "lean4web client" <-> websocket <-> "x86_64 server: lake serve", we did
- "lean4web client" <-> postmessage <-> "chrome webweworker: lean4/wasm" ; post msg here referring to https://developer.mozilla.org/en-US/docs/Web/API/Window/postMessage
Intuitively to me, it seems the "lean4/wasm in Chrome webworker" needs to expose the same number of fns as the "x86_64 server: lake serve" -- i.e. a single fn for processing window.postMessage instead of websocket msgs.
Now, clearly, I'm misunderstanding something. What am I getting wrong? :-)
TongKe Xue (Nov 28 2025 at 09:05):
In short, ignoring lots of low level details:
Naively, if we tried
- started with "lake serve --"
- replaced websocket w/ handling window.postmessage
- only exposed the fn for handling window.postmessage
- what breaks down?
Context: I just realized. I never stated my XY problem. I'm trying to build something in Rust + http://egui.rs (running in Chrome) that talks to lean4 (running in a Chrome webworker), as a potential alternative to the standard lean4web client UI.
Eric Wieser (Dec 03 2025 at 02:48):
My guess is that we need to export the functions via webassembly, because this happens to export them in a way that emscripten's implementation of dlsym (which the Lean interpreter uses) can load
Eric Wieser (Dec 03 2025 at 02:50):
As far as I can tell 10000 symbols was an old chrome limitation, but perhaps the bottleneck is somewhere else, or Markus Himmel elided a zero
TongKe Xue (Dec 03 2025 at 03:40):
I repeated "10k" somewhere up there. Turns out, the limit appears to be 100k: (unless bottleneck elsewhere).
Snir Broshi (Dec 03 2025 at 04:22):
AFAICT Chrome's limit is 1M exports, and it was never 10k
https://chromium.googlesource.com/v8/v8/+/fd3b8616dbc6115c8e0536d5adbeb2c8863c8bca/src/wasm/wasm-limits.h#33
Markus Himmel (Dec 03 2025 at 07:08):
Sorry, that was a typo on my end. It's 100k. Here is an excerpt from one of the error messages we observed:
1/5 Test #1070: leanruntest_ac_rfl.lean ..........***Failed 1.95 sec
failed to asynchronously prepare wasm: CompileError: WebAssembly.instantiate(): exports count of 100019 exceeds internal limit of 100000 @+279585
Aborted(CompileError: WebAssembly.instantiate(): exports count of 100019 exceeds internal limit of 100000 @+279585)
/home/runner/work/lean4/lean4/build/stage1/bin/lean.js:1
TongKe Xue (Dec 03 2025 at 07:43):
This shows my utter lack of Emscripten knowledge (though I've done quite a bit of Rust + wasm32, involving webworkers + postMessage).
Why can't we stuff lake serve -- in a webworker, have it use window.postMessage instead of tcp/websockets, and thus on the wasm side, only have to "expose 1 function" ?
I apologize for my ignorance here -- why do we need to expose 100k fns? (I think the previous argument was "so the repl can make fn calls"); but if we just stuff all of lake serve -- as a single binary blob, it seems if we change the tcp/websocket to use postmessage (and run in a separate webworker), JS would not need to have access to all 100k exported fns.
TongKe Xue (Dec 03 2025 at 07:47):
PS: I'm not in any way suggesting the above is easy. I'm trying to debug my ignorance of why we need to export 100k fns, instead of just the fn lake serve -- needs to support its tcp/websocket. Naively, this lake serve -- network protocol seems like the perfect place to draw the wasm/js FFI boundary (for the sake of having a web lean4 repl).
Henrik Böving (Dec 03 2025 at 08:19):
A lean server process can be coerced to call any of those 100k+ functions through #eval so they need to be exposed
Markus Himmel (Dec 03 2025 at 08:22):
To spell it out a bit more: the problem isn't at the boundary between lake serve and the repl, the problem is what lake serve needs to do to answer the requests it receives from the repl. lake serve needs to be able to execute arbitrary Lean programs, which may call arbitrary library functions.
TongKe Xue (Dec 03 2025 at 08:36):
Clearly I am misunderstanding something very fundamental/basic, but I still don't understand this.
-
I have Rust projects that are 100,000+ loc, running on wasm32. I only need to expose < 10 Rust fns to JS. Rust fns calling Rust fns do not require them to be exposed to JS.
-
I can embed Python (https://crates.io/crates/rustpython) or lua (https://crates.io/crates/piccolo) interpreter in them -- and again, I only need to expose a websocket/postMessage
-
I feel like I'm missing something very very basic here. Why is it that, for a Lean4 repl, it does not suffice to just expose a bidirectional json msg port of sorts? Why is it that these Lean4 fns must be exposed from wasm to JS?
Again, I'm sorry for such a stupid question. I feel like I'm misunderstanding something very very basic here. I don't understand why "lean repl needs to be able to call X" implies "X has to be exposed from wasm to js".
Markus Himmel (Dec 03 2025 at 08:43):
The special thing about Lean is that it has this slightly unique model where it is both a compiled and an interpreted language. When you send #eval someFunction to the Lean server it calls the interpreter on someFunction. The interpreter will check if it finds a compiled version of someFunction. If it finds it, it will jump into that native code. Otherwise, it will interpret someFunction, which in turn might call more functions, for which it will then again do the same thing of first looking for a compiled version and otherwise interpreting.
The "using the native version if available" step is the problem. It will only work if the native code is available as an exported symbol.
TongKe Xue (Dec 03 2025 at 08:58):
-
I was definitely ignorant of this dual compiled-code/interpreter structure. Thank you for clarifying that. Is there documentation I can read up on this?
-
On x86_64 Desktop, when I am editing in VSCode, is the Lean4 server doing dynamic code generation? Either via JIT or building a .so and doing dlopen?
-
To the best of my knowledge (not a strongly held belief), wasm neither (1) supports JIT nor (2) supports dlopen. AFAIK, wasm blobs communicate via postMessage or SharedArrayBuffer. Does this mean that technically, for the wasm build, we know at compile time which fns have a "native version" and which do not? So in a very wild hypothetical world for wasm32 builds, we could just hard-code the jumps at compile time, and not have to export the fns. If this is true, is it also the case that we're not doing this because it would heavily complicate / almost "fork" parts of the codebase, as x86_64 and wasm32 would behave very differently for building/linking/calling fns?
-
Purely out of curiosity, if we just went "all interpreter, no native code" how bad would the performance hit be? (10x? 100x?) I think the most common "lean4 in Chrome" use case is a basic editor + some toy problems (probably not even loading up all of Mathlib) for educational / demo purposes.
Thank you for taking your time to clear up my incorrect assumptions. I appreciate it. :-)
Markus Himmel (Dec 03 2025 at 09:12):
I'm not aware of any documentation beyond the module doc at https://github.com/leanprover/lean4/blob/0173444d24df9253d7947cfe82519e4a6caffd3a/src/library/ir_interpreter.cpp#L7.
Henrik Böving (Dec 03 2025 at 09:31):
Purely out of curiosity, if we just went "all interpreter, no native code" how bad would the performance hit be? (10x? 100x?) I think the most common "lean4 in Chrome" use case is a basic editor + some toy problems (probably not even loading up all of Mathlib) for educational / demo purposes.
usually 10x
TongKe Xue (Dec 04 2025 at 16:39):
Different question (but perhaps it hits same limitation).
Instead of compiling the LeanREPL into wasm32, I have a program written in Lean (say InsertSort), and I want to compile this lean program to wasm32. Is this possible, or would this also hit the 100k export limit?
My intuition is:
- lean will emit InsertSort.c
- we need to use emscripten and somehow link against (1) the Lean C++ runtime and (2) the libraries the runtime depends on (bignum libraries + ...)
Is there a tutorial for doing this somewhere? I want to do:
input: InsertSort.lean
output: some wasm module I can run in Chrome and call via JS
Thanks!
PS: If bundling the lean runtime itself does not hit the 100k limit, what hits the 100k limit in the LeanREPL case? Something like Mathlib or something else?
Snir Broshi (Dec 04 2025 at 22:44):
Markus Himmel said:
Sorry, that was a typo on my end. It's 100k. Here is an excerpt from one of the error messages we observed:
1/5 Test #1070: leanruntest_ac_rfl.lean ..........***Failed 1.95 sec
failed to asynchronously prepare wasm: CompileError: WebAssembly.instantiate(): exports count of 100019 exceeds internal limit of 100000 @+279585
Aborted(CompileError: WebAssembly.instantiate(): exports count of 100019 exceeds internal limit of 100000 @+279585)
/home/runner/work/lean4/lean4/build/stage1/bin/lean.js:1
Well in that case it was 100k previously, and a year ago it was bumped up to 1M
https://chromium.googlesource.com/v8/v8/+/285bfe195f7918f623ee6d0f138774ec003f2262%5E%21/#F0
Does that mean wasm Lean is viable right now? (at least until Lean bypasses that increased limit)
Cauli Ziani (Dec 26 2025 at 13:13):
A bit of forensics:
Here's the PR that disabled wasm; the changes first appeared in v4.16.0-rc1 (https://github.com/leanprover/lean4/releases/tag/v4.16.0-rc1)
The title is "chore: temporarily disable Web Assembly build in CI", but no description on the reason why it was done.
I searched for a related CI failure (~same date as the PR above) and could find one that showed the "Web Assembly" step failing, but unfortunately the logs have expired, so I cannot tell exactly what went wrong.
The way to go here is probably to reinstate it in the CI and see what breaks in the PR's actions, and go from there.
TongKe Xue (Dec 26 2025 at 13:32):
I could not verify, but others have claimed (and I believe them) that at 4.15, the number of wasm exports exceeds 100k, which breaks something in wasm-ld.
Are you able to run this build locally and get the wasm error? (I tried and could not. Still not sure if it is possible to replicate the CI process locally.)
Cauli Ziani (Dec 26 2025 at 13:53):
@TongKe Xue Apparently the limit was bumped from 100k to 1M
I'll try to reinstate the Web Assembly build first through the CI and see if I can replicate the error.
This was the way wasm builds were built before, so I believe this is the shortest path to a solution (when compared with the ad-hoc builds from the emscripten.md file, which seems more obscure)
TongKe Xue (Dec 26 2025 at 15:01):
- I wish you the best. If I can be of help, let me know.
- I agree that "ad-hoc builds from the emscripten.md file" is not the way to go. I was trying it here: #general > reproducing lean4 wasm32 build of 100k export error. In private conversations, I pushed it to the part where it fails because it tries to link against a 64-bit gcc instead of a 32-bit gcc (the goal was: do a 32-bit x86 stage0, then use that for a 32-bit wasm stage0). This is about where I gave up and decided to take a step back.
If I were to try this again, I'd try to "replicate CI environment locally" -- so I think you're 100% on the right track. Good luck!
Cauli Ziani (Dec 30 2025 at 20:56):
@TongKe Xue update: I forked the project and debugged a lot, and finally managed to have successful build.
The issue here is that the build is 602MB :sweat_smile:, while the latest wasm build from 4.15.0 was 190MB.
I ended up disabling -flto (docs) because the CI was terminating after 52 mins (RAM usage exceeding the CI runner's limit is the likely cause).
Here's the PR to my own forked repo (work in progress)
Here's the successful CI run
Here's the WASM build artifact (I haven't tested yet)
I haven't hit the "exports limit error" during this build.
Next steps:
- try this wasm build in a web project, to see if it can run in the browser
- try to reduce build size to the ~190 MB we had before
- submit the PR for review
Henrik Böving (Dec 30 2025 at 21:14):
The size of Lean code in core has tripled since that version so this build size increase does not really come as a surprise.
Cauli Ziani (Dec 30 2025 at 21:21):
Okay, I "vibe-coded" a throaway React app to run the WASM directly in the browser, the wasm files are actually 137MB + 89.5 MB.
And it errored with "The Lean command-line driver can only run under Node.js. For the embeddable WASM library, see lean_wasm.cpp" (source)
But the file lean_wasm.cpp doesn't seem to exist anymore, possibly Lean 3 era stuff. I would have to do some archeology to understand if a browser-embeddable version is viable.
New info (for myself): The previous WASM builds were only targeting Node.js
Cauli Ziani (Dec 30 2025 at 21:55):
I'm adapting the shell to use NODEFS for Node.js and Emscripten's MEMFS (https://emscripten.org/docs/porting/files/file_systems_overview.html#emscripten-file-system-architecture) for browsers. This might work, and we won't need a separate build target.
We'll probably need to lazy-load the .olean files as they are required, because the full lib is huge (244MB for all .olean files).
A proof of concept could preload everything :sweat_smile:, 137MB+89MB+244MB = ~470MB
I'm doing all of this without input from the contributors of the main repo, but at this point it's mostly just curiosity.
Robin Arnez (Dec 30 2025 at 21:59):
Yeah, and with node.js you get
failed to asynchronously prepare wasm: CompileError: WebAssembly.instantiate(): exports count of 201107 exceeds internal limit of 100000 @+226760
So yes, the export count is a problem.
Cauli Ziani (Dec 30 2025 at 22:09):
@Robin Arnez
Node.js uses the v8 engine, which was patched to increase the limit from 100K to 1M exports end of 2024, so I expect this problem to be gone in newer versions of Node
(I need to match the versions of V8 <> Node.JS to see if it has already been released)
Cauli Ziani (Dec 30 2025 at 22:16):
The change to v8 landed in v8 version 14.5.75870136 (commit ref)
To check your node's v8 version you can run node -p process.versions.v8
Mine returns 12.9.202.28-node.14 (which doesn't have the fix, because I'm on node 14)
EDIT: the tags from node -p process.versions.v8 do not necessarily match the v8 git ones
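A small sketch of that version check (the 14.5 threshold is taken from the commit mentioned above, and per the EDIT the node-reported tags may not line up exactly with v8's git tags, so treat this as a heuristic; the function name is made up):

```javascript
// Heuristic check: does this v8 version string (as reported by
// `node -p process.versions.v8`, e.g. "12.9.202.28-node.14") include
// the export-limit bump, which landed in v8 14.5?
function hasRaisedExportLimit(v8Version) {
  const [major, minor] = v8Version.split(".").map(Number);
  return major > 14 || (major === 14 && minor >= 5);
}

console.log(hasRaisedExportLimit("12.9.202.28-node.14")); // false
console.log(hasRaisedExportLimit("14.5.75870136"));       // true
console.log(hasRaisedExportLimit(process.versions.v8));   // depends on your Node
```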
TongKe Xue (Dec 30 2025 at 22:36):
@Cauli Ziani : This looks really exciting. Let me know if (1) you get stuck or (2) you got it working end to end. It seems like you have strong momentum and it would be a distraction if I tried to jump in right now.
Looking forward to see where you push this. Having Lean4 run in Chrome would be awesome!
Kim Morrison (Jan 05 2026 at 02:25):
@Cauli Ziani I tested your wasm build artifact from Jan 4 with Node.js v25.2.1 (V8 14.1) and the wasm loads successfully. No more "exports count exceeds internal limit" error.
However, I hit a filesystem issue on macOS. The EM_ASM block mounts /home and /tmp, then calls FS.chdir(process.cwd()). On macOS:
- /tmp is a symlink to /private/tmp, so process.cwd() returns /private/tmp/...
- /home is empty (macOS uses /Users for home directories)
I patched lean.js to add these mounts after the existing ones:
try{FS.mkdir("/Users");FS.mount(NODEFS,{root:"/Users"},"/Users");}catch(e){}
try{FS.mkdir("/private");FS.mkdir("/private/tmp");FS.mount(NODEFS,{root:"/private/tmp"},"/private/tmp");}catch(e){}
With this fix, initialization gets further but crashes during initialize_Lean:
[DEBUG:INIT] initialize_Std() done
[DEBUG:INIT] About to initialize_Lean...
RuntimeError: unreachable
at wasm-function[140491]:0x80d5f2d
Cauli Ziani (Jan 05 2026 at 08:06):
@Kim Morrison thanks for testing, Kim!
I am still debugging those "unreachable", I'm running on Chrome and can run commands like --version and --help.
In what environment are you running the wasm?
In my build, I'm trying to add support for browsers, and it uses process.release.name (node on node, undefined on browsers) and routes to the appropriate FS (NODEFS, MEMFS).
If I try to compile some lean code (lean --json /workspace/input.lean), I get the same error as you.
However, if I "warm up" the binary by calling lean --version and then lean --json /workspace/input.lean, I get a bit further for some reason:
Exit code: 1
[DEBUG:P] initSearchPathInternal called
[DEBUG:P] initSearchPath: leanSysroot = /
[DEBUG:P] initSearchPath: final searchPath = [/lib/lean, //lib/lean]
Error: memory access out of bounds
...
[DEBUG:F] Input file exists, size: 11 bytes
lean.js:1 [DEBUG:F] Input file content (first 200 chars): #eval 2 + 2
lean.js:1 [DEBUG:G] run_shell_main entered, building args list
lean.js:1 [DEBUG:G] Args list built, calling lean_shell_main...
lean-worker-simple.html:302 main() threw exception: RuntimeError: memory access out of bounds
at lean.wasm:0x80e50da
at lean.wasm:0x80cebc6
at lean.wasm:0x14de008
at lean.wasm:0x14dec7e
at lean.wasm:0x14ddf21
at Object.ccall (lean.js:1:303458)
at runMain (lean-worker-simple.html:65:27)
at lean-worker-simple.html:299:33
lean-worker-simple.html:308 ❌ Unexpected exception: RuntimeError: memory access out of bounds
at lean.wasm:0x80e50da
at lean.wasm:0x80cebc6
at lean.wasm:0x14de008
at lean.wasm:0x14dec7e
at lean.wasm:0x14ddf21
at Object.ccall (lean.js:1:303458)
at runMain (lean-worker-simple.html:65:27)
at lean-worker-simple.html:299:33
The wasm errors are completely opaque (lean.wasm:0x80e50da), so it's a bit difficult to debug.
The build takes 55min so I'm slowly and persistently iterating, adding more and more logs :smile:
I'll upload my "AI-slop demo" to a github repo, it's definitely code that I'm not proud of but just so that we have a common ground. I'll even push the binaries from 4 Jan along with the .olean files
Cauli Ziani (Jan 05 2026 at 08:32):
Here's the repo: https://github.com/cauli/lean4-wasm-in-browser
It's a React application, so you'll need npm i and npm run dev
And to run it you also need to extract some files from the latest CI artifact from my fork (https://github.com/cauli/lean4/actions/workflows/ci.yml) into the /public folder.
Instructions are up-to-date in the README
Cauli Ziani (Jan 05 2026 at 08:36):
Demo looks like this (screenshot has call to --help)
demo.png
Cauli Ziani (Jan 12 2026 at 22:38):
Managed to compile a program running wasm locally/browser, I'll take the win but everything is extremely slow and full of things to optimize:
- I had to disable #eval on wasm due to the size of the .ir files
- The process of loading the libraries into the WASM MEMFS and initializing it is very slow, and I'm loading 6000+ .oleans locally
- I am struggling to re-run the command on the same wasm module, so the slow-as-hell instantiation has to happen every time
- This probably only works on Chrome due to memory limits :laughing:
All in all, a huge pain, but I might be able to optimize this further (reuse wasm, load only necessary libs, .tar.gz the necessary .oleans)
Cauli Ziani (Jan 12 2026 at 22:46):
If you fancy trying:
- clone https://github.com/cauli/lean4-wasm-in-browser (mostly me hitting Opus 4.5 with a stick), npm i
- Download the latest wasm build from my lean4 fork (CI action): https://github.com/cauli/lean4/actions/runs/20930232165/artifacts/5102608389
- Place build-Web Assembly.zip in /public/lean-wasm/
- run bash ./script/create-lean-lib.sh (you might need zstd installed)
- npm run dev -> https://localhost:5173/ (it will load 6000 files one by one, 1GB, so I don't recommend putting that online just yet :laughing:)
TongKe Xue (Jan 12 2026 at 22:51):
clone https://github.com/cauli/lean4-wasm-in-browser (mostly me hitting Opus 4.5 with a stick)
Reminds me of this GBS quote: "The reasonable man adapts himself to the world: the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man."
As someone who gave up on reviving lean4-on-wasm, just wanted to say thanks for persisting when others (like myself) gave up. Cheers
Cauli Ziani (Jan 12 2026 at 22:56):
@TongKe Xue thanks! But don't get tooo optimistic, it's unusable right now. But it's a start. If we manage to reuse the instantiated wasm, this will get much better
@Kim Morrison the "unreachable" issues were due to function interfaces not matching! wasm is stricter about this (more than C++); the warnings we got in the build pipeline should actually be treated as errors. When wasm hit those function calls, it crashed.
Cauli Ziani (Jan 12 2026 at 23:10):
Maybe we just have to run this in --server mode, but in the browser? :thinking:
Marvin (Feb 19 2026 at 11:05):
Hi any progress on this?
Marvin (Feb 19 2026 at 11:05):
What is the latest version of lean you have tested this on?
Last updated: Feb 28 2026 at 14:05 UTC