Zulip Chat Archive

Stream: lean4

Topic: what other async primitives lean could implement?


Serhii Khoma (srghma) (Aug 04 2025 at 10:23):

I asked ChatGPT:

what other std primitives could Lean implement, besides the ones already implemented

```cpp
using std::thread;
using std::mutex;
using std::recursive_mutex;
using std::shared_timed_mutex;
using std::atomic;
using std::atomic_bool;
using std::atomic_ushort;
using std::atomic_uint;
using std::atomic_uchar;
using std::condition_variable;
using std::lock_guard;
using std::unique_lock;
using std::atomic_load;
using std::atomic_load_explicit;
using std::atomic_fetch_add_explicit;
using std::atomic_fetch_sub_explicit;
using std::memory_order_relaxed;
using std::memory_order_release;
using std::memory_order_acquire;
using std::memory_order_acq_rel;
using std::memory_order_seq_cst;
using std::atomic_thread_fence;
```

and it answered:

Nice question! Lean 4’s runtime currently wraps a solid core of C++11 concurrency primitives, as you listed — mostly the essentials for thread creation, locking, atomics, and condition variables.

Here are some **additional std library concurrency primitives** that Lean *could* wrap/implement if there was a use case, but currently don’t seem to be exposed explicitly in the runtime:
* * *

### Other useful `std` concurrency primitives Lean could consider implementing/wrapping

| Primitive | Description / Use case |
| --- | --- |
| **`std::shared_mutex`** | The more modern reader-writer lock; `shared_timed_mutex` is the older C++14 variant, and C++17 added the simpler `shared_mutex`. Lean could switch to this. |
| **`std::shared_lock`** | RAII lock for holding a shared (reader) lock on a `shared_mutex`; complements `unique_lock` for shared mutexes. |
| **`std::future` and `std::promise`** | Async task/result passing without explicit thread management; Lean's task system could interop with or reimplement futures. |
| **`std::packaged_task`** | Wraps a callable for deferred execution with a future result; useful in thread pools and async task scheduling. |
| **`std::jthread`** | C++20 cooperative-cancellation-aware thread wrapper (auto-joins on destruction), simplifying thread lifecycle. |
| **`std::barrier`** | C++20 synchronization primitive that blocks threads until a certain point; useful for complex parallel computations. |
| **`std::latch`** | Similar to a barrier but single-use; good for thread startup synchronization. |
| **`std::counting_semaphore` / `std::binary_semaphore`** | C++20 counting semaphores to control access to limited resources. |
| **`std::atomic_flag`** | Simple atomic boolean flag, useful for spinlocks or lightweight synchronization. |
| **`std::condition_variable_any`** | More general condition variable that works with any BasicLockable (not just `std::unique_lock<std::mutex>`), improving flexibility. |
| **`std::scoped_lock`** | Variadic RAII lock guard that locks multiple mutexes at once without deadlock risk. |
| **Spinlock** (not in `std`, but often implemented) | Very low-level synchronization, sometimes used in runtimes. |
| **`std::call_once` and `std::once_flag`** | Thread-safe one-time initialization (singleton patterns, etc.). |
* * *

### Why might Lean not yet wrap these?
* The Lean runtime focuses on the **minimum primitives needed** for its task scheduling and synchronization model.

* Some newer features (like `std::jthread`, barriers, semaphores) require C++20 and might reduce portability or require updating build toolchains.

* Some constructs like futures/promises might be redundant with Lean’s own `Task` and async runtime abstractions.

* Others, like `shared_lock` or `scoped_lock`, are convenience wrappers that can be implemented on top of existing primitives.

* * *

### Could Lean benefit from these?
* Possibly — especially things like `shared_lock`, `scoped_lock`, or `call_once` could clean up code and improve correctness.

* Semaphores and barriers would help if Lean runtime adds more complex thread coordination.

* `jthread` could simplify thread lifetime management.

* Using `condition_variable_any` would allow waiting on custom mutex types or locks.

* * *

### TL;DR

Lean currently wraps core primitives (threads, mutexes, condition variables, atomics, locks). It could also add support for:
* Shared locks: `std::shared_mutex`, `std::shared_lock`

* More advanced synchronization: `std::barrier`, `std::latch`, `std::counting_semaphore`

* Thread lifetime helpers: `std::jthread`, `std::call_once`

* Concurrency abstractions: `std::future`, `std::promise`

Q: How will the Lean async API evolve? Will `std::future`/`std::promise` equivalents be added?

Ruben Van de Velde (Aug 04 2025 at 10:56):

Please don't dump random llm slop in here

Serhii Khoma (srghma) (Aug 04 2025 at 11:36):

Ruben Van de Velde

I asked a question:

  1. How will the Lean async API evolve?
  2. Will `std::future`/`std::promise` be added?
  3. What else can be added / improved in the Lean async API?

Markus Himmel (Aug 04 2025 at 11:52):

We actually did quite a bit of work on async primitives this year. Besides docs#Task (which is like std::future) and docs#IO.Promise (which is like std::promise) which have been around for a long time, there are many new things that were added this year: docs#Std.Barrier, docs#Std.Channel, docs#Std.Mutex, docs#Std.RecursiveMutex, docs#Std.SharedMutex, docs#Std.Internal.Async.Selectable and probably more that I'm forgetting right now. At the moment we're looking more at hardening the existing APIs and don't have anything new on our immediate roadmap, but if you have a use case that is not covered by the existing API, you should let us know.
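
As a rough illustration (not from the thread), here is a minimal sketch of the `Task` and `Std.Mutex` APIs mentioned above. The `Std.Sync.Mutex` import path and the exact signatures are assumptions and may vary between toolchain versions; `IO.Promise` follows a similar `new`/`resolve` pattern.

```lean
import Std.Sync.Mutex  -- assumed module path for `Std.Mutex`

def demo : IO Unit := do
  -- `Task` plays the role of `std::future`: `Task.spawn` runs a pure
  -- computation on the task pool and `Task.get` blocks until it is done.
  let t : Task Nat := Task.spawn fun () => (List.range 1000).foldl (· + ·) 0
  IO.println s!"sum: {t.get}"
  -- `Std.Mutex α` guards a value of type `α`; `atomically` runs a
  -- state-monad action on the guarded value while holding the lock.
  let counter ← Std.Mutex.new 0
  counter.atomically (modify (· + 1))
  IO.println s!"counter: {← counter.atomically get}"
```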

Joe Hendrix (Aug 04 2025 at 16:02):

@Markus Himmel I don't have an immediate need, but I've occasionally wanted async IO.

Is this being built with libuv (or another async library) compatibility in mind?

Robin Arnez (Aug 04 2025 at 16:31):

It's built with libuv :-) See https://github.com/leanprover/lean4/tree/master/src/Std/Internal/UV

Serhii Khoma (srghma) (Aug 05 2025 at 00:43):

@Markus Himmel

Tokio has different flavours of Channels:

  1. mpsc: multi-producer, single-consumer channel. Many values can be sent.
  2. oneshot: single-producer, single-consumer channel. A single value can be sent.
  3. broadcast: multi-producer, multi-consumer. Many values can be sent. Each receiver sees every value.
  4. watch: multi-producer, multi-consumer. Many values can be sent, but no history is kept. Receivers only see the most recent value.

And in Lean?

How do I create them? If this is already possible, which capacity flavour should be used (unbounded, zero, bounded)?

  1. multi-producer, single-consumer channel.
  2. single-producer, single consumer channel.
  3. multi-producer, multi-consumer, with history
  4. multi-producer, multi-consumer, without history

Markus Himmel (Aug 05 2025 at 04:54):

If you want a Rust analogy, Lean channels work like crossbeam/async_channel channels, not like Tokio channels. You can create one using docs#Std.Channel.new.
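
As a rough illustration (not from the thread), here is a sketch of the three capacity flavours that `Std.Channel.new` accepts. The `Std.Sync.Channel` import path, the `Option Nat` capacity parameter, and the `send` behaviour shown here are assumptions and may differ between toolchain versions.

```lean
import Std.Sync.Channel  -- assumed module path for `Std.Channel`

def channelFlavours : IO Unit := do
  -- unbounded (the default): senders never wait
  let unbounded : Std.Channel Nat ← Std.Channel.new
  -- bounded: up to 16 values are buffered before senders have to wait
  let _bounded : Std.Channel Nat ← Std.Channel.new (some 16)
  -- zero capacity ("rendezvous"): every send is paired with a receive
  let _rendezvous : Std.Channel Nat ← Std.Channel.new (some 0)
  -- `send` is assumed to hand back a task that completes once the value
  -- has been accepted by the channel
  discard <| unbounded.send 42
```

If they behave like crossbeam/async_channel channels, that means multi-producer, multi-consumer with each value delivered to exactly one receiver, so broadcast- or watch-style semantics would have to be built on top.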


Last updated: Dec 20 2025 at 21:32 UTC