Zulip Chat Archive
Stream: lean4 dev
Topic: Algebraic effects in Lean
Dan Abramov (Jul 09 2025 at 21:52):
I know Lean doesn't have algebraic effects (and maybe doesn't plan to due to implementation difficulty). My question is slightly different but related.
If implementation difficulty was not a concern (i.e. if the team had a lot of time to work just on this), would AE fit into Lean's model? Or would it require a significant revision (e.g. to the type system)? In other words, are any parts of Lean's design fundamentally AE-averse in some way?
Thanks!
Chris Bailey (Jul 10 2025 at 00:39):
This was a few years ago, but I recall Sebastian Ullrich briefly addressed one of the technical facets in this thread:
Dan Abramov (Jul 10 2025 at 02:46):
Are they inefficient in OCaml? I presume they're used for nontrivial programs there too. Does OCaml not have some constraints that Lean does? Or is OCaml's implementation considered inefficient?
(I'm genuinely curious, I've almost never used OCaml)
Locria Cyber (Aug 10 2025 at 21:47):
OCaml is inefficient in general.
Locria Cyber (Aug 10 2025 at 21:48):
Avoid algebraic effects if possible, unless you need the call/cc feature of free monad. Effects are typed holes. Simply passing in functions works the same.
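To illustrate the "simply passing in functions" point, here is a minimal Lean sketch (the Log interface and greet are hypothetical names, not from the thread): instead of raising an abstract effect, the callee receives a record of operations it may perform, and the caller decides the implementation.

```lean
-- A record of operations stands in for an effect interface.
structure Log (m : Type → Type) where
  info : String → m Unit

-- The callee only knows the interface, not the handler.
def greet (log : Log IO) (name : String) : IO Unit :=
  log.info s!"hello, {name}"

-- The caller picks the "handler" by passing in a concrete function.
def main : IO Unit :=
  greet { info := IO.println } "Lean"
```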
(deleted) (Aug 13 2025 at 17:14):
I think algebraic effects are ultimately a program verification thing
(deleted) (Aug 13 2025 at 17:15):
Running programs with algebraic effects directly is very inefficient
(deleted) (Aug 13 2025 at 17:16):
But otherwise if you are modeling the semantics of programs then algebraic effects are the way to go
Dan Abramov (Aug 19 2025 at 10:45):
Is there something definite I can read about them being inefficient?
Is Koka slow? The Koka website mentions Perceus and FBIP (which both seem relevant to Lean) so I'd assume folks working on Koka are similarly performance-minded.
Dan Abramov (Aug 19 2025 at 11:00):
Also, is effects being slow the official position of the Lean team? Or is this more like a guess? (Sorry, I don't know who's on the team.)
The reason I'm asking is that I want to understand whether Lean has firmly decided not to look at typed effects and is happy with the status quo (monads and monad transformers). Or if effects don't fit into the Lean type system in some fundamental way. Or if more research is needed on that. Or if they are at some fundamental level slow (which I find a bit harder to believe, but then I lack the domain knowledge to verify it, so I'd take that on faith from someone who closely investigated this).
Overall it feels like Lean is state of the art in multiple ways, so rawdogging monads and monad transformers seems like a step backwards compared to the PL research. But then again, I'm an outsider and can't firmly substantiate my claims, so it's just vibes.
Dan Abramov (Aug 19 2025 at 11:03):
cc @Sebastian Ullrich I'm wondering if your statement from the linked thread (about unacceptable runtime vs compile overhead) is basically the official position? And that nothing changed much since then? Also I wonder if there's something Lean does that improves on the status quo wrt usability of monads. Thanks!
Dan Abramov (Aug 19 2025 at 11:05):
Locria Cyber said:
Avoid algebraic effects if possible, unless you need the call/cc feature of free monad. Effects are typed holes. Simply passing in functions works the same.
I don't understand this part. I think the point of AE is that intermediate abstractions don't need to be aware of what you're "passing in". So the handler can live way above and still coordinate with the thing way below. Whereas "simply passing functions" forces every intermediate layer in the call stack to be aware of this relationship.
Andrés Goens (Aug 19 2025 at 11:07):
Dan Abramov said:
Is there something definite I can read about them being inefficient?
Is Koka slow? The Koka website mentions Perceus and FBIP (which both seem relevant to Lean) so I'd assume folks working on Koka are similarly performance-minded.
As a small side note, it's very relevant to Lean, Lean also uses a version of Perceus and Leo was also on the Perceus paper (https://dl.acm.org/doi/abs/10.1145/3453483.3454032) (see also https://dl.acm.org/doi/abs/10.1145/3412932.3412935 for Lean's reference counting)
Andrés Goens (Aug 19 2025 at 11:08):
One thing that could very well be done (and I've played around with the idea given enough time) would be to have effects and effect types at a surface level and then elaborate them into monadic code, which would affect compile time but ideally not runtime performance (if done right)
Henrik Böving (Aug 19 2025 at 11:08):
Andrés Goens said:
Dan Abramov said:
Is there something definite I can read about them being inefficient?
Is Koka slow? The Koka website mentions Perceus and FBIP (which both seem relevant to Lean) so I'd assume folks working on Koka are similarly performance-minded.
As a small side note, it's very relevant to Lean, Lean also uses a version of Perceus and Leo was also on the Perceus paper (https://dl.acm.org/doi/abs/10.1145/3453483.3454032) (see also https://dl.acm.org/doi/abs/10.1145/3412932.3412935 for Lean's reference counting)
It's more the other way around, Perceus uses a version of what Lean does :P
Andrés Goens (Aug 19 2025 at 11:09):
Henrik Böving said:
Andrés Goens said:
Dan Abramov said:
Is there something definite I can read about them being inefficient?
Is Koka slow? The Koka website mentions Perceus and FBIP (which both seem relevant to Lean) so I'd assume folks working on Koka are similarly performance-minded.
As a small side note, it's very relevant to Lean, Lean also uses a version of Perceus and Leo was also on the Perceus paper (https://dl.acm.org/doi/abs/10.1145/3453483.3454032) (see also https://dl.acm.org/doi/abs/10.1145/3412932.3412935 for Lean's reference counting)
It's more the other way around, Perceus uses a version of what Lean does :P
haha, sorry, fair enough! On that note @Henrik Böving, weren't you also working on FBIP for Lean? Is that still on the horizon?
Henrik Böving (Aug 19 2025 at 11:10):
We already have FBIP in Lean though? After all that term was invented by the Lean paper. Cameron Zwarich is working on FBIP improvements at the moment.
Andrés Goens (Aug 19 2025 at 11:10):
Andrés Goens said:
One thing that could very well be done (and I've played around with the idea given enough time) would be to have effects and effect types at a surface level and then elaborate them into monadic code, which would affect compile time but ideally not runtime performance (if done right)
and this could easily be a user-level library, since Lean is so extensible :wink:
Andrés Goens (Aug 19 2025 at 11:12):
Henrik Böving said:
We already have FBIP in Lean though? After all that term was invented by the Lean paper. Cameron Zwarich is working on FBIP improvements at the moment.
I guess I was thinking of @Anton Lorenzen's version in https://dl.acm.org/doi/abs/10.1145/3607840
Kyle Miller (Aug 19 2025 at 22:05):
Dan Abramov said:
I wonder if there's something Lean does that improves on status quo wrt usability of monads
The elaborator that handles monad lifting and coercing through monads basically Just Works, and I rarely need to think about it. Lean's do notation is also very flexible and does a bunch of bookkeeping for you to simulate an imperative language.
Monads are also fairly well understood. There isn't much to them (in the "how much code does it take" sense to define things from scratch), and it's a pretty light abstraction for being able to wire together imperative computations. The name's off-putting, sure, and the fact there's a name put to the abstract concept makes it seem like you're supposed to understand the abstract concept thoroughly first, but that's not necessary. "IO" is Lean's "public static void main". It means something, but do you need to fully understand it before you can do something useful?
I know the context is your thread. Let me ask you this: what if someone had been better at naming things and called them "contexts"? No "monads" and "monad transformers", just "contexts". So, for example, ReaderT C (StateT S IO) would be a context where you have a reader context that gives you a configuration C you can read, a state context where you've got some state S you can modify, and it's all being done in an IO context where you can do I/O. Would you still be thinking "there must be something out there where I don't have to learn 'contexts'; I've heard about 'algebraic effects', they seem like they'd feel more obvious"? :-)
The word "context" also suggests what's so weird about "monad". It's odd to say "do I have to learn contexts". It's easier to say "do I have to learn about the IO context", etc. It's very mathy to talk about "contexts" themselves.
Answering your question in that thread, yes, you need to learn to use the common ~~monads~~ contexts to program in Lean. For imperative programming, beyond needing to use ← to mark where side effects are happening (which is a nice improvement to imperative programming I think), the biggest difference is that data is immutable by default. It's possible to do mutation-based programming though; there's basically a pointer type available, either using IO or using the magic docs#runST.
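As a hedged illustration of that ReaderT C (StateT S IO) stack in Lean (Config and step are hypothetical names; the point is that read, modify, and IO.println all resolve without manual lifting):

```lean
structure Config where
  verbose : Bool := true

abbrev AppM := ReaderT Config (StateT Nat IO)

def step : AppM Unit := do
  let cfg ← read                       -- the reader "context"
  modify (· + 1)                       -- the state "context", lifted automatically
  if cfg.verbose then
    IO.println s!"count = {← get}"    -- IO, also lifted automatically

def demo : IO Unit := do
  let ((), n) ← (step *> step).run {} |>.run 0
  IO.println s!"final = {n}"
```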
Kim Morrison (Aug 20 2025 at 00:45):
Oh my, now I'm dreaming about actually renaming Monad to Context everywhere. :heart:
Dan Abramov (Aug 20 2025 at 01:09):
The elaborator that handles monad lifting and coercing through monads basically Just Works, and I rarely need to think about it. Lean's do notation is also very flexible and does a bunch of bookkeeping for you to simulate an imperative language.
Thanks, this is actually very encouraging! Once I free up some mindspace I think I'll try to actually learn how to work with them in Lean.
The name's off-putting, sure, and the fact there's a name put to the abstract concept makes it seem like you're supposed to understand the abstract concept thoroughly first, but that's not necessary. "IO" is Lean's "public static void main". It means something, but do you need to fully understand it before you can do something useful?
I hope this is the case. The trigger for the linked thread is that I started looking at FPIL and this gave me flashbacks from the early 2010s when folks excited about FP would try to talk about these things with web developers and were understandably met with blank stares.
Screenshot 2025-08-20 at 01.53.42.png
I've clicked through these chapters and quite a bit of it doesn't feel like learning about "actually using stuff" as much as reading about quirks of compositional bookkeeping. Even if they were called "contexts", I'm not sure that the amount of bookkeeping being taught in those chapters feels "grounded" to me.
Maybe it's just that I haven't had the enlightenment yet, and so having to think about "functors", "applicatives", etc., just to be able to compose some State and IO feels off-putting. Again, I bring up effects for comparison not so much due to clearer terminology but because the mental model feels "obvious" (you raise effects, they interrupt upwards through the call stack until there's a handler, then the handler can resume, like typed resumable try/catch; see, I can explain the entire thing in one sentence). Whereas learning about lifting and composing applicative functor monad transformer sequences... ah, I guess I can do it, but it just feels a lot more detached. I struggle to get myself to do it, which means mainstream non-FP developers will also struggle to get themselves to do it (frankly, it's a huge turnoff), which means coding in Lean will probably be a lot more niche than maybe it could be.
Maybe this is more feedback about FPIL itself and the section titles and the choices of what to focus on in the book though. I'm still open to learning this stuff and then seeing if my opinion on it changes. With effects, I feel like it's possible to teach while mostly focusing on the actual concepts you want to deal with (concrete effects/handlers, not machinery). Whereas here it feels like 80% of the focus is on manipulating abstract machinery.
Kyle Miller (Aug 20 2025 at 02:02):
If FP is offputting, how about you don't read Functional Programming In Lean then :-) I find these things boring to read about myself. It's more fun to find existing code to study and to try using it as a model to write my own things.
Lean style tends to be a fairly straightforward kind of functional programming. For whatever reason, we tend to use functions like docs#Option.map directly rather than hide them behind an abstract interface (that said, when it's convenient, you'll see a <$> operator, but <*> is very rare to see). Maybe it's that dependent types are hard enough, and we don't have enough spare neurons to write perfectly abstracted code. Sometimes I also think about how "something doesn't need to be something to be something". It doesn't literally need to be <$> (Functor.map) for it to be a functor application. No need to let everyone know you know it's a functor. Similarly, it's fine to use a for loop and say "that's a fold". It's only a problem when it gets hard to reason about, and you might start reaching for folds/maps/etc. to try to manage the complexity.
I remember when doing serious projects in Haskell how desperate I felt trying to get the right monad lifts for my transformer stack. In Lean, I pretty much never feel that way. The elaboration routine is really great, and plus we have an Infoview to see expected types, so it's all so much easier to deal with mistakes. The only exception is that sometimes I need to write (myFn ·) instead of just myFn for callbacks to properly trigger monad lifting. Possibly this can be fixed.
The (← m) expressions are a big deal for usability. Without them, code tends to either be littered with applicative functors or look like assembly language. It's pretty nice, even if there are still some impedance mismatches (for example, when you want to mix pure values and monadic values in a short-circuiting && or || in if, you need to learn some tricks still, though I think there is or will be a linter to warn about this).
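For illustration, a tiny Lean example of the difference (getName is a hypothetical stand-in for any monadic value):

```lean
def getName : IO String := pure "Lean"

-- Without (← ·): every intermediate monadic result needs a name.
def greet₁ : IO Unit := do
  let n ← getName
  IO.println ("hello, " ++ n)

-- With it: monadic results inline directly into a pure expression.
def greet₂ : IO Unit := do
  IO.println ("hello, " ++ (← getName))
```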
Dan Abramov said:
you raise effects, they interrupt upwards through the call stack until there's a handler, then the handler can resume, like typed resumable try/catch; see, I can explain the entire thing in one sentence
Is it clear that this is a way to model state? That's the most pressing thing in a language with no mutation. (More accurately, one that requires explicitly opting in to mutation.) Knowing the exact context of things is a feature of FP, since you can reason about the effects of computations. You mention algebraic effects are like resumable try/catch; aren't exceptions usually one of the more difficult parts of a program to reason about? I don't mean to say that algebraic effects are worse of a solution than monads, but it's not clear to me that they're obviously better reasoning-wise. They're certainly a more recent topic of research, with interesting problems.
With monads, it's very straightforward: the state ~~monad~~ context is encapsulating the fact that each computation takes the current state as input and outputs an updated state. The state is threaded through in a very direct way, and there's no way to do spooky assignments of global variables outside the state. The hardest part here isn't monads, but getting used to state objects as a functional programming discipline. Then with do notation those state transformations can be implicitly threaded through each computation step, and you don't need to think about it too hard.
State monads and immutable data also give you a superpower that's almost unthinkable in mainstream imperative programs: you can snapshot the state and backtrack. This is used all the time in the implementation of Lean, where it speculatively tries something then undoes all the state mutations if it didn't work out. It's really boring in FP (you really just store the state in some variable then reset the state), but in imperative languages you'd have to carefully apply the action pattern and keep track of actions and be sure that the actions really can reverse their effects.
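A minimal Lean sketch of that snapshot-and-backtrack trick (hypothetical names; BT is just a toy state monad over a list):

```lean
abbrev BT := StateM (List Nat)

-- Speculatively run `act`; if it reports failure, restore the snapshot.
def tryOrBacktrack (act : BT Bool) : BT Bool := do
  let saved ← get            -- snapshot the whole state
  if (← act) then
    pure true
  else
    set saved                -- undo every mutation `act` performed
    pure false

-- The push of 1 is rolled back because the action "fails".
#eval (tryOrBacktrack (do modify (1 :: ·); pure false)).run [0]  -- (false, [0])
```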
Dan Abramov (Aug 20 2025 at 02:31):
Kyle Miller said:
If FP is offputting, how about you don't read Functional Programming In Lean then :-) I find these things boring to read about myself. It's more fun to find existing code to study and to try using it as a model to write my own things.
Fair :) I brought it up because it's currently the only recommended resource on the site about programming in Lean. In fact it's positioned on https://lean-lang.org/learn/ as (quoting) the "main" resource for programmers who want to learn Lean and says "no prior knowledge of functional programming is needed". Maybe I'm nitpicking but IMO a resource positioned like this could use less jargon-y top-level chapter titles for broader appeal. E.g. do we want Rust developers to read this too? Would they be compelled to?
The practical points you speak to sound great. Which makes me think that maybe this is an issue with me picking FPIL as an intro (but then again, that's what the site told me to read). I believe you that it's possible to not necessarily know something is a functor, but then again, education-wise having "Functors, Applicative Functors, and Monads" as a top level chapter soon after "Hello World" in the table of contents tells me these are unavoidable topics for being able to write programs in Lean. Maybe they aren't. For comparison, looking through the https://doc.rust-lang.org/book/ sidebar doesn't fill me with similar dread although I still haven't learned any Rust. Maybe it's because jargon is always contextualized there with the task at hand so it feels more earned.
Kyle Miller said:
Is it clear that this is a way to model state? That's the most pressing thing in a language with no mutation.
Is how state is modeled actually different? My impression is you can more or less map effects 1:1 to monads. You have a State monad, you can have a State effect/handler with a similar API. The difference, from my understanding, is you don't need to do anything special when you want to compose multiple different effects, since it's done behind the scenes.
Kyle Miller said:
You mention algebraic effects are like resumable try/catch ā aren't exceptions usually one of the more difficult parts of a program to reason about? I don't mean to say that algebraic effects are worse of a solution than monads, but it's not clear to me that they're obviously better reasoning-wise.
That's interesting, I personally don't find exceptions difficult to think about. What I like about them is that you always write "normal" code locally, and the mechanism for "intercepting" control flow is just adding a handler to the call stack. And if both sides are typed, you have an explicit coordination primitive between the sides of the wormhole. But nothing in your code needs to coordinate how independent layers compose, because that's done by the language/VM. That seems "simple enough" when I think about it, but I have no experience with actually using effects. Maybe using them in practice sucks!
Kyle Miller said:
State monads and immutable data also give you a superpower that's almost unthinkable in mainstream imperative programs: you can snapshot the state and backtrack.
To clarify, I'm fully onboard with immutability and whatever falls out of that globally. (My background is in React, so the struggle of getting people to "think immutably" is familiar to me; we even have a section on that in the intro tutorial!) I guess I didn't think of monads and effects as being different in that regard: I would expect a State Effect/Monad to be equally powerful and have roughly a similar interface. Where I would expect to see a difference is in type signatures and in how using multiple Monads/Effects is composed. Concretely, I'd expect that with effects, functions "above" wouldn't know that there's anything fancy going on (until we reach the place where the effect handler is), and that there's no need to explicitly transform anything because the compiler/runtime handles the machinery for us. I recognize this is a pretty abstract description and may be wrong (since I don't actually understand how monads are composed yet anyway).
Dan Abramov (Aug 20 2025 at 02:50):
Funnily enough, the thing that reminded me of AE (and made me wonder about them in Lean) is learning about typeclasses and instances. In my mind, AE are kind of like typeclasses/instances but resolved dynamically up the call stack instead of statically. And instead of lifting or transforming monads you're just taking a union of effects in types.
Again, I'm talking from a noob perspective and I suspect it may take another 10 years for some "big questions" in AE design space to shake out. I'm sure it's not actually all rosy as I describe. I just really like Lean and I wish it had more tricks up its sleeve in this space. But if there's nothing compelling enough, that's fine I guess.
Dan Abramov (Aug 20 2025 at 02:55):
This seems like a useful list of things that suck about and/or are unclear about AE
Kyle Miller (Aug 20 2025 at 03:16):
Dan Abramov said:
You have a State monad, you can have a State effect/handler with similar API
My point here is basically, why does your description of stacks and handlers and exceptions have anything to do with state? Is your quick explanation of algebraic effects enough of an explanation to understand how it has anything to do with state management?
I know your React background, so I know you're onboard with immutability. The monad superpower is that the state is very explicitly demarcated, and also you don't technically need any runtime support for monads; it's all implemented in terms of basic concepts (functions and products). By looking at the source code, it's possible to learn that "state mutation" means that each computation is a function of the form σ → α × σ. That is, they take a state and return a new state, while also computing some value. That value can be used to determine which computation comes next. If algebraic effects are implemented in a way that involves looking at actual call stacks, then that's not something you can understand from within the language. I know it's possible to implement algebraic effects as libraries, but then that means it's not literally a call stack.
Back to your explanation of algebraic effects in one sentence, here's my one-sentence explanation of monads: the way Lean handles imperative programs is that each line of do notation is implicitly a callback for the previous lines, and the callback is called using a handler specific to the monad. It doesn't sound any more complicated, though it does invoke callback-based programming.
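The σ → α × σ description above can be spelled out from scratch in Lean (a toy St type, not Lean's actual StateM definition, though it has the same shape):

```lean
-- A computation is literally a function from a state to a value
-- paired with the new state.
abbrev St (σ α : Type) := σ → α × σ

def St.pure {σ α : Type} (a : α) : St σ α :=
  fun s => (a, s)

-- Sequencing threads the updated state into the next computation.
def St.bind {σ α β : Type} (x : St σ α) (f : α → St σ β) : St σ β :=
  fun s => let (a, s') := x s; f a s'

def St.get {σ : Type} : St σ σ := fun s => (s, s)

def St.set {σ : Type} (s : σ) : St σ Unit := fun _ => ((), s)
```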
Patrick Massot (Aug 20 2025 at 13:24):
Dan Abramov said:
For comparison, looking through the https://doc.rust-lang.org/book/ sidebar doesn't fill me with similar dread although I still haven't learned any Rust.
Maybe it's because you still haven't learned any Rust. Otherwise the "Understanding Ownership" chapter in the sidebar would probably fill you with a lot more dread than monads.
Sebastian Graf (Aug 25 2025 at 14:45):
Disclaimer: Although I work for the FRO, the following is just personal opinion/a vision, written down in my free time.
Much of this comes from my experience as a Haskell developer (I have contributed a lot to GHC) and having followed forum discussions on the relevant library ecosystem.
TLDR; we should wait until the design of Haskell's bluefin library converges and then just copy the approach, even going so far as moving it into Core/Init.
Nomenclature
The whole "* effects *" nomenclature is confusing. Broadly, I think of an "effectful computation" as one that communicates with its calling context via a handler.
An effect is an interface, its handler the implementation. Talking about the semantics of an effect is impossible unless you know the handler.
In analogy to exception handling, the effectful computation might throw an effect and the calling context catches this effect in a handler.
However, in contrast to plain exceptions, a general effect handler has the ability to yield back a return value to the computation, much like coroutines (e.g., JS's Generator).
(The Effekt research language makes the connection between (effects and handlers) and (interface and implementations) explicit by providing means to convert between the two viewpoints.)
Often, "<blah> effects and handlers" refers to an implementation of how to achieve this communication for effects of kind <blah> and its handler.
It is surprisingly difficult to give an implementation that (1) works for all kinds of effects without knowing their handlers at compile-time, and (2) supports higher-order operations such as catch. Recent work on Hefty Algebras has done it, though.
Still, such implementations have major drawbacks so I don't consider them usable; we'll get there in time.
Applications need non-algebraic effects, and don't often care about dynamic handlers
I'm not excited about algebraic effects (and handlers), because applications need non-algebraic effects.
The most basic effect interaction that is important in applications is combining a state effect (MonadState), which has a very efficient implementation in the form of StateT (the handler is the instance), with an exceptions effect (MonadExcept + the instance at ExceptT).
Surely such applications want effectful operations to (1) get and set the state, (2) throw an exception, and (3) catch an exception.
However, catch is not supported by algebraic effects and handlers. See Section 1.2 of Hefty Algebras; at the end it lists a few more examples (withReader is one of them).
Furthermore, real-world applications often know the handler for certain primitive effects such as state and exceptions. There is no need for late binding MonadState and MonadExcept; any lawful and efficient implementation is good enough.
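A small Lean sketch of this state-plus-exceptions combination (hypothetical names; note that tryCatch here is exactly the higher-order catch operation the post says plain algebraic effects struggle with, and the transformer stack provides it directly):

```lean
abbrev SE := ExceptT String (StateM Nat)

def risky : SE Unit := do
  modify (· + 1)   -- (1) set the state
  throw "boom"     -- (2) throw an exception

-- (3) catch an exception, a higher-order operation.
def safe : SE Nat := do
  tryCatch risky (fun _ => pure ())
  get

-- With ExceptT on the outside, state changes survive the catch.
#eval safe.run.run 0  -- (Except.ok 1, 1)
```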
Transformer stacks are non-modular and don't scale to Big Software
TLDR; libraries using plain monad (transformers) do not compose, hence are insufficient to write Big Software.
Why are monad transformers apparently sufficient for Lean?
Opinionated answer: Because we have no big applications written in Lean yet, except for the language implementation which is a medium-sized monolith.
If Lean will be used to write Big Software, we will see the limitations of this approach.
Example of Big Software: AWS + PostgreSQL
Big Software is usually composed from multiple libraries, each maintaining its own notion of "context" (i.e., internal state and exceptions).
Say I want to compose an application that communicates with AWS and writes to a database. In the Haskell world, you could use the libraries amazonka and perhaps postgresql-simple. Disclaimer: I have never used these libraries, I just want to point out the practical issue of composing them.
Also note that none of them use monad transformer stacks in the initial example; they all use plain IO (which in Haskell includes the ability to try, catch and finally exceptions). We'll get to the reason for that.
Transformer stacks
A possible (flawed) design for a library aws just like amazonka is to provide functions such as fetchFile (url : String) : AWS RequestBody that operate in a monad AWS that encapsulates the internal state and exceptions of aws. Here, AWS can be implemented using a stack of monad transformers such as def AWS α := ExceptT AwsException (StateT AwsState IO) α. The AwsState would store stuff such as authentication tokens etc.
aws would provide a function runAWS (user : String) (password : String) (stuffToRun : AWS α) : IO (Except AwsException α) to run a computation stuffToRun in the AWS world in terms of some username/password pair and hides all authentication and request management from the user. (I doubt that auth in AWS actually is as simple as username/password pair, but bear with me for the sake of the example.)
There is another library postgresql that provides a function query (q : Query) : DB String operating in a monad DB, similarly implemented internally as def DB α := ExceptT DBException (StateT DBConnection IO) α. There is also runDB (connectionString : String) (stuffToRun : DB α) : IO (Except DBException α).
Each library in itself has nicely encapsulated the annoying details of how to connect to their respective service and how to handle exceptions.
But we cannot write a program that composes both! The following program is untyped, because fetchFile and query live in incompatible monads:
def main : IO Unit := runAWS "user" "pw" <| runDB "db.example.com" do
let response <- fetchFile "salaries.txt" -- as I said, I do compilers, not CRUD :)
let salary := ... response ...
let res <- query "select * from employees where ${salary} < salary"
...
In this case, there is a way out:
- Define the combined application monad def AppM α := ExceptT (AwsException ⊕ DBException) (StateT (AwsState × DBConnection) IO) α
- Define runAppM (user password connectionString : String) : AppM α -> IO (Except (AwsException ⊕ DBException) α) in terms of the implementations of runAWS and runDB.
- Define MonadLift AWS AppM and MonadLift DB AppM instances
Note that this way out breaks the abstraction of the library, i.e., breaking changes to the implementation of AWS and runAWS will break your application.
Furthermore, such an approach will stop working for higher-order effects such as catchAwsException (thing : AWS α) (handler : AwsException -> AWS α) : AWS α.
At the very least, you would need an instance for MonadControl AWS AppM (IIRC, it's a very complicated class) and the type of catchAwsException will likely need to change to account for the DBConnection state.
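A hedged sketch of what this "way out" could look like in Lean (toy payload types standing in for the real AwsException/AwsState etc.; one possible shape of the MonadLift AWS AppM instance, with the DB one analogous):

```lean
structure AwsException where msg : String
structure DBException  where msg : String
structure AwsState     where token : String
structure DBConnection where url : String

abbrev AWS  (α : Type) := ExceptT AwsException (StateT AwsState IO) α
abbrev AppM (α : Type) :=
  ExceptT (AwsException ⊕ DBException) (StateT (AwsState × DBConnection) IO) α

-- Lift an AWS action by running it on its slice of the combined state.
instance : MonadLift AWS AppM where
  monadLift x := do
    let (a, d) ← get
    let (r, a') ← x.run.run a   -- peel ExceptT, then StateT, down to IO
    set (a', d)
    match r with
    | .ok v    => pure v
    | .error e => throw (Sum.inl e)
```

Note how the instance has to reach inside the representation of AWS, which is exactly the abstraction leak the post describes.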
(continued in next post)
Sebastian Graf (Aug 25 2025 at 14:45):
The ReaderT σ IO pattern composes but lacks encapsulation
So libraries using custom monads (and transformers) do not compose.
That is the reason why the actual Haskell implementations of amazonka and postgresql-simple do not define custom monads. Instead they define all operations in IO which expect state records as separate arguments. Exceptions are implicit in IO. This way, the issue of MonadControl etc. goes away; it's all just IO.
The cost of this approach is having to pass around state records. This can be ameliorated by instead defining all functions on ReaderT σ IO, and instantiating σ with a record of IORefs for the state.
This is exactly the popular, pragmatic "reader IO" (RIO) pattern in Haskell. It is very related to Lean's StateRefT.
Still, this pattern feels like programming in C, where you pass around state records to top-level functions that may modify just about everything.
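A minimal Lean sketch of this "reader IO" shape (AppState and its fields are hypothetical): one record of refs is passed implicitly, and any function in the application can touch any of it.

```lean
structure AppState where
  counter : IO.Ref Nat
  log     : IO.Ref (List String)

abbrev App := ReaderT AppState IO

-- Nothing in this type says which parts of AppState it touches.
def bump : App Unit := do
  (← read).counter.modify (· + 1)

def demo : IO Unit := do
  let st : AppState := { counter := ← IO.mkRef 0, log := ← IO.mkRef [] }
  bump.run st
  IO.println s!"count = {← st.counter.get}"
```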
Say I'm debugging a problem with the DB in a big code base, and I'm wondering if it's worth digging into the definition of a big function fetchFile (url : String) : ReaderT AppState IO ResponseBody.
It sounds like it is unrelated to the DB, but how can I be sure?! I cannot; I would have to dig into this definition.
It would be great if the type system could assure that fetchFile indeed does not talk to the DB.
And to me that is the single, dominating appeal of an effect system.
Effect systems compose and help maintaining Big Software
To me, an effect system is mainly a safe abstraction over ReaderT Ļ IO that helps with encoding architectural invariants in the types.
Haskell's bluefin library is a beautiful example of this approach. With it, I would define compound effects for AWS and DB and then write
structure AWS (e : Effects) where
state : State AwsState e
exc : Exception AwsException e
def runAWS (io : IOE e) (user password : String) (stuffToRun : ∀ {e}, AWS e -> Eff (e :& es) r) : Eff (e :> es) (Except AwsException r) := ...
def fetchFile (aws : AWS e) (url : String) : Eff (e :> es) ResponseBody := ...
structure DB (e : Effects) where
state : State DBConnection e
exc : Exception DBException e
def runDB (io : IOE e) (connectionString : String) (stuffToRun : ∀ {e}, DB e -> Eff (e :& es) r) : Eff (e :> es) (Except DBException r) := ...
def query (db : DB e) (q : Query) : Eff (e :> es) String := ...
def main : IO Unit :=
let r <- runEff fun io => runAWS io "user" "pw" fun aws => runDB io "db.example.com" fun db => do
let response <- fetchFile aws "salaries.txt"
let salary := ... response ...
let res <- query db "select * from employees where ${salary} < salary"
...
match r with
| .ok (.ok r) => ... r ...
| _ => ... -- handle exceptions
Note
- This type-checks without needing to define any instances for MonadLift whatsoever
- It will run just as fast as the IO-based ("programming in C") variant
- It is very clear in the type system that fetchFile uses the aws capability but not the db capability, and vice versa for query.
I'm reasonably certain that this will become the dominant architectural pattern for FP.
The higher-rank type (∀ {e}, AWS e -> ...) takes a bit of getting used to, but on the other hand it infers well.
Needless to say, this is a pattern for programming in the large; no need to bother newcomers with these concepts.
Performance of the analytic vs. synthetic approach to implementing Eff
bluefin's author Tom Ellis calls the IO-based approach "analytic" in contrast to "synthetic".
- A synthetic implementation of the MonadState σ effect is StateT σ, that is, the effect is implemented by some algebraic data type (σ -> α × σ).
- An analytic implementation of the MonadState σ effect is StateRefT σ, that is, the effect is implemented by an ST.Ref which has special support by the runtime system to enable an efficient implementation.
In the "limit over all possible effects", the synthetic approach arrives at the free monad (Hefty Algebras are a variant of this), while the analytic approach stays with the very simple monomorphic implementation ReaderT σ IO (in case of bluefin, it even is just IO).
While the free monad is a great model to prove functional correctness properties about effectful programs, it is far too slow to execute programs because everything is late bound and allocates.
By contrast, ReaderT σ IO/IO is fast; its implementation of bind is known, we never need to eta-expand any continuations, there is no late binding of effect handlers unless explicitly requested, resource cleanup is deterministic and prompt, etc.
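A minimal Lean illustration of the two styles (using IO.Ref as the analytic cell, in the spirit of StateRefT):

```lean
-- Synthetic: the state effect as a pure data type, σ → α × σ.
def syn : StateM Nat Nat := do
  modify (· + 1)
  get

-- Analytic: the state effect as a runtime-supported mutable cell.
def ana : IO Nat := do
  let r ← IO.mkRef (0 : Nat)
  r.modify (· + 1)
  r.get

#eval syn.run 0  -- (1, 1)
```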
Further reading
- The documentation of bluefin
- Tom Ellis' recent talk about the history of effect systems in Haskell
- Blog posts by Tom Ellis on bluefin
- A long discussion comparing a Haskell implementation of Hefty Algebras to bluefin. TLDR; Hefty Algebras are slow and don't support bracket/a proper finally for doing resource cleanup.
- "Why are there so many libraries for algebraic effects (in Haskell)?"
- My summary of why you would want to use an "effect system"
Last updated: Dec 20 2025 at 21:32 UTC