Zulip Chat Archive
Stream: lean4
Topic: monads documentation
Chris Lovett (Aug 24 2022 at 00:59):
Having learned the hard way that Monads are critical to understanding Lean (and Lake) I've decided to port https://mmhaskell.com/monads/ to Lean, both to learn more about monads myself, and because folks have indicated this is a good place to start. I've started here: https://github.com/leanprover/lean4/pull/1505 and I'd love the following feedback:
- what is missing that Lean adds specifically to this story
- please add PR comments if you find errors, and/or have suggestions on how to improve the specifics about Lean.
In the PR you can use this to get a new markdown rendering:
image.png
(ignore the # ++ trick I'm using in some of the code snippets -- see https://github.com/leanprover/mdBook/pull/11)
Chris Lovett (Aug 29 2022 at 20:59):
Ok, my first draft is ready for review, all feedback welcome here and as PR comments.
See http://lovettsoftware.com/lean4/monads/intro.html where I've posted my fork.
Henrik Böving (Aug 29 2022 at 22:14):
A few remarks:
Functors:
- "In this Functor, map is called a "higher order function" because it takes a function as input (α → β) and returns another function as output F α → F β" You are saying "in this Functor, map is called a HOF" but map is always a HOF so you might instead want to say "Functor.map is called a HOF".
- You also only use the HOF term once in this chapter (namely here) so it's just a term being thrown at the reader for no apparent purpose (although quite an important one in functional programming of course)
- It should be noted that `mapConst` has a default implementation, namely the one at https://github.com/leanprover/lean4/blob/0925051c51e183a0b8c504ae53f5cab0525d6cac/src/Init/Prelude.lean#L2592. It is in the type class because some Functors might have a more specialized, more performant variant, so we don't want to force all Functors onto one unspecialized implementation (see the sketch after this list).
- You sort of tease the Functor laws but just leave it hanging there; I'd say at least a link to the laws chapter would be nice here.
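Regarding the `mapConst` point, here is a minimal sketch of what overriding the default could look like (the `Replicated` type is invented for illustration and is not from the docs):

```lean
-- Hypothetical container: a value conceptually repeated `size` times.
structure Replicated (α : Type) where
  size  : Nat
  value : α

instance : Functor Replicated where
  map f r := ⟨r.size, f r.value⟩
  -- Override the default `mapConst` (which goes through `map`):
  -- here we can simply drop the old value without mapping anything.
  mapConst a r := ⟨r.size, a⟩

#eval (Functor.mapConst 7 (⟨3, "x"⟩ : Replicated String)).value  -- 7
```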
Applicatives:
- why is the little tangent on custom Lean notation for `<$>` in the Applicative chapter? If anything, I would have expected it in the Functor one, since it's doing stuff with Functor.map.
- generally there is an inconsistency in what you put in backticks; sometimes Option and Applicative appear with or without them, changing throughout the text
- there is a link to the applicative laws at Lazy Evaluation but it leads nowhere, I assume you pushed that into the general Laws chapter?
Monads:
- At least in emacs you can write the ← with a \l, one letter saved \o/
- You're claiming that the ← operator is special, but it is in fact not special at all: it desugars into a simple application of >>=. Your example code:
def runOptionFuncsDo (input: String) : Option (List Nat) := do
  let i ← optionFunc1 input
  let f ← optionFunc2 i
  optionFunc3 f
will be compiled into something like:
optionFunc1 input >>= (fun i => optionFunc2 i >>= fun f => optionFunc3 f)
as you can see, a straightforward translation from `let varName ← foo; ...` to `foo >>= fun varName => ...`
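To make both versions directly comparable, here is a self-contained sketch with stand-in definitions of `optionFunc1/2/3` (invented here; the real ones in the docs may differ):

```lean
-- Stand-in helpers so that both versions compile.
def optionFunc1 (input : String) : Option Nat :=
  if input.isEmpty then none else some input.length

def optionFunc2 (i : Nat) : Option Nat :=
  if i % 2 == 0 then none else some (i + 1)

def optionFunc3 (n : Nat) : Option (List Nat) :=
  some (List.range n)

-- The `do` version from above ...
def runOptionFuncsDo (input : String) : Option (List Nat) := do
  let i ← optionFunc1 input
  let f ← optionFunc2 i
  optionFunc3 f

-- ... and what it desugars to, written out by hand.
def runOptionFuncsBind (input : String) : Option (List Nat) :=
  optionFunc1 input >>= fun i => optionFunc2 i >>= fun f => optionFunc3 f

#eval runOptionFuncsDo "hello"    -- some [0, 1, 2, 3, 4, 5]
#eval runOptionFuncsBind "hello"  -- some [0, 1, 2, 3, 4, 5]
```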
Reader:
- here you suddenly begin using the `return` keyword without having introduced it or its relationship to `pure` before. Note that unlike in Haskell there is actually a semantic difference between `return` and `pure` in Lean. First, `return` is a special keyword, so I can write `return f x` and it will be parsed as `return (f x)`, while `pure f x` gets parsed as `(pure f) x`, so you usually have to write `pure <| f x`. Furthermore, `return` will also cause an early return in a monadic function, like in an imperative language, while `pure` will not. (See the sketch after this list.)
- you close with saying:
Now it's time to move on to State Monad which is like a ReaderM that is also updatable.
this is not the difference between `ReaderM` and `StateM` though. As you already know, `ReaderM` is merely a function from the implicit context to the result type, so nothing is preventing me from changing the input to this function in a sub-part of my program. That is, if I have a computation that executes three functions `f1`, `f2` and `f3` in the `ReaderM` monad, I can easily pass a custom context to, say, `f2` and execute the other two with the one I got from further up. This can be done in a nice way using the `withReader` function. The key difference between `ReaderM` and `StateM` is that in `ReaderM`, what one computation does with the context does not affect the computations that come after it: if I have a computation that uses `withReader` to modify the context, the next computation will work with the old context again. This is different with the `StateM` monad, where calling a function can actually have an effect on computations that are performed after the one I just called, because state changes are propagated further through the additional hidden return parameter.
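Regarding `return` versus `pure`, here is a minimal sketch of both differences (the function is invented for illustration):

```lean
def describe (x : Option Nat) : Option String := do
  if x.isNone then
    return "empty"        -- `return` exits the whole `do` block early
  -- `pure toString (← x)` would parse as `(pure toString) (← x)`,
  -- so we write `pure <| ...` (or just `return ...`) instead.
  pure <| toString (← x)

#eval describe none      -- some "empty"
#eval describe (some 4)  -- some "4"
```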
State:
- since you didn't know about `withReader`, the explanation of the difference is slightly wrong here as well, of course (see the sketch below)
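For reference, a small sketch of the `withReader` behavior described above (names invented for illustration): the modified context applies only to the wrapped call, and the next call sees the original context again.

```lean
def greet : ReaderM String String := do
  let name ← read
  return s!"hello {name}"

def demo : ReaderM String (String × String) := do
  let a ← withReader (· ++ "!") greet  -- greet sees the modified context
  let b ← greet                        -- back to the original context
  return (a, b)

#eval Id.run (demo.run "world")  -- ("hello world!", "hello world")
```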
Monad lifting:
- s/transative/transitive
- you mention `MonadLiftT` in the paragraph above the one about transitive lifting, where you explain what `MonadLiftT` does; this might seem like a typo to the reader at first, though I do not know off the top of my head how to do this better
- you also mention that the reader can uncomment the `set_option`, but I don't see (and my C-f doesn't find) a `set_option`?
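A small self-contained sketch of lifting in action (the `App` stack and the names are invented for illustration, not taken from the docs): a plain `StateM` action is lifted automatically into an `ExceptT`-over-`StateM` stack via the `MonadLift`/`MonadLiftT` machinery.

```lean
-- Hypothetical two-layer stack: string errors over a Nat counter state.
abbrev App := ExceptT String (StateM Nat)

def bumpCounter : StateM Nat Unit :=
  modify (· + 1)

def step : App Unit := do
  bumpCounter                -- a StateM action, lifted automatically into App
  if (← get) > 3 then
    throw "too many steps"

#eval Id.run (step.run.run 0)  -- expected: (Except.ok (), 1)
```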
Laws:
- For the functor laws I can also give you an example of why you need both laws to hold:
def bad_fmap {α β : Type u} : (α → β) → Option α → Option β
  | _, _ => none
this will fail the id law but obey the composition law; I don't have such a nice example for an Applicative though.
- you might want to show our `LawfulMonad` etc. typeclasses as well
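For the `LawfulFunctor`/`LawfulMonad` point, a short sketch of what stating and checking such a law could look like (using Option, whose `Functor.map` is `Option.map`):

```lean
-- The identity law that `bad_fmap` above violates, checked for Option:
example (x : Option Nat) : id <$> x = x := by
  cases x <;> rfl

-- The corresponding fields of the core typeclass:
#check @LawfulFunctor.id_map
#check @LawfulFunctor.comp_map
```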
But overall these are just (partially stylistic) nitpicks, I think it's well done in general :thumbs_up:
Chris Lovett (Aug 29 2022 at 23:39):
Thanks for the quick review! Most of the language came straight from https://mmhaskell.com/monads/tutorial so most of the credit goes to them. The trick is finding places where Lean is different, so thanks for your help on that.
[responses inline below] A few remarks:
Functors:
"In this Functor, map is called a "higher order function" because it takes a function as input (α → β) and returns another function as output F α → F β" You are saying "in this Functor, map is called a HOF" but map is always a HOF so you might instead want to say "Functor.map is called a HOF".
I think "In this Functor, map" is pretty clear and means the same thing as "Functor.map". Am I missing something?
You also only use the HOF term once in this chapter (namely here) so it's just a term being thrown at the reader for no apparent purpose (although quite an important one in functional programming of course)
"higher order functions" came from https://mmhaskell.com/monads/functors. I figured if the author of this page though the concept was important enough to mention, then I should too :-)
It should be noted that `mapConst` has a default implementation, namely the one at https://github.com/leanprover/lean4/blob/0925051c51e183a0b8c504ae53f5cab0525d6cac/src/Init/Prelude.lean#L2592. It is in the type class because some Functors might have a more specialized, more performant variant, so we don't want to force all Functors onto one unspecialized implementation.
Added to end of Functors.
You sort of tease the Functor laws but just leave it hanging there, I'd say at least a link to the laws chapter would be nice here.
This was a refactoring bug, I moved all the laws to section 7 but forgot to move this paragraph, so the tease is gone now.
Applicatives:
why is the little tangent on custom Lean notation for `<$>` in the Applicative chapter? If anything, I would have expected it in the Functor one, since it's doing stuff with Functor.map.
You are right, I moved it to Functors.
generally there is an inconsistency in what you put in backticks; sometimes Option and Applicative appear with or without them, changing throughout the text
Fixed. Note that I do use lower case versions when I don't want to call attention to it and make the paragraph read more smoothly.
there is a link to the applicative laws at Lazy Evaluation but it leads nowhere, I assume you pushed that into the general Laws chapter?
Fixed.
Monads:
At least in emacs you can write the ← with a \l, one letter saved \o/
Good catch, VS Code can also do \l
You're claiming that the ← operator is special but it is in fact not at all special, it desugars into a simple application of >>=, your example code:
mmhaskell also says "The <- operator is special." You think it is not special, but I think it is special because it calls `bind` for you!
Reader:
here you suddenly begin using the `return` keyword without having introduced it or its relationship to `pure` before. Note that unlike in Haskell there is actually a semantic difference between `return` and `pure` in Lean. First, `return` is a special keyword, so I can write `return f x` and it will be parsed as `return (f x)`, while `pure f x` gets parsed as `(pure f) x`, so you usually have to write `pure <| f x`. Furthermore, `return` will also cause an early return in a monadic function, like in an imperative language, while `pure` will not.
you close with saying:
Now it's time to move on to State Monad which is like a ReaderM that is also updatable.
this is not the difference between `ReaderM` and `StateM` though. As you already know, `ReaderM` is merely a function from the implicit context to the result type, so nothing is preventing me from changing the input to this function in a sub-part of my program. That is, if I have a computation that executes three functions `f1`, `f2` and `f3` in the `ReaderM` monad, I can easily pass a custom context to, say, `f2` and execute the other two with the one I got from further up. This can be done in a nice way using the `withReader` function. The key difference between `ReaderM` and `StateM` is that in `ReaderM`, what one computation does with the context does not affect the computations that come after it: if I have a computation that uses `withReader` to modify the context, the next computation will work with the old context again. This is different with the `StateM` monad, where calling a function can actually have an effect on computations that are performed after the one I just called, because state changes are propagated further through the additional hidden return parameter.
thanks, I'll work on all this.
State:
since you didn't know about `withReader`, the explanation of the difference is slightly wrong here as well, of course
Monad lifting:
s/transative/transitive
fixed
you mention `MonadLiftT` in the paragraph above the one about transitive lifting, where you explain what `MonadLiftT` does; this might seem like a typo to the reader at first, though I do not know off the top of my head how to do this better
fixed
you also mention that the reader can uncomment the `set_option`, but I don't see (and my C-f doesn't find) a `set_option`?
fixed.
Laws:
For the functor laws I can also give you an example of why you need both laws to hold:
def bad_fmap {α β : Type u} : (α → β) → Option α → Option β
  | _, _ => none
this will fail the id law but obey the composition law; I don't have such a nice example for an Applicative though.
you might want to show our `LawfulMonad` etc. typeclasses as well
Thanks, I'll work on this...
Chris Lovett (Aug 29 2022 at 23:48):
One thing I wasn't sure about in the original Haskell language is the use of the term "structures". I find this a bit confusing in Lean because Lean has a special `structure` keyword. Is there something I should use instead? "Type classes" perhaps?
Mario Carneiro (Aug 30 2022 at 00:17):
I would avoid the acronym HOF, we basically never use it. HO for higher order is used occasionally but I think it's fine just to use "higher order function" without abbreviation in this context
Chris Lovett (Aug 30 2022 at 05:47):
Sure, my section on functors does not use the acronym.
Henrik Böving (Aug 30 2022 at 06:26):
I just used HOF to write more quickly since it was obvious from the context what it means. I usually wouldn't write it in documentation either, yeah.
Chris Lovett (Aug 30 2022 at 08:06):
you close with saying: "Now it's time to move on to State Monad which is like a ReaderM that is also updatable." This is not the difference between `ReaderM` and `StateM` though. As you already know, `ReaderM` is merely a function from the implicit context to the result type, so nothing is preventing me from changing the input to this function in a sub-part of my program. That is, if I have a computation that executes three functions `f1`, `f2` and `f3` in the `ReaderM` monad, I can easily pass a custom context to, say, `f2` and execute the other two with the one I got from further up. This can be done in a nice way using the `withReader` function. The key difference between `ReaderM` and `StateM` is that in `ReaderM`, what one computation does with the context does not affect the computations that come after it: if I have a computation that uses `withReader` to modify the context, the next computation will work with the old context again. This is different with the `StateM` monad, where calling a function can actually have an effect on computations that are performed after the one I just called, because state changes are propagated further through the additional hidden return parameter.
How's this?
StateM vs ReaderM
While `ReaderM` functions can use `withReader` to modify the context before calling another function, `StateM` functions are a little more powerful. Let's look at this function again:
def nextTurn : StateM GameState Bool := do
  let i ← chooseRandomMove
  applyMove i
  isGameDone
In this function, `chooseRandomMove` is modifying the state that `applyMove` is getting, and `chooseRandomMove` knows nothing about `applyMove`. So `StateM` functions can have this kind of downstream effect outside their own scope, whereas `withReader` cannot do that.
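A minimal runnable sketch of that downstream effect (a plain `Nat` counter standing in for the real `GameState`; the function names are illustrative):

```lean
def chooseMove : StateM Nat Nat := do
  modify (· + 1)            -- record that a move was chosen
  return 42

def playOneRound : StateM Nat Bool := do
  let _move ← chooseMove    -- chooseMove bumps the counter here ...
  return ((← get) > 0)      -- ... and this later action observes the change

#eval Id.run (playOneRound.run 0)  -- (true, 1)
```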
I'm still using the simpler language to introduce `StateM` because simple is good, and I added this later on in the State docs where more advanced topics are discussed.
Chris Lovett (Aug 30 2022 at 20:57):
All the excellent suggestions from Henrik are now implemented and published here:
http://lovettsoftware.com/lean4/monads/intro.html
Leonardo de Moura (Aug 30 2022 at 21:14):
It is looking good.
Could you please remove comments containing the output of `#eval` commands?
image.png
With LeanInk, this is now redundant.
Chris Lovett (Aug 30 2022 at 22:59):
Ooh, can we talk about that? The LeanInk hover tips are great, but I find them a bit tedious on pages like this where there are hundreds of little bubbles. I think it reads more smoothly when they are always visible. Perhaps we should change LeanInk to add the always-visible comments automatically?
Tomas Skrivan (Aug 30 2022 at 23:05):
I like the comments too. When I was reading the text I even thought to myself that adding the comments there is smart as constantly clicking on the bubbles was disruptive to my reading.
Chris Lovett (Aug 30 2022 at 23:09):
It's also a style I saw a lot in TPIL and also grew to really like.
Chris Lovett (Aug 30 2022 at 23:10):
It's also a good double check: if you don't see this value then something is wrong in your VS Code setup :-) Perhaps LeanInk could also verify the comments and spit out errors if what it gets doesn't match the comment :-)
Leonardo de Moura (Aug 30 2022 at 23:15):
I think it reads more smoothly when they are always visible. Perhaps we should change LeanInk to add the always-visible comments automatically?
This makes sense for the reader, but it should be done by LeanInk. That is, we should add an option to LeanInk.
Information that can be synthesized should not be written manually. We hope all documentation in the future will be powered by LeanInk and doc-gen4.
It's also a style I saw a lot in TPIL and also grew to really like.
TPIL was written many years ago before we had LeanInk.
Chris Lovett (Aug 30 2022 at 23:21):
I agree, can I make that a separate work item though? I would prefer to do what David is doing, which is to document the "expected value" in the file, which is then reported as an error by LeanInk when it doesn't match; this helps catch many little breaking changes in the book content as Lean evolves... I captured the idea here: https://github.com/leanprover/LeanInk/issues/29.
Leonardo de Moura (Aug 30 2022 at 23:24):
Yes, I agree. David's solution is great.
Chris Lovett (Aug 30 2022 at 23:26):
While I have your attention, one other thing I wasn't sure about in the original Haskell language is the use of the term "structures". I find this a bit confusing in Lean because Lean has a special `structure` keyword. Is there something I should use instead? "Type classes" perhaps? But better would be a term that encompasses Functors, Applicatives, and Monads. Perhaps from category theory I should call them "morphisms"?
Leonardo de Moura (Aug 30 2022 at 23:30):
The occurrences of "structure" in the instances above make sense to me.
Chris Lovett (Aug 30 2022 at 23:31):
You don't like "morphisms" ?
Leonardo de Moura (Aug 30 2022 at 23:32):
No, I don't.
Chris Lovett (Aug 30 2022 at 23:34):
Ok, I will leave it as is then, thanks.
Last updated: Dec 20 2023 at 11:08 UTC