> The usual list monad is only one of infinitely many ways to turn the List functor into a monad.
So simple, and yet this is a point that I think is rarely made clear enough in "monad explainers." For instance, they almost always talk about "the Maybe monad" -- but this is conflating two things: the Maybe data type, and the Monad instance defined on that type. Propagating "Nothing" is not inherent to the Maybe data type, it's just a convenient behavior to have.
Talking about "the List monad" is even more confusing for a newcomer. When they hear "the List monad implements a kind of nondeterminism," it sounds like nondeterminism is a property inherent to lists themselves -- but of course it is nothing of the sort. All Monad instances are, in a sense, arbitrary.
The trivial implementation isn't law-abiding. This law doesn't hold (written in Kleisli form for simplicity):
pure >=> f == f (left identity)
There are no other monads for Maybe. First, any definition of pure must be Just: pure = const Nothing doesn't work because of left identity, and parametricity prevents any other funny business. Now, by the left-identity law we know
pure a >>= f == f a
Thus, we must define
Just a >>= f = f a
So the only variable is what (Nothing >>= f) does. For (f :: A -> Maybe B) we must end up with a Maybe B. We don't have one to start with, and we can produce one only via Nothing, via Just (applied to a B), or via f (applied to an A). So either >>= is the standard definition, or we have to write something like
Nothing >>= f = Just (_ :: B) -- the only way to get hold of a B is from f's result, so
Nothing >>= f = f (_ :: A) -- and now we are stuck: there are no values of A available
Thus, we must define
pure a = Just a
Just a >>= f = f a
Nothing >>= f = Nothing
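Spelled out as a complete instance (a sketch on a newtype wrapper, so it doesn't clash with the one in base):

    -- Hypothetical newtype wrapper around Maybe, so the forced instance can be
    -- written out without overlapping the standard one.
    newtype Opt a = Opt (Maybe a) deriving Show

    instance Functor Opt where
      fmap f (Opt m) = Opt (fmap f m)

    instance Applicative Opt where
      pure = Opt . Just                               -- pure must be Just
      Opt (Just f) <*> Opt (Just a) = Opt (Just (f a))
      _            <*> _            = Opt Nothing

    instance Monad Opt where
      Opt (Just a) >>= f = f a                        -- forced by left identity
      Opt Nothing  >>= _ = Opt Nothing                -- forced: no A to feed f, no B to invent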
Yeah, and as others pointed out it fails right identity as well. Had a brain fart recapping the laws, though I think I got associativity right at least!
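For the record, here's a quick sketch of all three laws in Kleisli form, written as informal boolean properties (helper names are my own):

    import Control.Monad ((>=>))

    -- The three monad laws in Kleisli form.
    leftId, rightId :: (Eq (m b), Monad m) => (a -> m b) -> a -> Bool
    leftId  f a = (pure >=> f) a == f a                       -- left identity
    rightId f a = (f >=> pure) a == f a                       -- right identity

    assoc :: (Eq (m d), Monad m)
          => (a -> m b) -> (b -> m c) -> (c -> m d) -> a -> Bool
    assoc f g h a = ((f >=> g) >=> h) a == (f >=> (g >=> h)) a  -- associativity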
Part of this has to do with the fact that it is not possible (without tricks) to have multiple different typeclass implementations for one type in Haskell (I believe Rust also has this restriction). This language restriction tends to seep into people's way of thinking about monads (and other typeclasses). However, it is not fundamental: a typeclass (or other ad-hoc polymorphism) system need not have this restriction.
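The usual workaround in Haskell (the "tricks" above) is one newtype wrapper per instance. The classic example is that base ships Sum and Product, because numbers have more than one sensible Monoid:

    import Data.Monoid (Sum (..), Product (..))

    -- Each newtype wrapper carries its own Monoid instance for numbers.
    total, combined :: Int
    total    = getSum     (foldMap Sum     [1, 2, 3, 4])  -- 10, via (+)
    combined = getProduct (foldMap Product [1, 2, 3, 4])  -- 24, via (*)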
That being said, the language design problem of how to support multiple different typeclass implementations for one type is actually trickier than you'd think. For example, it's easy to fall into the diamond problem if you don't have instance canonicity. If you're interested in how this problem can be solved in practice, check out this lovely paper about Modular Implicits in OCaml [1]. The paper is quite accessible if you have some FP background.
At the language level, FP separates data from behavior. However, in practice, even functional programmers still tend to couple data and behavior.
For instance, not much would change about programming in Haskell if List were defined as an abstract type, so that users couldn't see its constructors or interact with it in any way except through its functions.
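A rough sketch of what that could look like (module and function names made up): export the type abstractly, so only its operations are visible.

    -- Hypothetical module exporting a list-like type abstractly: the
    -- constructor stays hidden, so callers can only use these functions.
    module AbstractList (AList, empty, cons, uncons) where

    newtype AList a = AList [a]

    empty :: AList a
    empty = AList []

    cons :: a -> AList a -> AList a
    cons x (AList xs) = AList (x : xs)

    uncons :: AList a -> Maybe (a, AList a)
    uncons (AList [])       = Nothing
    uncons (AList (x : xs)) = Just (x, AList xs)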
In OOP languages, this is unambiguous because the function/interface implementation is explicitly defined as part of the type. In FP languages, it is a little less clear because the implementation can be defined anywhere (although it should really live near either the data declaration or the typeclass declaration).
> In each case return is singleton (it is not known if there exists a monad on lists with a different return).
The "diagonal" monad join [[a, b, c, ...], [1, 2, 3, ...], [x, y, z, ...], ...] = [a, 2, z, ...] has return a = [a, a, a, ...], though I guess it's hard to define it in a way that's well-behaved for finite lists, so you might not consider it "a monad on lists".
It seems like understanding the exotic monads themselves is not the point. To me, that page mostly shows that monad laws allow some weird Monad instances on the same data structure.
The only exotic monad which was intuitive to me was GlobalFailure, which interprets lists as "Maybe NonEmpty": An empty list signifies failure and aborts the whole computation, while non-empty lists behave as normal.
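I don't know the article's exact definition, but from that description a lawful guess (on a made-up newtype, so it doesn't clash with the standard list instance) would be something like:

    import Control.Monad (ap)

    -- A guess at the GlobalFailure instance described above: [] anywhere aborts
    -- the whole computation; otherwise it behaves like the ordinary list monad.
    newtype GlobalFailure a = GF { unGF :: [a] } deriving Show

    instance Functor GlobalFailure where
      fmap f (GF xs) = GF (map f xs)

    instance Applicative GlobalFailure where
      pure x = GF [x]
      (<*>)  = ap

    instance Monad GlobalFailure where
      GF xs >>= f
        | null xs || any (null . unGF . f) xs = GF []   -- any failure is global
        | otherwise                           = GF (concatMap (unGF . f) xs)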
Mazewalk is more than cute: it also shows up when encoding trees whose leaves are operations and whose branches are relative context modifications (where our key optimisation is that we needn't restore contexts in tail position, just as Mazewalk doesn't palindromise its last unary tree).
Consider it as an alternative to the more common strategy of absolutising the modifications onto the leaves in a first tree walk, and then executing all the now-labelled leaves in arbitrary order in a second pass.
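A rough sketch of that two-pass strategy, with made-up types: each branch carries a relative context modification, the first pass pushes the accumulated absolute context down onto the leaves, and the second pass can then run the labelled leaves in any order.

    -- Made-up types for the "absolutise, then execute" strategy: each branch
    -- applies a relative modification to the context inherited from above.
    data Tree ctx op = Leaf op | Branch (ctx -> ctx) [Tree ctx op]

    -- First pass: label every leaf with its absolute context.
    absolutise :: ctx -> Tree ctx op -> [(ctx, op)]
    absolutise c (Leaf op)       = [(c, op)]
    absolutise c (Branch f kids) = concatMap (absolutise (f c)) kids

    -- Second pass: the labelled leaves can now be executed in any order.
    runAll :: Monad m => (ctx -> op -> m ()) -> [(ctx, op)] -> m ()
    runAll exec = mapM_ (uncurry exec)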