Exotic List Monads (haskell.org)
88 points by curryhoward on Jan 7, 2021 | 25 comments



> The usual list monad is only one of infinitely many ways to turn the List functor into a monad.

So simple, and yet this is a point that I think is rarely made clear enough in "monad explainers." For instance, they almost always talk about "the Maybe monad" -- but this is conflating two things: the Maybe data type, and the Monad instance defined on that type. Propagating "Nothing" is not inherent to the Maybe data type, it's just a convenient behavior to have.

Talking about "the List monad" is even more confusing for a newcomer. When they hear "the List monad implements a kind of nondeterminism," it sounds like nondeterminism is a property inherent to lists themselves -- but of course it is nothing of the sort. All Monad instances are, in a sense, arbitrary.
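
To make that concrete, here is the standard instance spelled out on a hypothetical standalone copy of the type (Maybe' is just an illustrative name): the Nothing-propagation lives entirely in the Monad instance, not in the data declaration.

    data Maybe' a = Nothing' | Just' a

    instance Functor Maybe' where
      fmap _ Nothing'  = Nothing'
      fmap f (Just' x) = Just' (f x)

    instance Applicative Maybe' where
      pure = Just'
      Nothing' <*> _ = Nothing'
      Just' f  <*> x = fmap f x

    instance Monad Maybe' where
      Nothing' >>= _ = Nothing'   -- "propagate Nothing" is a choice made here...
      Just' x  >>= f = f x        -- ...not something the data declaration forces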


> Propagating "Nothing" is not inherent to the Maybe data type, it's just a convenient behavior to have.

nitpick about this particular example: is there another lawful implementation of Monad for Maybe? i can't think of any, apart from the trivial

  pure _  = Nothing
  _ >>= _ = Nothing
(eyeballing the lawfulness, but it'll all be `Nothing` so all the equalities should hold, trivially :D)


The trivial implementation isn't law abiding. This law doesn't hold (written in Kleisli form for simplicity)

    pure >=> f == f  (left identity)
There are no other monads for Maybe. First, pure must be Just: pure = const Nothing fails left identity, and parametricity prevents any other funny business. Now, by the left identity law (in >>= form) we know

    pure a >>= f == f a
Thus, we must define

    Just a >>= f = f a
So the only freedom left is what (Nothing >>= f) does. For (f :: a -> Maybe b) we must end up with a Maybe b. We don't have a b to start with, and we can produce Maybe values only via Nothing, Just, or f itself. So either >>= is the standard definition, or we would have to do

    Nothing >>= f = Just (_ :: b)   -- a b can only come from applying f, so
    Nothing >>= f = f (_ :: a)      -- now we are stuck, there are no values of type a
Thus, we must define

    pure a = Just a
    
    Just a >>= f  = f a
    Nothing >>= f = Nothing
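
If you want to see the failure mechanically, here is a small sketch (assuming QuickCheck is available) that puts the proposed trivial instance on a newtype wrapper and checks left identity:

    {-# LANGUAGE DeriveFunctor #-}
    import Test.QuickCheck

    -- Wrap Maybe in a newtype so the "trivial" instance can coexist
    -- with the real one.
    newtype Trivial a = Trivial (Maybe a)
      deriving (Eq, Show, Functor)

    instance Applicative Trivial where
      pure _  = Trivial Nothing
      _ <*> _ = Trivial Nothing

    instance Monad Trivial where
      _ >>= _ = Trivial Nothing

    -- Left identity: pure a >>= f  ==  f a
    prop_leftIdentity :: Int -> Bool
    prop_leftIdentity a = (pure a >>= f) == f a
      where
        f :: Int -> Trivial Int
        f x = Trivial (Just (x + 1))   -- any f that can produce a Just

    main :: IO ()
    main = quickCheck prop_leftIdentity   -- fails: the left side is always Trivial Nothing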


yeah, and as others pointed out it fails right-identity as well. had a brain fart recapping the laws, though i think i got associativity right at least!


I don't think that holds for left identity:

    pure a >>= f ≡ f a


Exactly, and the right identity is also violated:

    m >>= pure ≡ m


ahh right! serves me right, i should've spent more than 10 secs checking


Everyone experiences this in Haskell, where you make some statement online and then someone tells you some way in which it’s incorrect.

I’m sure most real-world Haskell programs are “incorrect” in some way.


Part of this has to do with the fact that it is not possible (without tricks) to have multiple different typeclass implementations for one type in Haskell (I believe Rust also has this restriction). This language restriction has the tendency to seep into people's way of thinking about monads (and other typeclasses). However, it is not fundamental that a typeclass (or other ad-hoc polymorphism) system has this restriction.

That being said, the language design problem of how to support multiple different typeclass implementations for one type is actually trickier than you'd think. For example, it's easy to fall into the diamond problem if you don't have instance canonicity. If you're interested in how this problem can be solved in practice, check out this lovely paper about Modular Implicits in OCaml [1]. The paper is quite accessible if you have some FP background.

[1] https://arxiv.org/pdf/1512.01895.pdf
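
For reference, the usual "trick" in Haskell is one newtype per instance; e.g. Sum and Product from Data.Monoid give the same numbers two different Monoid instances:

    import Data.Monoid (Product (..), Sum (..))

    -- Two different Monoid instances over the same underlying Int,
    -- selected by a newtype wrapper rather than by the type itself.
    main :: IO ()
    main = do
      print (getSum     (foldMap Sum     [1 .. 5 :: Int]))  -- 15
      print (getProduct (foldMap Product [1 .. 5 :: Int]))  -- 120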


At the language level, FP separates data from behavior. However, in practice, even functional programmers still tend to couple data and behavior.

For instance, not much would change about programming in Haskell if List were defined as an abstract type, so that users couldn't see its constructors or interact with it in any way except through its functions (a sketch of such a module is below).

In OOP languages, this is unambiguous because the function/interface implementation is explicitly defined as part of the type. In FP languages, this is a little less clear, because the implementation can be defined anywhere (although it should really live either near the data declaration or near the typeclass declaration).
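
As a sketch of that abstract-List idea (module and function names here are hypothetical), the export list is what hides the constructors:

    -- Constructors stay inside the module; only the interface escapes.
    module Data.AbstractList
      ( List          -- the type is exported, its constructors are not
      , empty, cons, uncons, toList
      ) where

    data List a = Nil | Cons a (List a)

    empty :: List a
    empty = Nil

    cons :: a -> List a -> List a
    cons = Cons

    uncons :: List a -> Maybe (a, List a)
    uncons Nil         = Nothing
    uncons (Cons x xs) = Just (x, xs)

    toList :: List a -> [a]
    toList Nil         = []
    toList (Cons x xs) = x : toList xs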


Only really true for the common (bad?) OOP languages; the CLOS system in Common Lisp decouples the two and adds multiple dispatch for added spice.


As someone who recently learned monads, this exactly was a point of confusion for me.


> In each case return is singleton (it is not known if there exists a monad on lists with a different return).

The "diagonal" monad join [[a, b, c, ...], [1, 2, 3, ...], [x, y, z, ...], ...] = [a, 2, z, ...] has return a = [a, a, a, ...], though I guess it's hard to define it in a way that's well-behaved for finite lists, so you might not consider it "a monad on lists".


for finite lists, why not just have it return in k-diagonal order? (sort of like Hope's "diagonal comprehension" but with wraparound repetition)


No comments, because no one understands these (at least, I don't).


Haskellers are asleep, let's post impure functions!

function() {console.log(Math.random())}


:: IO ()


Blasphemy!


That's not a function. That's a procedure.


Haha, impure function go brrr


It seems like understanding the exotic monads themselves is not the point. To me, that page mostly shows that monad laws allow some weird Monad instances on the same data structure.

The only exotic monad which was intuitive to me was GlobalFailure, which interprets lists as "Maybe NonEmpty": An empty list signifies failure and aborts the whole computation, while non-empty lists behave as normal.
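
My reading of that description as code, as a sketch (not necessarily the package's exact definition; GF/runGF are names I made up): an empty list anywhere poisons the whole computation, otherwise it behaves like the ordinary list monad.

    {-# LANGUAGE DeriveFunctor #-}
    import Control.Monad (ap)

    -- GF / runGF are names I made up for the sketch.
    newtype GlobalFailure a = GF { runGF :: [a] }
      deriving (Eq, Show, Functor)

    instance Applicative GlobalFailure where
      pure x = GF [x]
      (<*>)  = ap

    instance Monad GlobalFailure where
      GF xs >>= f
        | null xs || any (null . runGF . f) xs = GF []   -- one failure aborts everything
        | otherwise = GF (concatMap (runGF . f) xs)      -- otherwise the usual list monad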


I appreciate comment threads like this because it shows me I have a lot to learn. HN is a great place to encounter people smarter than you.


I don’t see how this has broad appeal, but at least the Mazewalk interpretation is cute.


Mazewalk is more than cute: it also occurs when encoding trees whose leaves are operations and whose branches are relative context modifications (where our key optimisation is that we needn't restore contexts in tail position, just as Mazewalk doesn't palindromise its last unary tree).

Consider it as an alternative to the more common strategy of absolutising the modifications onto the leaves in a first tree walk, and then executing all the now-labelled leaves in arbitrary order in a second pass.


I think it's just meant to be fun and/or illustrative of a point.



