For any aspiring Haskell programmer out there, my advice is to stay away from blog posts describing monads using analogies of some kind. The fact is that you do not have to "understand" monads in order to write practical programs that use the IO monad or even something more complicated like Parsec. I've seen too many people stop using Haskell when it's time to start doing something that isn't purely functional, because they were afraid of venturing into unfamiliar territory.
Here's a simple two part exercise that should give you enough of an understanding about the basics of using IO and give you a general idea about monads.
1. Write a "guess the number" game
2. Write it without "do" notation, using (>>=)
That's it. If you're at all like me, this should give you a better start than reading theory about the subject. Now that you have a basic understanding about the practice, it's easier to grok the theory.
And even if you do not understand the theoretical aspects of it by now, you know enough IO to be able to read and write files, connect to the internet using sockets or do some HTTP requests. That opens a lot of possibilities.
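For anyone who gets stuck on part 2, here's a minimal sketch of one way it can look (the range, prompts and helper name are my own arbitrary choices, not part of the exercise; needs the random package):

import System.Random (randomRIO)

-- Everything chained with (>>=) and (>>) instead of "do" notation.
main :: IO ()
main =
  randomRIO (1, 100 :: Int) >>= \secret ->
  putStrLn "Guess a number between 1 and 100." >>
  loop secret

loop :: Int -> IO ()
loop secret =
  getLine >>= \line ->
  case reads line of
    [(n, "")]
      | n < secret -> putStrLn "Too low!" >> loop secret
      | n > secret -> putStrLn "Too high!" >> loop secret
      | otherwise  -> putStrLn "You got it!"
    _ -> putStrLn "Please enter a number." >> loop secret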
The slides suggest an "Eightfold Path to Monad Satori":
1) Don't read the monad tutorials.
2) No really, don't read the monad tutorials.
3) Learn about Haskell types.
4) Learn what a typeclass is.
5) Read the Typeclassopedia.
6) Read the monad definitions.
7) Use monads in real code.
8) Don't write monad-analogy tutorials.
I've found this advice helpful. Monads are something you'd probably invent yourself once you know the rest of the language well enough.
Lots of great Haskell material on his site. In particular, this derivation of the State monad from first principles was the "Aha!" moment for me in understanding monads.
> Monads are something you'd probably invent yourself once you know the rest of the language well enough.
Very true. I understood monads far better after I wrote some explicitly state-threaded code (where every function took a state and returned (state, actual_return)), wrote functions equivalent to >>= and return, and then turned the result into a monad.
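For anyone who hasn't done this exercise, a minimal sketch of the hand-rolled version (using the conventional (result, state) pair order, the reverse of the parent's; the names unit and bind are arbitrary):

newtype State s a = State { runState :: s -> (a, s) }

-- The equivalent of return: yield a value, leave the state alone.
unit :: a -> State s a
unit x = State (\s -> (x, s))

-- The equivalent of (>>=): run the first computation, feed its
-- result and the updated state into the next one.
bind :: State s a -> (a -> State s b) -> State s b
bind m f = State (\s -> let (a, s') = runState m s
                        in runState (f a) s')

-- A tiny example: a counter that returns the old value.
tick :: State Int Int
tick = State (\n -> (n, n + 1))

-- runState (tick `bind` \a -> tick `bind` \b -> unit (a + b)) 0
--   ==> (1, 2)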
The 'Monad tutorial' linked in the article doesn't use analogies. Instead, it takes you through a couple of pieces of code that share an obvious abstraction, then shows you that these abstractions are all the same, and that this abstraction is called Monad. It's really very good; you should read it.
If anyone reading this wants to see what a basic "guess the number"-style game looks like in Haskell, I wrote one with the same basic logic, but a ton of different coding styles as well as java code that implements the same logic. https://gist.github.com/cleichner/6086604
Note: As you go farther down the page, the techniques become increasingly overkill for a program this small. For this sort of program, I actually like RandomProblem2.hs the most.
There are plenty of learning materials to get to that point. There are probably a few papers on writing GTA5 and World of Warcraft too. Unfortunately, almost no one is writing about how to do Tetris, Mario or Wolfenstein 3D.
I suspect one of the biggest barriers to adoption that Haskell specifically has is the gaping void in intermediate-level material. When the concepts are so alien to what most programmers work with, even those with some basic knowledge of functional programming ideas more generally, you really need the books or websites that come after things like RWH or LYAHFGG -- things that show how to build non-trivial software, scaling up the toy examples to something more realistic.
It's great to see sites like the one that started this discussion trying to bridge that gap, but I suggest that understanding relatively obscure things like heavyweight parsing frameworks or lenses isn't the most important goal. While they are certainly interesting ideas, this feels like trying to teach people how to write a chess AI with an 1800 rating. When you're just trying out a new language to get a feel for how it works, writing Tetris is probably a better bet.
hell i love me some monads! might just create a language that's pure monadic. (name it FANB (Fortran_a_new_beginning) and it'll have hookers and blackjack! in fact, forget the blackjack)
The first two monad papers, "You could have invented monads" and "Monadic Parsing in Haskell", are terrific.
I think that monads and most monad tutorials are great examples of "this concept is hard because everyone says it is." It reminds me of engineering school, where a lot of people sit around and complain about how hard something is instead of actually trying to do that particular thing. Monads really aren't super difficult to grasp, and I wish people would stop making tutorials that only serve to confuse learners more.
There are a lot of concepts that need to be learned before monads become simple; namely, functional programming in general (including the lambda calculus, currying and function composition), plus Haskell's type system and how type classes work. All of this needs to be understood at a fairly intuitive level. Add to this that the way monadic code is commonly written, and the `do` syntax that supports it, can mislead newcomers into reading monadic code as if it were imperative code. Add also that many of the most common monads, like the State monad, have clever instances involving passing functions around, which are beautiful but hardly intuitive. Add further that many monads used in practice are monad transformers, increasing the complexity and opacity and making production code hard to grok for beginners.
So there are many legitimate reasons why monads are a hard concept. That said, I completely agree that the best way to learn them is to actually try to use them. In fact, I think that the main reason monad tutorials are seen as useless is not because they're poorly written, but because no tutorial can teach you monads a fraction as well as you will learn them by actually using them.
Wikipedia has a pretty good analogy explanation. I like the idea of monads being "programmable semicolons" which are used to inject side effects between purely functional applications.
So if I understand monads correctly then they are basically functions that work like a Unix pipe between functions which takes input data in a monadic container, performs some side effects (I/O for instance), and yields another monadic container with output data that is forwarded to the next function - all that without violating the purely functional character of the whole application chain.
What you're describing is the IO monad, to a first approximation. Most monads don't perform side effects at all (List, Cont, Maybe, ...); the bind operation only performs some pure overloaded operation specific to the monad instance. For example, the list monad's bind is just concatMap, which is pure:
instance Monad [] where
  m >>= f = concat (map f m)
  return x = [x]
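To see that pure "pipe" behavior concretely (an illustrative snippet, not from the parent; the name is made up):

-- Each element is fed through the function and the results are
-- concatenated, all purely:
piped :: [Int]
piped = [1, 2, 3] >>= \x -> [x, x * 10]  -- [1,10,2,20,3,30]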
"they are basically functions that work like a Unix pipe between functions which takes input data in a monadic container, performs some side effects (I/O for instance), and yields another monadic container with output data that is forwarded to the next function - all that without violating the purely functional character of the whole application chain."
That much is true of any applicative functor (which is a superset of monads: every monad is an applicative functor). The additional power Monad gives you is the ability to change the later portions of that chain based on the earlier results.
Of course, a lot of uses of "monads" don't really make the distinction (and applicative functors are plenty useful).
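To make that distinction concrete, a toy Maybe example (halve is a made-up helper):

-- With Applicative, the shape of the whole chain is fixed up front;
-- with Monad, each later step can depend on the earlier result.
halve :: Int -> Maybe Int
halve n = if even n then Just (n `div` 2) else Nothing

-- Just 8 >>= halve >>= halve   ==>  Just 2
-- Just 6 >>= halve >>= halve   ==>  Nothing  (3 is odd, so the chain stops)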
I'd say it is in fact a familiar abstraction (as the "You Could Have Invented Monads" article shows) that's presented in an unfamiliar way with unfamiliar syntax. Taking something you already understand intuitively and learning to express it in a new formal framework (like much of Haskell) is often harder than learning something brand new in a formal model and slowly developing an intuition for it (like much of physics). Sometimes gaining a formal understanding enriches your intuition and vice versa, but I haven't experienced anything like that with Haskell so far.
I had taken a stab at this with my lens tutorial[1]. But if you just want to use lenses, Joseph Abrahamson's posts[2][3] are much better. And of course, SPJ's talk is great. He's an excellent public speaker.
> I'm really disappointed that so many critical Haskell topics only have a single source of truth that is difficult to find.
I found the same thing when I started learning Haskell a few years ago, so I started a blog on advanced topics[4]. Since then there have been a lot of excellent haskell posts. In particular, Stephen Diehl[5] and Christopher Done[6] write interesting and easy-to-read posts. The School of Haskell[7] also has advanced, interactive posts by very smart Haskell folks. So this problem is not as dire as this post makes it sound.
I think that monads are hard for non-functional programmers to grasp for two reasons: They take a different way of thinking, and "monad" is the wrong name.
"The name is wrong?" the Haskell crowd replies. "Not at all! Look at the function signatures, and look at the definition of a monad in abstract algebra! It is a monad!"
Well, any function with the signature a -> a -> a makes its type a magma (per abstract algebra), and those are all over the place in Haskell. But we don't hear much about magmas in Haskell. We do hear a lot about monads. Why? Because monads are magic in Haskell, and magmas are not.
But here's the thing. Monads are magic for reasons that have to do with programming in Haskell, but that really don't have much to do with abstract algebra. That's why it's a bad name. It's as if I told you that something was "an enumerable set of bricks" rather than "a three-bedroom house". "An enumerable set of bricks" is in fact an accurate description, but it is not a useful one.
Associative operations of type (a -> a -> a) show up everywhere and are often named as Semigroups; there's a (somewhat) popular library which provides all kinds of hooks into the abstract algebra in that space. When such an operation also has an identity element, we call it a Monoid, and that typeclass is built in.
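Concretely, with the built-in class (standard instances, nothing exotic; the definition names are made up):

import Data.Monoid ((<>))

-- (<>) is "the" associative operation of whichever Monoid you're in,
-- and mempty is its identity element.
greeting :: String
greeting = "foo" <> "bar"              -- "foobar"

flattened :: [Int]
flattened = mconcat [[1], [2, 3], []]  -- [1,2,3]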
In short, there's a continual tension between wanting to reify the entire abstract algebraic hierarchy and the weight and arbitrariness of doing such a thing. To resolve this, the community tends to favor typeclasses which have enough law structure to determine uniquely what the semantics of that algebraic pattern ought to be.
For Monad that is absolutely the case. There is wonderful algebraic structure to monads and thus they are named.
For Magma... not so much. So you've got an operator, eh?
Magmas aren't mentioned much in Haskell because they don't have much structure to them; a magma is just a closed binary operation, and typically we'd refer to them as being subsumed by a Semigroup(oid) or something more structured. Monads, on the other hand, are "magical" in some sense because they have all this rich structure that we can use to model computations of all kinds in a very abstract and useful way.
As for the name "monad", it's an arbitrary name that comes from mathematics, but I'd argue it's as good a term as any until the English language comes up with a first-class way of talking about the relationships between natural transformations! The abstract algebra names are great because they force a different mode of thinking than the "nominative adjective" style of naming that OO languages tend to prefer.
I understand what you're saying. My point was largely that I don't think there can be a better name: the English language doesn't have terms that describe abstract concepts like monads well. That's why mathematicians invent new arbitrary words with a precise meaning tied to a very specific set of definitions and rules.
I agree to an extent, but I think it's a minority opinion. I had a similar point to make about monoids, and would rather see them called "addables" or something similar. Yes, yes, I know, they're not just about adding things... but a great majority of times, `mappend` is used as an adding operation, or something similar (union of sets, concatenating strings, etc). But I didn't receive much support for this view, because "look at the definition of a monoid in abstract algebra! It is a monoid!"
It's this kind of thing that makes Haskell so cool, but also makes it so unlikely that it will ever see real widespread adoption. Haskell is a language based heavily on mathematical theory. There's more of a focus on expressing mathematical concepts and hewing to theoretical correctness than on being practical. Why are there no (real) exceptions in Haskell? Because you can do it with monads. Why aren't there member variables on objects? Because you can do it with functions (or better yet, lenses! a shiny new abstraction to learn!). Why is there no stack trace when a runtime error occurs? This one I don't have a good answer to, but I think it has to do with laziness, which has its roots in theory as well.
I love Haskell and have written several large projects in it. I write it almost every day, currently in the middle of a project with ~1,500 lines of code written over the last month or so. But a lot of it feels like bending over backwards to allow features that would be completely basic in other languages, because of the thing I was talking about above: the starting point is the theory, and the practical comes after. The names of the types are just the tip of the iceberg on this one. There are many great and practical libraries for Haskell (some of which this post's author has written), and many are incredibly powerful, expressive and performant, so it's wrong to say that Haskell isn't "for the real world". But it's really a whole different philosophy than 90% of other languages.
All that said I'm not sure I can come up with a better name for Monad. Context? Pipe? Container? Nothing really springs to mind. It's sort of its own ball of wax.
I think Haskell as a community favors demanding more learning up front over abstractions that fray at the edges. You could call Monoids "Addables" and it would make intuitive sense more quickly, but it would (a) run into edge cases later (multiplication is a monoid, as is "take the leftmost argument") and (b) not link to the larger literature of abstract algebra.
That turns a lot of people off, as you mention, since they want the names of things in their language to provide immediate (partial) intuition as to what's going on.
The upshot is that reading Haskell expects more background literature in abstract algebra (or at least a willingness to read the docs for a while to learn a new vocabulary) but suffers fewer intuition breakdowns.
I think that's a general theme of the style of Haskell programming—suffer few intuition breakdowns. It's reinforced by purity, equational reasoning, sophisticated types, top-level annotation style, some parts of laziness, etc. Altogether these pieces fit together to form something unique and interesting.
So that tradeoff of immediate understandability is painful. The community is battling it by (increasingly) building more and more public documentation. Without that tradeoff Haskell wouldn't be as unique and specialized, though.
Yeah, I agree. Haskell has an almost unmatched capacity for abstraction and high-level thinking. This can be a double-edged sword, but it has made Haskell pretty much in a class of its own, and a very special language.
I'm not sure how helpful intuitive names are in practice, especially when the real concept has nuances that it doesn't capture. What do you get by renaming Monoid "Addable" that you don't get from the first line of a tutorial saying "Monoids are things that can be added"? Then, once you've got the intuition, it's much easier to absorb the line further down which says "oh, but they also need to be associative and have a zero element" because you already have a separate name to file these new properties under. It also provides a precise name for further discussion.
Also, on the topic of names for Monads, the key difficulty there is that there are at least two distinct ways of thinking about them which can be applied in different cases. Some Monads are best described by their join (List, Maybe etc.) and some are best described by their bind (IO, State etc.).
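Both views in miniature (join is in Control.Monad; these are illustrative snippets):

import Control.Monad (join)

-- join collapses one layer of structure; bind maps and then collapses.
-- They are interdefinable:  m >>= f  ==  join (fmap f m)
flatList :: [Int]
flatList = join [[1, 2], [3]]    -- [1,2,3]

unnested :: Maybe Int
unnested = join (Just (Just 5))  -- Just 5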
Another misconception: when people talk about monads they usually mean the 'magic' monads, first and foremost IO, but also State and others, while monads like Maybe are not so magic, and common things like lists are also monads even though they don't look magic at all.
I'm not sure I agree with your point about the name. Practically speaking, what should it be called? Most proposals I've seen rely on some sort of analogy which doesn't hold true in general.
An introduction to type-level programming should also be on the list, I think—but there are no great blog posts yet about newer type-system extensions in GHC, as far as I'm aware.
You can find some good posts on older techniques and features (phantom types, GADTs), but there's not much out there introducing type families other than material riffing on the papers that introduced them. (The papers are usually quite readable, with well chosen examples, but ...)
For example, there's almost nothing beyond the docs about the tweaks in GHC 7.6–7.8 that make doing computation at the type-level easier. Different type-system extensions can get you many of the same places (e.g., GADTs v. type families, particularly the new closed variety), and I haven't seen much advice about when to prefer one variation to another (hint: it usually helps to think about type-function injectivity/non-injectivity and definitional openness/closedness).
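For a taste of how those pieces combine, here's the classic length-indexed vector sketch (needs GHC 7.8+ for the closed type family):

{-# LANGUAGE DataKinds, GADTs, KindSignatures, TypeFamilies #-}

data Nat = Z | S Nat

-- A GADT indexed by a type-level Nat: the length lives in the type.
data Vec (n :: Nat) a where
  Nil  :: Vec 'Z a
  Cons :: a -> Vec n a -> Vec ('S n) a

-- A closed type family: addition on type-level naturals.
type family Add (m :: Nat) (n :: Nat) :: Nat where
  Add 'Z     n = n
  Add ('S m) n = 'S (Add m n)

-- Appending vectors; the result length is computed at the type level.
append :: Vec m a -> Vec n a -> Vec (Add m n) a
append Nil         ys = ys
append (Cons x xs) ys = Cons x (append xs ys)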
Richard Eisenberg's blog [0] might be the best source of insights on this stuff right now, but as a researcher he's often out closer to the bleeding edge than best suits a beginning type-level hacker.
-----
Another topic with poor coverage: Template Haskell.
-----
Roman Cheplyaka's post about monad transformers [1] is one of my favorites, by the way. More code than prose, and it's not a from-the-ground-up tutorial, but if you understand the examples, you will understand transformers and why they are actually useful in real-world code.
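If you want a flavor before clicking through: a transformer stack like StateT over IO gives you pure state-threading and real I/O in one monad. A minimal sketch using mtl (my own example, not code from the post):

import Control.Monad.State

-- StateT Int IO: a counter in the state, printing via the underlying IO.
counter :: StateT Int IO ()
counter = do
  n <- get
  lift (putStrLn ("count: " ++ show n))
  put (n + 1)

main :: IO ()
main = evalStateT (counter >> counter >> counter) 0
-- prints: count: 0, count: 1, count: 2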
Ezyang recently posted "Haskell for Coq Programmers"[1], which I think explains the limitations of type-level programming in Haskell pretty well. It made a lot of sense to me because it approaches the features from a language with a more expressive type system and shows what capabilities Haskell has.
Good post - I bookmarked this. As a Scala dev, I find higher-kinded types hard to approach and hard to find practical uses for. I'd like to get into Haskell just to improve my ability to work with functional programming in industry as a Scala developer. This post was much appreciated.