Railway-oriented programming (fsharpforfunandprofit.com)
155 points by bunderbunder on June 13, 2014 | 52 comments



The elided punchline is that this entire essay is "just the `Either` monad".

If you can forgive the snark there, the point to take home is that this entire pattern of programming can be nicely summarized by some very high-level concepts like Monads. That's one "why" for "why should I learn what 'monad' means".

A very similar essay could be written for non-deterministic programming ("it's 'just' the List or Logic monad"), for backtracking parser combinators ("they're 'just' the State + List monad"), or for state threads themselves ("they're 'just' the State monad"). Each of these programming patterns is neatly summarized by the "obvious" behaviors of a simple type.
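
To make that concrete, here is a minimal sketch of the article's pipeline written directly against the Either monad (the validation functions are hypothetical stand-ins, not the article's actual code):

    import Data.Char (toLower)

    -- hypothetical stand-ins for the article's validation/transformation steps
    validate :: String -> Either String String
    validate s = if null s then Left "name must not be blank" else Right s

    canonicalize :: String -> Either String String
    canonicalize s = Right (map toLower s)

    -- (>>=) is the railway glue: a Left short-circuits, a Right flows on to the next step
    process :: String -> Either String String
    process input = validate input >>= canonicalize >>= \s -> Right (s ++ "!")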

One of the powers of Haskell is that it lets you abstract over patterns like this and talk about them inside your language. That's why Haskell has such a fetish with Monads—it's as though GoF had been formally encoded into the language. This notion comes up a lot under the name "Monad" but it might also be "Profunctor" or "Applicative" or "Category".

I really love this essay because it really goes through all the details. Oftentimes an advanced Haskell/ML/F# programmer might summarize a large topic as "it's just the X monad", anticipating that someone familiar with this kind of programming can reverse-engineer almost all of the meaning from that plain statement. This is typically true, but it certainly expects a lot of experience from the listener.

This essay unpacks all of that meaning and presents it in easily digestible bits. That's great technical writing.


I naturally tend to use this "railway" approach in my code. But often I stumble upon various problems:

1. The "success, but..." scenario. Sometimes you have different types of failures to handle and this style conflates all of them into "failure". For example, I want to WARN about something my caller; everything is OK, but there is something to be known (The call went fine, but the file was not there so I created it). Should information about that value be passed on the failure pipe? If I do so I lose the ability to do simple checks like "if failure.empty?". A third pipe for "warn"? And another one for "info"? Mhhh...

2. There are different ways to fail. There are cases in which you really should not go on with any computation, and there are cases in which you can go on, just noting that you are in "danger" mode (ENOSPC).

3. What about transient failures? Should "Network just went down" be treated like "Server returned HTTP 404"?

4. Sometimes you want to just ignore all the errors because you are prototyping something and you do not want a lot of boilerplate code lying around. But you definitely do not want a catch-all that hides the errors.


"1. The "success, but..." scenario. Sometimes you have different types of failures to handle and this style conflates all of them into "failure"."

No, it doesn't. I'm not sure where you get that, because even in this example "Failure" carries an error message. That's a string for didactic purposes; it can be anything you need it to be.

"2. There are different way to fail. There are cases in which you really should not go on with any computation, there are cases in which can go on, just note that you are in "danger" mode (ENOSPC)."

Here is where avoiding the word "monad" probably does not play to the author's favor. Monadic computations are able to examine the results of computations, so nothing stops you from

     case somethingThatMayFail arg1 arg2 of
         Right x -> do
             ... -- carry on the computation
         Left x -> -- fail out however you're going to fail
Which I believe also handles the rest of your questions... the pattern conveniently propagates errors when you don't want to treat them specially, but you can still dig in and examine them if you need to. Whatever you want to do, you can do, because it's just a program like anything else. For 3, the answer is "just write it the way you want it"... there isn't really an "answer" because that's not a problem in the first place, any more than it would be in C. And for 4, again, just do that then. Nothing's stopping you.
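
For instance, for the transient-failure case (a hedged sketch with made-up `Transient`/`Fatal` constructors and a stubbed-out `fetch`), you can pattern-match on the error and retry transient failures while letting everything else propagate:

    data AppError = Transient String | Fatal String

    fetchWithRetry :: Int -> IO (Either AppError String)
    fetchWithRetry 0 = return (Left (Fatal "out of retries"))
    fetchWithRetry n = do
        r <- fetch
        case r of
            Left (Transient _) -> fetchWithRetry (n - 1)   -- "network just went down": try again
            other              -> return other             -- success, or a failure worth propagating
      where
        fetch :: IO (Either AppError String)
        fetch = return (Right "stub response")             -- placeholder for a real network call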


Totally agree with you re 1. The "success, but..." scenario.

We're trying to figure it out at the moment in Snowplow. We process raw events and either accumulate errors on Failure or compose an enriched event on Success. The trouble is, we would like to switch some of these errors to warnings, so we would need to push both the warnings and the enriched event through on Success.

I think we need to switch:

Validation[NonEmptyList[ProcessingMessage], EnrichedEvent]

to something like:

Validation[NonEmptyList[ProcessingMessage], Tuple2[List[ProcessingMessage], EnrichedEvent]]
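
In Haskell terms, that shape would look roughly like this (just a sketch with hypothetical type names, not our actual Scala):

    type Warning = String
    type Error   = String
    data EnrichedEvent = EnrichedEvent deriving Show   -- stand-in for the real event

    -- errors on the Left still fail the event; warnings ride along on the success track
    enrich :: String -> Either [Error] ([Warning], EnrichedEvent)
    enrich raw
      | null raw  = Left ["empty event"]
      | otherwise = Right (["had to default the collector timestamp"], EnrichedEvent)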

It feels a bit clunky though...


I think you're supposed to split your logic into tiny enough functions that each really only has success or failure outcomes.

I haven't tried it though.


I'm not sure why the author goes out of his way to avoid saying "monad," but this is certainly a monad analogy blog post whether he wants to call it that or not.

Not that that's necessarily a bad thing; I think this article does a great job of teaching the reader about monads without them knowing it. The subject of the post -- error handling in a purely-functional context -- is usually my go-to example for explaining monads.


> I'm not sure why the author goes out of his way to avoid saying "monad,"

Explained here: http://fsharpforfunandprofit.com/about/#banned


I understand why they do it, but I think they're doing themselves a disservice. It's one thing to speak in jargon and wrongly assume your audience will follow you:

"So this is obviously a catamorphism. This evidently a job for Kleisli arrows!".

And a completely different thing is to construct a solution step by step, using familiar ideas, and then show that it can be seen as a monad, then give a brief explanation of why monads are useful and what motivates them. Which is exactly what the extremely didactic Learn you a Haskell site does (seriously: look at their example for the Writer Monad. It doesn't get more didactic and step-by-step than that).
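
For reference, LYAH's Writer walkthrough builds up to something roughly like this (reproduced from memory, so details may differ from the book):

    import Control.Monad.Writer

    logNumber :: Int -> Writer [String] Int
    logNumber n = writer (n, ["Got number: " ++ show n])

    multWithLog :: Writer [String] Int
    multWithLog = do
        a <- logNumber 3
        b <- logNumber 5
        tell ["multiplying " ++ show a ++ " and " ++ show b]
        return (a * b)
    -- runWriter multWithLog
    --   == (15, ["Got number: 3", "Got number: 5", "multiplying 3 and 5"])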

By avoiding the correct technical terms, they are:

- Losing precision.

- Failing to generalize the concept.

- Underestimating their audience's ability to understand new concepts when properly explained.

The last one in particular kills me. I for one do not assume Visual Basic programmers are incapable or unwilling to learn new things.


If you're familiar with the psychological phenomenon known as priming, that may help to explain the situation. For someone who's suffered through a few too many blog posts that explain monads poorly, it is possible that simply mentioning the m-word, even in the very last sentence of the article, may inhibit their ability to comprehend the process. Because of that, there's a certain genius to the article's approach: Give folks who've been struggling with monads and arrows the best possible chance to grasp the concept without being encumbered by all the baggage that has built up around the terminology. If they do get the concept, great, and they're virtually guaranteed to realize shortly afterward, "Oh hey, forehead smack, those are monads!"

(Incidentally, for people who've found themselves thrust into this unfortunate situation, a chapter titled A Fistful of Monads probably isn't likely to help much.)

To that end, I'd even go so far as to say that the worry in your last bullet point underestimates the audience's ability to understand new concepts when properly explained far more than the author did.


> even in the very last sentence of the article,

I am not a psychologist, but isn't part of the definition of priming that it happens before? How would saying something afterwards harm your ability to previously understand something? Well, I guess I could see how that'd be possible, but I'm not sure that's priming anymore.


I don't read every article top to bottom, I would waste too much time doing that. I read the intro, and if I like, I estimate how long it will take me with a full scroll to the bottom. I also read the bottom so I know the point we are working up to. Sometimes the article is too long and I skip an overly descriptive middle section.


With complicated subjects sometimes you need to read the explanation a few times before it completely makes sense.


By now I probably sound like a shill, but have you tried reading Learn you a Haskell? There is nothing scary about it, it really is very simple and didactic. Chapter titles are funny: anyone who finds a title such as A Fistful of Monads next to a colorful drawing of The Man with No Name scary is beyond help... :)

I disagree that I'm underestimating anyone. I'm advocating teaching people. If you never tell them about monads, they can never realize they've been using them on their own, simply because no one is ever going to mention monads to them. It just doesn't work like that; at some point someone must tell you about them (a blog or a paper counts in this context).


In your haste, you've forgotten one of the key premises of this line of discussion.

How can someone have gotten themselves into a situation where the very mention of the word 'monad' generates feelings of frustration and simultaneously never have heard about monads?


From "A Fistful of Monads" chapter in "Learn you a Haskell"

> When we first talked about functors, we saw that they were a useful concept for values that can be mapped over. Then, we took that concept one step further by introducing applicative functors, which allow us to view values of certain data types as values with contexts and use normal functions on those values while preserving the meaning of those contexts.

> In this chapter, we'll learn about monads, which are just beefed up applicative functors, much like applicative functors are only beefed up functors

> Monads are a natural extension of applicative functors and with them we're concerned with this: if you have a value with a context, m a, how do you apply to it a function that takes a normal a and returns a value with a context? That is, how do you apply a function of type a -> m b to a value of type m a?

Some people might find this easy to understand, but I guess that many people won't. I don't think that it is because they are stupid, rather that most people are concrete thinkers rather than abstract thinkers.

You would expect programmers to be more comfortable with abstraction than the general population, but even so, in my experience, the level of abstraction that mathematicians use is a level too far for most programmers.

Why should a concept be generalized when you are just learning it? Would you complain that an elementary book on arithmetic fails to mention that the integers form a ring? And that addition is just a special case of a monoid? What advantage is there to this kind of premature generalization?


But all those concepts -- functors, applicative functors and monoids -- have already been explained with didactic examples in previous chapters of LYAH.

Of course, you cannot expect to jump to any point in the middle of a book or article explaining something you don't know and expect to understand every bit of terminology. Without knowing C++ or OOP, would you jump to the middle of an "OOP with C++" blog and expect to understand the terminology? Ok, answer quickly: what's a method? What's a template? What's the difference between class and instance? What does "static" do? (I hope you get the idea).

There is nothing particularly difficult about Haskell, Monads or FP at the level that is taught in LYAH. Seriously. Sure, there is hard to read code in real Haskell projects (much as it happens with C++, by the way), but not in this tutorial.

This argument about "many people won't understand new or abstract terminology" is pretty weak, and I have trouble understanding why people buy it. Programming IS abstract, but we all learned it. We were all new to crazy terms like "overloading" and "operator precedence" and it didn't deter us. So why on Earth are we deciding that "Monad" and "Functor" are too difficult?


So they decide to cripple themselves, intentionally restricting their knowledge to avoid being seen as pedantic? I could understand trying to avoid those words for beginners, but it doesn't seem like the site targets beginners only.


He explains how and why his code works. Adding too much formal terminology would make it seem like there is another layer of theory you need to grasp before you truly understand what's going on. Complicated terminology feels elitist to beginners, because it undermines their first successes in understanding functional programming.


Because using sloppy language is so much better...


It's probably because every blog post that contains the word "monad" becomes incomprehensible to most people that don't already know what they are.

See also: curse of the monad


I had to look it up:

http://www.i-programmer.info/news/167-javascript/5207-crockf...

"In addition to it begin useful, it is also cursed and the curse of the monad is that once you get the epiphany, once you understand - "oh that's what it is" - you lose the ability to explain it to anybody."


This is not true. I learned about monads by reading websites about monads before I knew what they were. What other way is there to learn new things?


I think the curse alludes to the fact that no one tutorial does a good job of explaining monads, or at least they rarely do.

It's true for me as it took me several tutorials to get the intuition for it and be able to revisit category theory tutorials (which at first seemed unwilling to give examples).


Yeah, I thought the same thing, once I saw

    val bind : ('a -> Result<'b,'c>) -> Result<'a,'c> -> Result<'b,'c>
Because that's precisely the signature of bind (=<<) in Haskell. Although it looks nicer in Haskell. :P

But I do respect the author's desire to avoid getting too mathy and hifalutin with the theoretical stuff, because you can really go overboard down that path...


The corresponding type signature in haskell is

    bind :: (a -> Either b c) -> Either a c -> Either b c 
So really you just save yourself a couple of angle brackets.

That being said, the author does introduce the (>>=) function, but just doesn't say monad because that can turn people off really quickly.


I was slightly mistaken, because I missed how the error type was put last in all of those Results. In Haskell, the error type is typically written first (Either MyError x): since `Either a` is an instance of Monad, we can replace `Either a` with `m` and get

    bind :: (a -> m b) -> m a -> m b
Which is exactly the signature of (=<<). So you don't just get a cleaner syntax; you now have the ability to use `bind` with any monad (Maybe, List, IO, State...).
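
For instance (a small assumed example, not from the article), the very same operator drives Maybe and Either alike:

    half :: Int -> Maybe Int
    half n = if even n then Just (n `div` 2) else Nothing

    exMaybe :: Maybe Int
    exMaybe = half =<< Just 10                      -- Just 5

    exEither :: Either String Int
    exEither = (\n -> Right (n + 1)) =<< Right 41   -- Right 42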


I'd say it's the quotes that are noisy, not the angle brackets. But it's a minor nitpick, the F# one is fine.


Because the author's point wasn't to give a monad tutorial or a monad analogy. The point was to illustrate a programming style. That style happens to be one use of a monad, but that wasn't the point.

There are a whole bunch of us who are more pragmatic than theoretical. We care more about "that approach can simplify my code" than we care about "that's another instance of a monad".


As soon as bloggers try to use the word "monad", they have a tendency to go entirely off the rails into math-heavy definitions that many can't understand (for no reason!), or to fail entirely to explain what a monad actually is and leave everyone more confused than when they started. That is exactly the thing that scares most people off of Haskell.


Have you read Learn you a Haskell? The author is extremely didactic, introducing concepts and motivation before the jargon.

I don't know what about monads feels scary to some people. They aren't actually that hard. There's stuff from C++ that scares the hell out of me, but I've seldom seen people arguing "let's not blog about this C++ feature, because it may scare readers away".


Frankly, even looking at various guides, I still can't really figure out how they work in Haskell, because there are no concrete step-by-step examples anywhere. I can't sit down and say "right, I have these two functions, one looks like `x = 1` and the next looks like `incr y = y+1`, what do I call on what to combine them with a monad, then how do I define a monad which only calls the latter if the former returns 1?", which should be the simplest case.

Haskell's utterly ridiculous amount of syntactic sugar really doesn't help, and I figure its laziness probably doesn't either.


Reading LYAH or the wikibook might be a better way to go, but I don't like it when people say "go read a bunch of stuff" to me, either, so I'll try to help...

Do you want a step-by-step guide for implementing a monad, or using a monad?

The first thing to realize is that Monad is a design pattern. Like any design pattern, it's more applicable to some goals than others.

Could you clarify a bit?

"right, I have these two functions, one looks like `x = 1` and the next looks like `incr y = y+1`, what do I call on what to combine them with a monad, then how do I define a monad which only calls the latter if the former returns 1?"

The best interpretation I can come to is that you want some plumbing that will hide an implicit check that the value being passed along is 1? This would not fit "monad" well - for reasons I can get into if that's actually what you're asking.


But that's precisely how the examples from Learn you a Haskell work...

First the author introduces a simple problem which you could have thought of without knowing anything about Haskell. Then he writes a naive, step-by-step solution without knowledge of Monads, Monoids or whatnot (ok, maybe later chapters assume you've read the previous ones). Then, he shows how the solution he arrived at maps to Monad/Monoid/whatever. He also shows how the solution can be made "more general" than the particular problem you started attempting to solve.

Give it a chance and you'll see what I mean. There's no black magic there.

Also, you can completely bypass do-notation and you'll see there's nothing exceptional about monads.
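
For example (a tiny assumed snippet), here's the same Maybe computation with and without do-notation:

    withDo :: Maybe Int
    withDo = do
        a <- Just 2
        b <- Just 3
        return (a + b)

    withoutDo :: Maybe Int
    withoutDo = Just 2 >>= \a -> Just 3 >>= \b -> return (a + b)

    -- both are Just 5; do-notation is only sugar for the (>>=) chain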


I'm not talking about do-notation. There's plenty of black magic in Haskell. The overuse of operators and abstractions over everything makes it impossible to even figure out what a given function will be taking as parameters, never mind what a function actually does.

I want to think in terms of actual, concrete pieces of data. Types are just definitions for shapes of data. I can't think at the level of "X takes a Y", I think at the level of "the x function takes the result of the y function". Haskell is obviously not the language for me.


I'd be very interested in trying to clarify some of the difficulties you're having with Haskell, if you'd be interested.

I'm working on a tutorial to take people from zero to functional programming thinking and understanding the intuition, and I need people that are interested in learning Haskell but are currently experiencing difficulties grasping the concepts in order to understand which things are the common obstacles.

If you're interested, please email me (my profile should have my email, if not just answer if interested and we'll figure out a way).

Thanks!


IMO the best way to introduce monads to the average programmer is to say that they are a kind of functional design pattern (which is a valid analogy if not an exact categorization).

DPs in object-oriented programming are also an abstract concept that is difficult to grasp at first, but most senior programmers have already gone through it and succeeded; so likening monads to them can inspire an initial sense of confidence that will be useful when learning the concept, and it provides a useful frame for understanding the purpose of using monads in functional code (which unfortunately is missing from most monad tutorials).


Very nice explanation of monads.

sigfpe's blog has the "original" tutorial: http://blog.sigfpe.com/2006/08/you-could-have-invented-monad...


Here's the recording (https://skillsmatter.com/skillscasts/4964-railway-oriented-p...) of the talk that Scott Wlaschin gave at Skills Matter 2014 using what I believe is the same slide deck. It was probably my favorite one from the day.

Warning though, you do have to make an account to view the video.


He presented the same talk at NDCOslo, available here (without registration): https://vimeo.com/97344498

Was a pretty good talk :)


Reminds me a lot of 'optional chaining' in Swift (among other languages, I'm sure), at least on a superficial level. What I'm not sure about at the moment is where best practices suggest you should use them–it seems like some functions are written to accept optionals, while others are not, and it's not clear to me why.


I'm surprised no one has mentioned promises yet, since they're an example of this pattern applied to asynchronous code:

    getData().then(validate).then(update).then(send, handleError)
This reproduces the code flow of the first figure of the article.


The "railway oriented programming" is pretty much the default way of coding in Node.js using the flow control module async. The major feature that async has but that I don't see in this blog post would be that you can split up the "railway" into multiple parallel paths when needed or when it is more efficient to do so, and then join the paths back together again later in a step which depends on the results from each of those parallel paths.

Once I started using async.auto(), it completely changed my way of thinking about how code worked, and I've started coding in a much more functional style. Node.js makes this very easy.


I think the bypass approach is a commonly used technique when a failure in any component exits the routine ... but it's also common for each of the failures to be processed differently, which actually looks like the fan-out that happens in a railway switching yard.

The author's primary example uses a "layout" I've never seen on a railway (http://fsharpforfunandprofit.com/assets/img/Recipe_RailwaySw...). Railway switches are expensive, so you never see multiple switches going the same direction between a pair of tracks. If there are two consecutive switches (going, say, from left to right), one shunts from track A to track B and the other shunts from track B to track A. When space is limited, a cross-over might be used, but these are even more expensive (and provide additional gaps where derailments can happen).


In imperative programming, your code can execute in two states, valid and not-valid. You start in the valid state, then any validation failure moves you to the invalid state and you remain there. Pretty much the core of any validation technique without bells and whistles.

    valid = true
    if valid and not check1(): valid = false
    if valid and not check2(): valid = false
    ...


I much prefer the (equivalent) code:

    if not check1(): return false
    if not check2(): return false
    ...


We do a ton of railway-oriented programming at Snowplow, using the Scalaz Validation[1]. It's a great fit for data processing pipelines.

When we are enriching raw events, we try a few separate validations/enrichments and then either accumulate errors or compose a valid enriched event.

[1] https://gist.github.com/justjoheinz/9184859 [2] https://github.com/snowplow/snowplow/blob/master/3-enrich/sc...


This is quite a fancy explanation of monads. But in the end, is it worth it?

Maybe Go's somewhat verbose way of handling errors isn't so bad after all. It seems easier than teaching your coworkers a bunch of combinators, no matter how you try to sugar-coat it.


It's a false choice. There are ways of handling errors without so much verbosity, and without teaching coworkers a bunch of combinators either.


Having more code with fewer guarantees is probably not worth it.


After thinking about it a bit, this seems to be almost exactly the behavior that try/catch implements.


This is very well done. It's refreshing to see graphical explanations of programming concepts. The analogy with tracks is a very helpful way to make the notion of monadic contexts concrete. Great job!

I'd be curious to know how the author came up with the idea of representing these ideas with tracks, as it is a very fortunate perspective.


Also, the Swift slides (at WWDC) show something similar.



