It saddens me that modern languages have basically implemented their own "monadic" syntax for specific cases without implementing something more generic. Rust, Kotlin, and Swift have `?` for the Option/Error monad, Javascript, C#, Python, and Rust all have `await` for the Promise monad. Both of these things and way more are possible with `<-` in Haskell, and while `do` notation isn't perfect, I'd really like another language that tried to build on that instead of doing ad-hoc syntax for the common cases.
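To make the special-case/general-case gap concrete, here is a small Rust sketch (my own example, not from the comment): `?` is dedicated syntax for exactly the composition that Option's `and_then` (monadic bind) expresses generically, and which a do-notation-like facility would let you write uniformly for Option, Result, futures, and user-defined types alike.

```rust
// `?` is special-case sugar: it desugars (roughly) to an early return
// on None/Err. The same composition can be written with `and_then`,
// Option's monadic bind, which is what do notation would generalize.
fn parse_and_double(s: &str) -> Option<i32> {
    let n: i32 = s.parse().ok()?; // early-returns None on parse failure
    Some(n * 2)
}

// Equivalent, written with the generic bind instead of the `?` sugar.
fn parse_and_double_bind(s: &str) -> Option<i32> {
    s.parse::<i32>().ok().and_then(|n| Some(n * 2))
}

fn main() {
    assert_eq!(parse_and_double("21"), Some(42));
    assert_eq!(parse_and_double("x"), None);
    assert_eq!(parse_and_double_bind("21"), Some(42));
    println!("ok");
}
```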
One of my favorite secrets about C# is that query syntax is actually a form of do notation. You can create your own interfaces that conform to the monad laws, and the compiler will happily let you apply query syntax to them the same way you can to IEnumerable.
That said, it's not something to actually make use of if you're trying to make your code readable, since the keywords are so tightly coupled to the LINQ use case.
I had to do a project in C# many years ago, a small piece of code that needed lots of concurrency and error handling. We were able to write it in a very clean monadic DSL (à la do notation in Haskell or `for` in Scala) by wrapping Task in lawful monad types; we wrote monad transformers, etc. We ended up being extremely productive and getting good performance.
I would almost never recommend doing something like this given the maintenance issues and how long it takes to bring the average C# programmer up to speed, but this was three fairly advanced FP folks working on the application. It was probably some of the cleanest C# I've gotten to work with.
Mozilla ought to chip in for the research into what borrow checking is, so it isn't maimed by function / abstraction boundaries. At that point do notation works fine.
Actually, I'm pretty sure it's a research limitation. It's not clear how lifetimes and locations (in the lvalue sense) relate to more traditional linear logic, which is well understood. And this despite the existing research into regions.
I think lifetimes and lvalues are both reasonably straightforward to understand in terms of linear logic.
IMO, the main difficulty with adding monad-like abstractions to Rust actually lies elsewhere, in the fact that Rust doesn't really have function types -- or rather, it has too many of them. Roughly speaking, Rust does not have a function type constructor. Instead, every function you define in Rust gets its own special, unique type, which implements a function interface -- of which there are three, Fn, FnMut and FnOnce, to represent the different ownership states of the variables captured in a closure.
This means that monad-style interfaces involving higher-order functions and higher kinds will end up needing not just polymorphism over type constructors, but also constrained polymorphism over interfaces. This represents a much more substantial extension to Rust's type system than you might first expect.
The reason Rust does this is because if every definition gets its own type, then type inference can tell you very precisely which functions are called where, which makes inlining a lot more precise and effective. This is an important piece of how Rust turns (for example) iterator-heavy code into efficient loops -- there are actually many fewer indirect calls than it superficially looks like.
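A small Rust sketch of what the comment describes (my own example): two textually identical closures still have distinct, unnameable types, sharing only the `Fn` trait; a generic function bounded by `Fn` is monomorphized per closure type, which is what enables the precise inlining mentioned above.

```rust
// Monomorphized once per concrete closure type F, so each instantiation
// knows statically exactly which function body it calls: no indirect call.
fn call_twice<F: Fn(i32) -> i32>(f: F, x: i32) -> i32 {
    f(f(x))
}

fn main() {
    let add_one = |x: i32| x + 1;
    let add_one_again = |x: i32| x + 1;
    // Even with identical bodies, these have different types:
    // let mut f = add_one; f = add_one_again; // error: mismatched closure types

    assert_eq!(call_twice(add_one, 0), 2);
    assert_eq!(call_twice(add_one_again, 40), 42);
    println!("ok");
}
```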
Personally, I think this was a design mistake, but it is also a choice that I would not revisit now. Once a system is in the wild, with users who depend on you, we are usually constrained to evolutionary development rather than radical redesigns.
I wouldn't think Rust's closure types are to blame. I would say Fn, FnMut, and FnOnce are completely beside the point. In fact Haskell, which never worried about the differing structural properties of variables being closed over in its single function type, is about to adopt a substructural extension far weaker than (and in my view inferior to) Rust's.
Rust has one (family of) function type(s) `fn(...) -> ...`. Well that and `for<...> fn(...) -> ...`. This is fine. Those traits are just sugar.
The bigger problems are:
- No way to abstract over & vs &mut.
- Within a function, we can track the initialized-ness of individual fields (really lvalues/locations broken down into trees), but across function boundaries everything is either fully initialized or behind some unsafe escape hatch.
- Lifetimes within a function are arbitrary subgraphs of the CFG, but between functions they follow a LIFO discipline to model the stack. We would have to break that split to get all 4 combinations. (Also, really the second type of lifetime should be a special stack-management thing used alongside regular lifetimes, not an implicitly different sort of lifetime.)
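The first bullet can be made concrete with a small Rust sketch (my own example): because nothing lets you be generic over `&` vs `&mut`, the same logic gets written twice throughout the standard library and user code (`get`/`get_mut`, `iter`/`iter_mut`, and so on).

```rust
struct Pair {
    left: i32,
    right: i32,
}

impl Pair {
    // Shared-reference version.
    fn larger(&self) -> &i32 {
        if self.left >= self.right { &self.left } else { &self.right }
    }

    // Same logic, repeated verbatim, just to change & into &mut:
    // there is no way to write this once, generic over mutability.
    fn larger_mut(&mut self) -> &mut i32 {
        if self.left >= self.right { &mut self.left } else { &mut self.right }
    }
}

fn main() {
    let mut p = Pair { left: 3, right: 7 };
    assert_eq!(*p.larger(), 7);
    *p.larger_mut() += 1;
    assert_eq!(p.right, 8);
    println!("ok");
}
```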
My problem is not that "Rust just isn't dank enough", but that its expressive power is greater than its abstractive power: you can do more interesting things than you can hope to reuse. This, I think, creates some perverse incentives. Haskell (before this linear proposal, at least) doesn't let you do anything you can't abstract over, even when it means fewer fancy tricks than Rust. I like those incentives better.
I’d be interested in some citations; I wasn’t aware of this. Then again it’s not my area of focus, so that’s not super surprising! We don’t do global type inference as a design decision, and since lifetimes are also generic types, I thought this was also the case. I think you can make a compelling argument that this inference would be bad, regardless of possibility.
https://arxiv.org/abs/1803.02796 has a good intro referencing other work. The paper doesn't go into much detail on the borrow checking side of things, but that author is both thoroughly steeped in the linear types research tradition and knows Rust and mentions it a bunch.
The fact that that paper talks about the affine types far more than the lifetimes obliquely illustrates my point, in my view; I'm not sure it's been stated more directly.
So in Rust today, type inference strategies are less the point this hinges on. It's more about abolishing the difference between intra-procedural control flow / data flow / whatever and their inter-procedural equivalents. Similarly, if we had an annotated version of the MIR which was borrow-check correct under composition, that would also be good. NLL is great, and starts to clarify what borrow checking is, but running it on whole functions makes it unclear to what extent it is a compositional analysis.
This way you can encode the current binding environment into state and state-changing code, and have operations that use the existing type-checking and inference facilities of a current Haskell (or Agda, or something else) compiler to rule out violations.
I keep saying that what is a language feature in Rust, C#, C++, Java, and some other currently hot programming languages is an ordinary library in Haskell.
It's more like it's hard to mix built-in and modeled notions together. Like, I could hand-write some CPS and then compile that all down to gotos, but if I forget to never return, the invariant is broken and the compiler won't catch it. Monads in Rust sort of work if you don't use any "native" control flow.
This actually gladdens me instead of saddening me. Monads are a big can of implicit magic superpowers: a way to make write-only code that doesn't look like Perl. I would rather have my semicolons be good old unprogrammable no-ops than guess which monadic mess is behind them. Explicit is better than implicit.
R Shiny is another example of a reactive programming mess. I spent about 7 or 8 hours trying to write a reactive program for a use case that was actually straightforward (and that I would have preferred to write imperatively).
My use case was basically filters: you have three dropdown lists that change based on the ones you selected previously. Power BI and MS Excel filtering is good enough to solve the problem.
I wonder if there is something written about how MS Excel handles filtering internally; maybe it is something like a directed graph?
> I wonder if there is something written about how MS Excel handles filtering internally; maybe it is something like a directed graph?
Reactive programming is a directed graph underneath. Excel itself is a poster child of reactive programming.
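A toy sketch of the directed-graph idea (my own illustration, not how Excel actually works): derived cells record which cells they depend on, and a change is propagated by recomputing the downstream nodes in dependency (topological) order.

```rust
// Spreadsheet-like reactive toy: a=2, b=3, c=a+b, d=c*10.
// Derived cells are recomputed in topological order of the
// dependency graph; changing an input re-runs its downstream nodes.
fn main() {
    let mut vals = vec![2, 3, 0, 0]; // cells a, b, c, d

    // (target cell, formula over current values), topologically sorted
    let recompute: Vec<(usize, fn(&Vec<i32>) -> i32)> = vec![
        (2, |v| v[0] + v[1]), // c = a + b
        (3, |v| v[2] * 10),   // d = c * 10
    ];

    for (i, f) in &recompute {
        let new = f(&vals);
        vals[*i] = new;
    }
    assert_eq!(vals, vec![2, 3, 5, 50]);

    // Edit cell a, then propagate through the graph again.
    vals[0] = 7;
    for (i, f) in &recompute {
        let new = f(&vals);
        vals[*i] = new;
    }
    assert_eq!(vals[3], 100);
    println!("ok");
}
```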
Speaking of reactive programming, what is the name of the "fuller" form that works on a proper graph, with loops, and can handle fixpoints? Constraint solving?
Great! This means that I was right when I mentioned to a co-worker that Excel (and its filtering) would have an interesting implementation underneath (from my category-theoretic perspective). In fact, I am quite interested to understand the formalisms behind it.
It's worth pointing out that Kotlin has a very powerful monadic abstraction besides nullable: suspend functions, aka coroutines. Arrow KT builds on those to bring you a quasi do notation. They even abuse Kotlin's very limited destructuring support (.componentN functions) to bring monadic variable bindings into their syntax. The following is legal Kotlin code:
fx {
    val (x) = action() // A<B>, where Monad<A>
    val y = x + 5      // "pure" function
    y
}
The block evaluates to a value of type A<B>. It's going to get even better when Arrow Meta is ready for prime time. Arrow is like a neat functional backdoor language hidden inside what seems to be just Java 2.0.
These languages use weaker monad syntax because of the difficulty of conceptualizing programming in a way other than "do arbitrary action A to object B in scope C". You can't gain from a more powerful monad abstraction without losing the constraints of impure functions, dynamic typing, and strict evaluation.
It is therefore incumbent upon Haskellers to make the superiority of powerful abstractions like the monad abundantly clear through the propagation of high-quality software using said abstraction. :)
Monads are not a superior abstraction. Their composition is not commutative and, worse, not even closed (a composition of two monads may not be a monad). Monad transformers are clumsy syntax-wise and have a performance penalty. When I used Haskell, monads were the most meh part: not a pain, but not anything too cool either. Just some syntax sugar to remove a repetitive wrapping function from a few lines of code.
> You can't gain from a more powerful monad abstraction without losing the constraints of impure functions, dynamic typing, and strict evaluation.
Those languages that choose to use options or results with ?. instead of null or exceptions are already choosing to avoid impure functions and use static typing. Monads work fine with strict evaluation (indeed we see post-Haskell languages e.g. Idris using a strict evaluation model).
Monads do work fine in languages with strict evaluation.
In and of itself, that isn't really any more compelling a statement than observing that the strategy pattern works fine in languages with first-class functions. Just because you can do something in a certain way doesn't mean it's your best option. The argument's got to go a bit further than that.
Monads rose to prominence in Haskell in order to deal with some specific technical challenges that stemmed from the project's decision not to make any compromises about lazy evaluation. Using monads for specifically that purpose in a language with strict evaluation would be a case of cargo culting Haskell.
That is certainly not the only use for monads. Haskell uses them as a general-purpose solution for things that other languages typically handle with an absolute zoo of special-purpose features. In that respect, you could argue that monads are, at least in the abstract, better than all those features, because they're one thing that can do the work of many.
However, if you're working in a language that already has some mix of strict evaluation, mutable variables, async/await keywords, exceptions, elvis operators, null punning, etc., introducing monads to solve those problems risks running afoul of the xkcd.com/927 problem. Especially if those existing language features have a tendency to not understand and respect the fact that they're now being used in code that's supposed to be observing the monad laws. Even implementing `Maybe` in a language with nulls can be surprisingly tricky to get right.
Which is where I stop having much interest in holy wars about this stuff. If you're lucky enough to be working in Haskell or a language inspired by it, that's awesome, you've got a whole menagerie of category theoretic abstractions to work with, and a language that's actually designed to help you use them with confidence. If you're not, then I'd argue that their use needs to be justified in much more specific, pragmatic terms, including an eyes-wide-open perspective on what the ergonomics and technical challenges are going to be like. Because, believe me, it's no fun having to work with a Try monad that can both return and throw errors.
> Monads rose to prominence in Haskell in order to deal with some specific technical challenges that stemmed from the project's decision not to make any compromises about lazy evaluation.
Disagree. To an outsider, IO is the most prominent monad because it's very visible in small example programs (and something that separates the Haskell versions of those small example programs from those in other languages). But to a practitioner it's not a particularly important or interesting example, most use of the monad abstraction is not about IO, and I would certainly hope that the language (in the broad sense) would have adopted it with or without IO.
> Using monads for specifically that purpose in a language with strict evaluation would be a case of cargo culting Haskell.
Disagree; even in languages which define an evaluation order, it's still very much implicit to a human reader. Being able to distinguish between accidental ordering and intentional ordering is really helpful to enable fearless refactoring. Like I said, post-Haskell languages like Idris often use strict evaluation but still see value in managing IO explicitly.
> However, if you're working in a language that already has some mix of strict evaluation, mutable variables, async/await keywords, exceptions, elvis operators, null punning, etc., introducing monads to solve those problems runs the real risk of running afoul of the xkcd.com/927 problem.
The special-case syntax is often more suitable to the special cases it was designed for (it would be pretty tragic if it weren't). But the general-case syntax is necessary for general-case code. It can work pretty nicely as long as there's a direct transformation between one and the other; compare e.g. different ways of iterating through a collection, where you'll generally have a fully general function/syntax but also sugar that's more appropriate to restricted special cases.
> Especially if those existing language have a tendency to not understand and respect the fact that they're now being used in code that's supposed to be observing the monad laws.
I'd argue that if your special-case solution doesn't conform to the monad laws then you already have a problem, even if you don't realise it yet. E.g. the fact that in many languages code that uses a map will break if that map contains null values, and the fact that implementing a valid maybe monad on top of null is difficult-to-impossible, are both reflections of the same underlying problem.
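A small Rust illustration of that underlying problem (my own example): a lawful Option nests, so "key absent" and "key present with an empty value" stay distinct, whereas a null-returning map lookup conflates the two.

```rust
use std::collections::HashMap;

fn main() {
    // The outer Option answers "was the key there?"; the inner Option
    // is the stored value itself. In a null-punning language, both
    // cases collapse into the same ambiguous `null` from map.get(k).
    let mut m: HashMap<&str, Option<i32>> = HashMap::new();
    m.insert("present-but-none", None);

    assert_eq!(m.get("present-but-none"), Some(&None)); // key exists, value empty
    assert_eq!(m.get("missing"), None);                 // key absent
    println!("ok");
}
```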
> Those languages that choose to use options or results with ?. instead of null or exceptions are already choosing to avoid impure functions and use static typing.
An impure "function" is any procedure which may have side-effects or produce different results for the same formal arguments depending on the runtime environment or evaluation order. Most languages with option or result types (e.g. Rust) still make extensive use of impure procedures. Haskell and its close relatives are pretty much the only well-known exceptions with first-class pure functions enforced through the type system.
Purity is always a spectrum - Haskell allows non-terminating "functions", for example. Choosing to treat certain kinds of failures as valid function evaluations that can be reasoned about under the normal rules of the language is a step in the direction of purity. More to the point, it's the aspect of purity that's salient when we're talking about whether it makes sense to regard these particular constructs as monads.
> Haskell allows non-terminating "functions", for example.
Non-terminating functions are still pure functions in the mathematical sense. While it does somewhat complicate the use of programs as proofs, a pure function doesn't need to have a defined value for every possible input. A better example might have been unsafePerformIO which, like "unsafe" in Rust, is meant to be used to construct a pure interface to impure code but depends on the programmer to handle it properly. The difference is that Rust doesn't require "unsafe" around all side-effects, just those that may impact memory safety.
I would agree that Option and Result types represent "a step in the direction of purity", but even a little bit of impurity nullifies referential transparency and inhibits equational reasoning.
> ... it's the aspect of purity that's salient when we're talking about whether it makes sense to regard these particular constructs as monads.
A construct is a monad if it obeys the monad laws for all well-typed inputs. In languages like Rust or JS which lack any type-level enforcement of purity the constructs are only monads under the condition that the inputs happen to be pure, not in the general case. For example, for any functor (which includes all monads) we have the law "map f . map g == map (f . g)". However, if f and g may have side effects then substituting one side for the other will interleave the effects in a different order and potentially change the result, so the monad laws are not satisfied.
> Non-terminating functions are still pure functions in the mathematical sense.
No they're not. Mathematically a function must evaluate to a value.
> I would agree that Option and Result types represent "a step in the direction of purity", but even a little bit of impurity nullifies referential transparency and inhibits equational reasoning.
It doesn't nullify anything; the "Fast and Loose Reasoning is Morally Correct" result applies just as well to a language with, say, nulls, as it does to the impure parts of Haskell that it was originally addressed at. The more relevant these aspects of your language are to practical programs, the less useful reasoning one can do, but it's very much a spectrum rather than a binary.
> In languages like Rust or JS which lack any type-level enforcement of purity the constructs are only monads under the condition that the inputs happen to be pure, not in the general case. For example, for any functor (which includes all monads) we have the law "map f . map g == map (f . g)". However, if f and g may have side effects then substituting one side for the other will interleave the effects in a different order and potentially change the result, so the monad laws are not satisfied.
Pure is not a binary; rather the law holds to the extent that the functions are pure (that is, the two sides of the law are equivalent to each other in the same sense that the function is equivalent to its evaluation). Even in Haskell you have cases of the same kind of law violation where two expressions should be equivalent (according to the monad laws) but one terminates and the other doesn't.
> Mathematically a function must evaluate to a value.
For every input within its domain, yes. Whether this is a problem for partial functions will depend on how you define the domain: either exactly the set permitted by the function's type, or the subset of well-typed inputs with an associated value. As far as I can tell the most common definition of the domain for a relation or function is the set of inputs which have at least one associated result; on the other hand, those definitions are not concerned with the function's type. I would say that the input type is only an approximation (superset) of the function's domain, with better type systems permitting closer approximations. (For perspective, I've never known anyone to argue that the result type must perfectly capture the function's range, which is defined much the same way as the set of results associated with at least one input.)
> The more relevant these aspects of your language are to practical programs, the less useful reasoning one can do...
There is indeed a spectrum of varying degrees of impurity among impure languages, depending on both language design and custom among its users. However, there is one key area where the classification is binary, and that is in the answer to the question: Does the language assume referential transparency or not? In Haskell the answer is "yes". The compiler will make substitutions under the assumption that evaluation does not have side effects; if you break that expectation, via unsafePerformIO or other means, the result is undefined. In Rust or Javascript the answer is "no", and various optimizations are prevented because the compiler cannot assume that the evaluation of an unknown function will not have side effects.
> Even in Haskell you have cases of the same kind of law violation where two expressions should be equivalent (according to the monad laws) but one terminates and the other doesn't.
If one side doesn't terminate then you can never get to the point of observing that they have different results. The point of "fast and loose reasoning" is that you only need to prove that the laws are never broken within the program. The condition "map f . map g == map (f . g)" cannot evaluate to False... but that doesn't mean it must evaluate to True. In Rust or JS, however, that condition could evaluate to false in the presence of side effects.
Late reply but I just had to thank you for your comment, I had something close to an epiphany reading it!
Instead of dedicated syntax and language features like try-catch exceptions for things that can go wrong, ? for potentially absent values, just use a single, consistent abstraction that covers all use cases and doesn't require any syntax or language features at all. (=monads)
Even a foreach loop in e.g. Java or C# bakes in a particular iterator interface into the language. The precedent was set long ago. Languages like Swift and Kotlin are just minor evolutions of these previous OOP languages.
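Rust shows the same precedent (my own example): its `for` loop is baked-in sugar over one specific interface, `IntoIterator`, and desugars to a manual loop calling `next()`.

```rust
fn main() {
    let xs = vec![1, 2, 3];

    // The sugared form, hard-wired to the IntoIterator interface.
    let mut sum = 0;
    for x in &xs {
        sum += x;
    }

    // Roughly what the compiler desugars the loop into.
    let mut sum2 = 0;
    let mut it = (&xs).into_iter();
    while let Some(x) = it.next() {
        sum2 += x;
    }

    assert_eq!(sum, 6);
    assert_eq!(sum, sum2);
    println!("ok");
}
```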
Not only that, but React's new API direction replacing component classes with functions is very monad-like. It uses a magical hidden context which gets switched out behind the scenes.
What saddens me even more is that language designers don't deeply investigate previous work (other languages) before designing a new language (which then turns out to be very far from what it could have been with a bit more thinking invested). Yes, I am looking at you, Go and Rust.
Rust is inspired by PL staples like Haskell and OCaml (borrowing concepts and syntax from them), and was partially designed by PL researchers. The lineage of the borrow-checker in Rust goes back a long way to lots of academic works in region-based memory management and linear logic.
Ah, we didn't ignore dependent types, though it's true that we don't have them. However, we are moving in the direction of adding them, so maybe in the future, Rust will be that language for you :)
LINQ is another clear win for monads. There is more to LINQ than monads, of course, but they contribute to the power of the library.
I'm also fond of the data-generation DSLs in property testing libraries, which tend to lean heavily on monads.
I continue to find that, outside of lazy functional programming, where you need them to live, monads are at their best when they're used in understated ways where the pattern may not even be mentioned by name. All too often, they end up feeling more like something Rodney Dangerfield's character from Caddyshack would use to play code golf.
Monads are not about asynchrony or side effects or error handling. They're about uniform ways of expressing composition, which subsumes all the above. Monads are not even really about FP; they're just more obviously useful there. And without good static typing many monads are almost more trouble than they're worth, so I guess I agree with your last sentence.
Now that I'm on the side of "I know what a monad is", let me just say 95% of what people say monads are is complete trash and unhelpful. This includes calling them "semicolons".
I don't think this analogy sets up anyone for success. For a long time now I've wanted to do a series on "FP for OOP folks".
There are already quite a few of those series. F# For Fun and Profit (fsharpforfunandprofit.com) does a good job of explaining the basic concepts, and the author's videos are a fun watch even if you already know everything he's talking about. Ploeh Blog (blog.ploeh.dk) has spilled countless pixels explaining the functional alternatives to various OO ways of doing things.
Monads are not "Programmable Semicolons". I understand the thinking behind the statement, but it really does a disservice to people who don't know monads to use those kinds of analogies.
I think it's a very good analogy. "What happens to control and data" between statements, expressions, and function calls is not taught in depth in a lot of undergraduate CS programs because it's usually handled automatically, and always one way. With monads, this stuff (which all falls under the rubric of composition) opens up to programmer control. So that's why I think the analogy is apt.
I remember elsewhere on HN I got totally shat on for this analogy. The main case where this doesn’t apply is that monads can decide whether or not to execute later computations and that’s why it’s actually more general than a programmable semicolon. I think the programmable semicolon is more applicable to applicative functors than monads.
That is absurd! Was this an April Fools joke (can't load the PDF right now)? I can't imagine any use case for that feature that isn't going to be hell to maintain.