> As for Haskell, do consider that while the base of it is quite simple, just like Lisp
Lisp is only syntactically simple. (Admittedly, it is syntactically the simplest.) Semantically, it is still a mess.
> it also has the massive ball-of-wax that is monads, which people have been trying for years to explain simply. [1]
That is a weird thing to say. Monads are simple: an endofunctor "T : C -> C" with two natural transformations "pure : 1_C -> T" and "join : T^2 -> T", satisfying three coherence laws that basically say "the Kleisli construction yields a category". Of course, explaining monads in terms of "bind" instead of "join" is bound (pun not intended) to result in a huge amount of fail.
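If it helps to see the join-based presentation concretely, here is a minimal Haskell sketch instantiated at the list functor (the names `pureList`, `joinList`, and `bindList` are illustrative, not from base):

```haskell
-- Join-based presentation, instantiated at lists:
--   pureList :: a -> [a]      corresponds to  pure : 1_C -> T
--   joinList :: [[a]] -> [a]  corresponds to  join : T^2 -> T
pureList :: a -> [a]
pureList x = [x]

joinList :: [[a]] -> [a]
joinList = concat

-- "bind" is derivable from join and fmap (here, map), and vice versa:
-- join m = m `bind` id.
bindList :: [a] -> (a -> [b]) -> [b]
bindList m f = joinList (map f m)
```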
> [1] (Though this is mostly because most people trying either don't have the required humbleness to admit they're a hotfix to a core failing of Haskell, or don't dare explain it in those terms.)
It is not a hotfix. It is a feature. Haskell's segregation of effects makes it possible to reason about effects in a compositional manner, using equational reasoning.
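For what it's worth, the segregation is visible directly in the types; a small sketch (the function names are made up for illustration):

```haskell
-- A pure function: same input, same output, no effects possible.
-- The equation  double (double x) == x * 4  holds by plain substitution.
double :: Int -> Int
double x = x * 2

-- An effectful counterpart wears its effect in the type.
doubleAndLog :: Int -> IO Int
doubleAndLog x = do
  putStrLn ("doubling " ++ show x)  -- the effect, marked by IO
  return (x * 2)
```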
That's a simple description, not a simple explanation, and yes there's a difference. An explanation has the additional burden of being easy to understand, which your "simple" explanation is not unless you already have a background in category theory or other relevant experience. What's an endofunctor? What's a "natural" transformation? Is it something more specific than "just a transformation"? What in tarnation is a Kleisli construction? I'm sure you can give good answers to all these questions, but at that point your explanation is neither simple nor easy.
I'm not saying they're bad, I'm saying they're hard, and your pitch needs to be that they're worth the effort, not "come on, they're not that hard". Until I saw your other reply, I truly thought this was a joke. In fact, the "monoid in the category of endofunctors" "explanation" is a classic joke about haskellites.
My understanding of the notion of "simple" is based on the following principles:
1. Short definitions are preferable to long ones.
2. Reusable generic definitions are preferable to overspecific ones.
3. Case analysis should be kept to the bare minimum necessary.
The notion of "monad" fits these principles perfectly:
1. "A monad is a monoid in the category of endofunctors." Short and to the point.
2. You cannot possibly get anything more reusable and generic than category theory. (Contrast with "instanceof" and reflection breaking type safety, and essentially depending on luck and the stars being aligned in order to work.)
3. There is no case analysis whatsoever in the definition. (Contrast with: "if a pointer is invalid, dereferencing it is undefined behavior, otherwise...", "if a downcast is invalid, performing it will result in a ClassCastException being thrown, otherwise...")
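These laws really are just equations with no case analysis; a quick sketch checking them for `Maybe` (the functions `f` and `g` are arbitrary examples, not anything canonical):

```haskell
-- Two arbitrary Kleisli arrows for Maybe:
f :: Int -> Maybe Int
f x = Just (x + 1)

g :: Int -> Maybe Int
g x = if x > 0 then Just (x * 2) else Nothing

-- The three monad laws, stated as plain equations:
leftIdentity, rightIdentity :: Int -> Bool
leftIdentity a  = (return a >>= f) == f a
rightIdentity a = (Just a >>= return) == Just a

associativity :: Int -> Bool
associativity a =
  ((Just a >>= f) >>= g) == (Just a >>= (\x -> f x >>= g))
```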
Note that my understanding of "simple" actually encourages abstraction (for the benefit of genericity), rather than discouraging it. Abstraction might make things less "easy" (this is subjective, though), but in no way does it make things less "simple" (this is objective).
I literally cannot tell whether you're still being funny or serious. Poe's law is in full effect. (It's still pretty funny to me either way.)
That said, try:
Haskell tries to be a language where all code only does this: take input, produce output from it; whenever the input is the same, the output must be the same, nothing else may happen, no exceptions whatsoever. Since this forbids things like printing to the screen, reading from a network connection and other useful things, there needed to be a single construct exempt from these rules, so Haskell can be useful. Monads are that construct.
Monads are the house rules you bring to your Monopoly game to make it fun.
(Yes, that means Haskell is not a fully functional language, it's just more functional than most.)
> I literally cannot tell whether you're still being funny or serious. Poe's law is in full effect. (It's still pretty funny to me either way.)
I am dead serious.
> Haskell tries to be a language where all code only does this: take input, produce output from it; whenever the input is the same, the output must be the same, nothing else may happen, no exceptions whatsoever. Since this forbids things like printing to the screen, reading from a network connection and other useful things, there needed to be a single construct exempt from these rules, so Haskell can be useful. Monads are that construct.
Stop conflating monads with IO. Monads just happen to be usable for modeling IO, but they can model other things as well.
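For instance, `Maybe` is a monad with no IO in sight; a small sketch (`safeDiv` and `chain` are illustrative names):

```haskell
-- Maybe models computations that may fail; no IO anywhere.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

-- Bind short-circuits: the first Nothing aborts the rest.
chain :: Int -> Int -> Int -> Maybe Int
chain x y z = do
  a <- safeDiv x y
  safeDiv a z
```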
> Monads are the house rules you bring to your Monopoly game to make it fun.
Ironically, when I program in Haskell, I try to keep as much stuff outside of IO as possible. The reason is precisely that IO is usually not fun.
> (Yes, that means Haskell is not a fully functional language, it's just more functional than most.)
No, it just means that IO is a DSL for constructing imperative programs functionally.
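To illustrate: an `IO` action is an ordinary first-class value, and a program is assembled from such values functionally (a sketch; `greetings` and `runAll` are invented names):

```haskell
-- An IO action is an ordinary value; nothing runs until it is
-- sequenced into main. The "program" below is built as plain data.
greetings :: [IO ()]
greetings = map (\name -> putStrLn ("hello, " ++ name)) ["world", "hn"]

-- Sequencing the list of actions yields one composite program.
runAll :: IO ()
runAll = sequence_ greetings
```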
===
Anyway, I have no desire for being trolled, so this discussion ends here.
Wow, that was a clever troll, didn't catch on until the end. Would've been better if you hadn't ended it on an obvious declaration of intent though. :)
--
Edit: In retrospect, and for later readers, I guess I should point out that I forgot one house rule Haskell brings along: any function can only ever take one single argument. Some monads make it possible to bunch multiple values into one. So the Monopoly analogy above is still perfectly accurate.
Taking multiple arguments has nothing to do with Monads. You can either take in a tuple of arguments
f (x,y,z) = x*y + z
or take them in curried form
f x y z = x*y + z
where f 3 is a single argument function that returns another function. This ends up being the same as functions having multiple arguments, in practice.
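A quick sketch of both styles under hypothetical names, showing that partial application falls out of currying with no monads involved:

```haskell
-- Tupled style: one argument that happens to be a triple.
fTupled :: (Int, Int, Int) -> Int
fTupled (x, y, z) = x * y + z

-- Curried style: each application consumes one argument.
fCurried :: Int -> Int -> Int -> Int
fCurried x y z = x * y + z

-- Partial application: fCurried 3 is itself a two-argument function.
triple :: Int -> Int -> Int
triple = fCurried 3
```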