
> I see no downsides whatsoever to reducing the extent to which one can be wrong.

Then you need more creativity. :)

Nuances make it possible to express extremely complicated (yet often used) concepts in a very concise manner that is still extremely clear. See for example Python Decorators [1].

Very rigid languages take less effort to learn, but at the same time can require the developer to spend a lot more time and effort expressing certain things. See Java.

[1] http://simeonfranklin.com/blog/2012/jul/1/python-decorators-... (Only 12, imagine that!)




> Very rigid languages take less effort to learn, but at the same time can require the developer to spend a lot more time and effort expressing certain things. See Java.

I do not think of Java as a language with few nuances. Null references, broken covariance for arrays, two non-orthogonal notions of modularity (class-based: public, protected and private; package-based: default visibility), value semantics for primitives vs. reference semantics for everything else... it is all very nuanced! Plus, for all the supposed rigidity, you can break type safety via reflection.

On the other hand, Haskell and Standard ML (especially the latter!) strike me as very simple languages, with a far more rigid notion of safety than Java programmers could ever dream of, but which nevertheless afford lots of expressivity. Far more than either Python or Java.


Good answer. :)

I'd like to point out that nuance density depends not only on the language as a whole, but also on the area of the language you're looking at. Compared with Perl's Moo/se, Java's object system is ridiculously small and simple, which is borne out by things that are extremely simple in Perl OO taking pages upon pages of code in Java.

As for Haskell, do consider that while the base of it is quite simple, just like Lisp, it also has the massive ball of wax that is monads, which people have been trying for years to explain simply. [1]

[1] (Though this is mostly because most people trying either don't have the required humbleness to admit they're a hotfix to a core failing of Haskell, or don't dare explain it in those terms.)


> As for Haskell, do consider that while the base of it is quite simple, just like Lisp

Lisp is only syntactically simple. (Admittedly, it is syntactically the simplest.) Semantically, it is still a mess.

> it also has the massive ball-of-wax that is monads, which people have been trying for years to explain simply. [1]

That is a weird thing to say. Monads are simple: an endofunctor "T : C -> C" with two natural transformations "pure : 1_C -> T" and "join : T^2 -> T", satisfying three coherence laws that basically say "the Kleisli construction yields a category". Of course, explaining monads in terms of "bind" instead of "join" is bound (pun not intended) to result in a huge amount of fail.
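
In Haskell notation, that presentation looks roughly like this (a sketch only: the actual Prelude class takes bind as primitive, and the primed names are just mine, to avoid clashing with it):

    -- Sketch: the categorical presentation, with join as the primitive
    -- operation and bind as a derived one.
    class Functor m => Monad' m where
      pure' :: a -> m a        -- the unit, 1_C -> T
      join' :: m (m a) -> m a  -- the multiplication, T^2 -> T

    bind :: Monad' m => m a -> (a -> m b) -> m b
    bind ma f = join' (fmap f ma)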

> [1] (Though this is mostly because most people trying either don't have the required humbleness to admit they're a hotfix to a core failing of Haskell, or don't dare explain it in those terms.)

It is not a hotfix. It is a feature. Haskell's segregation of effects makes it possible to reason about effects in a compositional manner, using equational reasoning.
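
A small illustration of what I mean by equational reasoning (a toy example of my own, nothing from a library): because an effect is an ordinary value, naming it and inlining it are interchangeable.

    -- Both programs read two numbers and add them; p1 rewrites to p2 by plain
    -- substitution, because readLn denotes the action itself, not its result.
    p1 :: IO Int
    p1 = let x = readLn in (+) <$> x <*> x

    p2 :: IO Int
    p2 = (+) <$> readLn <*> readLn

In a language where merely mentioning readLn performed the read, that substitution would change the program's behavior.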


That's a simple description, not a simple explanation, and yes, there's a difference. An explanation has the additional burden of being easy to understand, which your "simple" explanation is not unless you already have a background in category theory or other relevant experience. What's an endofunctor? What's a "natural" transformation? Is it something more specific than "just a transformation"? What in tarnation is a Kleisli construction? I'm sure you can give good answers to all these questions, but at that point your explanation is neither simple nor easy.

I'm not saying they're bad, I'm saying they're hard, and your pitch needs to be that they're worth the effort, not "come on, they're not that hard". Until I saw your reply to your other reply, I truly thought this was a joke. In fact, the "monoid in the category of endofunctors" "explanation" is a classic joke about haskellites.

edit: typo


Do not conflate objective mathematical simplicity, https://news.ycombinator.com/item?id=6972986, with subjective easiness, which depends on your prior experience.


I'm not. That's the distinction I spent my whole post making.


> Monads are simple: ...

Ahahaha, please tell me that was meant to be comedy and that you're actually aware of the simple explanation. :D


My understanding of the notion of "simple" is based on the following principles:

1. Short definitions are preferable to long ones.

2. Reusable generic definitions are preferable to overspecific ones.

3. Case analysis should be kept to the bare minimum necessary.

The notion of "monad" fits these principles perfectly:

1. "A monad is a monoid in the category of endofunctors." Short and to the point.

2. You cannot possibly get anything more reusable and generic than category theory. (Contrast with "instanceof" and reflection breaking type safety, and essentially depending on luck and the stars being aligned in order to work.)

3. There is no case analysis whatsoever in the definition. (Contrast with: "if a pointer is invalid, dereferencing it is undefined behavior, otherwise...", "if a downcast is invalid, performing it will result in a ClassCastException being thrown, otherwise...")

Note that my understanding of "simple" actually encourages abstraction (for the benefit of genericity), rather than discouraging it. Abstraction might make things less "easy" (this is subjective, though), but in no way does it make things less "simple" (this is objective).
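
To spell out what the "monoid" part buys you, here are the three laws written as checkable properties (a sketch; join is Control.Monad.join, the helper names are mine):

    import Control.Monad (join)

    -- The monoid laws, with pure as the unit and join as the multiplication.
    leftUnit :: (Monad m, Eq (m a)) => m a -> Bool
    leftUnit m = join (pure m) == m

    rightUnit :: (Monad m, Eq (m a)) => m a -> Bool
    rightUnit m = join (fmap pure m) == m

    assoc :: (Monad m, Eq (m a)) => m (m (m a)) -> Bool
    assoc mmm = join (join mmm) == join (fmap join mmm)

No case analysis anywhere: leftUnit (Just 3) and leftUnit [1,2,3] hold for exactly the same reason.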


I literally cannot tell whether you're still being funny or serious. Poe's law is in full effect. (It's still pretty funny to me either way.)

That said, try:

Haskell tries to be a language where all code only does this: take input, produce output from it; whenever the input is the same, the output needs to be the same, nothing else may happen, no exceptions whatsoever. Since this forbids things like printing to the screen, reading from a network connection and other useful things, there needed to be a construct that is exempt from these rules, so Haskell can be useful. Monads are that construct.

Monads are the house rules you bring to your Monopoly game to make it fun.

(Yes, that means Haskell is not a fully functional language, it's just more functional than most.)


> I literally cannot tell whether you're still being funny or serious. Poe's law is in full effect. (It's still pretty funny to me either way.)

I am dead serious.

> Haskell tries to be a language where all code only does this: take input, produce output from it; whenever the input is the same, the output needs to be the same, nothing else may happen, no exceptions whatsoever. Since this forbids things like printing to the screen, reading from a network connection and other useful things, there needed to be a construct that is exempt from these rules, so Haskell can be useful. Monads are that construct.

Stop conflating monads with IO. Monads just happen to be usable for modeling IO, but they can model other things as well.
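
Maybe is the standard non-IO example (a sketch, using only the standard library):

    -- The Maybe monad models computations that may fail; no IO in sight.
    safeDiv :: Int -> Int -> Maybe Int
    safeDiv _ 0 = Nothing
    safeDiv x y = Just (x `div` y)

    -- Sequencing failure-prone steps: the first Nothing short-circuits the rest.
    calc :: Int -> Int -> Int -> Maybe Int
    calc a b c = do
      x <- safeDiv a b
      safeDiv x c

calc 100 5 2 is Just 10, calc 100 0 2 is Nothing, and the do-notation is the same as in IO code.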

> Monads are the house rules you bring to your Monopoly game to make it fun.

Ironically, when I program in Haskell, I try to keep as much stuff outside of IO as possible. The reason is precisely that IO is usually not fun.
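
The shape that takes in practice is roughly this (a toy sketch, not any particular program): a pure core with a thin IO shell around it.

    -- Pure core: all the logic lives here and can be tested without IO.
    summarize :: [Int] -> String
    summarize xs = "count=" ++ show (length xs) ++ " sum=" ++ show (sum xs)

    -- Thin IO shell: only the reading and printing touch IO.
    main :: IO ()
    main = do
      input <- getContents
      putStrLn (summarize (map read (lines input)))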

> (Yes, that means Haskell is not a fully functional language, it's just more functional than most.)

No, it just means that IO is a DSL for constructing imperative programs functionally.
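
Concretely, an IO action is an ordinary value: you can put such "imperative programs" in a list, rearrange them with pure code, and only then compose them into one (sketch):

    greetings :: [IO ()]
    greetings = [putStrLn "hello", putStrLn "world"]

    -- Pure manipulation of the actions, then composition into a single program.
    runBackwards :: IO ()
    runBackwards = sequence_ (reverse greetings)  -- prints "world", then "hello"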

===

Anyway, I have no desire to be trolled, so this discussion ends here.


>> things like printing

> Stop conflating monads with IO.

> I have no desire to be trolled

Wow, that was a clever troll, didn't catch on until the end. Would've been better if you hadn't ended it on an obvious declaration of intent though. :)

--

Edit: In retrospect, and for later readers, I guess I should point out that I forgot one house rule Haskell brings along: any function can only ever take one single argument. Some monads make it possible to bunch multiple values into one. So the Monopoly analogy above is still perfectly accurate.


Taking multiple arguments has nothing to do with Monads. You can either take in a tuple of arguments

    f (x,y,z) = x*y + z
or take them in curried form

    f x y z = x*y + z
where f 3 is a single-argument function that returns another function. In practice, this ends up being the same as functions taking multiple arguments.
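
For instance, continuing with the f above:

    g = f 3          -- g y z == 3*y + z, i.e. f partially applied
    -- g 4 5 == 17, exactly the same as f 3 4 5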



