Why GitHub used Haskell for Semantic (github.com/github)
242 points by yakshaving_jgt on June 5, 2019 | 209 comments



> An example of this is the concept of resumable exceptions. During Semantic's interpretation passes, invalid code (unbound variables, type errors, infinite recursion) is recognized and handled based on the pass's calling context. ... Porting this to Java would require tremendous abuse of the try/catch/finally mechanism, as Java provides no way to separate control flow's policy and mechanism. And given Go's lack of exceptions, such a feature would be entirely impossible.

Not knowing much about FP, it'd be great to see a more in-depth article explaining this problem domain a bit more and showing some side-by-side examples of Haskell's specialized call sites compared to a Java try/catch/finally solution (although Python would be a better procedural exception-based language to compare against).

One of the main reasons I don't care much about Haskell is because without any side-by-side comparisons of Haskell vs <insert procedural language> I don't understand what the Haskell advantages are, and I don't know when I'm dealing with a problem space where Haskell would help me.


I wrote something[0] that demonstrates Haskell programs side-by-side with a Java program for a super toy problem. It goes on to explore more complex Haskell abstractions which implement the same simple program using less approachable techniques.

It is primarily meant to help people understand how the abstractions work, rather than to argue for when they are good to use, but it might give you enough background to understand the discussion around Haskell abstractions.

[0] http://reduction.io/essays/rosetta-haskell.html


Not Java or Python, but here’s some Ruby: https://www.honeybadger.io/blog/how-to-try-again-when-except...

The above article talks about exploiting Ruby's support for call/cc to do something similar.

A noteworthy difference: Ruby's support for call/cc is baked into its runtime, while in Haskell you can implement call/cc as a normal library. This is done by leaning on Haskell's monadic "do" syntax and a suitable implementation of the Monad class.

http://hackage.haskell.org/package/mtl-2.2.2/docs/Control-Mo...
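To make that concrete, here's a minimal sketch of a continuation monad with callCC written as ordinary library code — a stripped-down version of what the Control.Monad.Cont module linked above provides. The `safeDiv` example and all names are my own, just for illustration:

```haskell
import Control.Monad (when)

-- A bare-bones continuation monad, definable with no runtime support.
newtype Cont r a = Cont { runCont :: (a -> r) -> r }

instance Functor (Cont r) where
  fmap f (Cont c) = Cont (\k -> c (k . f))

instance Applicative (Cont r) where
  pure a = Cont (\k -> k a)
  Cont cf <*> Cont ca = Cont (\k -> cf (\f -> ca (k . f)))

instance Monad (Cont r) where
  Cont c >>= f = Cont (\k -> c (\a -> runCont (f a) k))

-- callCC hands the current continuation to its body; invoking that
-- continuation abandons the rest of the computation.
callCC :: ((a -> Cont r b) -> Cont r a) -> Cont r a
callCC f = Cont (\k -> runCont (f (\a -> Cont (\_ -> k a))) k)

-- Example: escape early on division by zero.
safeDiv :: Int -> Int -> Cont r (Maybe Int)
safeDiv x y = callCC $ \exit -> do
  when (y == 0) (exit Nothing)
  pure (Just (x `div` y))
```

The point is that nothing here touches the runtime — the "do" syntax plus a Monad instance is enough.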

I'm posting from my phone, so I can't type up a this-vs-that. But maybe that piques your interest and you can read up on Haskell and programming with monads.


I’m a bit skeptical of the difficulty ascribed to doing something like resumable conditions in a Java system.

You could have a per-thread singleton stack of handler objects. To raise a resumable condition you call a method on the stack that searches outward for an active handler to ask for a restart option and just returns it. If none is found an exception is thrown to unwind the call stack.

(Common Lisp has restartable conditions native to the language and this is essentially how it works. Conditions just don’t unwind the stack until no restart is available.)

Haskell monads can easily model other interesting control flow structures but this is basically just a syntax nicety of the do notation that lets you avoid writing nested lambdas to represent continuations. If you’re okay with heavy lambda nesting in Java you can do it all by just implementing whatever monad you like in Java—although the type system isn’t as glorious when it comes to higher-order polymorphism and inference.
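To show what that "syntax nicety" amounts to, here's a toy sketch (the lookupBoth functions are invented for illustration) of the same computation written with do notation and with the nested lambdas it desugars to — the latter being roughly what you'd write by hand in Java:

```haskell
-- With do notation:
lookupBoth :: [(String, Int)] -> Maybe (Int, Int)
lookupBoth env = do
  x <- lookup "x" env
  y <- lookup "y" env
  pure (x, y)

-- The same thing as explicit nested lambdas passed to (>>=):
lookupBoth' :: [(String, Int)] -> Maybe (Int, Int)
lookupBoth' env =
  lookup "x" env >>= \x ->
    lookup "y" env >>= \y ->
      pure (x, y)
```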


This is trivial to do in most languages (including Java and Go) if you just stop to think about it for a couple of minutes. Just pass some kind of context object down the call stack (or make it available as a global — make it thread-local if you need). Then when an error occurs, just call this handler and voilà!

This is, in fact, pretty much what Haskell does. Except this handler is in a typeclass instance, and this typeclass instance is passed down the call stack, it just doesn't appear in the parameter lists but in the type signatures. It's a bit lighter syntactically, but ultimately it's the same thing.
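A rough sketch of the two styles (CanFail, parseWith, and parseTC are hypothetical names invented for this example):

```haskell
-- Explicit version: the error handler is threaded down by hand.
parseWith :: (String -> Int) -> String -> Int
parseWith onError s = case reads s of
  [(n, "")] -> n
  _         -> onError s

-- Typeclass version: the "handler" lives in an instance, so it shows
-- up in the type signature rather than the parameter list.
class Monad m => CanFail m where
  failWith :: String -> m a

instance CanFail Maybe where
  failWith _ = Nothing

parseTC :: CanFail m => String -> m Int
parseTC s = case reads s of
  [(n, "")] -> pure n
  _         -> failWith s
```

Same mechanism either way; the typeclass just moves the plumbing out of the argument list.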


I'm only a Haskell beginner, but I solved a similar task, with parsing that may fail, in the CIS 194 course. The solution was based on applicative functors, and I really don't think that Java or other languages have anything like it.

Here's the lecture: http://www.cis.upenn.edu/~cis194/spring13/lectures/10-applic... and PDF of the assignment: http://www.cis.upenn.edu/~cis194/spring13/hw/10-applicative.... Here's my solution of it: https://github.com/golergka/cis194/blob/master/Homework11/sr...
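Roughly, the applicative idea looks like this — a toy sketch of my own, not the actual assignment: each field parse may fail with Nothing, and `<$>`/`<*>` combine the pieces so the whole result fails if any piece does.

```haskell
import Text.Read (readMaybe)

data LogEntry = LogEntry { severity :: Int, message :: String }
  deriving (Show, Eq)

-- Parse "3 disk full" into LogEntry 3 "disk full"; any bad field
-- makes the whole parse Nothing, with no explicit error plumbing.
parseEntry :: String -> Maybe LogEntry
parseEntry line = case words line of
  (sev : rest) -> LogEntry <$> readMaybe sev <*> pure (unwords rest)
  _            -> Nothing
```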

I hope it's readable enough to illustrate the overall concept - but please bear in mind that I'm a beginner and have probably made dumb mistakes. If you have any criticism or suggestions, I would love to hear and learn from them: learning Haskell is the first time in my software engineering self-education where I wish I had regular contact with an instructor or a mentor.


This guy wrote 3 really short blog posts about doing Haskelly-stuff in C++. [1, 2, 3]

They are pretty concise and [2] especially did a good job of making me understand some of the power of Haskell.

[1] https://www.syrianspock.com/functional%20programming/softwar... [2] https://www.syrianspock.com/functional%20programming/softwar... [3] https://www.syrianspock.com/functional%20programming/softwar...


I think it's hard to do just a side-by-side comparison. Imagine going back to the '90s and trying to convince C++ fans to use Java.

They are all general purpose programming languages, just with some better/different design decisions to make things safer or relatively more/less expressive.

Similarly, the difficulty in explaining Monad is that Monad itself is pretty abstract and general. You can describe what it is, but it's hard to control what the audience takes away. If someone says it can solve asynchronous programming issues, listeners will think it's something just for that kind of issue. Same with solving null pointer exceptions. After all, it's an abstract representation of sequencing computations with an effectful context. But that last description is much harder to understand than the former examples.

An equivalent exercise is explaining the concept of a "variable" to a mathematical audience who has never heard of a computer. They can definitely understand part of it, but not everything you want to express.


The "Monad" typeclass in Haskell is a unification of a bunch of things that are treated separately in other programming languages. By "unification" I mean the same kind of thing that physicists mean when they say that Maxwell unified light and electrical phenomena, or that Newton unified physics and astronomy.

It is this unification that makes monads so powerful; one abstraction handles sequencing, exceptions, non-determinism, parsing, IO, transactions, asynchronous state machines and tons of other stuff.
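As a small illustration of that unification (my own toy example): one function, written once against the Monad interface, behaves as exception handling, non-determinism, or error reporting depending only on which instance you use it with.

```haskell
-- Written once against the Monad interface:
pairs :: Monad m => m a -> m b -> m (a, b)
pairs ma mb = do
  a <- ma
  b <- mb
  pure (a, b)

-- With Maybe it short-circuits like exceptions; with lists it
-- enumerates all combinations (non-determinism); with Either it
-- propagates the first error.
```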


I like to explain monads as the same sort of thinking process as a 'design pattern' in OOP. People who have done OOP will have heard of things like the visitor pattern. In OOP, patterns aren't unified in some class, but they could be.

Monads (and type classes in general) are like design patterns in OOP, in that they encode a common use case. Except in Haskell, you can encode the pattern into the language semantics.


>Similarly, the difficulty to explain Monad is Monad itself is pretty abstract and general.

A Monad is just a Monoid in the Category of Endofunctors.


For the non-Haskell readers, I should point out that this is just an old joke in the Haskell community. If you want it explained, see here: https://stackoverflow.com/questions/3870088/a-monad-is-just-...


damn... I missed "what's the problem?" haha.


This. What's so hard to understand about that?


Would I be correct in saying you just encapsulated the monad state type `this` in an identity `that` by creating a monad transformer out of `understand`?


> One of the main reasons I don't care much about Haskell is because without any side-by-side comparisons of Haskell vs <insert procedural language> I don't understand what the Haskell advantages are, and I don't know when I'm dealing with a problem space where Haskell would help me.

This is one of haskell's biggest problems. It's just enough outside of the normal flow of imperative languages (yet usable for the same problems) that you can't tell how much of an improvement it is unless you try it. Also, people who write haskell are more inclined to share lofty/abstract/interesting-to-other-haskellers code rather than your normal day-to-day code that is massively improved/safer and benefited from haskell's features.

I've mentioned this before, but one example of where it became apparent to me how much haskell had changed what I expected from a language was non-nullable types. It's starting to be really common in languages now (typescript, kotlin, etc), but if you are used to writing imperative languages, the worry of nil/None/null is ever present, and a concept like Optional<T> actually looks quite foreign. If you really think about it, it means that no part of your language is safe -- none of your functions are safe, because they said they wanted a String but might have gotten a null that looks like a String to the typechecker and will blow up at runtime.
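A tiny sketch of what that buys you in practice (the example functions are made up):

```haskell
-- A String argument means a String is really there -- there is no
-- null that could sneak past the typechecker.
greet :: String -> String
greet name = "hello, " ++ name

-- Possible absence must be spelled out as Maybe String, and the
-- compiler insists both cases are handled.
greetMaybe :: Maybe String -> String
greetMaybe mname = case mname of
  Just name -> greet name
  Nothing   -> "hello, stranger"
```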

Another key improvement in haskell is the removal of class-based code-sharing (i.e. inheritance) -- the separation of behavior and data is really important, and most languages are starting to come around to this now (go w/ structs + interfaces, java w/ data classes, kotlin w/ data classes, rust w/ structs + traits), but haskell (and other ML languages) have been there for a while.

Yet another key improvement in haskell is the errors-as-values paradigm that is everywhere. If some function has a possibility of failure, then it should return `Maybe TheThing` or `Either AnError TheThing` (see how nice and legible those types are?) -- this forces explicit checks on failure, and allows cases where there isn't a chance of failure (just `TheThing`) to speed ahead without null checks. This actually pressures you into trying to sequester failure across your codebase -- you try to write functions with signatures like `TheThing -> SomeArgument -> OtherThing` (see how legible that is?) to minimize the amount of `Maybe x` or `Either error x` you have to deal with -- and this is often, if not always, good for codebases.
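A hedged sketch of the pattern (ConfigError and lookupPort are invented for illustration):

```haskell
-- Failure is part of the return type: callers cannot touch the Int
-- without first pattern matching on the error cases.
data ConfigError = MissingKey String | BadValue String
  deriving (Show, Eq)

lookupPort :: [(String, String)] -> Either ConfigError Int
lookupPort cfg = case lookup "port" cfg of
  Nothing -> Left (MissingKey "port")
  Just s  -> case reads s of
    [(n, "")] -> Right n
    _         -> Left (BadValue s)
```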

Maybe this is something I can help with -- I write about pedestrian haskell a bunch, and I've been meaning to do a blog post on why haskell is better than <insert your language>, something to really rustle the jimmies.

BTW, the quote about resumable exceptions is actually referring to a concept called a monad, which can be incredibly hard to grasp if you don't look in the right places (there are a lot of bad tutorials out there), or don't give your brain long enough to marinate in the concepts. If I were to take a stab at explaining it simply, in this case it's like a combination of exceptions-as-values (i.e. not go's approach, and not java's approach) where the value being passed around has enough state in it to continue, stop, fix itself, or whatever else. When something goes wrong in most imperative languages, you kind of get the hell out of dodge, and you (usually) lose access to whatever work was done up until the function boundary -- it doesn't have to be this way, but it usually is.


> Also, people who write haskell are more inclined to share lofty/abstract/interesting-to-other-haskellers code rather than your normal day-to-day code that is massively improved/safer and benefited from haskell's features.

When Rust started getting flooded with the "web" crowd of ex-Rubyists and the like there was a lot of push back from the traditional systems people (for better or worse). But one of the benefits is that these guys are typically far better at communicating and selling languages to the general developer public.

I too have run into countless examples of these "beautiful" Haskell code examples, but when it came down to doing real work I felt like I was left to either figure it out myself, try to connect a more abstract blog post to more practical applications, or read some auto-generated Haskell/library API documentation (75% of the time it was the last one).

Maybe Github and Facebook et al can lend some of these resources to teaching Haskell to the public and releasing well-documented libraries which set a standard for others to follow? It may have a high learning curve like Rust, but it's far from impenetrable for your average developer.


> When Rust started getting flooded with the "web" crowd of ex-Rubyists and the like there was a lot of push back from the traditional systems people (for better or worse). But one of the benefits is that these guys are typically far better at communicating and selling languages to the general developer public.

I 100% agree -- this is the crowd that brings the hype (for better or for worse). I guess it's another one of those life lessons, but projects need both types of crowds (and to be honest it's not like there's a strict separation, lots of people fit in both camps).

I think Rust is actually going to eat a ton of what could have been Haskell's lunch -- it has a great type system for the traditional imperative-language crowd, awesome performance for the ML crowd, and a completely new paradigm of data safety that neither crowd had before. These days I struggle to choose Haskell, but have settled on Rust for "performance critical" things (I don't really write truly low-level software, so take that with a grain of salt), and Haskell for everything else.

> I too have run into countless examples of these "beautiful" Haskell code examples, but when it came down to doing real work I felt like I was left to either figure it out myself, try to connect a more abstract blog post to more practical applications, or read some auto-generated Haskell/library API documentation (75% of the time it was the last one).

> Maybe Github and Facebook et al can lend some of these resources to teaching Haskell to the public and releasing well-documented libraries which set a standard for others to follow? It may have a high learning curve like Rust, but it's far from impenetrable for your average developer.

Hugely agree, but I think it's gotta be a community effort. FPComplete is out there doing stuff, and there are lots of individual bloggers, but Haskell needs more people writing "pedestrian" programs. I think it's one of the main ways of contributing to a language, and it is often overlooked. I don't have any numbers, but Learn You a Haskell for Great Good has probably led to thousands of new haskell devs over its lifetime, even if the information in it is outdated (and some consider it not a good starting point).

To compound all this, haskell also has a documentation problem -- the machinery is there, but docs often don't get written, or people don't include the "getting started" use cases. Most popular libraries are workable, but some others aren't, so it's intimidating until you really start to see the types as sufficient for understanding.

Small shameless plug: I try to write about haskell, and I'm in the middle of a post where I build a CountMin data sketch right now. I'm not quite done with it, but I hope to have it done this weekend. I feel that in that way I'm at least doing something to help the haskell community.


> (and to be honest it's not like there's a strict separation, lots of people fit in both camps).

In my experience it always has to be both (a developer with good communication/marketing skills). Any non-developer pushing a language or platform is always the wrong choice and will probably scare away more of the devs -- who want to see specific code, not just generic benefits -- than it helps. Too many "developer advocates" have rubbed me the wrong way.

Besides, most of it is good web design, writing good newbie-friendly documentation and guides, and answering questions on HN/Reddit (which José from Elixir is really good at).

Then once you get past the early-adopter phase you need to convince the CTOs, who listen to their developers but also apply a strong long-term risk analysis when judging a language, including things like hiring and support for core libraries.

None of this will happen without the initial group getting drawn in. So hopefully we'll continue to see more blog posts like the one above from GitHub, giving honest practical feedback and publishing libraries.


> Small shameless plug,

No point plugging if you don't provide a link for us! :-)


I hear you! I was going to try to get the CountMin post done this weekend and update this, but I guess I've got to roll with what I have already done:

One of my better Haskell posts is a series on writing REST APIs. It kind of goes off the rails type-wise as I try to get more and more clever, but I rein it in:

https://vadosware.io/post/rest-ish-services-in-haskell-part-...


Nice, I will have a read.

I think you need to get rid of the margin-left and margin-right styles on '.article-content pre' for '@media screen and (min-width: 989px)'. It pushes the code off the bounds of the page on my screen.


Thanks so much, I will get that fixed this weekend!


The selling point that got me into Haskell was how much better its lowest bar is than that of pretty much every other language I've used so far. It's not perfect, but it has a very good effort-to-power ratio.

For example, I have a database with millions of unstructured, schemaless JSON documents. The documents are mostly similar but the schema is defined by Javascript code that has been maintained and modified for many, many years and so the documents have many, many edge-cases.

I've dared my team to write a JSON-Schema document that could validate our format. It would take a lot of effort to get that going. And it would be verbose and hard to work with: JSON-Schema itself is written in JSON and there exists no tool that can understand the types in the schema, validate them, etc until you run the program on some documents.

Instead I spent a few hours in Haskell and wrote a type that loosely described some of the more common document shapes I've come across. I used the wonderful JSON libraries to parse documents from our database using my small, limited type in a test suite. It failed initially, but the type system pointed out where it was failing and why. So I added more cases to my type and improved the parser until I could parse a handful of real-world examples.

From there I wrote a tool that scans the whole database, collects the parse results, and displays the top N parse failures with example documents to add to my test suite. I use the test suite to interrogate the parse results, add more cases to my type, extend the parser, etc, etc...

I've spent very little time working on this and have got my tool successfully parsing almost 90% of the database. When I add new examples, the type checker guides me as I interrogate the new case, fix the edge cases, and get the tests to pass. Once I can parse 100% of this database, I can start writing a tool to migrate my messy data structure into a cleaner, more consistent one with a simpler parser, and prove that the migration is total. And I suspect it will take even less effort to do that.
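A sketch of the loose-type-plus-named-failures workflow. The real code would use a proper JSON library such as aeson; the Json and Doc types here are invented stand-ins:

```haskell
-- Tiny stand-in for a JSON AST (a real project would use aeson's Value).
data Json = JNull | JStr String | JNum Double | JObj [(String, Json)]
  deriving (Show, Eq)

-- A loose type for the common document shape; optional fields stay Maybe.
data Doc = Doc { docId :: String, docTitle :: Maybe String }
  deriving (Show, Eq)

-- Each failure is named, so collecting parse results over the whole
-- database tells you exactly which cases to add next.
parseDoc :: Json -> Either String Doc
parseDoc (JObj fields) = do
  ident <- case lookup "id" fields of
    Just (JStr s) -> Right s
    Just other    -> Left ("id: expected string, got " ++ show other)
    Nothing       -> Left "id: missing"
  let title = case lookup "title" fields of
        Just (JStr s) -> Just s
        _             -> Nothing  -- legacy docs omit or mangle title
  Right (Doc ident title)
parseDoc other = Left ("expected object, got " ++ show other)
```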

I'm not even leveraging anything more advanced than ADTs, type classes, and functions. For very little effort, Haskell has given me the ability to solve valuable problems -- problems that scared other developers away in other languages.


This is an awesome use case and a really good use of the expressiveness of haskell -- if your org has an engineering blog, please write about it; I'd love to read more.


> changed what I expected from language was non-nullable types

I got the same revelation from the a-lot-more-conventional-looking[] Crystal, which doesn't solve it through optionals but through union types. Exposure to that kind of type safety enforcing non-nullability is really a watershed moment.

[] For values of "conventional" that look like Ruby. Not everyone thinks that look is conventional enough.


Optionals are union types: Optional<A> is the union of Unit and A.
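Concretely, Maybe is a two-constructor sum type, with Nothing playing the Unit role (the `describe` function is just an illustration of mine):

```haskell
-- As defined in the Prelude:
--   data Maybe a = Nothing | Just a

describe :: Maybe Int -> String
describe Nothing  = "unit case: no value"
describe (Just n) = "value case: " ++ show n
```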


It's interesting that any time I read about Haskell, I realize most of the features can be found in other languages.

Functional programming and lazy evaluation are common in Apache Spark ("analytics engine for large-scale data processing"). You cannot write a good pipeline if you think in terms of an imperative language.

Non-nullable types can be found in Java (@NonNull) and in C++ (references). C++17 got std::optional type.

We have languages without inheritance, like Go, Rust and so on.

Errors-as-values are pretty common as well (C++'s Boost has had error_code for a while now, and of course there is Go again)

Even monads find themselves in other languages -- using org.apache.spark.rdd.RDD is pretty close to IO monad.

I find this an unfortunate downside of many Haskell tutorials -- they often claim there are unique features that are present in Haskell only, but on closer inspection it turns out those features are present in / can be trivially added to many other languages as well


Hey, that was kind of my point -- but I think you have it in reverse: Haskell has had a lot of this stuff for a long time (most of it since its inception), and it's trickling down to other languages now.

But to make some concrete counter points:

- Apache Spark is not a general-purpose programming language (you're totally right about FP and lazy evaluation being important in DAG-land, of course)

- "Non-nullable types can be found in Java" -- yeah, except them being the default is the big innovation, along with the recognition of the problem and the facilitation of a worldview that recognizes the issue. Optional didn't show up until a few years ago (Java 8?), and first-class functions weren't a thing without subclassing until around then either, along with function references, functional interfaces, etc. I'm less familiar with C++, and it's commendable that it's adopting new things and people are moving forward, but it's basically the gold standard of footguns with type-system scopes attached (again, I don't write C++ on a daily basis and haven't felt just how much better the new editions are).

- Go and Rust learned from Haskell, Rust heavily so. BTW these days I'm more and more of the opinion that Rust is the one more worth praising of the two

- What go does is kind of errors-as-values, but it's also kind of not -- I mean a near-complete lack of exceptions at all. The distinction is subtle, but coding to always handle the error case (because it is part of the result) is different from having a sometimes-present error code that you sometimes check.

- Again, you're right that Monads are everywhere -- Haskell didn't invent the concept, but it is one of the places you can go to see it actually used in earnest and to learn from what people are doing with it (never mind all the novel papers).

Haskell is one of the few places that all these features come together to form a coherent whole.


> Functional programming and lazy evaluation are common in Apache Spark

Spark ain't a language, it's an engine. Anyhow, you could make the same point for lazy evaluation via stream libraries in many languages. It's not really comparable to having it as a first-class citizen in the language, just like Guava didn't make Java 7 equal to Java 8.

> Non-nullable types can be found in Java (@NonNull) and in C++ (references). C++17 got std::optional type.

The reason people talk about it isn't about having non nullable types, it's about not having nullable types (or, at least, not having them as a default).

> We have languages without inheritance, like Go, Rust and so on.

I don't believe not having inheritance is a language feature. People may say that inheritance was a mistake and that languages without it are better off, but you will rarely hear about no inheritance being a feature of a language.

> Errors-as-values are pretty common as well (C++'s boost had boost::error code for a while now, and of course there is Go again)

Haskell has an error system. Errors as values are a pattern enabled by other things of the language (ADTs, functors/monads), but not having errors is not a Haskell feature. Elm would be a better example of this.

> Even monads find themselves in other languages

See above point about first-class things in a language.

Your analysis of features is misguided because languages aren't things that can be compared feature by feature. They are a coherent set of things that produce a specific dev experience. Haskell offers a specific experience that people enjoy; therefore people talk about it, and some other languages adopt some of its features in an attempt to reproduce that experience.


This is a catch-22 of asking people to explain the advantages with simple side-by-side examples. If you want an example of something truly unique to Haskell, we'd have to talk about e.g. using the cataM function in a kind-polymorphic way. But you'd have to get a certain depth into the Haskell mentality before you could understand why that's useful or valuable. So we talk about the simple examples - but lots of languages solve those simple examples with limited/ad-hoc/special-case versions of things that Haskell does with more powerful general features. https://philipnilsson.github.io/Badness10k/escaping-hell-wit... has some examples of this phenomenon.

Apache Spark could probably never have been created without using the only other mainstream language with higher-kinded types (Scala) - now that the design has been proven a lot of it has been rewritten in a verbose Java style, but I doubt the work to come up with that design could have been done while thinking solely in Java.

Working without inheritance is only practical if you have typeclass derivation. Rust and Scala approximate this with macros. I don't think any other mainstream language has that functionality at all.

Non-nullable types and errors-as-values are only practical if you have higher-kinded types. (Rust has an ad-hoc macro that works for a handful of error-as-value types, but you can't write the general monadic library functions that you'd want to work with them properly).
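For instance, here's the kind of "general monadic library function" I mean — a hand-rolled version of the standard `sequence`, abstracted over the type constructor `m` itself, which is exactly what higher-kinded types buy you:

```haskell
-- m is a type *constructor*, not a type: one definition works for
-- Maybe, Either e, IO, lists, and any other Monad.
collect :: Monad m => [m a] -> m [a]
collect []       = pure []
collect (x : xs) = do
  a  <- x
  as <- collect xs
  pure (a : as)
```

Without higher-kinded types, you'd need a separate copy of this function for every error-carrying type.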

Haskell is not entirely unique, but it's pretty close. The only other remotely mainstream language that has the combination of higher-kinded types and typeclass derivation (ish) is Scala, and even then you'll eventually get bitten by the lack of kind polymorphism.


There are no tutorials that claim these features are unique to Haskell or that they cannot be added to other languages.


> > During Semantic's interpretation passes, invalid code (unbound variables, type errors, infinite recursion) is recognized and handled based on the pass's calling context. … And given Go's lack of exceptions, such a feature would be entirely impossible.

Nonsense! It’s completely possible — you just have to be willing to abuse a few of Go’s features in order to do it.

First off, Go does have a standard calling-context (i.e. dynamic context) mechanism: context.Context[0]. All you need to do in order to track stuff in the calling context is stash an object in the … context.

Given this dynamic context mechanism, you next need a way to represent handlers. That’s easy: a handler is a function which takes an exceptional condition and either handles it or returns; handling consists of a transfer of control. Fortunately, Go has a control-transfer primitive as well: panic. So to handle a condition, a handler just panics — something higher up the call stack can use defer to catch the panic and continue execution.

That leads to the next component necessary: a way to provide resumption points, or restarts. A restart is just a function which is invoked by a handler, potentially with arguments, and which when invoked continues execution from its establishment point. This can be done with defer.

So it’s perfectly possible with tremendous abuse of Go’s panic/defer mechanism, no different from Java.

See this gist: https://gist.github.com/r13l/2911f93cbe66fb4ed50f9d9eb1eb252...

Honestly, I don’t even know if I’d call it tremendous abuse, although it is somewhat abusive. Abstracted in a library, it might even be somewhat useful.

It’s off-the-cuff — I haven’t fully considered the semantics of the different context objects being passed around.

0: https://golang.org/pkg/context/


I'm really curious why Haskell has seen so little adoption in industry. Is it just the difficulty? Or a chicken-and-egg effect with tooling and libraries?

One thing I've wondered is if it actually isn't ideal for a lot of cases. FP is beautiful for certain things. But in some domains (or pieces of domains), state and mutation aren't just unfortunate implementation details, but a core element of the problem space. For these cases, the FP answer is usually "recompute and replace" (generally with immutable data structures that make this efficient). This can be syntactically clunky when it's a major part of your application, and not just a necessary evil to be swept out to the edge. The most successful languages let you be pure-functional where it makes sense and then stateful where it makes sense. Haskell doesn't, really (from my cursory reading about it).


It's 100% the difficulty. Anyone saying otherwise is lying because they want Haskell to be popular, adopted it very early on in their career when they could really invest in it, or is a natural at this type of stuff and simply doesn't know any better.

I've learned about 10 different languages now and Haskell easily had the highest learning curve. Most languages I was able to get semi-usable in within a couple days, at least to do some minor stuff that works. But Haskell was a real commitment that took months until I was comfortable doing real stuff.

You have to learn how to mutate data and manage state using monads and functors, and how to query through complex objects using lenses -- even parsing some complex data coming from a JSON feed into a usable form requires a good understanding of the type system.

But ultimately besides when I first learned Clojure (my first exposure to FP), I don't think there's a language that has taught me as much about programming as Haskell. It was very rewarding and something I still continually dabble with and learn from on the side.

I've yet to pull the trigger and actually build a full side project with Haskell. Which is typically my biggest test. I've found Erlang/Elixir to be the far better middle ground from my day-to-day work in Ruby/JS when I want something modern, fast, and functional.

Once PureScript becomes stable I have a feeling I'll be diving harder into Haskell and may finally make that full commitment it requires for a real project.


I agree. You master monads and IO, but then you want to use a web framework and there are a bunch of other category theoretical concepts and/or Haskell advanced features you need to understand to serve up "Hello World". Compare that to expressjs. I love functional programming, but given the free choice of what to use to knock up a side project, I chose JS at home. Nothing like grabbing some data from a server and it being an object you can use immediately in your code, accessed via indexers, properties, etc. (rather than some strange lens operator like ~~!!#).

Typescript cures most of the JS ills. Sure, TS is more akin to Java/C# than to the much better Haskell type system, but it is good enough and catches 99% of the real-world problems of naked JS.

To me Haskell is training for your brain. Once trained you are a better JS/C#/Java/C++ programmer.

I could go on about the economics of getting a Haskell job: Haskell and Elm developers are taking a pay cut compared to what they would earn using almost any other language. Even if they are paid well, they could get paid more. Supply/demand at work there.


> I agree. You master monads and IO, but then you want to use a web framework and there are a bunch of other category-theoretical concepts and/or advanced Haskell features you need to understand to serve up "Hello World". Compare that to expressjs.

I started using haskell at my last job after having done a couple years of functional programming in scala, but I honestly never found it required that much knowledge of category theory. The FAM trio of classes (functor, applicative, monad) are the most common ideas used from category theory in an explicit way, and they're frankly so prevalent at this point in the industry at large that they're in basically every language in extremely common libraries. Most programmers are pretty familiar w/ map and bind (which is also sometimes called flatMap, then, and_then, or chain). Applicative functors might be a little more exotic, but they're easy to pick up if you understand the other two. And I don't think you really need to understand what a morphism is or how to read arrow diagrams to really use any of these things. If you can use arrays in javascript, you can use the IO monad.
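To make the FAM trio concrete, here's a minimal sketch on Maybe (the `half` helper is invented purely for illustration):

```haskell
-- Functor, Applicative, and Monad on a single type: Maybe
half :: Int -> Maybe Int
half n = if even n then Just (n `div` 2) else Nothing

main :: IO ()
main = do
  print (fmap (+ 1) (Just 41))      -- Functor's map:        Just 42
  print ((+) <$> Just 1 <*> Just 2) -- Applicative:          Just 3
  print (Just 8 >>= half >>= half)  -- Monad's bind/flatMap: Just 2
```

The same three interfaces work for lists, IO, parsers, and so on, which is why familiarity with map and flatMap transfers so directly.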

Most of the challenge in haskell for me was figuring out stuff like how I should structure my code to enable mocking out effects for testing (there's multiple answers to this w/ different tradeoffs), or how to get good editor support without ghc-mod breaking. I haven't used haskell in industry in about the past two years, but I imagine these are still some of the biggest challenges w/ using it.


+1 everything you said.

Do you think it would be dramatically improved if Haskell had a bigger community of people contributing user-friendly libraries and tutorials? That was something that made Ruby the perfect newbie language for me. And something I feel is downplayed in Haskell, given its more advanced user base, which is a little too obsessed with its power and with demonstrating their knowledge, rather than with helping make it accessible to others.

I can't count the number of times I came across a popular Haskell library with very little documentation beyond its type/function API and a general blurb about what it does. This is very different from most popular JS/Ruby/Python/etc. libraries, which include quick-start/getting-started/usage examples/etc.


> Do you think it would be dramatically improved if Haskell had a bigger community of people contributing user-friendly libraries and tutorials? That was something that made Ruby the perfect newbie language for me. And something I feel is downplayed in Haskell, given its more advanced user base, which is a little too obsessed with its power and with demonstrating their knowledge, rather than with helping make it accessible to others.

Absolutely. I think it's been getting better at that on the tutorial front. For a long time there were very few books I'd actually recommend for haskell, but Haskell Programming from First Principles [1] changed that for me (personally). And I think Stephen Diehl's excellent What I Wish I Knew When Learning Haskell [2] is a fantastic general resource for newbies. But I think the community could stand to have more of this, absolutely.

And I definitely agree about libraries. I'd like to see more haskell libraries with extended tutorials, and written w/ a non-expert audience in mind. I think the community has for so long been dominated by long-time haskell developers and people ensconced in functional programming, much of the documentation is written for those kinds of people.

[1]: http://haskellbook.com/

[2]: http://dev.stephendiehl.com/hask/


Ergonomics is probably the number one thing that matters in the long run for computer languages.

Make the frequent things easy and safe. That means syntax should be easy to read and write, documentation should be plentiful and easy to read, and the language should be easy to start using even if you are not an expert in the area that a particular library covers.

Rust also suffers a bit from this, as more and more libraries ship with just the generated docs. Here are 30 structs and 100 impls, godspeed. Yeah, but what is this library? When would I use it? What are the 100 most common use cases?

And 100 might seem like a lot, but people will choose whatever instantly works for them. And there are a lot of strange stacks out there. Sometimes people just want the low-level bits of your library. No docs/API for that? Damn. Sometimes people just want to use it as a one-liner: no config files, no import-server-deploy, no binary? Damn.


The more flexible a language is, the more documentation its libraries need.

One often-ignored benefit of constrained languages like Java, where there's really only one way to do most things, is that when you pick up a library you immediately have a reasonable idea of how to use it (given the entity names and types). If a language has a more advanced type system (or none at all), you can't lean on that existing framework, and the author needs to do more legwork to explicitly lay out exactly what it is they've made. This doesn't make more advanced languages bad, but library documentation is often neglected, and that probably contributes a lot to the inaccessibility of those ecosystems.


How far can you go without monads and category theory? I was looking at Clean, which developed in parallel with Haskell and uses uniqueness typing instead of IO/mutation monads. It seems like it would be less off-putting for a newcomer. Clean lacks a community and a package manager, I believe, which makes it less attractive. The language itself seems like a sweet spot for me.


If you want to be productive in Haskell, the Monad typeclass is an important tool to familiarize yourself with. That said, unless you are working on the internals of a few libraries, you don't really ever need to know serious category theory in order to be very productive. If you don't already have a background in abstract algebra or category theory, I think a better approach to learning these abstractions is to slowly work through the typeclassopedia[0] while solving problems the naive or clunky way, and then start to use the fancy-named abstractions (e.g. Functor, Applicative) as you see how they could be useful.

On that front, the Monad typeclass is far more general and useful than just IO and State, so if you are thinking of it primarily as a hack to deal with those, you probably won't get the hype. In addition, it's really useful to work with a large number of examples of different Monad instances (IO, State, Maybe, List, STM[1] if you want to get a bit further into the deep end) instead of just staring at the methods in the typeclass and hoping it makes sense. It's a pretty broad abstraction, so it will only make sense if you are familiar with what it's abstracting.

[0] https://wiki.haskell.org/Typeclassopedia

[1] http://book.realworldhaskell.org/read/software-transactional...
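As a tiny illustration of that breadth, here's a sketch of one function written once against the Monad interface and reused with two instances (`pairs` is a made-up name for the example):

```haskell
-- one definition, many Monad instances
pairs :: Monad m => m a -> m b -> m (a, b)
pairs ma mb = do
  a <- ma
  b <- mb
  return (a, b)

main :: IO ()
main = do
  print (pairs (Just 1) (Just 'x'))  -- Maybe: short-circuits on Nothing
  print (pairs [1, 2] "ab")          -- List: all combinations
```

The Maybe version prints `Just (1,'x')`; the list version prints all four pairings, because each instance supplies its own meaning of "and then".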


The other partial reason for not wanting to know is that I can't then unknow it. Willful ignorance it is. Can those who know say they're happy using languages day-to-day without the features you miss and think in?


Thanks for this info. What I really want to know is the value prop for learning/using monads etc. The examples of IO, State, Maybe, List, STM given seem like they'd be dealt with just fine in Clean.


I think it's important to be precise about how the monad abstraction and type system features interact in order to combine pure and impure code. I wrote a comment elsewhere in the thread (https://news.ycombinator.com/item?id=20112333) where I conclude that while the monad abstraction is useful for making a usable interface and writing programs which are agnostic to how their state is implemented, the fundamental work of distinguishing pure and impure computations is accomplished with a combination of type system features and compiler magic.

The case is exactly the same for the Clean language as for Haskell and indeed Clean has Monad instances for all of the types I mentioned aside from STM, so there are no concerns with dealing with those monadic abstractions in Clean. (https://imgur.com/a/sjsDiZq, https://cloogle.org/#using%20Monad) Since IO is Haskell's type that marks impure computations, we can compare Haskell's implementation to the equivalent one in Clean (interface: https://cloogle.org/src/#Platform/System/IO;line=10, implementation: https://cloogle.org/src/#Platform/System/IO;icl)

In Clean, IO is implemented as follows:

    :: IO a = IO .(*World -> *(a, !*World))
In Haskell, it is:

    newtype IO a = IO (State# RealWorld -> (# State# RealWorld, a #))
(for full context see: http://hackage.haskell.org/package/ghc-prim-0.5.3/docs/src/G..., the related monad instance is here: http://hackage.haskell.org/package/base-4.12.0.0/docs/src/GH...)

These both implement IO as a function which takes the state of the world as its input and returns a new state of the world as its output. This is encoded using the state transformation that I mention in the other post (https://acm.wustl.edu/functional/state-monad.php). Both implementations also go on to define Monad instances for their new IO type. The major difference is that the Haskell standard library only exposes bindIO and returnIO to the user, hiding the internals of IO, while Clean allows the implementation to be a normal library.

That difference is Clean's uniqueness types showing their strength. Clean can explicitly expose its predefined World type (https://cloogle.org/doc/#CleanRep.2.2_6.htm;jump=_Toc3117980...) to the user with the guarantee that you can't write a function of type IO a -> a, because it would violate the uniqueness properties and thus be a compilation error. Haskell instead uses the module system to keep State# RealWorld from being exposed to the user. This means that if you as a user want a different set of abstractions for impurity in Clean, you can build from the World level rather than needing to construct it out of what can be done with bindIO and returnIO. For details on what's going on with State#, see https://www.fpcomplete.com/blog/2015/02/primitive-haskell

From the perspective of "the value prop for learning/using monads", this discussion leaves us in a worse place than where we started, because the conclusion is that uniqueness types aren't a get-out-of-monads-free card: Clean uses many of the same Monad-related abstractions as Haskell does, and uses them for the same purposes. In order not to leave you out in the cold on the value prop for the monad abstraction, you can see how it works for a number of different Monad instances in Tikhon's answer on Quora (https://www.quora.com/What-are-monads-in-functional-programm...), though he chooses to use join, fmap, and return as the fundamental parts of a Monad, rather than bind and return, as I have here. As he touches on in his discussion, it's common and straightforward to implement one definition in terms of the other, so anything you learn about that definition can be ported to the definition I use without much fuss; don't worry if it doesn't match at first. What this means is that if you have tools in the standard library that depend only on the features of Monad, you can use the same small collection of functions to solve a ton of problems.
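If it helps, the world-passing encoding can be sketched as a toy library in ordinary Haskell. This is only a model with invented names; real GHC IO uses the primitive State# RealWorld and hides it behind the module system, as discussed above:

```haskell
-- Toy model of IO as a world-passing function (a sketch, not GHC's IO)
type World = [String]  -- pretend the whole "world" is just an output log

newtype MyIO a = MyIO { runMyIO :: World -> (a, World) }

instance Functor MyIO where
  fmap f (MyIO g) = MyIO $ \w -> let (a, w') = g w in (f a, w')

instance Applicative MyIO where
  pure a = MyIO $ \w -> (a, w)
  MyIO f <*> MyIO g =
    MyIO $ \w -> let (h, w')  = f w
                     (a, w'') = g w'
                 in (h a, w'')

instance Monad MyIO where
  -- bind threads the world through sequentially, which is what
  -- forces "effects" to happen in order
  MyIO g >>= k = MyIO $ \w -> let (a, w') = g w in runMyIO (k a) w'

myPutStrLn :: String -> MyIO ()
myPutStrLn s = MyIO $ \w -> ((), w ++ [s])

main :: IO ()
main = print (snd (runMyIO (myPutStrLn "hello" >> myPutStrLn "world") []))
```

Running it prints the accumulated "world" `["hello","world"]`, showing how sequencing falls out of the state-threading alone.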


This is what I needed to hear. I haven't used either Clean or Haskell other than playing with them, and my introduction to Clean seemed more lightweight, with some syntax to make uniqueness seem easier than the same in Haskell. On further reading, when seeing the u:[...] syntax rather than *, I can see they're really quite the same. The way the uniqueness attributes are described makes them seem like a separate axis from data types, whereas in Haskell there's just so much type. Also the docs for Clean avoid using category theory terms for the most part, getting you started by showing you the syntax to do certain things.

I expect to be coming back to this comment and the references quite a few times until it clicks for me.


I finally think I get monads. And I don't believe they're hard to explain or hard to understand; it's just that almost all the explanations are bad, and you have to go through so many of them to put the pieces together.


> How far can you go without monads and category theory?

Without monads? Not very far, but they're far simpler than the wider web would have you believe.

Without category theory? The sky's the limit. I say this as an experienced Haskell programmer who occasionally dabbles in category theory for funsies. The practical impact of category theory on "how easy is it to program Haskell" is basically zero.


I didn't mean how far in Haskell without monads. I meant how far in something else like Clean that uses uniqueness typing to handle some of the same aspects where monads would be used with Haskell.


Not used Clean, but it sounds interesting.

What I'd love is an imperative language where you declare what effects a function can have and the compiler enforces them. E.g. if it promises not to mutate any parameters, it can only call functions that make the same promise, etc.


Idris isn't imperative (it's Haskell-like, with dependent types), but it offers features like those:

http://docs.idris-lang.org/en/latest/effects/depeff.html

    readInt : Eff Bool [STATE (Vect n Int), STDIO]
                       [STATE (Vect (S n) Int), STDIO]
http://docs.idris-lang.org/en/latest/st/machines.html

    logout : (store : Var) -> ST m () [store ::: Store LoggedIn :-> Store LoggedOut]
They're both implemented within Idris, as libraries/modules, rather than as compiler magic:

- https://github.com/idris-lang/Idris-dev/blob/master/libs/con...

- https://github.com/idris-lang/Idris-dev/blob/master/libs/eff...

I think it'd be possible to write similar effect systems in other dependently typed languages like ATS, which is a relatively imperative language (C+ML-like).


That sounds very similar to Rust. Mutations and even memory lifetime are specified in the type system.


This is how Pony does it, basically. See [Reference Capabilities](https://tutorial.ponylang.io/reference-capabilities.html).


Over time, I don't think the amount of stuff you have to learn in Haskell is more than in other major languages. Haskell requires that you learn some math, and that math has some large humps to get over. But it's a small number of humps, and you're actually just learning math, which is general and useful outside of any one language.

In C++, I had to learn a bunch of corner cases and committee decisions. In Java it's a huge library and stack of idioms to get anything done. In both cases, that information didn't make me any better a programmer in the general sense, just a disappointed one in each language. I'd argue that the sum amount of information I had to learn for either of those languages was more than Haskell, because I can leverage the abstractions much more, and only need a few of them.

Monads, applicative functors, lenses, and you've got your primary toolbox sorted out. That's three humps. C++ has few humps, but it's got miles of an uphill march. I have no polite way to describe the Java experience.
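For the curious, "lenses" need not mean pulling in a giant library; a minimal van Laarhoven lens can be hand-rolled in a few lines. This is just a sketch with invented names; the real lens package is far more general:

```haskell
{-# LANGUAGE RankNTypes #-}

import Data.Functor.Const (Const (..))
import Data.Functor.Identity (Identity (..))

-- a lens is a function polymorphic over a Functor wrapper
type Lens s a = forall f. Functor f => (a -> f a) -> s -> f s

data Point = Point { px :: Int, py :: Int } deriving Show

-- a lens focusing on the x coordinate
_x :: Lens Point Int
_x f (Point x y) = fmap (\x' -> Point x' y) (f x)

-- Const discards the rebuilt structure, so we just read the focus
view :: Lens s a -> s -> a
view l = getConst . l Const

-- Identity keeps the rebuilt structure, so we overwrite the focus
set :: Lens s a -> a -> s -> s
set l a = runIdentity . l (const (Identity a))

main :: IO ()
main = do
  print (view _x (Point 1 2))   -- 1
  print (set _x 9 (Point 1 2))  -- Point {px = 9, py = 2}
```

The trick is that choosing the Functor (Const vs. Identity) selects whether the same lens acts as a getter or a setter.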


The Java experience is a vast vista of beautiful sci-fi-ish landscape, elegant industry laid out according to a master design. Big enterprises linked with glowing pipes and service buses, nice portals pulsating from the radiance of alien beans. Stylish OSGi towers reach to the sky in the background. And the Mavens offer everything that is good on the mvn-central plaza.

But as you get closer, as you try to set up your own factory, you find that you need to use that thing, that nobody uses anymore, from the dark side of the moon, and to interface with that you need to venture deep into the core, and it's factories all the way down. Layers and layers of boilerplate, and inventors screaming in horror at the banality of the gaping holes in the type system that holds the huge sphere of backward compatibility on its shoulders.

And then there's no way back. You are already versed in the dark arts, you have been to the MetaSpace, your heart is no longer yours, but it's a slightly patched G1GC and you dream with the lush murmur of cybernetic trees from Shenandoah.


The problem with that explanation is that Haskell is no more difficult than Rust. (Someone who knows Rust well must learn about the IO monad and then can start being productive in Haskell. Unlike what one comment on this page implies, there is no need to learn any category theory to be productive in Haskell.) Yet Rust has been used in about as many successful projects as Haskell despite Haskell's having a head start of about 30 years on Rust.


I agree Rust is similarly difficult to learn to the point of producing real-life software people use (rather than toys). But it's still very much in early-adopter territory. Even today on HN was the first stable version of a web framework that seems to be the best of the pack in Rust.

Haskell has been around for 29 years. It's really not a fair comparison.

I never said you have to learn category theory or any math with Haskell either. The learning curves I mentioned were strictly practical (mutation/effects, state, lenses, more-than-basic types, etc.).

The quality and quantity of books, tutorials, community, libraries, etc. play a big role in any language's learning curve, no doubt. I believe this is something that could still be greatly improved in Haskell.

But at the same time I don't think it's surprising that Rust was able to get all those up to Haskell's level of quality in a short time. Rust also has far more analogies and similarities to what most C/C++/Go/Python/Ruby users have been exposed to, which makes the initial get-a-basic-script-working far easier.

So I'm not dismissing Haskell as if it's destined for unpopularity just because it's hard. The larger the investment in docs, websites, libraries, and similar languages like PureScript/Elm, the more popular Haskell would be.

Rust had a bunch of smart, marketing-friendly people join in over the last couple of years, which took it from a niche systems language into something far more mainstream. Other successful languages had similar growth patterns early on, while Haskell people were comfortable with its fringe academic position for a long period (which it seems to have finally grown out of). Haskell could still achieve a similar trajectory, but adoption by early-adopter non-academic developers will be critical for that growth. Even if that means courting the trendy Ruby/JS crowd they tend to look down upon: that group knows how to sell a language to the public and make it practical.


You seem like someone who could answer this: why use PureScript over Elm?


Despite being functional, Elm is quite minimalist when it comes to type system features. For example, functions can have generic type parameters, but there's no good way to require that the type be able to support certain operations, e.g. being printable as a string. Haskell's solution to this is "typeclasses", which are the same thing as Rust "traits" and Swift "protocols", and somewhat similar to Java "interfaces". Elm has a handful of builtin typeclass-like things that work by compiler magic, but there's no way to define your own. For a somewhat colorful rant about this, see "Elm Is Wrong":

https://reasonablypolymorphic.com/blog/elm-is-wrong/
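To make the typeclass point concrete, here's a minimal Haskell sketch; the class, type, and function names are invented for illustration:

```haskell
-- A user-defined typeclass: the kind of ad-hoc polymorphism Elm
-- only provides via built-in compiler magic (e.g. toString, comparable).
class Describable a where
  describe :: a -> String

data Color = Red | Green

instance Describable Color where
  describe Red   = "red"
  describe Green = "green"

-- works for *any* type the caller has given a Describable instance
announce :: Describable a => a -> String
announce x = "value: " ++ describe x

main :: IO ()
main = putStrLn (announce Red)
```

In Elm you'd have to pass a `describe` function around by hand at every call site, because there's no way to declare this constraint yourself.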

Elm is also very opinionated, e.g. the compiler itself restricted the ability to bind to JavaScript libraries to only approved organizations:

https://discourse.elm-lang.org/t/native-code-in-0-19/826

The community is also fairly toxic; check out this passive-aggressive response by the /r/elm moderators to a (mildly worded) blog post complaining about the aforementioned restriction:

https://www.reddit.com/r/elm/comments/9a0hc6/elm_019_broke_u...

Personally, I decided I would never touch Elm again after reading that.


I'm currently using Elm at my day job, and I agree 100% with what you are saying.

Elm lacks extensibility, tooling, and documentation is not that great. The biggest pain point however is the people who run the Elm language. The design decisions they took hurt the language and the users a lot, breaking more and more with every version bump, restricting freedom and creating a walled garden that people are getting tired of.

What you say about JavaScript libraries is not 100% technically correct, though. You can still access any native JS library you like, but you've got to use ports. You can't hook into native Elm functions bound to the global scope, but that's always been a very shady, undocumented, and terrible thing to do.

The following reasons are what, I believe, really ruined elm adoption:

1) You can't create so-called effect modules (like the http module of the standard library, and so on) if your package is not within the `elm` namespace.

2) As a company, you can't have shared, common elm modules if they are not published in the Elm package public registry. You can't install a package from GitHub without resorting to ugly hacks like ostracized elm package managers written in Ruby.

3) No development timelines, and no companies publicly endorsing or using Elm to develop open-source libraries besides the one where the language founder is employed.

I've never tried anything purely functional and typed for frontend programming, so I'd like to hear whether PureScript, ReasonML, etc. share the same struggles as Elm.


Always liked this post on Elm in production: https://www.pivotaltracker.com/blog/Elm-pivotal-tracker


I've worked in Elm a decent bit, and used PureScript a little.

Elm is a very opinionated language - it's very deliberately missing some abstraction power (typeclasses), and some functions that are the bread-and-butter of every functional programmer have steadily been getting removed from the base libraries, so if you're used to Haskell, you'll find yourself falling back to duplicating code by hand a lot. Elm also makes certain stylistic choices into parse errors - "where" clauses are strictly forbidden, and indentation preferences are strictly enforced. It's basically taken an awkward edge-case from Haskell's indentation rules and made it not only a requirement, but a prerequisite to seeing if there are any other errors in your program. The back-and-forth trying to get the compiler to accept things that would just work in Haskell, but don't because of someone's stylistic preferences, is absolutely maddening.

PureScript, from the bit I've used it, is like a strict version of Haskell with row polymorphism, a feature Haskellers have been hoping for for a while. I've chosen Elm over PureScript in the past because of PureScript's dependency on Bower (which I think has changed since then), but that's the only reason.


> indentation preferences are strictly enforced. It's basically taken an awkward edge-case from Haskell's indentation rules and made it not only a requirement, but a prerequisite to seeing if there are any other errors in your program.

Some parsing is less efficient than Haskell because Elm doesn't have 20+ years of PhDs working on it, but there is no such thing as compiler-enforced formatting. I can't think of compiler errors regarding format that are the expression of a choice, as you put it, rather than the expression of less manpower.

Likewise, `where` clauses aren't forbidden, they are simply not implemented, which, given that you can already use `let.. in`, is not especially shocking.


The omission of where clauses was explicitly a stylistic choice[1].

This is perfectly valid Haskell:

    #!/usr/bin/env stack
    {- stack script --resolver lts-12.19 -}
    data Test = Test {count :: Int}
    
    test = let
      something = Test {
        count = 5
      }
      in 5+(count something)
    
    main = putStrLn $ show test

To get the equivalent Elm to compile, it must be indented like this:

    test = let
               something = {
                 count = 5
                 }
      in 5+(something.count)
Note that `something` must be indented beyond the beginning of `let`, and the closing curly brace must be indented to the same level as `count`. These are both not warnings but parse errors - you can confirm it with Ellie[2]. If that were due to a lack of resources, it would absolutely be understandable, but this also was an explicit choice[3] that developer time was spent implementing.

[1] https://github.com/elm/compiler/issues/621

[2] https://ellie-app.com/5KRg4g5ZMkba1

[3] https://github.com/elm/compiler/issues/1573


While there was nothing particularly wrong with Bower for PureScript's use case, it always failed to attract people for that reason, sadly. However, you can now use Spago to great success.


Honestly I've never fully given Elm enough of a run-through to say yet (ditto with OCaml). I remember 2yrs ago I was evaluating it but decided to learn React/Vue.js for professional reasons. Additionally I wasn't convinced that Elm would be an ideal long-term commitment and more of a compromise between JS, FP, FRP (functional reactive programming) and more of a framework competitor than a full-blown language.

PureScript, on the other hand, I see as a full, long-term, language-level commitment to Haskell-style FP minus the laziness. But again my exposure to it has been too limited to have a strong opinion.


ReasonML is a much more reasonable (pun intended) approach to FP from the front-end side that doesn't appear to be as dogmatic as Elm. By making some concessions over interoperability (namely supporting raw JS and npm libraries), Reason thinks it will be easier to win existing JS devs over. Check it out - https://reasonml.github.io/


PureScript has a (much) more advanced type system but that's about it. The tooling and general developer experience of Elm is probably as good as programming gets in 2019. (I've also quite enjoyed Rust.)

That may sound like hyperbole but the combination of elm-graphql and elm-ui is something else. I'm from a JS background and the whole React/TS/CSS-in-JS soup just seems like a bad dream now.


> PureScript has a (much) more advanced type system but that's about it.

There's a lot in "... but that's about it.". Elm has a very low upper bound on abstraction by choice.

As an additional note: like many other programming communities, it has a very cult-like feeling to it, and as others noted in these threads, the mere mention that maybe this low upper bound on abstraction could be bad usually draws a lot of fire.

In my experience most of the community in Elm is made up of people who don't really know what type classes can give you, for example, but they'll happily argue that it's too advanced or not needed. Most of that comes from parroting the popular in-community opinion instead of informing themselves.

This kind of inbred opinion is not unique to Elm: you can find it in Elixir, Clojure and pretty much every other community that relies too much on the benevolent dictator or the prominent founder/inventor paradigm.

In my opinion this is something that PureScript got right: Phil Freeman actually left the community to some extent and is not involved in the compiler anymore. He also does not flood the community with opinions that people give too much weight and so there is no cult of personality formed around him. The same cannot be said for the aforementioned languages.

I also find it interesting that a lot of these languages that rely on this paradigm have leaders that constantly complain that it's hard to run this kind of community. The reason it's so hard is because they've made themselves a benevolent dictator and they keep that status quo because presumably they like that they can sort of control opinion in the community that way as well.

I have absolutely zero sympathy for people who do that kind of thing because there is a very clear solution to it and they're just unwilling to commit to it. You can't have your cake and eat it too. If you enjoy this cult of personality you'll have to take the bad parts of it as well. I find it interesting that a lot of these people end up being babies about it as well, but I guess you have to be somewhat immature to end up in this position from the beginning.


I more or less agree with all of that, but at this stage Elm's BDFL has earned my trust, and he is well within his rights to do whatever he wants with his own creation.


Elm is a language + a framework whereas Purescript is just a language.

There are a number of different frameworks you can use with Purescript from copies of the Elm architecture to wrappers over React to Halogen which can be thought of as a componentized Elm with multiple update loops. Halogen is awesome, really hits the sweet spot for me.


I wrote some Purescript and would definitely recommend anyone try it for themselves.

What eventually turned me off was tooling/workflow things like no accepted code formatter, poor GraphQL support, too many competing ways of doing basic tasks, and too many libraries that were just JS wrappers.

Also I got a vibe from functionalprogramming.slack.com that there was more interest in the latest FP whitepaper than in beginner friendliness and the realities of making an app in PureScript. Which is fine, but it will limit the adoption of the language in the face of TypeScript (Microsoft) and ReasonML (Facebook).

It's funny, Elm gets criticised as a 'DSL for building SPAs', even though that's exactly what it is, and that focus is the reason it is so productive for that task, and has the smallest asset sizes of any front end solution.

And for all practical purposes, no runtime errors.


I think that's why I like it so much. There are lots of ways to do things. I don't think I am using any significant libraries that are JS wrappers. Halogen is written in pure PureScript. But the advantage of PS over Elm is that it is easier to wrap JS, hence the many options.

For sure its focus is not as a beginner's language, and it will never reach TypeScript levels of popularity. But it has found its niche, and it is in a good place.

Elm is a very nice language too.


I actually think it's seeing quite a bit of adoption in industry. What I'm seeing is a class of developer that won't learn it or thinks they can't learn it, of course "to each their own", but I truly think Haskell/PureScript/Idris/Agda are onto something remarkable: making the software industry more like an engineering discipline and less of a craft (i.e. like the difference between civil engineering and carpentry).

Your argument re: immutability and purity hampering implementations: Haskell excels at letting developers start with impure, IO-heavy code and later refactor chunks of the implementation out into pure code. In fact, I talk about Haskell as the language you use to "move fast and _not_ break things". It also encourages a more principled approach to software design and reasoning than many other programming languages.

I've used Haskell (and seen it used) in production for many different problem domains for the last seven years of my career. I have yet to see something Haskell/GHC cannot handle well with a few niche exceptions.

In my time using Haskell I've come to think of "mutation" as a cardinal sin in software and you better have a good reason for committing it instead of letting the compiler do it for you (you can write mutable code in Haskell, too, btw - it just strongly discourages you from doing so and it makes some classes of mutable code impossible to write, which is a very, very good thing).
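As a sketch of what "mutation is possible but marked in the type" looks like (function names invented, using only base's Data.IORef):

```haskell
import Data.IORef

-- imperative version: sums with a mutable accumulator;
-- the IO in the type advertises the mutation
sumImpure :: [Int] -> IO Int
sumImpure xs = do
  ref <- newIORef 0
  mapM_ (\x -> modifyIORef ref (+ x)) xs
  readIORef ref

-- pure refactor: same result, no IO in the type
sumPure :: [Int] -> Int
sumPure = foldr (+) 0

main :: IO ()
main = do
  a <- sumImpure [1 .. 10]
  print (a, sumPure [1 .. 10])  -- (55,55)
```

Callers of `sumPure` know from the type alone that nothing is mutated, which is the "compiler does it for you" guarantee being described.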

As a (very) fallible generalist, Haskell is a godsend and we use it extensively where I work for shell-scripts, for code generation, for (performant) network packet parsing, HTTP web servers, gRPC micro-services, and even our "build bot".


Yep! Another good example: the pretty nifty Postgres API interface PostgREST is implemented in Haskell.


Have you used it for anything interactive? All of the above seem to me like natural fits for a pure functional style.


I think you mean, by interactive, something with a user interface like a gui. First, see functional reactive programming.

The only project I used it for that was interactive in that sense was an interactive CLI tool. However, Oskar Wikstrom wrote a screencast editing tool so he could make his Haskell screencasts.

I don't think functional style prohibits interactivity (see: purescript which is strongly influenced by Haskell, we use it for all frontend web work now).


Very encouraging.

Could you discuss the niche exceptions you mentioned?


Real-time applications and really low-level systems software (however, there are some EDSLs that enable you to write real-time applications with Haskell's type-safety guarantees that can generate C-code: Haskell's Ivory library https://ivorylang.org/ivory-introduction.html).

Cross-compilation of GHC used to be a huge pain in the ass but that's improved significantly these days, on a project a few years ago I had to choose another language/ecosystem due to that limitation but I wouldn't have to now.


If you'll forgive the analogy, Haskell is kind of like the Mercedes of production-ready research-grade languages. You may not actually use haskell, but the features that haskell is pioneering are what you'll see trickling down into other languages, as language designers look over and borrow things. Some examples (though they're not all invented by haskell, but haskell has popularized them IMO):

- non-nullable types

- immutability as a default

- typeclasses + structs

- abstract data types

- errors-as-values

- monads

Haskell or any other ML language didn't come up with all these things of course, but Haskell is one of the best languages for exploring and popularizing them.

Take a look at rust -- it's basically got a near haskell-grade type system (and there are lots of ML languages with more flexible type systems than Haskell as well), with C++ like performance. It would have been a lot harder for rust's designers to incorporate such a nice type system without the exploratory work haskell did and continues to do.

All that said, I'm pretty sure Haskell is seeing little adoption because of the learning curve.

> The most successful languages let you be pure-functional where it makes sense and then stateful where it makes sense. Haskell doesn't, really (from my cursory reading about it).

Haskell almost certainly does do this; it's actually one of the best features of Haskell. It lets you do functional things and stateful things separately and encourages you to keep them separate.


> - abstract data types

I suppose you meant to say "algebraic data types"? (which have been adopted by Rust and Swift, and even the C++ stdlib)


C++ doesn't actually have ADTs. std::variant is not a true sum type (1 + 1 + 2 = 3, so to speak).


Can you please explain further why std::variant is not a true sum type?


If you form a variant with multiple alternatives of the same type, type-based access breaks down. E.g. std::variant<int, int> is a legal type, but you can no longer construct it from a plain int (the converting constructor is ambiguous) or use std::get<int>; you have to fall back to index-based access, so the two alternatives aren't usable as distinct cases the way a true sum type's are.


Why would you want to list int more than once in a variant? I must be missing a use case here; my variants have consisted mostly of structs.


It's less that you specifically want an int/int case and more that you want consistent behaviour in a generic context - you might have some logic in a template that uses variant<int, T> and treats the int case specially (e.g. the int is an error code), but then you get a nasty surprise when it gets used in a case where T=int.

A common example is validation/result types, which often look like string (error message) or valid result. So e.g. you might have a username validator that returns Result<String, String> and then various other user creation validation things that return e.g. Result<String, EmailAddress> and in the end you compose them all together to get Result<String, User>. That's a very powerful style that has the advantages of exceptions (the "happy path" through the code is obvious and not obscured by all the failure handling) without their disadvantages ("magic" control flow, seemingly trivial refactors changing the behaviour). But it's less practical if you can't have Result<String, String> at the base level.
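The Result style described above maps directly onto Haskell's Either type (Left carries the error message, Right the valid value). A minimal sketch — the Username type and validator are hypothetical illustrations:

```haskell
import Data.Char (isAlphaNum)

newtype Username = Username String deriving Show

-- Validation as a value: failure is data, not a thrown exception.
validateUsername :: String -> Either String Username
validateUsername s
  | null s                 = Left "username must not be empty"
  | not (all isAlphaNum s) = Left "username must be alphanumeric"
  | otherwise              = Right (Username s)

main :: IO ()
main = do
  print (validateUsername "alice42")  -- the happy path
  print (validateUsername "")         -- the failure path, still just a value
```

Because both arms are ordinary values, validators compose with Either's Functor/Applicative/Monad instances, which is what makes the "obvious happy path" style work.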


Yes definitely, I meant ADTs and GADTS -- Algebraic Data Types

They're a super good idea, so I'm glad that other languages are adopting it, but this is the kind of thing that haskell has had for a long time and is just second nature.


I once saw (here on HN) Haskell described as “primordial soup of type theory, from which other languages draw their good ideas from”.


The learning curve is one thing; dealing with purity and immutability is another. But the real stumbling block (besides the tooling issues) is the effect of lazy evaluation.

Lazy evaluation can make it quite difficult to reason about the time and memory resource requirements of Haskell programs, and debugging those isn't a whole lot of fun.

It is do-able, and like anything, gets better with experience, but it's hard to push things to production when you aren't sure they won't OOM or capriciously start doing some long thunk chain evaluation.


Does it have trap-doors for optimization like Clojure's transient data structures?


There is unsafePerformIO, but it's generally considered a really bad idea unless you really know what you are doing, and even then it is probably a bad idea. It is useful for debugging though and putting trace statements in.


I have never found immutable data structures more difficult to deal with. For the most part, because of monads and effects and such, you can essentially write better imperative code in Haskell.

I'm not sure why you've decided that Haskell is incapable of the syntactic appearance of mutation.


To be fair I haven't really used it, only read the core of the guide, so I may be mistaken.

But I've done a bit of Clojure, and I've done a bit of Immutable.js, and particularly when you have a deeply-structured piece of data, "mutating" something a few levels down gets really ugly. Now, maybe something about Haskell Enlightenment obviates this case entirely in a way I'm not seeing. But I also remember Haskell seemingly forcing you to push all your imperative code up to the surface layer of your otherwise pure program, which sounds great unless you need to do really meaningful things that are by nature imperative.


"Now, maybe something about Haskell Enlightenment obviates this case entirely in a way I'm not seeing."

Many times the answer is that with a different structure you don't need deep mutation to be a critical part of your program. After all, "deep mutation" isn't considered a great idea in object-oriented languages either, where it constitutes a violation of the Law of Demeter [1], either in letter or in spirit (i.e., creating a chain of methods to set some deep value may in letter follow the Law but can still be a violation in principle).

But if you do need it, Haskell does have a rather nifty mechanism for mutation patterns to be first-class elements themselves through "lenses", which capture as a first-class value some access pattern and mutation pattern on a given value. And while one of its original purposes is to allow Haskell code like

    property1 . property2 .= newValue
such that in a monadic context that will pretty much do what you'd expect as an imperative programmer, it also means (property1 . property2) is itself a value that can be used and passed around like any other, and allows for things like creating generic functions that take "a thing, and a thing that will extract a Name from that thing, and will return a new copy of the original thing with the name all uppercase" or something like that. And there's a whole bunch of other ways to make that stuff sing and dance too, if you're in the mood.

You can even do really melty stuff like have a lens that will expand an int into its bits, allow you to manipulate those bits as if you had an array of bool, and then will re-pack them into the int for you. Lenses can take any arbitrary slice out of an object as long as you can express the extraction and the creation of a new object putting the stuff back, and then they can be composed together as-needed. It can be powerful, but it can get pretty brain-melty too.

[1]: https://en.wikipedia.org/wiki/Law_of_Demeter
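To make the lens idea concrete, here's a hedged, hand-rolled miniature with no dependency on the actual lens package (all record and field names below are made up; the real library generalizes this enormously):

```haskell
-- A lens is just a getter paired with a setter, and lenses compose.
data Lens s a = Lens { view :: s -> a, set :: a -> s -> s }

composeL :: Lens s m -> Lens m a -> Lens s a
composeL outer inner = Lens
  { view = view inner . view outer
  , set  = \a s -> set outer (set inner a (view outer s)) s
  }

data Address = Address { city :: String } deriving Show
data Person  = Person  { name :: String, address :: Address } deriving Show

addressL :: Lens Person Address
addressL = Lens address (\a p -> p { address = a })

cityL :: Lens Address String
cityL = Lens city (\c a -> a { city = c })

main :: IO ()
main = do
  let p = Person "Ada" (Address "London")
  -- "Deep mutation" without mutation: returns a new Person,
  -- with the update two levels down expressed as one composed lens.
  print (set (composeL addressL cityL) "Cambridge" p)
```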


Let me add that lenses are not just a Haskell thing. It's a simple (and beautiful IMHO) concept from functional programming that can be introduced in many languages. Also, you can have lenses without the cryptic operators.


The cryptic operators are by far the worst thing about learning the basics of Haskell.

The language is already hard to learn because of all the wonderful and mindblowing concepts but the syntax is super frustrating and takes the difficulty to another level.


I hadn't seen that kind of thing; that's good to know about


Haskell obviates the need to do deep manipulations in an ad hoc 'ugly' way, usually via lenses. Personally, I think most imperative languages are sufficiently less sophisticated than a state monad plus lenses that it is substantially more difficult to use them.

> But I also remember Haskell seemingly forcing you to push all your imperative code up to the surface layer of your otherwise pure program

I have no idea what you're talking about. You seem to be confusing effectful code with imperative code. Haskell's do notation -- which lets you write with an imperative syntax -- can appear anywhere, including pure code. On its own imperative code does not necessarily mean effectful code and mutation does not require us to give up on purity. Moreover, if you do want machine level mutation for performance reasons or because an algorithm is more easily expressed in that way, you can always drop into the (again pure) ST monad.
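The ST monad mentioned above looks like this in practice — real mutation inside, yet the function is pure from the caller's point of view (a toy example; any real use would pick something less trivial than a sum):

```haskell
import Control.Monad.ST
import Data.STRef

-- Pure in its type, imperative in its body.
sumST :: [Int] -> Int
sumST xs = runST $ do
  ref <- newSTRef 0                         -- a genuinely mutable cell
  mapM_ (\x -> modifySTRef' ref (+ x)) xs   -- an imperative-looking loop
  readSTRef ref                             -- the value escapes; the mutation can't

main :: IO ()
main = print (sumST [1 .. 100])
```

The type system guarantees no reference can leak out of runST, which is why the result may be treated as pure.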

Basically, I think you are criticizing haskell from a place of ignorance.

Frameworks like immutable.js are not comparable to haskell. These are immutable data structure libraries built for languages where immutable data is an afterthought at best, if its even considered at all. Obviously these are going to be clunkier to use. Haskell is not that though.


But I've done a bit of Clojure, and I've done a bit of Immutable.js, and particularly when you have a deeply-structured piece of data, "mutating" something a few levels down gets really ugly.

As opposed to what languages?


  (assoc foo "bar"   
    (assoc (get foo "bar") "alice"   
      (assoc (get (get foo "bar") "alice") "bob" 12)))
vs

  foo.bar.alice.bob = 12;


  (assoc-in foo ["bar" "alice" "bob"] 12)
Of course, in clojure you mostly use :keywords instead "strings" as keys which has a ton of benefits.

There's also specter if want something more powerful at the cost of an additional lib.

https://github.com/nathanmarz/specter https://www.youtube.com/watch?v=rh5J4vacG98


You're lucky enough if you can find a developer who knows mutable data structures outside of SF. If you want immutable ones you need to add a zero to their wage.

Most shops aren't prepared for that.


> You're lucky enough if you can find a developer who knows mutable data structures outside of SF.

That's one of the most arrogant statements I've read on HN in quite a while.

Nearly all developers know mutable data structures; it's bread and butter. Immutable ones aren't some exotic life form; it's just that using them as effectively (read: as efficiently) as mutable ones can be really tough, and leads to more complex code.

You should maybe get out of SF for a bit.


I am out of SF. The number of times I've seen people start trying to parse XML with regular expressions tells me all I need to know about their ability to code.


You probably should take a visit to UK.

Standard Chartered has been a Haskell shop for a long time. There are some others in London I think.

And both England and Scotland have many universities teach Haskell. Let alone GHC and Idris were born in Scotland.


I've had random interns enter a purely functional codebase with 0 FP background. With a normal amount of onboarding, they were using immutable data structures with ease.


In fighting games, characters are sometimes described in terms of their “skill cap” and “skill floor”. The “skill cap” is how well you can play, if you really invest in this character. The “skill floor” is about how bad things can get if you don’t play that well.

Some characters are very approachable and easy to play. If you don’t know what you are doing, it’s alright — you can muddle through. If you play them in a really exceptional way, it doesn’t make all that much difference.

Some characters are really tough to play at all; and once you figure it out, they aren’t particularly exceptional and don’t reward further investment.

A “high skill cap” character is one where you can keep learning and learning and your performance at the game will actually get better and better. Some of these characters are also approachable — they have a gentle learning curve. Some of these characters are basically unplayable until you can play them really well — there is an inflection far to the right where you go from dying all the time to actually winning a fair number of matches.

Haskell is like one of these characters. Until you’re really good and know a lot, you’re basically going to ship nothing. This effect is sort of invisible to senior programmers learning Haskell because they are already so skillful and are used to having to skim a CS paper or two, once in a while, to be able to get their work done. Once you are good at Haskell, vistas really open for you in terms of the kind of programs you can design and build. Many years after setting it aside, I still rely on what I learned about Haskell API design and effects modeling to design reliable, transparent and modular distributed systems (it all starts with the types).

“High skill cap” characters tend to be admired, but not frequently played.


I have never read a CS paper.

I do not know Category Theory.

I failed high school maths.

I run three business on Haskell.


In Super Smash Brothers, there are some people who play and win with Zelda. Just not that many.


I don’t say “I failed high school maths” because I’m proud of it. I’m not special. I’m not particularly clever. I don’t know how else to drive the point home that you don’t need to be a genius to build software and enjoy doing it in Haskell.


No one is arguing that you need to be a genius to use Haskell. How does my argument come across that way?


> No one is arguing that you need to be a genius to use Haskell.

I’ve come across this sentiment so many times. Haskell definitely has a reputation for being an “ivory tower” language.

> How does my argument come across that way?

I’m not too familiar with Zelda, but it sounded like you were saying “some people are just able to do these things that most others can’t.”

If that isn’t what you were saying, then I am sorry for misinterpreting. I’m genuinely not trying to argue or take you out of context or anything like that.


I think anybody can write Haskell and anybody can play Zelda or similar characters in fighting games.

What the skill-cap / skill-floor thing is about, is how often do people bother. When the base level of skill required to play at all -- not necessarily play well -- is really high, often those characters don't get used as much. People find a character that demands less up front investment and play that character instead. It's not about ability, it's about time.


Ok, I think that's fair. I won't deny there was a significant time investment on my end.

Thanks for clarifying.


Nice. Tell more?


Summaries of my projects are here: https://jezenthomas.com/


Great, thanks Jezen!


> But in some domains (or pieces of domains), state and mutation aren't just unfortunate implementation details, but a core element of the problem space. For these cases, the FP answer is usually "recompute and replace" (generally with immutable data structures that make this efficient).

It can get worse than that. My problem space has mutable state that is shared between multiple threads. FP's initial answer is "shared mutable state is evil". And they're right! But if that's the nature of your problem, then you're kind of stuck with it.

But the problem with the "recompute and replace an immutable data structure" is that I now have to notify all the relevant threads that they need to replace their reference to the data structure (avoiding race conditions in the process), and that seems at least as nasty as the problems I have doing it the imperative way.


You might be interested in software transactional memory: https://en.m.wikipedia.org/wiki/Software_transactional_memor.... I believe it addresses some of the issues you mentioned in a functional programming way.
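A tiny sketch of the STM idea in Haskell (single-threaded here for brevity; assumes the stm package that ships with GHC): threads share a TVar holding an immutable value, and replacing it is one atomic transaction, so no reader ever observes a half-applied update.

```haskell
import Control.Concurrent.STM

main :: IO ()
main = do
  shared <- newTVarIO (0 :: Int)        -- the shared, transactionally-updated cell
  atomically $ modifyTVar' shared (+ 1) -- composable atomic update; retries on conflict
  readTVarIO shared >>= print
```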


Interesting!

But in my world (embedded systems), those I/O writes aren't just writes to a file or a network socket. They're writes to device hardware, which has to be in the written-to state the next time that another thread interacts with it. That is, the I/O operation has to be part of the transaction, not queued up to run after the transaction commits.

Still, this approach goes farther than I thought possible to solve the problem...


> That is, the I/O operation has to be part of the transaction, not queued up to run after the transaction commits.

That's the kind of thing Haskell excels at - explicitly sequencing things that need to happen before other things, so that you can have code that does stuff in the right order without getting into the trap of "can't ever refactor this in case I change the order something runs in".

(I can't believe that threads are actually part of the problem statement unless you're doing something strictly tied to C. Concurrency or even parallelism might be a requirement, but there are other ways to achieve it than OS-level shared-memory threads)


The Haskell philosophy is easily misunderstood. It's not really that shared mutable state is evil but that it's difficult and so should be treated seriously and explicitly. The language still supports it—quite well. GHC's green thread scheduler is top-shelf. The main author of the GHC runtime wrote an excellent O'Reilly book called "Parallel and Concurrent Programming in Haskell" (which you can read for free online).

https://simonmar.github.io/pages/pcph.html

Of course that's not to say that Haskell is optimal for your work in embedded systems!


Was going to say exactly this. Major common misunderstanding about even Haskell. Mutable state has to eventually be a part of basically every program that does something useful. The Haskell philosophy is more about having an explicit and predictable boundary between the pure and effectful parts of your code/system.

Though you'd be surprised at how much you can do while forgoing mutable state entirely.


Yeah, embedded systems isn't a domain where functional programming is going to work well. Good luck with those locks! :)


That’s a classic example of a problem that can be trivially solved with another level of indirection (reference to a reference).


Clojure's atom primitive solves exactly this problem.


> But in some domains (or pieces of domains), state and mutation aren't just unfortunate implementation details, but a core element of the problem space

True, but I would argue that it is far, far more common for people writing stateful code to do things that are better done in functional style than the other way around. This comes from my own experience of learning functional programming in JavaScript (before knowing about Haskell et al.) and refactoring a project into a functional style that relies heavily on immutability.

I think Haskell is not more widely used in the industry because it has a very academic reputation. If you say you write Go, people think you are a practical programmer who turns coffee into solid business logic that powers some profitable website. You write Haskell? Then you are more likely to be perceived as some eccentric scholar or their like.


> I would argue that it is far, far more common for people writing stateful code to do things that are better done in functional style than the other way around.

For sure. But still, there exist cases. Many of them in JavaScript, actually. In my JavaScript UIs I relish every opportunity to write something as a pure function. But I also need to manage a lot of deeply structured, non-homogenous state that's genuinely meaningful to the application. Separating the two is crucial, but both exist. I've really enjoyed MobX, as it allows you to make the most of both types of programming and hook them up together in a cohesive way.

> I think Haskell is not more widely used in the industry because it has a very academic reputation...you are more likely to be perceived as some eccentric scholar or their like.

And yet Clojure has gotten traction :)


For me, the reason why I abandoned Haskell (for a couple of years I was writing about half of my projects in it) is the complexity associated with laziness. FP and immutable data structures and monads are cool and mostly understandable once you grasp the concepts, but laziness is a double-edged sword. Laziness is cool, it enables a lot of nice things, but it makes me unable to easily reason about space complexity of any nontrivial algorithms, and that repeatedly bites me when I have to write such code.

In imperative code, space complexity is obvious and explicit, time complexity is intuitive for me, and correctness is shaky and needs to be tested.

In Haskell, correctness tends to be obvious (if there are no typos causing syntax to fail, it almost always gets the exact result I intended 100% correctly in the first try), but the space and time taken by the algorithm may be and often is surprising to me; If I make tiny modifications to the code, the execution may suddenly explode from 0.001 second to an hour because suddenly processing n entries involves creating and disposing n^2 thunks, taking all available memory and extreme amounts of time.

And despite trying a bunch to wrap my head about it, it keeps happening to me, it's just not intuitive to me - for me, eyeballing efficiency and exact time/space execution of a nontrivial Haskell function is just as hard as eyeballing whether random C code does everything correctly without a memory leak or overwriting being possible in an edge case. I also have trouble with Prolog for the same reason.


> The most successful languages let you be pure-functional where it makes sense and then stateful where it makes sense. Haskell doesn't, really (from my cursory reading about it).

This is absolutely wrong. Sorry to be so direct, but I want to make sure other people not familiar with Haskell don’t get the wrong impression.

Haskell is explicitly designed to separate purely functional and stateful code with a clear interface between them: monads. It does exactly what you are asking for.


Whenever I've used Haskell, the monads infect things like a virus, similar to the way async/await metastasizes across a C# codebase.

Note: I am not very good at Haskell.


That's what happens when you're working in a pervasive-mutability-by-default (or pervasive-IO-by-default) language as well - your whole codebase is full of hidden state mutations and hidden interactions with the outside world. You just don't have any idea where they're happening. You can write Haskell in the same style with monads everywhere and you're in essentially the same situation (just with more visibility into it) - but then you can actually start isolating the parts where mutation or outside interaction happen, and separating them from your core business logic. Which is the same thing you'd do in a high-quality codebase in any other language, but in Haskell you can do it in a way that's actually enforced and visible rather than just convention.


Probably because async/await IS a monad. Avoiding that infection just takes some design experience.


Sorry for the additional pedantry, but I think this important to be precise about given the target audience of your comment.

Monads aren't the separation between purely functional and stateful code. The Haskell type system maintains that separation. Anything that doesn't return IO a for some a appears to be a pure function from the perspective of the programmer. Once a function returns IO a, there aren't any* functions provided by the compiler that can make a function that uses those results not also return IO b for some b. For example, the type of getLine is IO String (because it impurely produces a String) and the type of putStr is String -> IO () (because it takes a String and mutates the world without returning anything).

If the compiler provided a function for computing on the a in the IO a, for instance, bindIO :: IO a -> (a -> IO b) -> IO b and a function to wrap the results of non-IO functions, such as returnIO :: a -> IO a, you could do arbitrary computation with these IO-wrapped data types, but know at a glance if your functions were impure.

This approach doesn't require the Monad typeclass at all, just a magic type called IO that tags impure computations that are implemented with compiler and runtime magic. It happens to be the case that this is exactly how GHC implements the IO type. bindIO is implemented here[0] and returnIO is implemented here[1] and the compiler magic used to implement them isn't* exported, so all IO operations have to go through those functions. It is not a coincidence to that these functions have the right types to form a Monad instance for IO and indeed, that is also present[2], but the IO type and the type system that ensures it can't be sneakily hidden are doing the heavy lifting, and the Monad instance (and accompanying syntactic sugar), are just there to make it nicer to work with and easier to abstract over.

If you have a passing familiarity with Haskell, the phrase "state monad" is the obvious place where my claims stop making sense. In fact, the State type only supports computations that are entirely pure. If you want to simulate global variables in a language that didn't have them, you could always pass all of your global variables to every function and get updated ones back from the function along with the nominal results of the computation. The State type is just a regular data type that wraps stateful functions constructed by such state passing. A type of the form State Int String is just a function that takes an Int and returns a String and an Int, no compiler or runtime magic needed.

You can play the same trick as in the IO case and provide functions bindState :: State s a -> (a -> State s b) -> State s b and returnState :: a -> State s a in order to compute on these "stateful" values while making sure the result state got passed to the next function in the chain correctly. Like IO, these two functions can be used to create a Monad instance for State. Unlike IO, State is just a data type holding a regular Haskell function, so it's extremely reasonable to write a function of type State s a -> s -> a which runs the State s a computation with an initial value of type s. This is written by unwrapping the State type, passing the initial state value to the function inside, and returning the result while ignoring the returned new state. More details on how State is implemented are available here[3].
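A runnable miniature of exactly that construction, using the bindState/returnState naming and the State s a -> s -> a runner described above (the tick example is my own illustration):

```haskell
-- State is just a wrapped function s -> (a, s): no magic.
newtype State s a = State { runState :: s -> (a, s) }

-- Chain two stateful computations, threading the state through by hand.
bindState :: State s a -> (a -> State s b) -> State s b
bindState m k = State $ \s -> let (a, s') = runState m s
                              in runState (k a) s'

-- Wrap a pure value without touching the state.
returnState :: a -> State s a
returnState a = State $ \s -> (a, s)

-- Run a computation from an initial state, discarding the final state.
evalState :: State s a -> s -> a
evalState m s = fst (runState m s)

-- Example: a counter that returns the old value and increments.
tick :: State Int Int
tick = State $ \n -> (n, n + 1)

main :: IO ()
main = print (evalState (tick `bindState` \_ -> tick) 0)
```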

A complication to this is that if you want stateful mutation for performance reasons, the ST type[4] also exists, which looks identical to the State type from the programmer's perspective, but plays similar tricks to IO in order to actually mutate under the hood while not exposing the implementation details to the user, so it can be reasoned about exactly as if it was pure and using the same implementation as State.

These Monad instances for IO, State, and ST start to pull their weight when you write functions that only use features provided by the Monad typeclass and they work seamlessly with any implementation of stateful computation despite their very different internals. Monad is quite general, so if all you care about is abstracting over stateful computations, you can also use the methods from MonadState[5] which allow you to interact with the state along with the results of the computation independent of the implementation of stateful computation.

* In the name of not getting bogged down in details, there are a few parts of this discussion that are not entirely accurate, particularly around functions like unsafePerformIO[6].

[0] http://hackage.haskell.org/package/base-4.12.0.0/docs/src/GH...

[1] http://hackage.haskell.org/package/base-4.12.0.0/docs/src/GH...

[2] http://hackage.haskell.org/package/base-4.12.0.0/docs/src/GH...

[3] https://acm.wustl.edu/functional/state-monad.php

[4] http://hackage.haskell.org/package/base-4.12.0.0/docs/Contro...

[5] http://hackage.haskell.org/package/mtl-2.2.2/docs/Control-Mo...

[6] http://hackage.haskell.org/package/base-4.12.0.0/docs/System...


Note: The approach of structuring the interactions with the IO type with the functions (bindIO :: IO a -> (a -> IO b) -> IO b) and (returnIO :: a -> IO a) is still using the abstract idea of monads to organize the impure code and make it ergonomic to work with, so "monadic I/O" or "monadic state" aren't entirely misnomers. The thing I wanted to emphasize is that you don't need to know the word "monad" or understand anything in particular about the design process for the Monad typeclass in order to use these libraries.

I think focusing on the "monad" part over the "IO" part of "monadic IO" is particularly confusing to new users because the abstract idea of a monad is very general, so if you assume all places where it shows up are basically like the case of IO, you will be very confused. Further, it makes the idea of a monad seem like a Haskell-specific hack, rather than a general abstraction that can be used in any programming language you want to.

This is particularly important to emphasize because the abstract idea of monads only makes the IO approach to impurity nice to use, it doesn't make it possible. Haskell had I/O (and other impure capabilities) before the monadic way of organizing impure code was introduced. The heavy lifting for IO is done by having a type system strong enough to prevent a function of type IO a -> a from being written by an end-user. If you have written a monad abstraction in a language without such a type system[0], it can still be a nice abstraction, but it doesn't guarantee that pure and impure computations can be distinguished on the type level.

[0] https://www.nurkiewicz.com/2016/06/functor-and-monad-example...


Very few programmers are proficient in Haskell, and operating systems and language tooling are built around imperative C-style semantics.

Combine that with the fact that most of the software industry does not care about correctness or stable software and generally lacks professionalism. "Just ship this half-assed software as soon as possible" is the attitude at the majority of software companies.


Software engineering is all about trade-offs and making sure stakeholders are fully informed thereof. Pressure to deliver is one of the most challenging problems an engineer can face, because it stands in opposition to every ideal. Yet it's about as normal as death and taxes.

Haskell sounds amazing. I would be thrilled to learn it, and I hope the ecosystem flourishes. I hope there will eventually be millions of jobs to write code in the language. I'm a little bit envious of those who speak fluently about monads and set theory, and I've learned a lot from brushing shoulders with those people.

Meanwhile, I'll continue solving real-world, extremely stateful problems in an as-purely-functional-as-I-deem-convenient manner with the tools I already have under my belt. You can pry my precious semicolons from my cold, dead, carpal-tunnelled hands.


Haskell is useful whatever your priorities are (unless you have a really low quality requirement, like a script that fits on a single page and gets run only once). If you think of the project management triangle, switching to Haskell gets you a bonus that you can distribute between the points as you wish: you can produce higher-quality code for the same scope/cost/time, wider-scoped code at the same quality/cost/time, code at the same quality/scope/cost in less time, or so on.

IME a lot of Haskell advocates spend this windfall in a way that's poorly aligned to business requirements: we spend it all on increasing the code quality (and perhaps even overshoot, taking more time than users of another language to produce code of the same scope). But that's not an inevitability. (I would speculate that it tends to happen because most people in the software industry claim to value quality a lot more than they actually do, and a lot of Haskell programmers take them at their word).


Hi. I run three startups on Haskell.

One of them is VC funded. We have the stakeholders. We have the pressure to deliver.

Haskell is making this easier, not harder. We can maintain pace as the software grows because the language is generally well-principled, and the compiler keeps us in check rather than us having to rely on human discipline.


I read a StackExchange question basically asking "Why are formal methods not popular?"

A typical response was "Most software isn't building aircraft."

And... after googling for a while, it turns out companies building aircraft don't use formal methods either.


Honestly this is true. Most of the world doesn't give a shit if a page on their web store is broken. They get an exception email and then fix it, no real loss. While switching over to Haskell may make your software more stable, at the end of the day the extra time spent writing it in a more stable language is going to cost the business a lot more than a slightly buggy website will.


Completely false. Many “real world” businesses are shipping web apps in Haskell. Anecdotally, they take less time to write than the equivalent Rails app.


I don't know. I've been slinging Haskell on the side for the better part of a decade and do most of my day to day in RoR (trying to start moving clients over to Elixir/Phoenix.)

Haskell is an amazing language. I would totally buy that Haskell teams probably win in the medium to long-term as the wins you get in terms of support/maintenance/extensibility are pretty obvious.

However, anecdotally, Haskell forces me (and I imagine other programmers) to invest a lot more time up-front in getting the design in order. Haskell punishes an "oh, I'll just hack that out" attitude pretty badly. Which, as I said above, I would completely believe leads to wins in the medium to long term. If I need to bang out an MVP over a weekend, I'm probably not choosing Haskell unless it's well-trodden Haskell territory.

Additionally, while the language itself is amazing, the ecosystem has issues (enumerated in the article.) Tooling sucks, there aren't enough examples of people doing normal things, there are frequently not libraries for basic things, obviously integrations with popular services are lacking, and the list goes on.

When Haskell has a decently mature and actively developed web framework that has reasonable docs, examples, and a not pathetic ecosystem (by the standard of modern web frameworks) I'll happily jump into using Haskell in production. Unfortunately, these aren't things enough of the community seems interested in to have significant movement on.

Servant looks very interesting with regards to what I'm looking for, but I'd be lying if I said I understood the types.


Yesod is plenty mature. There are reasonable docs and examples.

I'm not sure what else to tell you. I run three web businesses on Yesod, and it accounts for 100% of my income.

I'm also not a great programmer, and have a long history of just "hacking things out".

The stuff works, and it's ready to go, today.


I spent 7 months trying to build a JSON API using Yesod with a friend. We had nothing but headaches. It's insanely hard to find example code or search issues when using Yesod. Hardly anything about it exists on Stack Overflow, so any time I had an issue I had to post it on SO and wait a day for someone to answer, which meant I could only do about an hour of programming a day. I ended up giving up, used RoR, and replicated more than the 7 months' worth of work in a few weeks.

There is a very good reason RoR is far more popular than Haskell for web apps. It's just so easy to get started with Rails, and there is a near-infinite amount of information online.


…Seven months? I can't imagine what you were doing for that length of time. It's not that hard[0].

[0]: https://pbrisbin.com/posts/writing_json_apis_with_yesod/


Many web apps might be written in haskell but I assure you many many many more are written in ruby. I very much doubt you will find a haskell developer willing to build your web store in haskell for $15/hour. With rails you can just import Spree, make a few modifications and host it and you are done.


I write Haskell web apps, and I used to write Ruby web apps. You don't need to assure me; I'm well aware.

If you're in the realm of "import Spree, make a few modifications and you are done", then sure, use Ruby.

The products I work on just aren't that generic/trivial.

> I very much doubt you will find a haskell developer willing to build your web store in haskell for $15/hour.

The market is larger than you think. I have hired people before at $15 per hour. I have people working for me now at $23 per hour. Not everything needs to be stupid SV money.


I think Haskell actually has a few qualities that can potentially turn people off at different points, all of which have been mentioned by others:

* Lazy evaluation -- usually becomes an issue at some point if the project isn't trivial.

* Lack of dependent types -- while Haskell's type system is great, this is a missing piece that proves a sore absence for some. It's not very surprising that several dependently typed functional languages (Idris, Agda, Coq) appeared after Haskell and were inspired by it; Idris and Agda are even implemented in Haskell.

* Category theory -- while you can get by plenty well just knowing a few basic abstractions from category theory, it's true that many libraries rely on highly theoretical concepts, and that a solid portion of Haskell development efforts are still tinged with an academic flavor. Just take a look at some of Edward Kmett's coding videos: he typically has one (or several) mathematics/theoretical comp sci papers open in one window, while in the other he's implementing a hugely popular library like lens. This sort of academic atmosphere turns a lot of people off. It pegs the language as a theoretical exercise from the get-go, and plenty of people either don't believe in the value of theory or are far too busy to take the time to dive into how it applies to their situation.

I think the academic veneer around Haskell is its biggest weakness when it comes to adoption. But that said, it's still had more success than a lot of other academic, purely functional languages, thanks to the efforts of core researchers/users/contributors to break it out of the ivory tower. The ideas are good. Almost every other modern programming language has stolen concepts from Haskell at this point, so in some sense, even though it's still somewhat niche, its influence is practically ubiquitous (just take a look at the list of languages it's noted as having influenced on Wikipedia! https://en.wikipedia.org/wiki/Haskell_(programming_language)).

In a broader sense, the functional programming paradigm has proven not only that it's a viable one for industry applications, but often superior to object oriented or imperative techniques when it comes to fidelity of expression and preventing bugs. If nothing else, haskell is great because it provides a fairly rigorous environment in which you can explore functional programming concepts, which you can bring with you to almost any problem and any language and manage to derive some benefit.


> even getting it to parse some complex data coming from a JSON feed into a usable form requires a good understanding of the type system.

This is a good litmus test for the usability of a language. There’s no reason parsing json should be difficult.


It isn't, I don't understand where this comment comes from. You can derive the entire thing without writing a line of parsing code.
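For comparison, here is a rough Python analog of that declarative style, using dataclasses (a naive sketch of my own, not how aeson works: aeson also validates field types at decode time, which this skips):

```python
import json
from dataclasses import dataclass, fields

@dataclass
class User:
    name: str
    age: int

def decode(cls, text):
    # Generic decoder: map JSON object keys onto dataclass fields,
    # so each record type needs no hand-written parsing code.
    raw = json.loads(text)
    return cls(**{f.name: raw[f.name] for f in fields(cls)})

u = decode(User, '{"name": "ada", "age": 36}')
print(u)  # User(name='ada', age=36)
```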


> Is it just the difficulty?

There's loads to unlearn for most programmers, and a perfectly natural knee-jerk refusal to do so. I mean, the more one already knows, the greater the desire to reuse that knowledge. Well, at least I have been there.

> Haskell doesn't, really (from my cursory reading about it).

Of course it does; https://en.wikibooks.org/wiki/Haskell/Mutable_objects looks like a good start.


Cannot speak for industry, so speaking from experience in academia when we were learning functional programming (FP).

The key difficulty there is the paradigm shift in thinking. For most people our thinking matches imperative programming. Ask a person to do something like run an analysis of an accounting book using paper, pen, and a calculator, and you'll see them keep some tallies which they keep updating as they go along. Even if there was a way to convert their work into a method which just involves repeatedly tapping out the numbers into a calculator in a formulaic way, almost every time it's going to be easier to reason about the logic in terms of stored state and progressive steps.

When you get hit with the paradigm of functional programming, which is beautiful when it works btw, you need to switch your thinking from stored-state logic to formulae-driven logic. That's an unnatural shift, and given how poorly early schooling moves our thinking toward that mindset when we do maths, it's a hell of a leap.

Anecdotally we had a couple of pros at math - people who understood the fundamental beauty of math and proofs - and they took to FP like ducks to water. Even when we built a library management system in Haskell.


My theory is that the FP folks would have seen more success had they figured out ways to bring their features to mainstream languages, rather than asking people to adopt wholesale their weird languages (from an average programmer's point of view). For example, why can't I annotate functions as lazy? Swift takes one step in that direction, with lazy var, but why not a lazy func or lazy class? Even the lazy var can't be used for local variables.
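For what it's worth, some mainstream languages do offer piecemeal laziness. A Python sketch using functools.cached_property, which gives call-by-need semantics for a single attribute (computed on first access, then memoized):

```python
from functools import cached_property

class Report:
    calls = 0  # counts how many times the expensive work actually runs

    @cached_property
    def expensive_summary(self):
        # Evaluated lazily, on first access only; later accesses hit the cache.
        Report.calls += 1
        return sum(range(1_000_000))

r = Report()
_ = r.expensive_summary
_ = r.expensive_summary
print(Report.calls)  # 1
```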

People are generally more willing to accept incremental changes than an entirely new way of doing things. For good reasons: they're skeptical, availability of talent, tools, libraries, IDEs, help if they run into trouble...

An analogy is the spread of yoga in the west the past decade or two, because it was easy to do so without changing your lifestyle. If Westerners were asked to fly to the Himalayas for a month to learn yoga, how many would have learnt yoga?


> My theory is that the FP folks would have seen more success had they figured out ways to bring their features to mainstream languages, rather than asking people to adopt wholesale their weird languages (from an average programmer's point of view).

I'd argue this has been happening for years and years now. If you want a pithy saying, you could say that over time languages become closer and closer to Haskell.

Option types, pattern matching, pure functions (see: Vue/React with computed properties, or any other frameworks doing things close to functional reactive programming), type inference (even Java is getting a var keyword!), more powerful type systems (things like the Typescripts of the world have) are now becoming trendy, but Haskell had them years and years ago.

The above is definitely oversimplifying (a short HN comment is no place to get into the fundamentals behind the various kinds of type systems), but I've found all the things I love in Haskell and other FP languages seem to slowly drift on over to other languages, albeit often a little suckier.

(I'm still waiting on software transactional memory to become mainstream, though. Of course, in languages that allow mutable state and have no way to mark a function as side-effecting, you're never gonna get anything quite as nice as Haskell's. Oh well.)


I'd bet a lot of this has resulted from improvements in computation power. It used to be you couldn't afford to abstract your hardware away in a lot of real-world cases. Whereas now we can afford the programmer benefits of immutability, laziness, dynamic types, and first-class functions. Static things like var can probably be attributed to the same advancements happening on developers' workstations.


This is a good point which I hadn't thought of, but I think there's a lot more opportunity to bring concepts from FP to mainstream languages. For example, why don't imperative languages have an immutable keyword to make a class immutable, given how error-prone it is otherwise? https://kartick-log.blogspot.com/2017/03/languages-should-le...


> over time languages become closer and closer to Haskell

That's a statement worthy of highlighting, supported by examples below it. I'll remember that as I continue to study more languages.


Python has tons of functional features [1] while not being a purist FP by any means. This article liked to say quite often that "X would be impossible in Go or Java!" and I kept thinking, "I bet it's possible in Python, just not as academically wonderfully as you were hoping".

As it turns out, Python continues to be one of the top three languages in terms of popularity. It's not clear if this has helped feed a bigger userbase into purist FP languages though.

[1] https://docs.python.org/3/howto/functional.html


This seems to be the path that the C# language team is on, one step at a time, boiling the frog slowly.


I don't like these posts (for any language, not only Haskell). Invariably they will list many points, most of which are subjective or tangential.

There are one or two core points (here I guess it is "Control Flow") that would benefit going into much deeper. I think, you'll find these points aren't as solid as they appear (cf. other threads in the comments).

Using Haskell is fine. You don't have to justify it. It's enough to like it, being comfortable and productive with it.

Own your gut. Don't make pseudo-truth argument lists to justify your decisions.


I agree with you. It's sufficient to like using a tool and be productive with the tool to justify using it.

But sometimes, solid engineers who are productive with certain tools are working at the behest of unenlightened managers. Articles that say "BigCo uses X for Y because of Scientifically Sounding Reasons" help those engineers use the tools they like and are productive in.


Seems to be the same reason Facebook used it and one of the well-worn areas that Haskell has proven itself (language parsing/analysis).

The section about their day-to-day experience of programming serious software with Haskell compared to other languages is really interesting though.


> it's worth mentioning that Semantic, as a rule, does not encounter runtime crashes: null pointer exceptions, missing-method exceptions, and invalid casts are entirely obviated, as Haskell makes it nigh-impossible to build programs that contain such bugs.

This I think is the real drug of Haskell.

There are lots of challenges involved with moving to Haskell for production software, and the article notes some of them. But once your code builds and you ship -- man, there is nothing remotely like it. v1.0 software running for weeks in prod without a bug.

Then, the super-sauce is: maintainers don't break your already-working code. Sure new code might have bugs, and deep semantic changes to code can break anything, but workaday fixes to one corner of the codebase simply can't break all the other corners. v2, v3, v4 all migrate to production with none of the working stuff falling over.

PS I don't buy that the "control flow" feature couldn't have been done in Java -- however I would bet huge dollars that you couldn't do it in a way that wouldn't require massive investment in maintainers understanding a complex and subtle data-driven control flow pattern that any single dev could easily subvert, intentionally or not. I was a huge fan of FP-style coding in Java, but it required buy-in from all maintainers; as soon as somebody got lazy and went back to mutability etc there was nothing to stop them.


There's another "real drug" hidden in the subtext of your post and Semantic's post, right in between

"Sure new code might have bugs, and deep semantic changes to code can break anything..."

and

"Its language features allow concise, correct, and elegant expression of the data structures and algorithms we work with."

- the notion that fundamental abstractions (not just the ones somebody thought up, like the Gang of Four patterns) compose well. This is something that we've seen the industry slowly migrating towards in the form of functional JS flavors like TypeScript and PureScript, Java's lambdas, Scala's Cats, and so on.

The ability to refactor and as importantly be confident that your refactor is not messing with the semantics of anything upstream is a hidden feature of static types applied to expressive abstractions.

So far, I have only seen this achieved with Haskell's libraries, type system, and reliance on fundamental mathematical concepts while also being able to avoid any "there's something rotten in Denmark" moments. The content in the Hackage ecosystem is a wonder of the modern programming world, warts and all.

There are many rough edges to the development process, as always, but at least in Haskell you can limit the language's contributions to that edgeset. It actually makes you want to produce good code and be a better developer as a result!


Apart from being a great language, Haskell also has a great community. I recommend learning the lang and interacting with the people (irc, #haskell on freenode for example). :)


"Semantic" looks pretty neat. Are there any previous threads or announcements about GitHub's goal in developing it (i.e. services for which they plan to use it)?


I saw on Twitter[1] that they're using it to show which methods/functions changed in a pull request[2].

[1]: https://twitter.com/rob_rix/status/1134537990095720450

[2]: https://github.blog/2017-07-26-quickly-review-changed-method...


> Editor tooling is sub-par (especially compared to language communities like Java and C#) and finicky - we often end up just compiling in a separate terminal.

https://github.com/ndmitchell/ghcid is a great tool that does exactly the last part.


This week I needed to do some crazy validation in C#. I wrote a bunch of plain imperative code. Then I thought I could use applicative validation with LINQ.

I tried to write Maybe, Either, and Validation, and wanted to extract an applicative and monadic interface. Then I found there are huge pitfalls and difficulties in doing it properly.

I also tried the same thing in JavaScript; while it's not a safe language, the equivalent idea is quite easy to express.

The thing I found disturbing is that typical statically typed languages limit our expression of imagination. Every now and then I work on a Java or C# project, and when I tried to express some abstraction I would mostly face difficulties. When I tried to explain those ideas to colleagues, they mostly hadn't ever thought about those things.

On the other hand, the dynamic nature of JavaScript (and of course Ruby, Python, etc.) makes things easier to express and new ideas easier to try. I can explain new ideas to dynamically typed language programmers much more easily. There's definitely a "box" for programming languages which prevents people from thinking "out of the box".

A lot of people tend to think of Haskell as a Perlis language. And I think statically typed languages without a flexible design are Anti-Perlis languages.


This is an interesting perspective. I write Haskell for fun, and sometimes for profit. The first language I wrote professionally was Perl 5, so I definitely understand what you are saying about dynamic types. However, in practice I have come to the opposite conclusion. When I use a dynamically typed language, I can't escape the looming fear that I am creating a mess for myself that, if I want to use it for anything non-trivial, I will have to clean up later. In Haskell, I find it very freeing, because I can keep writing, and as long as the types work out, the "mess" is contained and my play code has a much better path to maturing.


It sounds like you're trying to judge Haskell based on your experience of Java and C#? Don't do that. The Java/C# type systems have substantial limitations, but that's not a problem of type systems in general, it's a problem of C++-family languages specifically.


On the contrary, I'm not judging Haskell and Haskell is one of the most Perlis languages to me.

I'm judging Java and C# etc (statically typed but not flexible) are Anti-Perlis because it draws too many imagination boundaries to their users.


The control flow section perplexed me a bit: just because you can embed a DSL in Haskell with monads doesn't mean that "control flow isn't embedded in the language". You still have a main entry point and all functions are executed top to bottom, albeit with lazy semantics. You could write a DSL and interpret it with C# with all of the properties they want. Haskell is better for this sort of task, but their reasons seemed off. Am I misunderstanding that section?


Using C# as an example, foreach and IEnumerable are baked into the language as is try/catch and more recently Async/Await. These are all just library functions in Haskell and often more general (e.g. forM in Haskell works for any Monad not just IO). Because they are library functions they can be changed/customized very easily. In C#, how could foreach be made to support a different stream type, exceptions be made checked or Async/Await be made to suspend across stack frames?

For this reason, Haskell is probably closer to a general purpose language than many of the imperative systems programming languages (especially Go).


C# has LINQ. Implement Select and SelectMany extension methods for whatever you like, and you can use the LINQ syntax with your type just as easily as with IEnumerable. Foreach and async/await are baked in, that's true, but the LINQ syntax is easily extendable to new use cases.


I wonder why this is?

https://www.infoq.com/interviews/erik-meijer-linq/

(from "3. How does LINQ work?")

What we have done, and what the mathematicians call Monads, we have identified these sets of operations, we call them standard query operators; we have a list of about 25 standard operators that you can apply to any data model.


> all functions are executed top to bottom albeit with lazy semantics.

No? There is no "top to bottom" execution, that's the whole point of the lazy semantics.


My point was that you still call functions and return from them, you have conditionals, and you still have a call stack with lexical scoping (and Main). You have a very similar control flow model compared to any other language. The other comment explained pretty well what they meant by having the control flow not being dictated by the language - with the expressiveness of haskell’s core language you can dictate your control flow for your embedded dsl.


Request change to

Why Github uses Haskell for its newly released Semantic package


Why?


Because in its current form the subject and verb don't agree.


Only in American English. Groups are plural in British English. "The police are corrupt" vs "the police is corrupt".


I have no idea if what you're saying is generally true, but I can say with certainty that no one would ever say "the police is corrupt" in American English.


A better example might be a British soccer club, e.g. Chelsea are corrupt.


The police force are corrupt vs the police force is corrupt.


That second example is not valid in any English.


It could be valid if the subject is a British, new wave band, but otherwise, I agree with you.


Good try but an errant example. American English treats "police" as a plural noun (although maybe not always in Baltimore).

In the British style, "Microsoft have released a press briefing"; in American, "Microsoft has released a press briefing."


> Semantic is a singular project and we often find ourselves at the edges of modern computer science research.

This is exactly where I DON'T want to be when creating production software. If that's thrilling to you then by all means. As for me, I like my software boring.


I don't understand why they didn't use Java with ANTLR. It can also generate parsers in many other languages, but Java version supports more advanced stuff.

There's already parsing support for many languages, and the parser itself is world-class. It's used internally by tons of systems. https://en.m.wikipedia.org/wiki/ANTLR#Projects

I mean, I guess Haskell is cool, but their criticism of Java sounds like more of a design choice than a deal breaker. Did they really need to reinvent this wheel in an unpopular language where almost nobody can reuse/improve their work?

I don't mean to be dismissive, but when your plans are to open source something corporate sponsored, you should do it in a way that benefits the community significantly. There's 20 other languages they could have chosen that fulfilled that objective better


> I don't mean to be dismissive, but when your plans are to open source something corporate sponsored, you should do it in a way that benefits the community significantly

This is a super weird objection to me; is your argument that open sourcing something that might not be useful to others is worse than just keeping it closed-source? Even if literally nobody else ever gets any use from this, I don't see why it's harmful for it to be open sourced. More generally, I feel like companies releasing everything that they don't have a strong business reason not to as open source is strictly positive.


More that if you're going to open source something that's widely useful, you should write it in a common language.

Otherwise the public benefit of open sourcing it is less. In this case, the open source release is barely more useful than shipping binaries. It's unlikely that many will be able to integrate it into existing systems, or even have the Haskell skills to work with it.


I guess I just have a fundamentally different view on open source than you. I don't see anything wrong with a company writing a tool in the way that is best for them given their current circumstances (e.g. skills of the team working on it) and then open sourcing it. The team that wrote this obviously felt that Haskell was their best choice for this project, and I don't begrudge them for open sourcing it just because it might not be useful for others.


> given Go's lack of exceptions, such a feature would be entirely impossible.

Haskell is a nice programming language, but ultimately if a program written in language A can run on your computer, a program can be written in language B that can run on your computer and do the same things.

If you think "return foo, err" is a lot different than return "Left foo" or "Right err", then you might want to think more about how you think about computer programs.


"return foo, err" can return 4 kinds of responses:

- foo, nil

- nil, error

- nil, nil

- foo, error

What's the right behavior when the function returns both a result and an error? What's the right behavior when it returns two nils? At best, you make everybody agree to language conventions to prevent that from happening.

In Haskell (or even Swift and Kotlin with their optional types) the compiler guarantees those cases cannot happen. In Go, you just hope they never happen because almost no code handles them.

I code in Go every day. But I can't defend its decision here. I'd much rather have a richer type system where the compiler can guarantee that I'll get an error or a result, but not both or neither.
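A sketch of the difference in Python (a hypothetical Result type of my own, not any particular library): the (value, err) tuple convention admits all four states, while a sum type admits exactly two.

```python
from dataclasses import dataclass
from typing import Generic, TypeVar, Union

T = TypeVar("T")

@dataclass
class Ok(Generic[T]):
    value: T

@dataclass
class Err:
    message: str

Result = Union[Ok[T], Err]

def parse_port(s: str) -> Result:
    # Exactly one of Ok or Err is returned; "both" and "neither"
    # cannot be expressed, unlike with a (value, err) tuple.
    if s.isdigit() and 0 < int(s) < 65536:
        return Ok(int(s))
    return Err(f"invalid port: {s!r}")

print(parse_port("8080"))  # Ok(value=8080)
print(parse_port("http"))  # Err(message="invalid port: 'http'")
```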


I think you just treat an error being returned as meaning the value should be ignored, which is what 99.9% of Go programs do.

As for two nils being returned, I think it's reasonable for a program to run successfully without returning a result. Search for some work to do, return a list of tasks -- if there are none, the list is empty (nil) but there was no problem checking, so there is also no error. I don't see a problem.

As for the compiler checking, try running "func f() (value, error)" like "foo := f()". It blows up. What you do with the error is up to you; all Haskell adds with the Error monad is that it short-circuits to the end of the do {} block with the error value. No different than a Java exception; all of the upsides, all of the downsides.

Regardless, I stand by my original point that if you want to handle runtime errors, Haskell doesn't really add anything over any other language. Semantic has cases that handle errors. So would the program written in any other language.

The author should have just said, "I wrote it in Haskell because I felt like it" instead of making up reasons that simply aren't true.


If you think those two things are the same, you're just in the wrong. This isn't an opinion thing - I can easily prove that the Haskell way is strictly more precise than the Go way!

Not all Turing complete languages are the same. Curry-Howard shows that. Haskell's type system subsumes Go's and allows you to express richer propositions about program behavior.


If you think x * y is a lot different than x + y, then you might want to think more about how you think about algebraic expressions, yeah?


implement x * y with no * using only +

    ans = 0;
    for (i = 0; i < y; ++i) { ans += x; }  /* assumes y >= 0 */
Turing completeness is a thing. For a particular data transformation, language A might be easier to write than language B, language C easier to read, language D easier to maintain, language E less prone to a latent bug. And language F might be brainf&^k. As soon as you think about it as a data transformation, and note that you can write an interpreter in any of languages A, B, C, D, E... to execute any other's source code, it becomes obvious.


Turing Completeness is an important and interesting thing. It doesn't have very much to do with fitness-for-purpose of a programming language. Of course you can write * using +. That doesn't mean using one mightn't be significantly more appropriate to the problem at hand, or that returning error codes in a product type isn't the wrong call.


This is a discussion of the phrase "such a feature would be entirely impossible".

You're now explaining to me in response to my previous post that some languages are a better fit in certain dimensions to perform a data transform? I think this is a discussion that needs to end now.


I'd been addressing the snarky finishing line. The feature that was called "entirely impossible" was resumable exceptions. The difficulties are practical not theoretical, but it's not the difference between (r, err) and (Either r err).


Needed to stop but didn't for which clearly it is all my fault because someone is wrong on the internet!! Being snarky about features being only possible in $language is one of the few entirely appropriate uses of snark and you don't object to snark per se, do you?

Which feature of this discussed data transform is called "template meta-programming?" Excuse me, my bad, is called "resumable exceptions?" Or are resumable exceptions a language feature/idiom used with a particular choice of language to perform a data transform? Really. "The feature of using C++ is impossible with a different language selection." Come. On. It must rankle. It must.

Maybe this isn't just yet another "any criticism of a language boosterism piece must be met with fire and fury because I've invested time and want career payoff." Yeah maybe there's something else going on here but I genuinely can't see it. And it's a pattern also visible with rust boosterism articles (and I like rust too!) and clojure (again, it has some merit!) and even Haskell. Yeah, Haskell is fun! (No! Silence the Heretic. The emperor's threads are fine indeed! "Let me repeat what you just said but with me explaining it to you to show how stupid you really are." Yeah well I know I'm pretty stupid, often, but not so stupid as all that, yeah?)


As far as I can tell, no one said resumable exceptions are only possible in Haskell. That would be pretty silly, as they didn't even originate there. In fact, they indicated it would probably be possible-but-awkward in Java. What was stated was that it wasn't possible in Go, and that seems to be correct, and has nothing to do with Haskell's choice to return (Either a b) where it's appropriate instead of (a, b).

It's certainly the case that whatever you wanted to use resumable exceptions for is very probably plenty possible in Go. But no one denied that. It's also possible to implement a language that's like Go but has resumable exceptions, either in Go or by modifying the Go compiler, but not only is that a huge project (which may permit "entirely impossible" as hyperbole) but the result is quite arguably not even Go anymore.


Pedantry is unhelpful. Technically this project could be written in assembly. Have fun doing that.


The point isn’t that the software couldn’t be written in Go; it’s that the language feature that makes that particular problem easier to deal with doesn’t exist.

You could write this tool in Go, but it would have an entirely different flow.
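To make that "different flow" concrete: the essence of a resumable exception is that the handler runs at the failure site and can supply a value that lets the computation continue, rather than unwinding the stack first. Here is a minimal sketch of that idea (all names here are hypothetical illustrations, not from Semantic):

```haskell
-- A failure an interpreter might hit while evaluating code.
data EvalError = Unbound String
  deriving (Show, Eq)

-- The calling context passes in a handler. On an unbound variable the
-- handler may abort (Left ...) or supply a substitute value (Right ...),
-- in which case evaluation resumes as if the lookup had succeeded.
evalVar :: (EvalError -> Either String Int)  -- handler chosen by the caller
        -> [(String, Int)]                   -- environment
        -> String                            -- variable name
        -> Either String Int
evalVar handle env name =
  maybe (handle (Unbound name)) Right (lookup name env)
```

With try/catch, the handler runs only after the stack has unwound, so there is nothing left to resume; here the handler's result flows back into the evaluation, which is the policy/mechanism separation the article alludes to.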


Left err and Right foo alone do not act like exceptions: you need more machinery for that, which Haskell supports and encourages. When you define this in terms of a monad (not just a data type that has the same shape, but an instance of a reusable class of types) you can provide special syntax for it, which Haskell does. If a language claims to have "monadic exceptions" without a type class for monads and some reasonable support for working with them, you should frankly question whether the people who designed the language understood what monads were in the first place. In Haskell, the Monad instance for Either lets you use the generic do notation to achieve something with the feel of exceptions but with fully programmatic behavior; languages like Go and even Rust seem to want the downsides without gaining the benefits, which is just depressing.
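As a rough illustration of that last point (a sketch under my own names, not code from Semantic): because Either has a Monad instance, do notation short-circuits on the first Left, which is what gives it the feel of exceptions without any dedicated control-flow keyword:

```haskell
-- Parse a non-negative age, failing with a descriptive message.
parseAge :: String -> Either String Int
parseAge s = case reads s of
  [(n, "")] | n >= 0 -> Right n
  _                  -> Left ("bad age: " ++ s)

-- Each step may fail; the first Left aborts the rest of the block,
-- just as a thrown exception would skip the remaining statements.
meanAge :: String -> String -> Either String Int
meanAge a b = do
  x <- parseAge a
  y <- parseAge b
  pure ((x + y) `div` 2)
```

The "fully programmatic" part is that Left is an ordinary value: a caller can inspect it, transform it, or feed it back into further computation, none of which requires the non-local jump machinery of try/catch.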



