I used Java, C and C++ before coming to Python. Unlike theirs, Haskell's type system is complete -- even a function of functions can specify exactly what kinds of functions it uses for inputs and outputs. That makes higher order programming much safer -- and higher-order programming might be the best way to move fast.
Haskell is also astoundingly terse. In Java and C my data type declarations were too long to fit on a page, full of redundancies and boilerplate. In Haskell, if you want to make, say, a data type that is either an X or an O (suppose you're writing tic-tac-toe), you can do it in four words: 'data XO = X | O'. (Notice that there's not even a natural way to do that in Java or C, because they don't have sum types; you'd have to make a type with a flag indicating whether it is an X or an O. That gets really complicated if the two cases are supposed to carry different data -- but in Haskell, if the X is supposed to carry a float and the O is supposed to carry a string, you just add two more words.)
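For concreteness, here is the four-word declaration, plus a sketch of the variant where the constructors carry data. The XO2/X2/O2 names are mine, just to keep the two types distinct in one file:

```haskell
-- The four-word declaration from the paragraph above:
data XO = X | O

-- A variant where X carries a Float and O carries a String.
-- (XO2, X2, O2 are hypothetical names to avoid clashing with XO.)
data XO2 = X2 Float | O2 String
```

In Java you'd model XO2 with a tag field plus two nullable payload fields, or an abstract class with two subclasses; here it's one line.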
Pattern matching also helps with terseness. I don't have time to give it the treatment the last paragraph got, so I'll leave it at that.
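A small example of what pattern matching buys you (the `describe` function is hypothetical): each equation handles one constructor, and with warnings enabled GHC will tell you if you forget a case.

```haskell
data XO = X | O

-- Pattern matching dispatches on the constructor. Compiling with -Wall
-- makes GHC warn about any missing case (e.g. if you drop the O equation).
describe :: XO -> String
describe X = "a cross"
describe O = "a nought"
```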
Purity keeps you from tripping up on IO-related errors. It lets you be much more certain that things are working. It also forces you to keep the IO in a thin top-level layer of your program, which might sound like a pain but once it feels natural you'll find yourself moving faster than you could before.
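A toy sketch of that layering, assuming nothing beyond the standard library: all the logic lives in a pure function, and IO is confined to a one-line main.

```haskell
import Data.Char (toUpper)

-- Pure core: no IO, so it's trivially testable and can't fail on I/O.
shout :: String -> String
shout s = map toUpper s ++ "!"

-- Thin top-level IO layer: read stdin, apply the pure function, write stdout.
main :: IO ()
main = interact shout
```

In a real program the pure core would be most of the codebase and main would just wire it to files, sockets, and so on; the shape is the same.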
To be sure, Haskell has features that I don't use. But purity, sum types, pattern matching, and the unusually rigorous (it's complete!) type system are all critical to its value to me.
That complicated type system with type inference does come with its own costs, usually in the form of slow compile times. That's what I've noticed in languages like Swift, Rust, Scala, and Haskell.
I'm currently dealing with it in a large Swift project, and at this point I would much rather go back to the extra verbosity of Objective-C than keep type inference.
The type system is probably not the bottleneck in any of those cases. As a sibling comment points out, OCaml has very good compile times, and its inference problem is basically the same as Haskell's.
In the case of Rust, I suspect one of the biggest issues is the way parametric polymorphism is implemented: monomorphization. If your program ends up using, say, Box&lt;usize&gt;, Box&lt;MyType&gt;, and Box&lt;Result&lt;String&gt;&gt;, you're compiling Box's code three times.
I don't know enough about Swift to hazard a guess as to where its builds spend their time.
My experience with Haskell is that compile times are neither great nor terrible.
Check out OCaml (BuckleScript or ReasonML on the frontend, depending on which syntax you prefer). It has a super-fast compiler with almost total global type inference; the Facebook Messenger team reports incremental builds of under a second.
Data point: I'm working on a several-hundred-thousand-line server in Scala. Incremental compiles usually take 2 or 3 seconds; a clean build of the project is around 2 minutes.
I've only used Swift at that size, but I've heard similar stories about all of those languages once the codebase gets large.
I vaguely remember reading that at a Haskell conference people were basically cornering the compiler maintainers about compile speed, but that was several years ago.
When I have to recompile, it's usually because I've only modified a few files, and it's usually really fast. I particularly enjoy that if I refactor things without actually making any changes in the way it works, GHC won't blink; it knows there's nothing to do.
Yeah, the codebases I'm talking about are around 1 million lines across several apps and libraries, plus all of the tests and the codegenned models, mocks, and network services.