I'm learning Elm now and I'm really liking the syntax, to the point that other languages feel rather cluttered to me now.
The more I play with types and learn to leverage them, the more I appreciate their power (yes, I'm late to the game), so making this statically typed is very interesting.
However, there seems to be a saturation of new languages, and I'm not sure there are enough eyeballs left for a new language that doesn't have large corporate backing (FB, Google, Apple) or doesn't happen to arrive at the perfect time with the right set of answers. Maybe BEAM, ML/Elm syntax and static typing is what everyone else is looking for.
Edit:
Video posted today of the creator of Alpaca (Jeremy Pierre) giving a talk at Erlang Factory. It gives a nice overview of the state of the language - https://www.youtube.com/watch?v=cljFpz_cv2E
What's hard is cracking into the very, very top tier, the C++, C#, Java, etc. tier. I am also increasingly of the opinion that it simply takes massive corporate backing to get to that level, based on the observation that I haven't seen anything get to that level without it. Python's the only one that has arguably gotten there, I think, and it's still debatable.
That said, I do think that if you want to make a new language right now and really see it take off, you do need to find some problem that isn't well-solved, or come up with a reeaaalllly novel combination of things that didn't exist well before. It seems to me that this project is going to be shadowed by Haskell in a lot of ways.
But that's only if you want to see it take off. Not all languages are put out there with that intent.
Seems a bit contradictory. Which is it, corporate backing or novel ability?
I would like to see language advantages better quantified. How confident can we be that a language is a practical improvement, in what contexts does that hold, and what do the improvements buy us in cost, quality, innovation, etc.?
If we had all this data for a new language it would probably be easier to gain critical mass.
No one is funding these studies, so I wouldn't hold your breath. It is down to you to decide for yourself.
As an anecdote, I switched to Haskell professionally 5 years ago and am both happier and more productive.
Interesting. Would you say it's the most productive language you've ever worked in?
What's your best guess as to how well this would apply to developers in general?
Once I used a toolchain with a steep learning curve, and I felt the rewards were clearly worth it. However, with that particular team it was difficult to get buy-in. It seems not everyone is interested in a little pain for a lot of gain, especially if the concepts are very different.
For me personally it is the most productive and expressive language I have worked in.
There is a steep learning curve, which will make one a better programmer, but not without significant buy-in. There is no free lunch.
Haskell is very expressive with its types, especially with regard to when effects happen, which makes it excellent as a shared design language. It's interesting for me to see Java/C# programmers struggle to explain some of their more modern stream abstractions to each other:
The answers above are unable to explain succinctly what the APIs are doing, because the authors lack the necessary common language. They have to answer with wordy essays describing various scenarios and use cases.
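For a concrete taste of what I mean, here's a minimal Haskell sketch (the example is my own, not from those threads). The type of traverse, (a -> f b) -> t a -> f (t b), already is the succinct answer that those posts need paragraphs of prose for: map a fallible or effectful step over a collection and combine the results as a whole.

```haskell
import Text.Read (readMaybe)

-- Pure: the type alone guarantees this can have no effects.
parseAge :: String -> Maybe Int
parseAge = readMaybe

main :: IO ()
main = do
  -- traverse maps a fallible step over the list and combines the
  -- results: all succeed, or the whole thing is Nothing.
  print (traverse parseAge ["41", "7"])  -- Just [41,7]
  print (traverse parseAge ["41", "x"])  -- Nothing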
Not the OP, but I don't think Haskell will ever be a drop-in replacement for mainstream langs for cultural reasons, but that doesn't mean that those who do engage with it don't get tremendous value from it or fail to find it their favorite language (i.e., it doesn't contradict what willtim said). And I think that's OK. (No, you're probably not going to convince a bunch of rubyists to use Haskell.) You can even train entire teams of willing Haskell developers from scratch, if needed. But the desire/buy-in has to be there. IMHO anyways.
Part of the problem is that dev is so large that it is hard to make "in general" statements anymore.
It's easier to talk about concrete/specific instances or cases.
However, running these on more than one core is still a problem. OCaml 4.05.0 should have infrastructure for that (although OCaml multicore has been somewhat of a `Duke Nukem Forever` story).
Compared to Go, OCaml is unfortunately a rather large language. It has many non-orthogonal features, some of which are not used widely. The impression I get from Go programmers is that the small size of the language is one of the chief attractors.
> Compared to Go, OCaml is unfortunately a rather large language.
I agree. That said, ML is definitely a small language like Go, without OCaml's extras like the object system.
Alas, ML lacks Go's awesome and very modern standard library, which is a key part of Go's allure.
But yes, I would adore a functional language with Go's best features, particularly the standard library, solid concurrency, simplicity/ease of learning, fast compiles, static binaries, etc.
There's also Standard ML, which is smaller and fully specified, with multiple implementations.
But I think part of the problem with both is tooling, build and dependency tooling in particular. Opam was a good step in the right direction, but I think OCaml and SML could both benefit from a Cargo-like tool that made managing projects and their dependencies simpler.
Some of its parallelism and concurrency features should look familiar to you (it has an M:N threading model), complete with channels, and some stuff you've probably never heard of, like STM (see the sketch below). It compiles to a binary, has a type system much more powerful and expressive than Go's, and the community is very helpful.
I will say the compile times aren't very speedy. I assume you want fast compile times in order to type-check your code, and for that there is ghc-mod.
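Since STM tends to be the unfamiliar one, here's a minimal sketch of why it's nice (a toy example of my own, using the stm package): transactions compose, the runtime handles retries, and there's no lock ordering to get wrong.

```haskell
import Control.Concurrent.STM

-- An atomic transfer between two shared counters, with no locks.
transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to amount = do
  balance <- readTVar from
  check (balance >= amount)         -- blocks/retries until funds exist
  writeTVar from (balance - amount)
  modifyTVar' to (+ amount)

main :: IO ()
main = do
  a <- newTVarIO 100
  b <- newTVarIO 0
  atomically (transfer a b 40)      -- the whole transaction is atomic
  readTVarIO b >>= print            -- 40
```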
That's one of the things I love best about Go. Compile for your platform, then copy the binary somewhere else and run it.
Awesome.
I really want this to be true for Haskell too, but there's a glaring exception with libgmp. Google "haskell libgmp" for many stories of people thinking they could just copy their haskell program to a new system and run it, only to realize they were wrong.
I'm sorry, I don't understand; the results seem to be about people having difficulty installing GHC, not deploying a binary. I can say anecdotally I've never had any problems.
edit: Ah ok, apparently libgmp is dynamically linked in binaries, but you can pass a flag to GHC to statically link all runtime dependencies. Is that what you were talking about?
It's significantly faster than before for type checking etc. during development, which I assume is the point at which most people complain about compile speeds.
Rust doesn't have fast compiles, and I think it's hard to argue that a language without TCO is a functional language. Recursion is a critical part of the functional paradigm.
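To illustrate the TCO point with a sketch (Haskell here, but any language with proper tail calls works the same way): this loop recurses a million times in constant stack space, because the recursive call is in tail position. Without tail-call elimination it would overflow, and you'd be pushed back to imperative loops.

```haskell
{-# LANGUAGE BangPatterns #-}

-- Tail-recursive sum of 1..n; the bang keeps the accumulator strict.
sumTo :: Int -> Int
sumTo n = go 0 1
  where
    go !acc i
      | i > n     = acc
      | otherwise = go (acc + i) (i + 1)  -- tail call: no stack growth

main :: IO ()
main = print (sumTo 1000000)  -- 500000500000
```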
Also, I think a lot of people are attracted to Go because it's very simple to learn and use. Rust with its borrow checker is definitely not simple to learn and use.
That simplicity comes at a cost. The cost is duplicate code, less strict error handling and the billion dollar mistake.
I know that some people downplay the importance of these things; I find that because Rust has strong guarantees in these areas it helps to reduce work, reduce bugs and increase confidence in the software.
And use 'cargo check' during dev for faster compiles.
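To show what those stronger guarantees look like in code, here's a small sketch of my own; it's written in Haskell, but Rust's Option/Result give the same guarantee. Failure is a value in the type, so a caller can't silently ignore it the way a nil pointer or an unchecked err can be.

```haskell
import Text.Read (readMaybe)

-- The error case lives in the return type, not in a nullable pointer.
parsePort :: String -> Either String Int
parsePort s = case readMaybe s of
  Nothing -> Left ("not a number: " ++ s)
  Just n
    | n < 1 || n > 65535 -> Left ("out of range: " ++ s)
    | otherwise          -> Right n

main :: IO ()
main = do
  print (parsePort "8080")  -- Right 8080
  print (parsePort "http")  -- Left "not a number: http"
  -- To use the Int at all, a caller has to match on Right, so the
  -- failure branch cannot be forgotten.
```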
> That simplicity comes at a cost. The cost is duplicate code, less strict error handling and the billion dollar mistake.
In Go, I agree. But that's not necessarily the case for all simple languages. Take ML, for example. It's a very small, easy-to-learn language with excellent abstraction features that make it easy to avoid duplicate code, as well as excellent static checking and error handling.
Unfortunately, ML lacks a comprehensive, modern standard library like Go has.
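A tiny sketch of the kind of abstraction I mean, written in Haskell as a close ML relative (the same idea applies across the ML family): one polymorphic function replaces a copy per element type, and the empty case is surfaced in the result type rather than hidden.

```haskell
import Data.List (foldl')

-- Works for any ordered type; no per-type duplicates needed.
largest :: Ord a => [a] -> Maybe a
largest []       = Nothing
largest (x : xs) = Just (foldl' max x xs)

main :: IO ()
main = do
  print (largest [3, 1, 4 :: Int])     -- Just 4
  print (largest ["go", "ml", "elm"])  -- Just "ml"
  print (largest ([] :: [Int]))        -- Nothing
```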
> enough eyeballs left for a new language that does not have a large corporate backing
It's a solid point if the goal is winner-take-all style competitive victory. But I'm not sure software should co-opt the SV-startup-business exponential growth-or-die mindset. What happened to hacker culture? Are open source developers corporatist now?
The point I was making is for something to get enough traction, so that it would get active contributors who help mature the language, tools, etc.
I think there are only a handful of people out there who can contribute in a meaningful way to a project like this. If they are consumed working on open-source Swift, doing pull requests on the many things pushed by FB or Google, or contributing to existing projects like GHC, then the Alpaca project won't get the contributors it needs to show progress. If there is no progress, it falls into a vicious circle: no progress -> no traction -> no contributors -> no progress.
The good news is that you don't need that big of a community for a language to do well. It does, of course, need to be big enough, but you don't need to compete too hard with the big corporation-backed languages to have your language community grow enough that it can sustain itself.
Of course, I guess I don't have any real data on it, so this is just my intuition based on observing various languages. So, you know, just my 2 cents.
Open source is massively corporatist. Developers working for $0 to create tool chains that other developers spend their weekend learning so that the shareholders of their employers can increase their wealth.
There does seem to be a significant push to contribute to open-source at many large companies. Having money and people contributing as part (or all) of their day job can be quite a boon for projects.
I think Elm is great, but the language designer has really crippled it IMO by not including abstractions for types. Something analogous to typeclasses would be hugely beneficial. As it is, the language gets around this by making some built-in things magic; I found this frustrating after a while.
I've been trying to crack Haskell for a while, but didn't find it approachable at first. Somehow Elm and building simple web apps made grasping FP easier.
So now that I've learnt a bit of Elm, I find I can grasp Haskell more and spend a bit of time playing with it. The best part is, after reading this [0], I'm finally grasping Monads.
I learnt more about Haskell in ten days of using Elm than I would have in another ten days of studying Haskell, personally.
I think Elm makes a better introduction to FP concepts because there's much less you have to absorb before you reach the point where you can start practicing by doing useful work. Obviously part of that is the fact that Elm removes or hides certain things Haskell has, but an even bigger reason is that you can just say "...and then 'main' returns the HTML element or Html.program that actually gets displayed" and not have to go down the road of IO actions, functors, etc. You can stop at that point and start making working useful applications while getting comfortable with immutability, purity, the type system, and control flow and iteration under those constraints.
Learning Haskell first, you don't have that opportunity to stop and start practicing. You need to move on and understand at least IO actions, functors, applicatives, typeclasses, and other higher-level concepts before you can construct even a simple practice project. Dreaming up a coherent program structure/flow in this weird new immutable and pure world seems hard enough to a beginner without also having to understand how applicative functors fit into the equation. Having that opportunity to stop and stretch your legs by actually doing a project is a major help to a lot of people; that's what makes all the difference. And then 95% of what you've learned transfers directly into Haskell.
I would recommend having a look at Purescript too. One of the very nice things about Purescript is that it generates very readable Javascript. One of the things I was always wondering about is how exactly tail call optimisation works. Looking at the output of Purescript for about 10 seconds answered any questions I had.
Also, though it has been posted here before (and I don't think it has been worked on for a while), I recommend playing with the Monad Challenges[0]. Well, specifically, just do set one (random numbers). You can easily write your own rand function that returns the seed value as a "random" number and then increments the seed. This will generate successive integers (1, 2, 3, 4...). It makes it very easy to test. Then once you've done set one, go back and write map, apply, etc. for Gen. One other nice thing you can do is to make a union type/ADT for your random number (i.e., (Int, Seed)) and then try to see if it is a functor and applicative (also try to understand why or why not). Finally, you can figure out why Gen is structured the way it is.
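If it helps, here's roughly where set one leads, as a Haskell sketch (the type and function names are my own, not the challenge's):

```haskell
-- A generator threads a seed through and produces a value plus the
-- next seed.
type Seed = Int
type Gen a = Seed -> (a, Seed)

-- The deterministic "rand": return the seed as the value, increment it.
-- Easy to test, since it yields 1, 2, 3, ... from seed 1.
rand :: Gen Int
rand s = (s, s + 1)

-- map for Gen: transform the value, pass the seed through unchanged.
genMap :: (a -> b) -> Gen a -> Gen b
genMap f g s = let (a, s') = g s in (f a, s')

-- Sequencing for Gen: run one generator, feed its seed to the next.
genBind :: Gen a -> (a -> Gen b) -> Gen b
genBind g f s = let (a, s') = g s in f a s'

main :: IO ()
main = do
  let pair = genBind rand (\x -> genMap (\y -> (x, y)) rand)
  print (pair 1)  -- ((1,2),3)
```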
I've played with that kata over and over and over again. It is simply beautiful.