In fact, experienced functional programmers become adept at encoding correctness constraints directly into the Haskell type system. A common remark after programming in Haskell (or ML) for the first time is that once the program compiles, it's almost certainly correct.
A difficult position to defend, but it's the reason I end up favoring strictly typed languages over dynamically typed ones (I'm ambivalent about duck typing, so Python ends up being my favorite language for other reasons). I love Haskell, as you may be able to tell.
In my opinion, we need to split 'statically/strictly typed' languages further.
I love Python's dynamic types, and I love Haskell's type system, because once you get used to it, it becomes pretty much as painless as having dynamic types.
On the other hand, I utterly hate Java's type system, because it tends to be obnoxious and in the way (perhaps I'm not enough of a type theorist to understand why this is the case, but I can see that a lot of cases in the Java type system are just annoying roadblocks).
The problem with Java's type system is not that you are not enough of a type theorist, but that the people who designed it weren't. It is extremely weak, inconsistent, and confuses some fundamental concepts (like the difference between inheritance and subtyping). Many of the weirdnesses in generics, and much of the complexity of making modern languages interoperate with Java on the JVM, are just workarounds for the fundamentally broken semantics of Java arrays (covariant, can you believe that?).
Imagine this setup: You use a fully dynamically typed language. You write unit tests and ensure 100% code coverage. You run those unit tests during the build phase.
Now, is there really a difference in terms of the guarantees that can be given about the app after it compiles? This arguably gives you a better guarantee than static typing, because it also protects you from a variety of errors, such as using one expression of type FOO in place of another expression of the same type.
You write unit tests and ensure 100% code coverage.
That sounds like a lot of extra effort; it would be nice to automate the testing a bit. Maybe some sort of simple declarative system to check that the code obeys certain constraints. Ideally, you could make your auto-test system clever enough to actually deduce a lot of the constraints itself, based on the structure of the code--you'd really save time that way! Then you could focus your effort on just testing the things your auto-testing system can't check.
It might be more effort, but having tried it I can tell you it's not that much. It also has tangible benefits: it forces you to think through the edge conditions, leads to better interface abstractions, and reduces the fear of making changes. I don't want to turn this into a religious thing, but if you've never tried it, I encourage you to try it at least once.
As to your question, that sounds very much like Haskell. They even have a search engine (Hoogle) that lets you search library functions by type signature, and often you find the right function based on that alone.
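For example, searching Hoogle for the signature

    (a -> b) -> [a] -> [b]

turns up the standard map function:

    map :: (a -> b) -> [a] -> [b]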
Once you have the constraints, or contract, in place, you probably don't even need the unit tests, because the function will not work at all unless the constraints/contract are met.
Well, for a program of sufficient complexity there'll probably always be some aspect that can only be checked easily by unit testing--if only stuff where the program works as intended but the programmer's intent was mistaken.
Could you give an example of that? I'm not sure what you mean.
If the programmer's intent was mistaken, then the program needs to be re-written. You don't really need unit-tests for that, you just need to think about it. Maybe the unit-tests force you to think while writing them?
"Force you to think" is basically the idea, yes. The goal is to catch places where your high-level conceptual understanding of the task to be accomplished doesn't match the lower-level model of "how to get from here to there" you've mentally constructed, usually at weird corner cases.
There are roughly three places where a programmer can make a mistake: understanding the task to be done, creating an algorithm to perform the task, and describing the algorithm to the computer. Type checks help ensure that the algorithm you intended to code is what you actually coded; tests help ensure that the algorithm you coded actually performs the task you think it does.
"Beware of bugs in the above code; I have only proved it correct, not tried it." -- D. Knuth
The thing that I've found ironic as I've been spending more time with duck-typed languages is that all of the supposed concision and rapid-development gains are pretty much wiped out by the necessity of writing unit tests just to assure even basic functioning.
Is my Ruby code more concise than my C++? Yep. But the C++ is smaller than Ruby + unit tests.
You write unit tests and ensure 100% code coverage.
Let's imagine this counter-scenario:
You write a couple unit tests for your most common cases and a couple corner cases. You don't even try for code coverage over ~80%.
For me, the second case is more common. I simply don't have the time to keep the unit tests up to date with every iteration. In these cases I like static typing as it finds errors I miss.
From a Haskell background, I'd say static languages of sufficient power still have the advantage due to things like QuickCheck. 50% of the battle with a unit test is writing up good edge cases, so smart typing and QC systems can make (up to) half the problem disappear.
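For anyone who hasn't seen it, a QuickCheck property is just an ordinary function; a minimal sketch (assuming the QuickCheck package is installed):

    import Test.QuickCheck

    -- Property: reversing a list twice gives back the original list.
    prop_reverseTwice :: [Int] -> Bool
    prop_reverseTwice xs = reverse (reverse xs) == xs

    -- QuickCheck reads the argument type and generates 100 random
    -- lists itself; that's the "half the problem" that disappears.
    main :: IO ()
    main = quickCheck prop_reverseTwice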
It seems like you are assuming that QC can only be done in statically typed languages. John Hughes, one of the authors of QC, implemented QC in Erlang.
I'm aware of the Erlang implementation. Actually, people have even implemented it in Python and JavaScript.
So far as I know, in order to use these implementations you have to specify static type information for the test functions. Moreover, I don't think any of the alternative implementations have the flexibility of Haskell QC's Arbitrary typeclass for generating test data.
Obviously anything can be implemented equivalently across any Turing equivalent language. QC is deeply tied to the kinds of power you get from strong static typing, though. I'd be willing to bet that even though it exists on a few other platforms, it is not as useful or easy as it is in Haskell -- perhaps measurable by the average test coverage divided by the time spent writing tests in similarly developed programs.
"In reality when you move beyond very simple properties, then you almost always need to specify generators. Generation is more complex than just saying, give me an int. [...] So as soon as you move beyond very simple properties, you have to write the generators in Haskell also, so that you have to write them in the Erlang version is not a disadvantage."
Hm. I'd agree there. It's always very important to write a decent Arbitrary typeclass instance. I've never found it terribly hard -- it's always exactly as hard as trying to decide what the potential input space you support should be -- and I don't expect it to be in Erlang either.
The difference is that once you start combining Arbitrary instances you need to have strict, careful control over these input spaces, something "trivial" when you're abusing the Hindley-Milner type system. I've never written QC tests in Erlang, let alone at higher levels of complexity, but I can only imagine that it's a short while before you're forced to do by hand the inference a static type system does automatically.
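For concreteness, here's roughly what writing a generator looks like in Haskell (User and its fields are made up for the example):

    import Test.QuickCheck

    -- A made-up type whose invariants the generator must respect.
    data User = User { name :: String, age :: Int } deriving Show

    instance Arbitrary User where
      arbitrary = do
        n <- listOf1 (elements ['a'..'z'])  -- non-empty, lowercase name
        a <- choose (0, 120)                -- age in a sensible range
        return (User n a)

Once that instance exists, QuickCheck composes it for free: [User], (User, User), Maybe User and so on all get generators automatically, with the type system doing the bookkeeping.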
I don't think Hughes is talking about writing the implementation of the generators. He's talking about writing them down in your tests, e.g.
    ?FORALL(N, int(),
        ?FORALL(M, int(),
            N + M == M + N)).
In Haskell you can specify the `int()`s in the type instead of in the code. What I think Hughes is saying is that for real-world testing scenarios, the standard type-driven generators aren't fine-grained enough. For example, you need to specify `positiveint()` or `elementinlist(L)`.
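For what it's worth, Haskell QC covers both of those cases out of the box, either as a type-level modifier or as an explicit generator; a minimal sketch (property names are made up):

    import Test.QuickCheck

    -- A 'positive int' via the Positive modifier in the type:
    prop_posSquare :: Positive Int -> Bool
    prop_posSquare (Positive n) = n * n >= n

    -- An 'element of a list' via an explicit generator and forAll:
    prop_digitInRange :: Property
    prop_digitInRange =
      forAll (elements [0..9 :: Int]) (\d -> d >= 0 && d <= 9)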
If you have 100% code coverage you don't necessarily have 100% path coverage, so there could still be errors that a statically typed language would catch at compile time. Realistically, most programs aren't going to have extensive test suites that get anywhere near 100% code coverage, so static typing becomes even more valuable. Also, running a full test suite on every build would take much more time than running a type checker.
But it is easier when you're dealing with a "pure" function. In something like Haskell, a function that doesn't touch any of the stateful sequencing monads is guaranteed to neither alter nor depend on any sort of global state.
In my experience, the times that lack of path coverage gets you into the most trouble are when one large-ish function depends in subtle ways on global state; inevitably there are bugs involving one or two global states that you don't catch in testing, and then you get errors out in production and have to figure out what on earth is causing them. Not fun.
There are limits to unit tests. For example, suppose I have a function I need to be pure (i.e. it does not modify any global state). No amount of unit testing will prove that it is pure. But a static type system check by the compiler can prove it is pure.
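A sketch of what the type gives you in Haskell:

    -- The type alone guarantees this function touches no outside
    -- state; there is nowhere for an effect to hide.
    double :: Int -> Int
    double x = x * 2

    -- Any side effect must show up in the type, e.g. as IO:
    doubleAndLog :: Int -> IO Int
    doubleAndLog x = do
      putStrLn ("doubling " ++ show x)
      return (x * 2)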
Unit testing and static type checking have some overlap, but it is not complete overlap. Both are desirable.
Yep. There's nothing that prevents unit tests from being used in a program written in a statically typed language.
The difference is that in a program written in a statically typed language you'd have to write fewer unit tests than you would in a dynamically typed language, since much of that testing is done for you automatically by the compiler.
Satisfying the static language's compiler usually doesn't require more code; it requires more-correct (ie. less buggy) code.
And the statically typed program is probably going to contain a lot less code than the dynamic program, especially if you consider all the extra code you'd need to write in those unit tests which simply duplicate the guarantees that you get for free when you use the static language's compiler.
Satisfying the type-checker can take a huge amount of extra code if you use type annotations everywhere. Even more so if you don't make good use of polymorphic functions.
Which is of course ludicrous--no one would create a static-typed language that made you code that way; it'd be practically unusable.
Most modern statically typed languages (for example, Haskell, ML, and OCaml) feature type inference, which means you don't have to annotate types except in rare corner cases.
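A quick illustration with GHC (the types in comments are what the compiler infers, not annotations you have to write):

    -- No annotations needed anywhere below.
    swap (x, y) = (y, x)
    -- inferred: swap :: (a, b) -> (b, a)

    average xs = sum xs / fromIntegral (length xs)
    -- inferred: average :: Fractional a => [a] -> a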
My own experience with some of these languages has been that when the compiler complains, it's because of a bug in the program, not because it can't figure out a type without annotation. So, a statically typed language's compiler becomes, in effect, a bug detector (to a much greater extent than a dynamically typed language's compiler or interpreter can be).
Having such a bug detector can be worth its weight in gold. But just how useful it is depends on how good the error messages are (or how good the programmer is at deciphering them). Unfortunately, this is an area that leaves a lot to be desired. But if you're seriously concerned about safety, statically typed languages (with all of their shortfalls) are still clearly the way to go.
You may want to reread my post with the sarcasm bit flipped to "on"; I was trying to make fun of static-typed languages that don't have type inference. Sorry about that.
I was actually hacking on some Haskell just now, in fact, trying to fix a "bug". ("Couldn't match expected type `ProgrammerThatUnderstandsMonads' against inferred type `Newbie'")
Depends on the expressiveness of the language's type system.
Duck typing may save quite a bit of code when dealing with generic functions without the complexity of a sophisticated type system (see "Scrap Your Boilerplate" for the Haskell approach).
A unit test checks that some particular properties hold in some cases (e.g. you verify that foo gives bar). A type system guarantees that certain properties hold for all instances of the types.
Given the choice, I know which I'd choose!
In practice it's not quite that simple, but designing around the types and shifting the burden of testing onto the compiler seems like the way forward.
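To make the "for all" point concrete: with parametric types, the type alone can pin a function down completely. A small example:

    -- By parametricity, a total function of this type can only be
    -- the identity: it knows nothing about 'a', so it can neither
    -- inspect its argument nor invent a replacement.
    f :: a -> a
    f x = x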
So you write unit tests for all possible scenarios and you write unit tests and unit tests and unit tests ... most of which you wouldn't have had to write if you had used a statically typed language right away.
I don't think it's supposed to be an exhaustive list. There are thousands of programming languages, and (arguably) hundreds of them could be considered "advanced". I think this is meant to be more of a sampling.
Still, given all that, Erlang would be on my list. :)
I actually found it well-written compared to other such articles that have been linked here before. Although it should have been called "Functional languages every programmer should know."
My personal list would include: Erlang, Forth, Scheme, Haskell, Prolog and omit Scala and *ML.
Why Clojure when Haskell and Scheme have already been mentioned? I think the latter two are more instructive than the former.