Yes, static typing is not enough, which is why a good statically typed language like Haskell also has great testing facilities. So you don't just write well-typed code; you also write unit tests (HUnit) and property-based tests (QuickCheck). The really neat bit is that static typing actually makes writing tests easier: QuickCheck is much easier to use in Haskell than it would be in some dynamically typed language.
QuickCheck is great and can nowadays be found in many languages, dynamic and static alike. Another tool that is currently unique to Haskell, and makes certain tests easier to write, is Lazy SmallCheck/SmallCheck. SmallCheck shines when failures hide in small cases. Its core idea is based on a powerful principle: "If a program does not fail in any simple case, it hardly ever fails in any case."
So SmallCheck works exhaustively to find a minimal counterexample within some depth bound. This makes sense: when trying to characterize a structure's failure (or anything else about it), you can rarely do better than the simplest explanation. However, when the problem only shows up in larger examples, SmallCheck misses what QuickCheck catches, so the workflow looks something like: type check -> SmallCheck -> QuickCheck -> probably OK.
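For illustration, a minimal sketch using the smallcheck package (the property and names are mine):

    import Test.SmallCheck

    -- A deliberately buggy property: it only fails once the list has
    -- two elements that differ.
    prop_headIsLast :: [Int] -> Bool
    prop_headIsLast xs = null xs || head xs == last xs

    -- Exhaustively check every [Int] up to depth 3; SmallCheck reports
    -- the minimal counterexample it finds (something like [0,1]).
    main :: IO ()
    main = smallCheck 3 prop_headIsLast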
QuickCheck exists in a bunch of different languages, but Haskell's type system makes it easier to use and implement, because it can rely on type information to figure out which generator to use for any given input.
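A minimal sketch of what that buys you (the property name is mine): no generator has to be written at all, because QuickCheck picks one from the Arbitrary instance implied by the signature.

    import Test.QuickCheck

    -- The [Int] generator is derived from the type signature alone.
    prop_reverseTwice :: [Int] -> Bool
    prop_reverseTwice xs = reverse (reverse xs) == xs

    main :: IO ()
    main = quickCheck prop_reverseTwice
    -- typically prints: +++ OK, passed 100 tests.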
Doesn't anyone view static typing, as done in a language like Haskell, as a form of documentation? I am not saying ALL Haskell code can be deciphered by its types - hell no. But a useful technique is to encode the semantics of the program in the type system (as much as possible, anyway) to communicate intent. This is a huge benefit over dynamic languages imho. So there is more to it than just verification, even if that is extremely nice.
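To make that concrete, a hypothetical sketch (Email and its toy validation rule are invented for illustration): the signatures alone document which strings have been checked.

    -- The constructor is only reachable through validation, so a value
    -- of type Email documents "this string was checked".
    newtype Email = Email String

    mkEmail :: String -> Maybe Email
    mkEmail s
      | '@' `elem` s = Just (Email s)  -- toy validation, for the sketch only
      | otherwise    = Nothing

    -- The type says it all: this function cannot receive an unchecked string.
    sendWelcome :: Email -> IO ()
    sendWelcome (Email addr) = putStrLn ("welcome mail to " ++ addr)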
This is actually a rather underrated property of good static type systems. Some types (like (a, b) -> a or a -> b -> a) are so constraining that they can only have one valid (i.e. not ⊥) implementation. There is even a program that, given a type signature, can write the trivial function for you [1]. It's pretty awesome, but admittedly more a curio than anything practical.
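For the curious, here are the implementations those two types force (exactly the kind of thing the program in [1] derives automatically; the function names are mine):

    -- (a, b) -> a: the only 'a' in scope is the first component.
    proj :: (a, b) -> a
    proj (x, _) = x

    -- a -> b -> a: nothing can be done with the 'b', so it must be ignored.
    konst :: a -> b -> a
    konst x _ = x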
Happily, there are practical programs that take advantage of the types' expressiveness. My favorite example: Hoogle [2]. It's brilliant: a search engine that, given a type signature, gives you a list of functions either with that type (modulo renaming) or a similar type. The beauty is that it works even if the function is more general than what you're looking for and has a weird name (so a normal text search would not be very helpful).
A perfect example: let's say you're looking for the equivalent of JavaScript's join method. It would have the type [String] -> String -> String. Now, this particular function does not actually exist in Haskell. Instead, there is a more general function called intercalate that works on any list. Moreover, intercalate's arguments are reversed: its type is actually [a] -> [[a]] -> [a] (or String -> [String] -> String when specialized to String). And yet, if you search for [String] -> String -> String, the second result is intercalate! It's magical.
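And for anyone who hasn't met it, intercalate in action (this is the real Data.List function):

    import Data.List (intercalate)

    -- Note the argument order: separator first, then the pieces.
    main :: IO ()
    main = putStrLn (intercalate ", " ["one", "two", "three"])
    -- prints: one, two, three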
In short: types are basically a form of self-documenting code, so you get a lot of cool stuff for free given a good type system.
I could give a few dozen more examples in these 3 languages, including Coq, which is written in C and OCaml, and Agda and Idris and others, which are written in C and Haskell, or compile down to some stage of Haskell compilation.
(+), (-), and (*) all have the type Num a => a -> a -> a.
abs and signum both have the type Num a => a -> a.
In the first case it works as documentation because I already know what they do. In the second case I read signum as sig num instead of sign num, and was left scratching my head as to what a signature number was, until I read the actual documentation.
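For the record, signum gives the sign of its argument, as a number:

    -- signum :: Num a => a -> a, from the Prelude
    signs :: [Integer]
    signs = map signum [-7, 0, 12]   -- evaluates to [-1, 0, 1]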
So I would say types make good reference documentation but not good learning documentation. Contrast this with Python docstrings, which can be both a good reference and a decent source of learning.
This is as much a usability problem as it is a "getting it right" one. We could eventually "get it right" by plugging in raw machine code through switches or punchcards if that's really what it took, but what we're arguing over is the _price_ of getting it right.
Static types are implemented and available for use at the language level, which isn't true of all of those features in most environments.
Reliable dot completion without some batshit insane state-dependent VM-as-a-dev-env-or-constantly-running-tests system is... well, let's just say static typing makes it much easier.
I really could give a shit about what people think is "enough" (IMO, keeping stuff simple, as in simplicity of implementation, as in worse-is-better, is the only answer); I just want to hit dot and have a pretty good idea what I'm allowed to do. And static typing, in practice, appears to help there.
Pry for Ruby gives me great dot completion and isn't batshit insane. Most diehard dynamic typists live in their REPL. When I use Java with IntelliJ I miss my REPL. I do gain some things from the IDE, but I feel hamstrung compared to my REPL.
> I don't want a repl for editing. I want to write code in a friggen editor and have dot completion work.
I don't either, which is why I use Emacs with something like SLIME or at least inferior-lisp. But I'm sure you've tried this already, and it's not for everyone of course.
I find autocompletion and Intellisense-like features useful, but not really useful enough to make me pick a language based on them. I don't type terribly quickly either, but that doesn't worry me as most of my time is spent debugging and not typing.
Of course static typing does help catch some bugs before you get to debug them, but I think talking about autocomplete and dot-completion is kind of a distraction.
Pry knows its place: it doesn't pretend it's a nice environment for writing real code. Instead, when you want to edit a method definition you type "edit-method my_method" [1] and Pry will open the source file of that method in an editor and take you to the exact line where the method is defined.
Technically true, but I write most of my code in the REPL, then use edit-method to get it into my editor.
From the link: NOTE: If you're using Ruby 1.9, you can use edit-method on methods defined from the Pry console. As these methods have no associated file, this implies the -p switch. This functionality doesn't yet work for Ruby 1.8.
Minus the part about it only working well for Java and having to write your code in Java ;)
I also use RubyMine for Ruby development and it gives me a lot of the IntelliJ stuff, but it doesn't always work, e.g. cmd-B to jump to the method definition.
I think the grandparent knew that, and was just repeating what you said to contrast it with his following statement about what static typing is good for.
And dot-completion is not completely unique to static languages, but it is far more common for them. And that's not just coincidence: it is generally easier to write a tool like dot-completion for a static language than for a dynamic one, because dot-completion involves static analysis of the code.
> dot-completion involves static analysis of the code.
Dot-completion involves having a living model of the program loaded in memory. You can do this with any language--static languages just partition the program model such that you get a certain phase of interpretation (compilation) past which you have a completed model of all of the code, without yet having seen any of the data.
On the other hand, you can easily dot-complete method calls in a dynamic language's REPL--because there, at runtime, is when and where a dynamic language forms a complete model of the program it can use to do such things.
Now, the real question is why we're still programming dynamic languages in text-editors instead of treating them as congealments of transactionally-replayable interactive REPL transcripts. (Are there any modern examples of programming in this style, anyway? I can only think of ancient ones: Emacs Lisp, Squeak Smalltalk, and the "immediate-command-line" of some Telnet-based simulated-world MOOs.)
When you create a method which takes an argument in a dynamic language, there's no declaration of the type of that argument or which methods it will be known to support. So how does dot-completion work for any target other than "this"? The REPL isn't doing whole-program dataflow analysis to infer the only type(s) which could possibly be passed, is it?
I code in both Ruby (with a REPL in the IDE) and Java (with a traditional statically typed IDE). There are times I wish I had a REPL in Java (I typically use debugging in a test to fake it), but I'd give my left cortex for reliable code completion in Ruby.
There's no reason we can't have both. It just takes a reasonably terse statically typed language like C# or, dare I say... Gosu.
It's a response to a previous article, which really did make the argument "unit testing does not guarantee quality, therefore you need static typing". So not a strawman.
The previous article really didn't make the argument "all you need is static typing". It was specifically refuting the idea that "all you need is unit testing" - or at least that unit testing is a complete replacement for static types.
The paper makes a point of reporting that most unit tests could not be replaced by static types.
What of that looks like a claim that you only need types?
The previous article said "unit testing isn't enough, you need static types /as well/". This article is extending that argument to "unit testing and static types aren't enough, you need xyz /as well/".
I believe the point is that there will always be something your current test strategy doesn't cover. At what point do you draw the line?
If unit tests and static types don't cover everything, what says that's enough? And can the justification for that, whatever it is, be applied to "unit tests are enough" also?
That would be great if I could just work by myself, happily use Haskell and not worry about bosses mandating something really annoying like Python or Java. In fact, that's one of the main reasons I'm working at a tiny startup right now. But I digress; my main point is simple: since chances are your choice of bullet affects others' choices, especially in corporate settings, it would be much better if it were a good choice.
Well, yes. But my point is not that you should use technology X, but that just letting you "get back to shooting" could very well be detrimental. On this particular point I'm not defending any particular technology but rather having discussions and disagreements about technologies. After all, you do not program in a vacuum.
I've got no time for bitching but I think serious attempts to study the pros and cons of different approaches are both welcome and needed in our profession.
I'm indifferent, frankly. I think it's valuable to know the pros and cons of different approaches, but most of the time the discussion around them degrades into unproductive flame wars. It's understandable, though. Dedicating years to becoming good at one approach over another will make you defensive.
At a conference 15 years ago I heard a guy say, "The problem with software is we don't know what it is." He meant that we know what a car or a bridge is - we know why and how they serve their purpose - but not so much with software, which is a strange beast our minds are not ready to grasp yet.
There might be something to that. But maybe some construction and automobile engineers feel the same way about bridges and cars respectively. God, I hope not. :)
The first half is a bit easier - if you can show that some implementation meets the specification, it is at least consistent.
The second isn't so clear, but sometimes you can get a bit more confidence by showing that other good properties are implied by the specification alone.
I'm really curious what Love Driven Development would look like. Is it kind of like TDD in that before you write a single line of code you write a sonnet about it?
Incidentally, almost everything that the author suggests, except luck and user testing, can be provided by static type systems - e.g. Haskell's type system is Turing-complete, so you can make up any kind of contracts/tests and embed them in the type system...
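As a minimal sketch of the kind of contract you can embed (the names are mine), here are length-indexed vectors, where the type checker itself does the arithmetic:

    {-# LANGUAGE DataKinds, GADTs, KindSignatures, TypeFamilies #-}

    -- Type-level Peano numbers: arithmetic done by the type checker.
    data Nat = Z | S Nat

    type family Add (m :: Nat) (n :: Nat) :: Nat where
      Add 'Z     n = n
      Add ('S m) n = 'S (Add m n)

    -- A list whose length is part of its type.
    data Vec (n :: Nat) a where
      Nil  :: Vec 'Z a
      Cons :: a -> Vec n a -> Vec ('S n) a

    -- A machine-checked contract: the result's length is exactly the
    -- sum of the inputs' lengths, or the module won't compile.
    append :: Vec m a -> Vec n a -> Vec (Add m n) a
    append Nil         ys = ys
    append (Cons x xs) ys = Cons x (append xs ys)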
With appropriate compiler flags, you can make up any sort of contracts you like in Haskell. But there's no guarantee that the compilation process will terminate if you do, even putting aside the impenetrability of the resulting code. Haskell is not a general-purpose proof assistant, and the typing definitely has limits, both practical and theoretical. Abstractly, static types can represent anything; however, we (by which I mean humanity in general) certainly do not know how to use such powerful types in real, general-purpose code.
Why do people worry about compilers hanging or giving up? If I know how to kill the failed compile and fix my program so that it is provably type-safe before shipping, I don't really care how many undecidable programs could also exist because I no longer have to ship one of those.
Because if the compiler hangs, you don't get any executable code. And in the specific case of Haskell, we are talking about type constraints that are perfectly cromulent, are perfectly correctly expressed, and mean something in some abstract mathematical world in which you have infinite compile time, but in the real world will simply never finish compiling. It's not because you've failed and can fix it by twiddling something; it's because Haskell is not actually that sort of language, and if you try to actually use the type system as a Turing-complete constraint-satisfaction language, you'll get exponential performance... or doubly-exponential performance, or worse, though it stops mattering pretty quickly.
In both theory and practice, this means you can't really use Haskell that way. It's a typed language with the ability to occasionally skirt the limits, ride the line, and dip into a bit of undecidability when useful (and with a real, in-practice risk of it blowing up in your face even so), not a language in which one routinely uses the type system in a "Turing-complete manner", if I may be allowed to gloss over precisely what that means. Richer type systems are a topic of ongoing research, but not a solved problem.
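For a concrete taste of the failure mode (with UndecidableInstances turned on; in practice GHC cuts the loop off at a reduction-depth limit rather than spinning forever):

    {-# LANGUAGE TypeFamilies, UndecidableInstances #-}

    -- A perfectly well-formed type family that never finishes reducing.
    type family Loop :: * where
      Loop = Loop

    -- Uncommenting this forces the reduction; GHC gives up with a
    -- "reduction stack overflow" error instead of producing a program.
    -- broken :: Loop
    -- broken = undefined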
So the problem is that there are too many useful programs which can't be expressed in a practically decidable form, or there's no clear way to get from a form that isn't to a similar-enough form that would be, like knowing what strategies ghc uses to narrow the search space or something?
While we're at it, what would be really cool is if we could design a program to automatically look at the Haskell program, decide if it's undecidable, and fix it.
So? What does that buy you in the face of attempting to prove that your system can run with two simultaneous users? How does the type system prove that it can withstand a DoS attack? Likewise, how can it prove that your system properly handles the case where the user types some garbage into a textbox? How does it help prove that your system keeps running if someone pulls the network cable out of the wall? How does it help prove that you wrote the system that your client wanted?
True, and unit tests don't help with those concerns either. I'm confused by the original post and these comments. What are you trying to convey? Can you state your point directly? I am not understanding.
Incorrect. Haskell's type system cannot ensure arbitrary correctness guarantees. Hence the research into dependently typed languages, like Agda and Coq.