Experiment: Unit testing isn't enough; You need static types, too (evanfarrer.blogspot.ca)
189 points by fpgeek on June 20, 2012 | 267 comments



The sample size of this study is too small to be statistically significant for the conclusions given. The counterexamples to the claims of dynamic-typing advocates were already true and provable, so even if the sample of codebases studied were statistically significant (not to mention vetted for quality of unit tests as well as coverage), the conclusions would nevertheless be trivial.

In addition, the hidden assumption is that all static and dynamic typing are created equal, i.e., since Haskell is statically typed and Haskell appears to have caught Python bugs that unit tests did not, therefore Java will catch bugs in a Ruby codebase, C++ will catch bugs in a JavaScript codebase, etc. Of course this assumption is gratuitous. Haskell in particular has a specific sort of type checking that is far different from Java's or C++'s, for instance.

Further, not all dynamic systems are created equal. Ruby, for instance, can I think be shown to require fewer lines of code than, say, Java to achieve similar functionality. Fewer lines of code should in principle mean fewer opportunities for defects. Dynamic languages with metaprogramming features like Ruby's or Smalltalk's should in principle be able to eliminate more code duplication than an environment like C++. This aspect of dynamic languages should be taken into account, again with a statistically significant sample size, and weighed against bugs caught by static typing.

The study is interesting as a preliminary investigation, but the conclusions should have been much more modest, proportionate both to the sample size (in terms of the percentage of production codebases studied) and to the extremely important idiosyncratic nature of Haskell vs. other statically typed environments. Something like: "The study has shown Haskell's type system will catch some bugs not caught in an otherwise well-covered Python codebase. These bugs could in theory have been caught by unit tests, therefore it is recommended that when using a dynamic language, more care must be taken to cover these types of bugs."

That would have been a more appropriate and modest conclusion, consistent with the data, than the sweeping generalization "You need Static Typing."


I (the author) appreciate the feedback. I believe that many of your criticisms are addressed in the actual paper. First of all, I completely agree that my sample size is too small for a conclusive proof. I mention in the paper that I hope that others will try to replicate this experiment on other pieces of software. I do think it's appropriate when conducting an experiment to publish a conclusion, not that the experiment will constitute proof (or an established scientific theory), but as a conclusion to the study that others can try to confirm or refute.

I also mention in the paper that it would be beneficial to conduct this experiment using different type systems for the reasons that you stated above.

The argument against static typing that I was testing didn't mention any particular type system nor any particular dynamically typed language; it was a general argument stating that unit testing obviated static typing. Because the argument was so general and absolute, I felt that any static type system that could be shown to expose bugs not caught by unit testing would be enough to refute the argument. I was not trying to prove that any type system would catch bugs not found by any unit-tested software. The paper also points out that I'm trying to see whether unit testing obviates static typing in practice; in theory you could implement a poor man's type checker as unit tests, but my experiment was focused on whether, in practice, unit testing obviates static typing.
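As a sketch of what that "poor man's type checker" might look like in Python (the function and test names here are hypothetical, purely for illustration):

```python
import unittest

def mean(xs):
    """Hypothetical function under test: should return a float."""
    return sum(xs) / len(xs)

class PoorMansTypeChecker(unittest.TestCase):
    # Type checks encoded as unit tests. This works in theory, but
    # only for the inputs and call sites the test author remembers;
    # a static checker verifies every call site automatically.
    def test_mean_returns_float(self):
        self.assertIsInstance(mean([1.0, 2.0, 3.0]), float)

    def test_mean_rejects_non_numeric(self):
        with self.assertRaises(TypeError):
            mean(["a", "b"])
```

Keeping such assertions complete by hand, for every function and every call site, is exactly the maintenance burden that a static type checker automates away.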

Finally, I believe that my conclusion in the paper was at least a bit more modest than that of the blog post. The lack of apparent modesty in the blog post was caused more by my inability to summarize accurately than by an inflated sense of accomplishment and self-importance.


Thanks for the response! I appreciate the effort you went to here, this was no small task you set yourself to.

I appreciate the clarification. I think now I see better where your emphasis was: the purpose of the paper was to refute an argument, and of course the level of burden of proof is different and far less in that case. I think this misunderstanding on my part is what caused me to call the conclusions 'trivial' -- too strong and dismissive language on my part anyway.

The irony is, you were attempting to do to the unit-testing-is-sufficient argument what I was attempting to do to what I assumed yours was: provide one counter-example to falsify a broad and generalized thesis.

That said, I think I would have liked to have seen your original unit-testing-is-sufficient argument punched up and qualified into something a little more reasonable and real-world. As you stated the argument, it seems like a straw man to me. It seems one could reduce your version of the argument to something like: "Dynamic languages with unit test coverage will always catch the errors that statically-typed environments catch." And of course this is far too broad and unqualified a statement, and that is precisely why all you needed was one counterexample to refute it. You didn't even need a handful of Python programs, or 9 or 20 or 100 errors to prove your point. You only needed one, as you stated above. This is why the burden of proof for your thesis was so small, but also why, in my opinion, even with that reduced scope and more modest conclusion, we haven't really learned much.

As someone who has spent most of my career in statically-typed environments and the last 6 years or so mostly in dynamic environments, and also as someone who has made something like the argument you were attempting to refute, I have to say I would definitely never have made such a brittle and unqualified statement as the one you refuted in your paper. To put it more directly, I think I'm probably a poster-child for the kind of developer you were aiming your thesis at, and I don't feel that my perspective was adequately or reasonably represented. More importantly, having looked at the examples given in your paper, I may have learned a bit about the kinds of errors that Haskell can catch automatically that some coders might miss in a dynamic environment, but not much useful to me in my everyday work context.

I think a more reasonable version of the argument, but more qualified and therefore requiring a far larger sampling of code to prove or refute, would be something like: "Programs written in a dynamic language with adequate or near-100 percent unit test coverage are no more prone to defects than programs written in a statically-typed language with a comparable level of unit test coverage."

I agree this is a very important conversation to have, and again kudos to the work you put in here. Obviously people have strong opinions both directions, and the discussion, however heated at various moments, is an important one, so thanks for this!


I see both sides of this argument. The OP -- at least in this blog post; I haven't read the paper -- spends most of his time talking about how he's demonstrated the insufficiency of unit testing. For the purpose of that argument, it really doesn't matter that he used Haskell as opposed to some other type checker.

It's only in the last two sentences of his "Conclusion" section that he turns the argument around, and here is where he oversteps:

While unit testing does catch many errors it is difficult to construct unit tests that will detect the kinds of defects that would be programatically detected by static typing. The application of static type checking to many programs written in dynamically typed programming languages would catch many defects that were not detected with unit testing[...]

Clearly, this is overbroad. For starters, he should have used "could" in place of "would". And it wouldn't have been a bad time to remind the reader that Haskell's type system differs from those of other statically typed languages with which the reader may be more familiar.

I don't quite agree, though, that the conclusion is "trivial". Maybe I'm just out of touch, but I wasn't aware of a good test of how true the dynamic argument was in practice, as opposed to theory -- particularly claim #2.


I think I should clarify what I meant by "the conclusions are nevertheless trivial." Let's look at the key statement in the conclusion of the study:

"Based on these results, the conclusion can be reached that while unit testing can detect some type errors, in practice it is an inadequate replacement for static type checking."

As I've already pointed out, this seems to me an ambitious and over-reaching conclusion, given the scope of the study.

But, equally important, it is simply an example of something that was already provable. It should be axiomatic that automatically generated validation like that provided by static typing can catch type errors not caught manually in a dynamic context, whether through human oversight or human error.

In other words, it seems to me that all that has been done here, is to provide a few concrete examples of what was already true and uncontroversial: auto-generated coverage of specific types of validations can be more comprehensive than some human beings will be in some environments and contexts. It has not shown that the perceived benefits of dynamic typing with good unit tests are outweighed by this fact, nor that, statistically-speaking, errors of this type are common enough to warrant a preference of static typing over dynamic typing with unit tests in all contexts.


>In addition, the hidden assumption is that all static and dynamic typing are created equal, i.e., since Haskell is statically typed and Haskell appears to have caught Python bugs that unit tests did not, therefore Java will catch bugs in a Ruby codebase, C++ will catch bugs in a JavaScript codebase, etc.

That assumption isn't hidden, it is made up. By you. The question was "can static typing catch bugs that made it past a decent (and common) test suite". The answer to that can drive interest in static typing, and thus more languages with useful static type systems. Just because Java has a crappy type system doesn't mean we should be content with that.


> That assumption isn't hidden, it is made up. By you.

Not at all. The assumption is clearly implied by the conclusion of the study, which makes an unwarranted equivalence of all languages that have 'static type checking':

"The translation of these four software projects from Python to Haskell proved to be an effective way of measuring the effects of applying static type checking to unit tested software."

> Just because java has a crappy type system, doesn't mean we should be content with that.

I don't know what this means. If the study was meant to comprehend such a broad category as 'static type systems,' and from the explicit language of the study, it clearly was, then absolutely Java must necessarily be included. Otherwise, the study, as I noted, should have restricted its conclusions to a scope of Haskell vs. Python, with at most modest and well-qualified statements regarding the broader implications of static vs. dynamic in general.


>which makes an unwarranted equivalence of all languages that have 'static type checking':

No it doesn't. Read what you quoted: it says nothing even remotely resembling "this benefit applies to all languages with static typing". It is testing static typing, not a specific language. It uses the best static typing system to do so. You are entirely inventing the notion that this must then apply to Java.

> If the study was meant to comprehend such a broad category as 'static type systems,' and from the explicit language of the study, it clearly was, then absolutely Java must necessarily be included

No it mustn't. Comparing the best of dynamic vs the best of static is a useful test. Just as nobody is complaining they didn't use a worse language than python, it makes no sense to complain they didn't use a worse language than haskell. You don't draw conclusions about the potential of X by examining the worst example of X possible.


This. Not all statically typed languages are created equal. Java's type system is old and is not state of the art. I wish people would stop using it as a straw man when anybody brings up static typing.

Java was state of the art 20 years ago, but it's definitely not the case any more.


Java wasn't even state of the art 20 years ago. ML dates back to the 70s.


I agree with you, but I think we might be in the minority.


Had the study qualified itself to merely "Haskell vs. Python" with deference given to the statistical significance of the sample size, you'd have a point. It wasn't me that brought all static typing, which of course includes Java, into the question at hand -- it was the study itself.


Yes, it was you. Why do you think the comparison should be "really bad static type system" vs "really good dynamic type system"? In what way does that make the test more useful? Allow me to say this again, as I do not know how to be any clearer:

You do not test the potential of something by using the worst possible example of it. The only point of your desire is to reinforce the straw man that Java = static typing. A test of "do airbags help prevent deaths" would be a very poor test if it used anything other than the best possible airbag technology.


Since this hasn't already been mentioned, and at the risk of really flaming things up: Java has a very high propensity for generating runtime type errors. This is easily done by skirting the type checker with casting, which is commonplace. The upshot is that I'm actually on the fence about even considering Java to be a statically-typed language for this reason... which is part of why I disagree with the parent using it in a counterexample as a statically-typed language equivalent to the one from this post (the same goes for C, C++, and the rest of that family).


As soon as you start using reflection in Java, you're doing non-statically-typed programming. Since a lot of popular Java frameworks use reflection implicitly - such as Spring, Hibernate, etc - that includes a lot of Java code that's out there.

And also, even if you carefully put a layer of explicit typechecking between the reflection-based code and the statically typed stuff, you're still throwing out the Java generics typechecking, since none of that exists at runtime, and so your ArrayList&lt;String&gt; can mysteriously contain non-String types when you finally access it.


I don't think davesims is saying that should be the comparison. This particular complaint is about the conclusions, not the methodology. (I recognize he also criticized the methodology.) Conclusions should be useful. People shouldn't have to squint at the wording of your conclusion to determine what that means for them. So, you should bend over backwards in your conclusion, and err on the side of being clear.

With that in mind, I agree with davesims that the conclusion in the blog post is too strong. It is: "The application of static type checking to many programs written in dynamically typed programming languages would catch many defects that were not detected with unit testing" I say it is too strong because the author has not bent over backwards to make clear that this conclusion only applies to the "best" type systems, like Haskell.

For the record, I like the study, and once I run the author's conclusions through my bend-over-backwards-filter, I find them interesting. I upvoted this article. I also upvoted davesims' post because it is academic-reviewer level feedback.


> You do not test the potential of something by using the worst possible example of it.

So? Folks don't use the "potential", they use the real. They're asking questions like "should I use Java or Python".

> do airbags help prevent deaths" would be a very poor test if it used anything other than the best possible airbag technology.

That's not how things actually work. You decide between what's available. The performance of the best possible airbags is irrelevant. The real question is the cost and benefits of airbags that are likely to be deployed.


And the answer to "should I use Java or Python" is: no! Use Haskell ;). If you're entirely tied to Java (and, in that case, Python would probably not be ideal), you can still use Scala.

The question the study was asking was not "what language should I use for my lowest-common-denominator workforce" but rather "can a static type system catch more errors than unit tests and can statically typed code be as expressive as dynamically typed code".

In other words, it was asking for existential quantification: "does there exist some type system such that..." rather than "forall type systems..." or even "forall average systems...".


>So? Folks don't use the "potential", they use the real.

Haskell is real.

>They're asking questions like "should I use Java or Python".

That's wonderful, but it has nothing to do with the subject at hand, which was the question "can static typing reduce the number of bugs?". If you want an answer to a different question, don't complain about the answer given for this question, go find someone answering the question you want answered.

>That's not how things actually work. You decide between what's available. The performance of the best possible airbags is irrelevant. The real question is the cost and benefits of airbags that are likely to be deployed.

Why can't anyone follow a simple line of reasoning without resorting to fallacies? He tested the best airbags available. Not theoretical airbags that don't exist. He tested a car with the best airbags available to one without. The airbags were a benefit. You and the other guy making up fallacies insist that this isn't a fair comparison, because you want to drive a car where the airbags deploy 5 seconds after impact. Your crappy car isn't relevant to the question of "can airbags save lives".


>Why can't anyone follow a simple line of reasoning without resorting to fallacies?

Indeed. The conclusion C was out of scope with the premises A and B. C is wrong, but that doesn't mean useful, more modest conclusions cannot be drawn from A and B.

What I don't understand about every one of your responses is that you seem to think false equivalence applies in only one direction.

You seem to think it's fine for OP to draw broad conceptual conclusions from a small subset of the domain, but counter-examples to the broad claims cannot be applied, according to you, because, rather bizarrely, you continue to insist that the counter-examples are too specific and don't apply because the scope is general? That doesn't even make sense.

It's quite simple. OP claims "unit testing is not enough," "you need Static Typing" and uses broad language like "static type systems." I continually insist that such conclusions are out of the scope of the data given: the fact that type-related bugs were found in a handful of relatively small Python programs translated to an idiosyncratic environment like Haskell cannot possibly support something so broad as what the OP is claiming.

Using Java/C++/Clojure/C#/etc. vs JavaScript/Lisp/Smalltalk/Ruby to give a counter-example is clearly within the scope of the argument. If OP had claimed something like "Python shows risk of static type errors, exposed by Haskell port" and claimed something like "more care and unit-testing is needed to guard against certain types of type-related bugs" I wouldn't have a problem. But that's not what OP claimed.


>I continually insist that such conclusions are out of the scope of the data given:

Yes, clearly you have some serious issues to work through.


> can static typing reduce the number of bugs

No one claims otherwise. However, that's true of Java's type system too.

> Why can't anyone follow a simple line of reasoning without resorting to fallacies?

I followed your simplistic line of reasoning just fine. It was wrong. Admit that and move on.

Of course you can't, which is how you got there.

The biggest obstacle to Haskell becoming more popular is its advocates.

And, it will never replace Java, C, Python, or even PHP. (One of my professional goals is to never use Java.)


> And, it will never replace Java, C, Python, or even PHP.

What do you mean by that?

Many people, including myself, have had Haskell replace Python.


>I followed your simplistic line of reasoning just fine. It was wrong. Admit that and move on.

You are wrong, admit it and move on. Oh gee, does that not actually make a constructive argument?

>The biggest obstacle to Haskell becoming more popular is its advocates.

What does this have to do with anything?

>And, it will never replace Java, C, Python, or even PHP

It already has. You might be too foolish to take advantage of that fact, but how does your foolishness matter to me?


> >And, it will never replace Java, C, Python, or even PHP

> It already has.

Oh really? Significantly fewer systems are being developed in those languages? How about some evidence?

What? You meant that a couple of applications have been written in Haskell instead of those applications? That's not "replace".

Which reminds me - if I find an application that was written in Haskell that is being replaced by an implementation written in some other language, would you claim that said other language is "replacing" Haskell? If not, don't make the mirror-argument.


I believe you are confusing criticisms of the methodology with criticisms of the strength of conclusions.


> It uses the best static typing system to do so.

It doesn't use the best dynamic language or best unit tests.


Then you should be proposing he use whatever language you feel is better than python at being the best dynamic type system. The best unit tests is entirely irrelevant.


> Then you should be proposing he use whatever language you feel is better than python at being the best dynamic type system.

Nope.

> The best unit tests is entirely irrelevant

I can find errors in programs with a spell checker. Suppose that those programs have unit tests. Do you really think that spell checker is better than unit tests?


Are you trolling or incapable of reading? Nobody, at any point in time suggested that static typing was an alternative to unit testing. You haven't posted a single constructive thing in this entire thread, and you waited till it was over to do your trolling so you could avoid downvotes. Grow up, or go back to reddit.


>it says nothing even remotely resembling "this benefit applies to all languages with static typing".

That is precisely what it says, and that is reiterated later:

"...the conclusion can be reached that...in practice [unit testing] is an inadequate replacement for static type checking."

I'm not sure what you're reading, but there are no qualifications in the language used here regarding the idea of 'static type checking,' nothing so modest about the scope of the conclusion as calling it merely a "useful test," as you put it. It was a sweeping generalization about two very broad and extremely complex categories of languages. Had the conclusion used more moderate language and qualified itself adequately, I wouldn't have a problem. But all that has been shown here is that in some contexts more care needs to be taken writing unit tests in a dynamic environment to catch some errors that are automatically caught in static environments. That is all that the data warrants.


> That is precisely what it says

This is a very strong claim and it's false. The article doesn't say that anywhere. You interpret it that way.

I would hazard a guess that presenting your own interpretation as fact is what brought on those downvotes you complain about below.


Can you show how I've misinterpreted the plain language of the conclusion section?

I'm under the (perhaps mistaken) assumption that in academic papers people tend to mean what they say and choose their language carefully, particularly in the conclusion section.

If the following are not in fact broad, strong claims about the nature of static and dynamic languages in general, then won't you please explain to me how I should interpret them?

Here are the quotes from the conclusion of the paper (emphasis mine):

"The translation of these four software projects from Python to Haskell proved to be an effective way of measuring the effects of applying static type checking to unit tested software."

"Based on these results, the conclusion can be reached that while unit testing can detect some type errors, in practice it is an inadequate replacement for static type checking."


Honestly, at this point I can no longer tell whether you're misinterpreting or misrepresenting the conclusions. I'll make an honest attempt to argue, nevertheless.

"Static type checking" and "unit testing" are two concepts. There are numerous concrete implementations of these two concepts. The former is implemented in several languages, including C++ and Java and Haskell. The latter is implemented in several frameworks/tools, such as TestNG and PyUnit.

The article concludes that unit testing, as a technique for discovering and/or preventing defects, cannot wholly replace static type checking.

Apart from mentioning the concrete implementations of abstract techniques that the author used, the article does not conclude anything about the benefits of using specific languages, frameworks or tools.

What you have claimed so far is that:

1. there is a "hidden assumption is that all static and dynamic typing are created equal, i.e., since Haskell is statically typed and Haskell appears to have caught Python bugs that unit tests did not, therefore Java will catch bugs in a Ruby codebase, C++ will catch bugs in a JavaScript codebase, etc."

If anyone jumped to this conclusion, it was you. The only thing I can conclude from the article is that static typing checks such as those implemented in Haskell catch bugs that were not caught by unit testing logic such as that used in the Python projects within the study. To conclude anything more I would need data not present in the article, such as exactly what types of errors were caught or missed, etc.

2. the conclusion of the study "makes an unwarranted equivalence of all languages that have 'static type checking'"

It doesn't. The conclusion about the static type checking vs. unit testing might not be backed by enough solid data, but the conclusion makes no claims about languages, beyond specifying which languages were used in the study.

3. the claim that "this benefit applies to all languages with static typing" is "precisely what" the conclusion "says".

No occurrence of any phrase even remotely resembling the quote can be found in the article. Saying "this is precisely what it says" means "you'll find that phrase or one very similar to it in the text". Maybe you were trying to claim that "this is precisely what it means", but it's definitely not what it "says".

All in all, the sweeping generalization about the concrete languages was introduced by you. My guess is that this is because you were, like me, frustrated by the vagueness of the article. I would have loved seeing more concrete data. Saying "X types of errors were found" is not as good as saying "the following types of errors were found" and that's just the start.


"All in all, the sweeping generalization about the concrete languages was introduced by you."

I think the plain, direct language of the paper's conclusion is clear enough without me having to embellish it, and without its defenders extrapolating all of the qualifications and subtexts that they think I missed. You really don't have much to work with, because the paper's clumsy conclusion is small, blunt, and unqualified in its scope. It takes a handful of small Python programs translated to an idiosyncratic language like Haskell and concludes:

"in practice [dynamic typing with unit testing] is an inadequate replacement for static type checking."

This is unequivocal language. There are no qualifications about language, context, or any kind of variable that might possibly dilute the strength of the conclusion.

On the other hand, Peter Cooper gives a great example elsewhere on this thread of a much better paper with much broader scope, more stats, and much more modest, qualified conclusions. This is the kind of language that is useful and gives me confidence that the authors didn't start out with an axe to grind and merely followed what metrics they had to the warranted conclusion, no more, no less:

"Even though the experiment seems to suggest that static typing has no positive impact on development time, it must not be forgotten that the experiment has some special conditions: the experiment was a one-developer experiment. Possibly, static typing has a positive impact in larger projects where interfaces need to be shared between developers. Furthermore, it must not be forgotten that previous experiments showed a positive impact of static type systems on development time."

http://www.cs.washington.edu/education/courses/cse590n/10au/...


> "Saying "X types of errors were found" is not as good as saying "the following types of errors were found" and that's just the start."

The blog post is vague, but the paper (also available at the link) isn't. It identifies the particular errors found.


When you present conclusions in an academic paper, the onus is on the author to bend over backwards to prevent the reader from interpreting a stronger conclusion than intended. I think davesims' interpretation is fair given the language, and were I reviewing the paper, I would have asked the author to temper his conclusions in a similar manner.


From the downvotes I can only conclude that many of you wish the study didn't claim what it claims and are merely shooting the messenger. If anyone can point out rhetoric within the study that qualifies it in such a way as to make comparisons of other statically typed languages with other dynamically typed languages out-of-bounds or expressing a false equivalence within the scope of the conclusions of the study itself, I'll retract.

But so far all of the arguments I'm seeing against using, for instance, Java, are coming from a perspective not advocated by the study. You all have a point -- it's just not the point made by the paper.


To simplify: there is a difference between "static typing is better than dynamic typing" and "all static typing is always better than all dynamic typing". It's basically the difference between ∃ and ∀.

Saying that "static typing is better than dynamic typing" is like the former: there exists some static typing system that is better than dynamic typing. Saying that "all static type systems are better than any dynamic system" is like the second. All the paper ever says is the first: "Based on these results, the conclusion can be reached that while unit testing can detect some type errors, in practice it is an inadequate replacement for static type checking." Note how it never claims to apply for all possible static type systems; rather, it just says that tests are an inadequate replacement for type systems in general (i.e. there exists some type system that catches more errors than tests). This is exactly like my first example.

In summary: a being better than b does not mean that all a is always better than all b. Just because static typing is better than dynamic typing does not imply that Java is always better than Python; it merely implies that some statically typed language is better than Python.
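In symbols (with Caught(·) as informal notation for the set of defects a technique detects, not anything from the paper itself), the two readings differ only in the quantifier:

```latex
% Reading the paper supports (existential):
\exists\, s \in \mathrm{StaticSystems}:\;
  \mathrm{Caught}(\mathrm{unit\ tests}) \subsetneq \mathrm{Caught}(s)

% Reading the critic attacks (universal):
\forall\, s \in \mathrm{StaticSystems}:\;
  \mathrm{Caught}(\mathrm{unit\ tests}) \subsetneq \mathrm{Caught}(s)
```

The Haskell result witnesses the existential statement; Java failing the universal one refutes nothing the paper actually needs.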


I agree with your characterization in your first paragraph, but I agree with davesims that the conclusions are too strong. If one has to do the level of analysis of the conclusions that you present in your second paragraph, then they are poorly worded. I find davesims' interpretation a reasonable one, which leads me to agree that the conclusions need to be tempered and clarified.


You would do well to consider the very real possibility that it is in fact you who is misguided, and not the rest of the world. You come off sounding childish when you refuse to even consider the possibility that you are simply misinterpreting the purpose and conclusion of the study. The only reason most people can think of to explain your behaviour is that you have an axe to grind and just want to shoot down anything that paints static typing as a positive thing.


When it says "static type checking" it does not mean "all static type checking" but rather "good static type checking". And this is what the study showed (ignoring issues of methodology and sample size for the sake of argument): a (good) static type system would have caught more errors than unit testing, therefore static typing is good.

Generalizing any comment to all static type systems is silly: there are languages like C that have a static type system but provide basically no additional safety at all. You can easily provide examples of really bad statically typed or dynamically typed languages, but these examples say nothing of static or dynamic typing in general: they're just bad. Questions about static vs dynamic typing can only be answered by the best (or at least good) examples of each.

Showing that a good statically typed system is more robust than a good dynamically typed system is a useful proxy for comparing static typing to dynamic typing. This is similar to a study on seat belts ignoring poor seat belts that strangle the passengers in the event of a crash.

In short: just because static typing is better does not mean all static type systems are better, because you can always come up with a sufficiently bad example of static typing.


>I'm not sure what you're reading, but there's no qualifications in the language used here

That is precisely my point. You are saying "this comparison of coke vs pepsi is no good because they used cold coke, and when I drink warm coke it isn't very good". Yeah, no shit. Stop drinking warm coke. Your decision to drink warm soda has no bearing on the test of cold soda vs cold soda.


> Yeah, no shit. Stop drinking warm coke.

Fine, then don't claim something like "All cokes in all contexts at all temperatures are better than all pepsis in all contexts at all temperatures."

This is equivalent to what the study does with static vs. dynamic. Your argument, if you actually had a point, would be something along the lines of, "wait I'm talking about this boutique hand-crafted cola (Haskell) I get at Whole Foods, not that old Coke (Java), that's 20 years out of date!"

You're trying to retroactively reduce the scope of a study you didn't write. The conclusions clearly use generic language that brings all statically typed languages into a comparison with all dynamic languages. The false equivalence is not mine! It's the study's. If you want it differently, go write your own study that reduces the scope of the conclusions.


I'm gonna have to disagree with you about the conclusion you're drawing. Yes, they are using the generic phrasing of "static typing" vs "dynamic typing", but this is because the study was intended to test the concept of static vs dynamic typing, not particular instances of it. However, seeing as we only have specific instances from which to test, it used the best one currently in widespread use. I don't see this as a problem, nor do I think the wording of their conclusion necessarily implies anything about all instances of static typing currently in use. Sure, it left that open as a possible interpretation for people looking for justification of a preconceived notion, but you can't really blame that on the authors.


> but this is because the study was intended to test the concept of static vs dynamic typing, not particular instances of it

Help me out here -- since the study confines itself to a handful of small Python programs translated to an idiosyncratic language like Haskell, how can the scope of the study possibly in any way qualify as a study on something so broad as "the concept of static vs. dynamic typing"?

Are you not confusing a better, more appropriate argument you'd make for the argument actually made in the paper?

EDIT: > Sure, it left that open as a possible interpretation for people looking for justification of a preconceived notion, but you can't really blame that on the authors.

Is that really an argument you want to make, that I can't blame an author for using broad, imprecise language that implies unwarranted conclusions in an academic paper?


>Help me out here -- since the study confines itself to a handful of small Python programs translated to an idiosyncratic language like Haskell, how can the scope of the study possibly in any way qualify as a study on something so broad as "the concept of static vs. dynamic typing"?

You raise a good objection here. Is it possible to draw conclusions about the class of type systems labelled "static typing" vs dynamic typing by using a small sample of programs? I think this is where the impedance mismatch is occurring. The author seems to take static typing to mean "what can currently be accomplished through static typing", and thus he was justified in using the strongest static type system in use to do the study. Taken this way, the study seems meaningful.

Taking the other meaning, the class of type systems labelled static typing, then you end up with a very large set of languages each with (perhaps) varying amounts of power. Doing a study with just one static language does seem inadequate. Although, depending on the class of errors caught, it may still be valid. As far as I've seen, Haskell doesn't catch new classes of errors that are impossible in other systems, it just makes it a lot easier to do so. So essentially Haskell has the same power as other common type systems. If this holds, then the study would still be valid. (Admittedly I know very little about Haskell so I could be completely wrong).

TLDR: I see what you're saying, and I do agree that there needs to be more said before his conclusion can be supported by the study.


But isn't it problematic that it compared real-world average unit tests with the best-available type system?


I don't think so -- anyone is free to choose to use the best-available type system. You can't just choose to write the best possible unit tests.

He could only compare one of the best possible environments for writing dynamically typed code and unit tests to one of the best possible environments for writing statically typed code.


>Fine, then don't claim something like "All cokes in all contexts at all temperatures are better than all pepsis in all contexts at all temperatures."

He didn't. He said "coke tasted better than pepsi". I've explained this to you several times already. You are the only one saying anything about "all the time in every context". You. Not the author, not his paper. You.


Get me, still waiting over here for a relevant quote from the paper. I've given mine. Where are yours?


> He said "coke tasted better than pepsi"

I think he actually said "coke tastes better than pepsi". That verb tense has very different implications.


You make a good point -- I don't think any statistical study will ever be able to show that static typing is better.

I do think that a rational argument can show it, but my argument is too long to fit into the margin.


I think that a study over a broad set of applications of considerable complexity could provide enough statistical evidence that most people would be comfortable coming to a conclusion. That study, though, would take a very large effort. Large enough that it may never be done.


I'm convinced that dynamically typed languages are a transitional technology that will be superseded once we develop type systems that are both usefully strict but also flexible.

After over ten years working in dynamic languages I'm very happy to have a compiler on my side again.


Amen, I've spent the last 3 years of my career working on an increasingly complicated Perl project and frankly, I've had enough. My pet projects now are all Haskell and I can't believe how fun it is.

I spend most of my time these days fixing bugs and regressions due to the sheer scale of the project, and I'm a running meme at work for saying "a type checker could have caught that!". I can't imagine going back to a dynamic language now.


Are you having fun because you switched to a statically typed language or are you having fun because you switched to a functional language? Would you be having less fun had you chosen Clojure?


I started a parallel implementation of a bunch of machine learning algorithms in Clojure and Scala. I figured the Clojure version would pull out ahead thanks to my previous lisp experience and lisp's history in that domain.

I was very surprised to discover that my Scala code was a lot easier to understand and maintain and had fewer bugs.


What do you attribute this to? The static typing of Scala or the libraries that you use?


Definitely typing. I wasn't using any libraries.


You know, I'm probably having fun because of both. I love the type system Haskell gives me for exploring a problem space without diving into a solution. I am enjoying working in a functional language because it forces me to work in really small pieces and slot them together. I try and do this in Perl too, of course, but it can sometimes be easier to just cargo cult some things and forget to come back to them.

I think what I enjoy most is the type system though, and the ability to make massive refactorings until stuff compiles. 9 times out of 10, things just work after that.


Seems to me that that's what static typing proponents have been saying for years (if not decades), and it still isn't true.

That said, I'm currently hacking on a compiler to see if I can come up with a design for a static language that's almost as pleasant to use as a dynamic one, so I haven't completely given up hope. But it certainly seems like the data point that no such superior type system (strict but flexible) has yet become widely popular should not be underestimated.


That's because it takes years or even decades to really get this right, like a lot of other sophisticated technology. Look how long it took to get the JVM to where it is today. This is fundamental research and hard stuff.


What you are looking at is not just two variables (strict, flexible); there is also "ease of use". Arguably, dependently typed languages are the most strict but also the most flexible: you can write the type of a function that only accepts primes as input. They are hard to use, though. Dynamically typed languages are flexible and easy to use, but not strict. Java-likes are strict and easy to use, but they're very inflexible.


So we're talking about type inference - which way are you going?


What dynamic language were you using and which language are you using now?


I pretty much lived in Rails from 2004-2011 but did a lot of Python and Perl before that.

These days it's mostly C++/Obj-C but I'm keeping an eye on Haskell for iOS.


> I'm keeping an eye on Haskell for iOS.

Really? Is that even possible? Can't imagine Apple being okay with that.


They've significantly relaxed their restrictions on iOS languages. It's not any weirder than running C# binaries via Mono.


Apple's fine with such things… provided you don't let arbitrary code run in your system. If all your VM/runtime runs is what is in the application, it's fine.


Sure, GHC is on ARM now.


I haven't heard anybody suggest that static languages should allow lists to contain arbitrary types. Hmm, do tuples cover all the use cases for that feature?


If you want a list of heterogeneous "things", either you have a list of "I don't care what's on the list", which works without much trouble in languages like Haskell, ML and the rest, or you actually have a list of things with a common property, such as "I want to be able to turn these things into strings and sum them" or "these are either numbers or strings". All functional languages handle the latter well through ADTs, and the first case can be solved easily in Haskell with existential types (I don't know about other languages).
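A rough Python analogue of the "common property" case, using `typing.Protocol` as a stand-in for what Haskell would express with a type-class constraint (the names `Showable`, `Name`, and `Count` are invented for illustration):

```python
from typing import List, Protocol

class Showable(Protocol):
    """The common property: anything that can render itself as a string."""
    def show(self) -> str: ...

class Name:
    def __init__(self, s: str):
        self.s = s
    def show(self) -> str:
        return self.s

class Count:
    def __init__(self, n: int):
        self.n = n
    def show(self) -> str:
        return str(self.n)

def render(items: List[Showable]) -> str:
    # We know nothing about the elements except that they satisfy Showable.
    return ", ".join(item.show() for item in items)

# A heterogeneous list is fine as long as every element has the property.
mixed = [Name("alice"), Count(3)]
```

In Python this is only checked by an external tool like mypy; in Haskell the equivalent constraint is enforced by the compiler itself.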


>type systems that are both usefully strict but also flexible.

that's basically the design criteria behind Go's type system.


I think Go's type system is decades behind the state of the art. I can't believe they repeated the mistake of Boolean Blindness [1]. Sum types and pattern-matching are crucial for useful strictness with flexibility. This results in funny things like encoding the optional error result in Go as a type product rather than a sum, allowing reading of a result even if it does not exist due to an error.

[1]: http://existentialtype.wordpress.com/2011/03/15/boolean-blin...


...your problem with Go is that it has a boolean type?


No, did you read that article?

My problem with Go is that it does not have sum-types and pattern-matching. This means that branching (conditionals) in Go do not gain any type-information. And that means that programmers have to manually keep track of the invariants that hold true in each of their conditionals, and if they get them wrong, they get no help from the compiler.
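To illustrate the product-vs-sum point in Python (all names here are hypothetical; Python can't enforce this statically, so the `isinstance` branch is only a runtime stand-in for compiler-checked pattern matching):

```python
from dataclasses import dataclass
from typing import Union

# Go-style product: both fields always exist, so nothing stops a caller
# from reading the value even when the error is set.
def divide_go_style(a: float, b: float):
    if b == 0:
        return 0.0, "division by zero"  # a "value" exists despite the error
    return a / b, None

# Sum-style result: a value is *either* Ok or Err, never both.
@dataclass
class Ok:
    value: float

@dataclass
class Err:
    message: str

Result = Union[Ok, Err]

def divide_sum_style(a: float, b: float) -> Result:
    if b == 0:
        return Err("division by zero")
    return Ok(a / b)

def describe(r: Result) -> str:
    # The caller must branch before it can touch .value; with real sum
    # types and pattern matching, the compiler enforces this branching.
    if isinstance(r, Ok):
        return f"got {r.value}"
    return f"failed: {r.message}"
```

The sum encoding makes "forgot to check the error" a type error rather than a latent bug.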


I applaud the effort to try and dissect the problem scientifically.

If a whole program has 1 bug due to being implemented with dynamic types over static types, and that bug has gone unnoticed, then it can't be particularly important.

The other common proposition is that dynamically typed languages are faster to write in than statically typed languages. If this is true then we need to compare the saving in development time with the cost of the bugs which go undetected.

My gut feeling says that this line of analysis is never going to prove that static typing is inherently "better".


What? He specifically says that many of the bugs he discovered were exploitable. Just because you missed them doesn't mean some hacker won't.


> The other common proposition is that dynamically typed languages are faster to write in than statically typed languages.

This part is also very tricky, as most people's hard-earned experience with this is (almost by definition) old. In recent years, type inference has reduced the type-caused slowdown immensely. E.g. in Haskell you can (+) write large statically typed programs without specifying any types at all; they are inferred by the compiler.

(+) But please don't.


Heh I had to parse your sentence twice as (+) is a function in Haskell.


Recent years? 70s more like.

The ML way is to not write type annotations unless they're needed. Not that I agree with it, but it certainly works in some way.


From my personal experience, the saving in development time is very much real.

I used to work on .NET, and you'd have to program against crazy, non-intuitive patterns in order to keep your code "clean" (I hated IoC containers, as well as writing all that boilerplate code for Attributes. And for Java, remember that hilarious post about the Factory-Factory-Factory pattern? http://discuss.joelonsoftware.com/default.asp?joel.3.219431?)

I'm not saying statically typed languages are bad. Type errors always bite me in the ass in Ruby, but it's a small price to pay (IMO) for better maintainability.

EDIT: By the way, it sounds like I'm an ignorant dynamic-typing lover. I'm not; I still yearn for that type safety net, but I'm just speaking from a pragmatic perspective.


     I hated IoC containers as well as writing all
     that boilerplate code for Attributes
C# and Java are only representative of static languages because they are mainstream.

My static language of choice is currently Scala. It is less verbose than both C# and Java (for example you never have to generate setters/getters). Also, whenever I work with Java I always work with projects such as Lombok / Guava.

And I have no need for "IoC containers". This is partly because of Scala, but also because of the libraries I'm using. Currently I'm using DropWizard for the development of web services, which is just a thin wrapper around Jetty + JAX-RS. Because the resources (commonly known as controllers in MVC frameworks) are configured as instances (not classes specified in web.xml), then IoC is easy because you are free to do the bindings yourself.

IMHO, the necessity for IoC containers in C# / Java has risen from poorly designed frameworks. So you may not have a choice in the matter, however using something like Google Guice in combination with Scala is much nicer, because of Scala's mixins in combination with structural typing.


I can't speak for/to Java, but C# has come a long way.

I used to feel just like you do (back in the ASP.NET 1.x days). I went to Ruby on Rails for a while. But once .NET 2.0 (generics/nullable types) and then 3.5 (LINQ) arrived, I was hooked on C# again. We also have the dynamic type now.

I want the features a statically typed language gives me with minimal boilerplate.

It's hard to give up IDE features like "find everyplace this method is called" and know that you found all of them (disregarding reflection, etc and assuming private methods).

Visual Studio will do this type of thing for Python now. We aren't terribly far away from having the benefits of both.

I really like python as a scripting language. I just don't feel as good about scripting languages in big projects as I do a modern language like C#. (Just my humble opinion, no science).


I like C# these days quite a bit, after ~10 years in both Java and C#, and now ~7 in Ruby, my language preference is Ruby first, then C#. I do a lot of Android development these days too, so I retain a strong connection to Java.

But when I have to move from Rails to, say, ASP MVC (which is quite good, btw), generics are actually one of my biggest frustrations. Even with some of the quasi-dynamic typing available in C#, and even though I believe I have a pretty strong grasp of advanced generics concepts (in? out? Generic methods? Wildcards? Argh!) to me they introduce a maddening level of conceptual complexity and un-readability that are simply not worth the trouble.


This is not complexity born of strict typing, this is complexity born of object oriented programming. This is what we do in OOP. We take a complex problem and attempt to map it to an even more complex hierarchy of objects and interfaces.


The kinds of static type systems you find in C# & Java are too primitive. Something like Haskell with perhaps a little less religion about mutability is a whole different story.


> Something like Haskell with perhaps a little less religion about mutability is a whole different story.

I consider Ocaml [1] to fill that particular niche. When I was going through my FP phase, I tried that out a little before going full-on lazy evaluation with Haskell.

From what I can recall, the type inference isn't as cool as what's in Haskell, but at least it's there. There are plenty of libraries, and packages for major OSs.

[1] http://caml.inria.fr/ocaml/


What do you mean by "the type inference isn't as cool as what's in Haskell"?


IIRC, Ocaml does use Hindley-Milner, but Haskell has some tweaks that make it work better. I may be mistaken; someone who is more current on both, please correct me.

In retrospect, I think I was also thinking about type-classes in Haskell, which Ocaml does not have. But from what I understand Ocaml has other means to achieve the same ends.


Does Scala fit your idea of something in between? It has nice type inference features and does not require pure immutability.


Personally I like Scala but I think it's too complex and a little too clever to escape the FP niche. I'd be happy to be wrong about this.


Interesting. I never perceived Scala to be in a functional niche. As far as I know most people consider it to be an object-oriented language first and foremost, with functional features.

Removing inheritance would have made the language (and every other language, too) a lot easier, but seeing that people cope with C# or Java quite well I'm not sure about the merit of the "complex" claim.

Comparing the C# and the Scala spec is very enlightening, even though they have different writing styles of course (so I won't bother bringing up page numbers).

Checking and realizing which "features" are in one language, but not in the other, is very helpful to gain some insight into this topic.

What do you think?


I don't think page counts of specs or feature lists really tell you that much. I didn't find it that hard to get up to speed in Scala, but it seems to scare away too many Java people.

My main criticism is that it allows too much syntactic flexibility.


I'm a programmer that in general prefers dynamic languages. Recently I've started writing production code in Scala for a couple of web services for which I really needed the performance and flexibility of the JVM.

Scala does have problems. But NOT the language. I find the language to be extremely elegant and well designed.

The problem lies with the community. I find Scala libraries to be an abomination of taste and common-sense. I don't know why that is, but in general I stay away from libraries commonly used by Scala developers.

For instance I prefer JUnit over ScalaTest, I prefer JAX-RS (DropWizard) over Scalatra or Play. I prefer Maven over SBT.

Of course, you could say that the language itself invites this nasty style of programming, because the syntax is too flexible and the features too powerful. However I strongly disagree.

For instance, I've worked with a lot of Ruby and Python libraries over the years, and most popular Ruby/Python libraries are extremely well designed, easy to use, and easy to look under the hood of. Ruby on Rails, for instance, was not pretty; however, starting with version 3 it went through a major refactoring effort, and now the codebase is clean and easy to follow, while being one of the easiest to use and most full-featured web frameworks ever built.

Unfortunately when picking a language, you do have to rely on a community and an ecosystem of libraries. However in the case of Scala, if you don't like the style of the current community, you can just pick from the thousands of already available and mature Java libraries.


Python culture/community is a greater thing than Python. If this culture could be cloned in other places...


David Pollak has long been one of Scala's greatest champions but his grudging acceptance that Scala is hard, from a lot of experience in the field, is worth a read:

http://blog.goodstuff.im/yes-virginia-scala-is-hard/


Yeah, but Scala is hard in the same way that Ruby is hard.

You can use a subset that's easier; however, using the more advanced language features needs a level of understanding that's beyond the capabilities of many developers.

And this is in fact true of most mainstream languages. Java may be an easy language to learn, but Java isn't just a language, but a platform - and as soon as beginners start messing around with multi-threading (since multi-threading capabilities in Java are in your face), then all hell breaks loose ... from this point of view, Scala may in fact be easier to deal with than Java is for beginners.

And for instance I see people complaining about the method signatures exposed by the collections API. I can't read those signatures very well myself, however in the case of a dynamic language, such as Ruby, you'd have to look at the source-code to see what the method actually returns. So IMHO, even if those signatures are complicated, at least you've got signatures to look at.


I really don't think so.

Scala requires a more disciplined mindset. David Pollak actually developed Lift in response to problems he had managing big Rails projects so I don't think he would be advancing this analysis if he didn't think Scala was harder than Python/Ruby.


Yes, I'm full aware of that. I'm not claiming that it tells us the complete and final truth, just that it is an interesting data point.

In my experience a lot of those "I programmed 15 years in Java, get off my lawn" senior developers get angry about Scala. This is interesting, because it isn't that way for other languages running on the JVM like Clojure, JRuby or Groovy.

It is only a speculation why it is like that, but in my opinion it is because alternatives like Groovy, JRuby, Clojure are considered to be some sort of "supplemental" or "add-on" languages by those people, while Scala is seen as full-scale alternative to Java (not in the sense that it will replace Java, but in the sense of "I can write 100% of my application in this language without dropping down to Java for the performance critical parts").

I think it is interesting how different the reaction is compared to C# <> F#, for instance. Is it because Java and C# developers come from different backgrounds? Because F# is "made by Microsoft", while Scala is not "made by Oracle"? Is it because Java developers were happy with Sun telling them that they wouldn't need all those "fancy" features of the .NET languages? I think this would be interesting to discuss further, although I think it is hard to come up with valid data points.

From my experience, most of the claims about "complexity" and "too hard for beginners" come not from people learning the language, but from people with > 5 years of Java experience _not_ wanting to learn another language.

I'm quite ambivalent about the syntactic flexibility. On the one hand, they cut it down for 2.10, but on the other I think it is a superficial measurement. There are a lot of languages with more flexibility and a lot of languages with less, both seem to be alive and well. (Just want to make it clear that I think your stance is totally valid, even if I disagree slightly. Different people have different tastes, and this is a good thing!)


I've found Scala to combine the best of both worlds, non-verbose and statically typed.


Using Java and C# as an example of typed languages is like using Youtube comments as a representation of humanity.

Once you get the hang of them, writing in any modern statically typed declarative language puts you into a state where code flows out effortlessly. It is a wonderful feeling.


Java, XML/XSL/XSD, and design patterns (gone wild) all hit about the same time.

Most (but not all) of my pain slinging Java comes from misuse of XML and design patterns. Now we're on to services (SOA). Yippee. It's just a socket, marshaling, retry logic. Does it really need its own ecosystem of consultants, conferences, and books?

Java the language could be more terse with type inference, string literals, default visibility modifiers, and some other syntactic sugar. (I'm pretty optimistic about Ceylon as a successor.) But 18 years on, for the most part, I'm still very pleased with Java. Except autoboxing and annotations; garbage.

Java the platform (JDK, J2EE, java.util.*, collections, javax.*, misc APIs) should be burned to the ground, plowed under, and given a complete do-over. And anyone who says "factory method" gets keelhauled.


>From my personal experience, the saving in development time is very much real.

But your personal experience is comparing expressive languages to unexpressive languages, and then making the mistake of thinking you are comparing dynamically typed languages to statically typed languages. What does .NET's ugly (in your opinion) API have to do with static typing? The Factory-Factory-Factory pattern is a joke even in Java, but it also has nothing to do with static typing at all.

>By the way, it sounds like I'm an ignorant dynamically-typed lover. I'm not

Well, you are repeating the most commonly used and easily debunked strawman, so it really does sound like ignorance. Have you used a modern statically typed language? If not, by definition you are in fact ignorant.


Got to say this is hard to admit, but after reading everyone else's comments, I am indeed ignorant. Thanks for the reality check.


There shouldn't be a stigma about ignorance like there is. Everyone is ignorant of lots of stuff, nobody knows everything. Knowing that you don't know something is good, not something to be ashamed of. If you are aware of what you don't know, then you can learn it.


An interesting thought: a good type system can actually make a language more expressive.

A perfect example is QuickCheck. QuickCheck allows you to write complicated tests very simply by relying on the type system. You just write out the invariant and the type system automatically figures out which random generators to use to run the tests.

QuickCheck has been ported to a bunch of other languages, but it's more complex and seems harder to use in dynamically typed languages as compared to Haskell.

A simpler example in the same vein is Haskell's read function. Essentially, read is the opposite of toString--it goes from a string to some value. The beauty is that you never need to specify what type you're parsing; it can figure out what type it needs to be thanks to the type system. So instead of having a bunch of functions like parseDouble and parseInt, you have a single read function. This also makes the library prettier by maintaining the symmetry between show and read (toString and fromString).
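A hand-rolled Python sketch of the property-testing idea (this is not QuickCheck's actual implementation): you state an invariant and check it against many random inputs. Note that the manual `random_int_list` generator is exactly the wiring that QuickCheck derives from the property's type in Haskell:

```python
import random

def random_int_list(max_len: int = 20) -> list:
    """A hand-written generator; Haskell's QuickCheck would derive this
    automatically from the type of the property being tested."""
    return [random.randint(-1000, 1000)
            for _ in range(random.randint(0, max_len))]

def check_property(prop, generator, runs: int = 200):
    """Run the invariant against many random cases.
    Returns None if it always held, or a counterexample otherwise."""
    for _ in range(runs):
        case = generator()
        if not prop(case):
            return case
    return None

# Invariant: reversing a list twice gives back the original list.
def reverse_twice_is_identity(xs: list) -> bool:
    return list(reversed(list(reversed(xs)))) == xs
```

The Haskell version needs only the property itself; everything else here is the extra plumbing a dynamically typed port has to supply by hand.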


I think most people would agree that these two circles overlap on a Venn diagram:

    (    Bugs found by unit tests (     ) Bugs found by type-checking    )
The disagreement is how much. Also, type-checking is free[1], while unit tests have to be manually written.

I'm glad someone spent a lot of time trying to answer this question, but I don't think it will affect my choice of language in any new project. I like to write code in the languages I like, and bugs be damned.

1. A common argument is that static-typed languages slow development. I'm not touching that land-mine.


> 1. A common argument is that static-typed languages slow development. I'm not touching that land-mine.

If you claim type-checking is free, the counter-argument is not that it "slows development." The counter is that type-checking is not free because it incurs measurable costs. You may sacrifice dynamic features, you may have to add declarations and type casts-- these are all costs whether they "slow development" or not.

(Simple solution: don't waste time trying to claim type-checking is free and just focus on the benefits.)


I would feel bad editing my comment now that you've replied, but my original intent was to say that type-checking is "free." At some point the quotes were lost.

To get as close to the land-mine as I am willing: I like C. I like Python. I like Ruby. But most of all, I like using the right tool for the job.


I am not claiming static typing is free.

But it is also untrue that you have to "add declarations and type casts". With type inference, you don't actually have to.


But you have to use a language with type inference. That's still a cost. Maybe not a big one, but that was my point (and I'm definitely nitpicking a bit, I realize that.)


A constraint is only possibly a cost - if it forces behavior you wouldn't have chosen otherwise.


True, but free usually means freedom from both cost and restraint, so the point doesn't change much. I should have said "constraint" rather than "cost".

Though the original commenter did follow up, clarifying that the common interpretation of "free" doesn't closely match the point he'd intended to make.


Fair.




Context: My day job has been coding Ruby for years, and I'm a big fan of Haskell.

Someone was asking me about my thoughts on static vs dynamic. And I realized that I have two reasons for my tests: a) checking that I didn't do something stupid like mistype a method or variable name, and b) validating my logic.

Sometimes I think it swings 20/80 and others 80/20. But either way, I know that's why I gravitate towards integration tests in Ruby, and that some of them would go away if it were statically typed.

Even after coding in Ruby all these years, I'm still just as afraid of (and susceptible to) problems in a).


a) If you aren't touching that land mine you're missing the point. You can't say that one is free (except maybe it has a cost), and the other has a cost. That's just nonsensical.

b) Second, I'm going to argue that tests are actually free. And I think this because I don't care how good static type proponents think they are, I know they don't wait until it compiles and SHIP SHIP SHIP, they actually run their damned program. The alternative to automated testing is manual testing, not no testing.


> I'm going to argue that tests are actually free.

> The alternative to automated testing is manual testing, not no testing.

Actually, unit tests have different costs than manual testing. That doesn't make them free. For example, with unit tests, you now have more (possibly buggy) code to maintain.


> For example, with unit tests, you now have more (possibly buggy) code to maintain.

Well, that's no problem. Just hit them with a bunch of unit tests...


Zeno stirred a little...


Really stirred? I thought he proved that motion is impossible...


ha! touché


Are we really talking about type checking or the larger circle of validation (of which type checking is just a small part)?

( Bugs found by unit tests ( ) Bugs found by input validation )

Or in other words...

String s = "lastname'; drop table user--";

...is still a perfectly acceptable string.

It seems to me that type checking is the simplest form of validation (are you an int, are you a String) and nothing more. It won't tell you if that int is positive or negative, or if that string is an email.

When dealing with either static/dynamic languages I think more unit tests should be spent validating.


No, this is just common ignorance of static typing. That string is a perfectly acceptable String. But it isn't a perfectly acceptable Query, and you can't pass a String to the database, only a Query. In order to turn a String into a Query, it has to be passed to a function that escapes problem characters safely. You need to use such a function regardless of dynamic vs. static typing, but static typing enforces that you always use that function and can't forget and accidentally submit an unescaped string to the database.


> but static typing enforces that you always use that function, and can't forget and accidentally submit an unescaped string to the database.

So you're saying it is impossible to do this without static typing?


I don't think anyone is making that claim. You can obviously do runtime inspections of types before building the query, or rely on the "runtime inspection" of getting an exception when the unescaped String doesn't support the Query method being used.

It's entirely possible to do this without static typing. It's impossible to guarantee that all database calls use a Query instead of a String without running the code in some form.


I think he's saying it won't be automatically checked for you without static typing. Since, you know, that's what type checking is.


To have the compiler trap accidents for you? How would you do this if a query and a string were the same thing?


That is why you make them separate. You only really start taking advantage of the type system after you learn to encode system invariants and rules into the type system.


He is making the point that you can create a separation between Query and string just as easily in a dynamic language; it just gets caught at runtime (preferably during testing) rather than compile time.
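That runtime separation can be sketched in a few lines of Python (the names `Query`, `escape`, and `run_query` here are made up for illustration, not from any real library):

```python
# A hypothetical Query wrapper: the only way to build one is through
# escape(), so any raw string reaching the database layer is a bug
# that surfaces at runtime rather than compile time.

class Query:
    def __init__(self, sql):
        self.sql = sql

def escape(raw):
    # Stand-in for real escaping; just doubles single quotes here.
    return Query(raw.replace("'", "''"))

def run_query(q):
    # The runtime check that a static type system would do at compile time.
    if not isinstance(q, Query):
        raise TypeError("run_query() requires a Query, not a raw string")
    return "executing: " + q.sql

safe = run_query(escape("lastname'; drop table user--"))
# run_query("select * from users")  # raises TypeError, but only when this line runs
```

The discipline is identical to the statically typed version; the difference is purely when the violation is reported.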


So when the Query is sent to the database MySQL actually receives a Query object and then parses that Query object?

...oh wait, right before it is sent to MySQL it is turned back into a string again.

My point is that static typing doesn't help you do anything other than verify that the objects being passed are of a particular type. I'm not saying static typing is bad or good I'm just saying that type checking itself is NEARLY USELESS unless you include some sort of validation.

    Query q = new Query("select * from users where id = (id)");
    QueryParam qp = new QueryParam("(id)", "25");
    q.addParam(qp);
    ResultSet rs = q.execute();

    public class Query extends RawQuery {
        public ResultSet execute() {
            String sql = this.getSql();
            for (QueryParam qp : this.getQueryParams()) {
                // Naive substitution, no escaping -- still injectable
                sql = sql.replace(qp.getId(), qp.getValue());
            }
            return super.execute(sql);
        }
    }
That's all type-safe. So it should be good, right?


Yes, you can write a Query type that is vulnerable to SQL injection, if you want to.

But if you write a secure version, you only have to write it once. You only have to maintain it in one place. You only need to test it in one place. And if you forget to use your secure Query type, anywhere else in your code, the compiler will yell at you. It's a significant advantage.

This is easier to see in a language with a rich, flexible and expressive type system than it is in Java. The writer of the original article used Haskell for a reason.


> But if you write a secure version, you only have to write it once.

> You only have to maintain it in one place.

> You only need to test it in one place.

Again, so this cannot be done in a dynamic language? If it can be done, why bring them up?

> And if you forget to use your secure Query type, anywhere else in your code, the compiler will yell at you. It's a significant advantage.

The only thing the compiler will yell at you is if you passed a type that is not of a Query type. The compiler will not yell at you for getting the current session directly or creating your own jdbc driver for that matter.


> "The only thing the compiler will yell at you is if you passed a type that is not of a Query type. The compiler will not yell at you for getting the current session directly or creating your own jdbc driver for that matter."

In Haskell, I'd have a module, Database, that held all my db code. That module would export functions something like

  query :: Query -> DBResult
  update :: Query -> DBAction -> DBResult

(read those as "query is a function that takes a Query and returns a DBResult.")

In the rest of my program, those functions would be the only way to talk to the database. There's your guarantee.

Could I, rather than using my nice database module, instead drop into IO and write code to do something vicious? Surely. But now we've moved beyond bugs and into active malice.

> "Again, so this cannot be done in a dynamic language? If it can be done, why bring them up?"

It's harder. With duck typing, if it looks like a Query it is a Query, no? Even if it drops your table. I'm no expert on dynamic languages, and I'd believe that there are sophisticated object hierarchies that can do these things (at runtime...), but the original article is empirical evidence that real projects get this wrong.

Really, though, try a language with a modern type system and see for yourself. I know we Haskell users sound like zealots, but the difference between the Java and Haskell type systems truly is night and day.


> In the rest of my program, those functions would be the only way to talk to the database. There's your guarantee.

Honest question. Take these pseudo SQL calls:

    //Bad Person
    username = "lastname'; drop table user--"
    
    //Good Programmer
    query = "select * from users where name like %[username]%";
    input = {"username":"frank"};
    result = execute(query,input);
    
    //Bad Programmer
    query = "select * from users where name like '%"+username+"%'";
    result = execute(query, {});
	
vs

    //Bad Person
    String username = "lastname'; drop table user--"

    //Good Programmer
    Query q = new Query("select * from users where name like %[username]%");
    Input input = new Input(username);
    q.addInput(input);
    Result r = q.execute();
    
    //Bad Programmer
    Query q = new Query("select * from users where name like '%"+username+"%'");
    Result r = q.execute();
    
	
	
Could you solve this better using a static system? Right now I see no difference between the good and bad


> "Right now I see no difference between the good and bad"

You're building a new query string each time you create a Query object, concatenating strings by hand. With that approach, each time you build a Query object you have a fresh opportunity to mess up. So you're right that there's no difference between your two cases.

Let's drop my off-the-cuff example and look at how a real library, postgresql-simple, handles the issue:

  query :: (ToRow q, FromRow r) => Connection -> Query -> q -> IO [r]
 
Usage example

  query conn "select x from users where name like ?" (Only username)
Do you see the difference? Instead of sticking the username into the SQL query by hand, we use a query function that takes three parameters: a database handle, a Query with a '?' character, and a thing you want to use in the query. The function takes care of properly escaping the username during interpolation. (The "Only" is just a wrapper to make sure we're handing in a datatype we can query with.)

Notice that because Query is a distinct type from String, just doing

  query conn ("select x from userse where name like" ++ username)
doesn't typecheck. Bad Programmer would have a hard time screwing this up.

The full documentation for postgresql-simple is here: http://hackage.haskell.org/packages/archive/postgresql-simpl...


Sure, and Rails offers a similar syntax:

  User.select('x').where('name like ?', username)

But if your language allows literal string interpolation (as Ruby does), what prevents you from doing this:

  query conn ("select x from users where name like #{username}")

How do type-safe languages prevent this?


Query isn't a String. String interpolation[^1] would de-sugar to something like this:

  query conn ("select x from users where name like " ++ username)
++ is a function that expects two Strings. The "select..." stuff isn't a String, quotation marks not withstanding. When we try to hand a Query to ++, the compiler screams bloody murder.

Longer explanation: I suspect the syntax is a bit confusing, since while I keep saying "select ..." is a Query, it looks an awful lot like a String. Here's what's going on. Haskell has a typeclass called IsString. Query is an instance of IsString, as is String.[^2]

Quoted text can represent any instance of IsString. So the compiler sees a function that expects a Query and an IsString of some sort, and through the magic of type inference, it decides that the IsString must be a Query.[^3] And when you try to use a function that concatenates Strings on that Query, it knows that Something's Not Right.

[1]: Haskell doesn't have string interpolation. But if it did, this is how it would work.

[2]: And other instances as well. postgresql-simple actually uses ByteStrings, not Strings, for performance.

[3]: I've fuzzed the evaluation order a bit, for simplicity. In practice the first error reported might be that you've passed 2 arguments to a function that expects 3.


In your static example, "Bad Programmer" would be fine, because the Query constructor does escaping. You could do this in a dynamically typed language too, but notice that you don't, you just use strings. The difference between static and dynamic is that with static typing, you can't compile your incorrect program. With dynamic typing, you find out at run time that you forgot to escape the string (turning it into a Query), when that code actually runs.


I'm admittedly ignorant of any type system newer than C++. In a modern static language, how would you design Query such that any SQL injection is caught at compile-time?

On the dynamic side, Rails (in Ruby) doesn't currently catch SQL injections, but it does catch HTML-escaping injections. It (roughly) tags all strings as tainted by default, and when you send them to the browser, it escapes them. If you want to send literal ampersands, angle brackets, etc., you have to mark them as explicitly safe. Since most of your literal HTML is generated by templates (which themselves distinguish variables from static HTML), you end up with run-time safety unless you actively try to break out of it.



If he builds the final query string before giving it to Query, his valid query parts that rely on not being escaped would also be escaped.

To make a safe query type you'd have to provide non-string primitives to build one, if I understand correctly. You can't allow just a full query string (with all of the injections already in place) to be converted to a Query type (as in his Bad Programmer example).


Interestingly, Wikipedia lists SQL injection as being mitigated by strong typing (no, I did not just edit it!) http://en.wikipedia.org/wiki/SQL_injection


No, I am saying your strawman is a strawman. You were claiming static typing doesn't help since a string can contain a bad query. Now you are suggesting that you wouldn't write such code in a dynamically typed language anyways? Then why did you offer it as an example of how static typing doesn't help?

Of course you can make sure you never actually run the bad query with dynamic typing. I assumed it was obvious when talking about static typing that the difference would be compile time vs run time. With a statically typed language, when you make the error, you get told about it by the compiler. With a dynamically typed language, you find out about the error later, when that code actually runs.


> Now you are suggesting that you wouldn't write such code in a dynamically typed language anyways?

I never suggested that...?

> With a statically typed language, when you make the error, you get told about it by the compiler. With a dynamically typed language, you find out about the error later, when that code actually runs.

> static typing enforces that you always use that function, and can't forget and accidently submit an unescaped string to the database.

So you are really just saying "static typing requires you to use static typing". This has nothing to do with actually writing good code or having any sort of validation. Just that the compiler tells you that you are sending the wrong type... that's what we are arguing about?

Look my whole point is static typing by itself gives you next to nothing (See my code example below) without some form of validation beyond static typing. That obviously holds true to dynamic typing as well... I'm not even sure what we are arguing about.


> static typing by itself gives you next to nothing ... without some form of validation beyond static typing

You mean like the validation you get when you compile a program?

Yes, a statically typed program that never gets checked is strictly worse than a dynamic program, but that's the whole point of the type system: you can check it. This argument is a strawman because nearly every language with a static type system includes a validation step (maybe Dart is an up-and-coming counterexample).


>I never suggested that

Yes, you did suggest that. I am not sure how this level of cognitive dissonance is possible. What possible purpose does your example serve then if it doesn't impart any sort of meaning at all?

>I'm not even sure what we are arguing about

Clearly. Please, take the time to think through the subject and present a clear point that you will not later pretend you didn't make.


> What possible purpose does your example serve then if it doesn't impart any sort of meaning at all?

The example shows that static typing doesn't do anything more than what it says. It doesn't solve problems/fix bugs or provide some magical insight to the system as you seem to believe.

I'm genuinely curious as to your position and why you are so... clearly opinionated. I'll take the "idiot banner" for today. Please provide me with your insight as to what the fundamental argument (and why you feel so strongly about it) really is.


I think there are two points that are being mostly missed in this discussion:

A) How to define the Query type such that it would convert injection bugs to type-checking errors (No, it cannot simply be a function from a full String containing a query to a Query type, as you demonstrated).

B) Sure, you could define the same Query primitives to do the same in a dynamically typed language. The main difference is that the type-checking errors due to incorrect use of the query primitives would be caught at runtime.

As for A, you would want to define primitives that build query strings safely. That is: Query(unsafe_string_here) wouldn't work. Either because it allows too much (still can inject the original string) or disallows too much (escapes everything, makes the query invalid).

Instead, you would define "select", "update" and other querying primitives as non-string primitives you can use to build queries. You would basically mirror SQL or the query language you use into non-string primitives that allow constructing safe queries.

B) Yes, you could do this with dynamically typed languages. Right before executing your query you would need to do an isinstance() check or some other way to validate that the query was generated using the safe machinery. This means no duck typing. If you allow other, unsafe implementations of the query type here, you get the unsafety back.
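A rough Python illustration of such non-string primitives (the names `SafeQuery` and `to_sql` are invented for this sketch; real libraries such as SQLAlchemy follow the same idea):

```python
# Hypothetical safe-query combinators: callers never concatenate SQL
# themselves; values travel separately from the query text as parameters.

class SafeQuery:
    def __init__(self, table):
        self.table = table
        self.conditions = []   # (column, value) pairs

    def where(self, column, value):
        self.conditions.append((column, value))
        return self

def to_sql(query):
    # Final check: refuse anything that didn't come from the safe machinery.
    if not isinstance(query, SafeQuery):
        raise TypeError("expected a SafeQuery")
    clauses = " and ".join(c + " = ?" for c, _ in query.conditions)
    sql = "select * from " + query.table
    if clauses:
        sql += " where " + clauses
    params = [v for _, v in query.conditions]
    return sql, params  # the driver binds params; no string splicing

sql, params = to_sql(SafeQuery("users").where("name", "lastname'; drop table user--"))
```

The malicious value ends up in `params`, never in the SQL text, and the `isinstance` check is exactly the runtime stand-in for the compile-time guarantee discussed above.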


Again, provide a clear point if you want me to argue against it. Don't just say "I am going to keep making weird nonsensical posts and then pretend I didn't say what I clearly said and then blame you for replying" and expect me to grace you with some magical "insight".


Many of the unit tests would have to be written in a statically typed language like Haskell too -- the author notes that only a handful of tests could be entirely eliminated by the rewrite. Probably worth further study.


In my admittedly-limited personal experience doing something similar, it's really hard to characterize this in a sane way. What you end up with is a pile of unit tests which are "really testing something", yet, some non-trivial percentage of the tests are still redundant to the type system. You feel like you can't throw it away because of the percentage that is a real test of functionality, but if you'd been starting from scratch with the stronger type system you'd have written fewer tests with very different focus.

It's hard to even come up with an example, but consider testing that an HTML generation library doesn't unexpectedly emit text unescaped. The pile of tests you write if you're in Python or Perl mostly translate to Haskell unscathed, in that each individual test still is testing something ("is the href attribute on <a> encoded properly? is the name attribute on <a> encoded properly? ..."), yet considered as a whole the tests have significant redundancy with the type system, because if you set the types up correctly there are far fewer possible ways to screw up than there used to be.


Using the type system to prevent the generation of malformed HTML (and most types of invalid HTML) at compile time is actually a standard example for a situation where unit tests become mostly superfluous in the presence of a static type system. Some unit tests for the escaping functions in the library can be useful, but code that uses the library can just trust the compiler.


That's probably why it came to mind. The real instance I encountered wasn't that, but requires so much other context to explain it wasn't a good HN comment.

Also, rereading my comment, something that may not be clear, when I said "some non-trivial percentage of the tests are still redundant to the type system", I mean per test. On each test, some non-trivial percentage is actually redundant, not that there is some percentage of tests that are totally redundant. You just remove those, of course.


In Haskell, you could define a type AttributeString that was a plain string internally, but encapsulated so it could only be instantiated (or only be serialized) through a function that did the relevant escaping; require attributes to have that type, then the type checker would enforce that property for you as well!
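The same encapsulation trick can be sketched in Python, enforced at runtime rather than by the compiler (the names `AttributeString` and `attr` are hypothetical; libraries like MarkupSafe use a similar design):

```python
import html

# An attribute value that can only be rendered through an escaping step.
class AttributeString:
    def __init__(self, raw):
        self._raw = raw          # kept private; never emitted directly

    def render(self):
        return html.escape(self._raw, quote=True)

def attr(name, value):
    # Runtime stand-in for the type checker: reject bare strings.
    if not isinstance(value, AttributeString):
        raise TypeError("attribute values must be AttributeString")
    return '%s="%s"' % (name, value.render())

rendered = attr("href", AttributeString('java"script'))
```

Since `render` is the only way out of the wrapper, escaping cannot be forgotten at any individual call site.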


I think the cleanest example is the QuickCheck library. In Haskell an important focus is proving static invariants, while in traditional unit tests an important focus is proving code coverage (since stupid type bugs like to hide in uncovered code).


Also, type-checking is free[1], while unit tests have to be manually written.

The use of type-checking doesn't negate the need for unit tests. It just adds another layer of validation. The Unit Test "cost" is still there.


You can write far fewer tests if you have good static validation.

For example, taken to the extreme, you can write 0 tests in Agda, and still have more assurances about correctness than if you had 100% coverage in a dynamically typed program.


The author really needs to be complimented on rewriting swathes of code from Python to Haskell.

At Google, in my project, we've had runtime errors in Python code due to misspelled variables (although that is a different problem) and type errors, something a compiler would have caught.

Strong type checking is something that I truly like about Haskell and OCaml, I'm reasonably convinced that once my program has passed the typechecker, it is logically correct. Though debugging in Haskell is truly a different ballgame altogether (I'm a Haskell noob).

I'll stop here lest this turns into a flame war.


pylint can help with misspelled variables and type errors. I started using it recently and love it. I still love my C++ compiler though and would not trade it for anything else.


pyflakes also detects typos easily, and it's quite useful. In languages that don't catch anything you can usually still use tools for static verification.

As another example, Java may let you get NullPointerException, but FindBugs detects a lot of those.


Wow thanks, I didn't know pyflakes. I knew pylint, but will make it more of a point to run it. I only occasionally need to code in Python, but when I have to, it is legacy code that I modify.


glad to be helpful.

Depending on your editor of choice, you can get integrated pyflakes/pylint/flake8; e.g. I use vim with the syntastic plugin, which is great.

This way, you get a bit of on-the-fly static checking without needing to remember command-line tools or set up VCS hooks, which is much more productive.


Will keep this in mind the next time around!

Thanks again.


  > we've had runtime errors in Python code, due to [...]
  > type error, something a compiler would have caught

Are misspelled variable names and type errors the only runtime errors that you get? If not, what percentage are they?

Personally, I think that people tend to obsess about these specific types of errors because the 'solution' (static typing) is something that already exists, whereas there is no easy solution for other types of flaws.


In Python, there are a lot of errors like "NoneType has no attribute '...'", and those disappear too.

Missing imports and redundant imports also go away.

Lots of lots of invariants in the program can be encoded as types, too, so any bugs relating to them go away too.

When you want parallelism, you get useful guarantees about not changing the deterministic result you had before you added parallelism.

I used to use Python, but after Haskell, there's no way I'd go back...
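Python's optional type hints (added well after this thread) now let a static checker such as mypy flag exactly this class of error; a sketch with made-up function names:

```python
from typing import Optional

# The annotation records that the result may be None; a static checker
# can then warn about calling .upper() on it without a guard, while
# plain Python only fails at runtime when the None path is hit.

def find_user(users, name) -> Optional[str]:
    return name if name in users else None

def shout(users, name):
    user = find_user(users, name)
    if user is None:           # without this guard, a checker complains
        return "no such user"
    return user.upper()
```

This is the dynamic-language analogue of Haskell making nullability explicit with Maybe.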


  > "NoneType has no attribute '...'"
C is statically typed, but I can still attempt to dereference a null pointer. Static typing doesn't save me here, nor does the compiler, as it's possible for these issues to happen at runtime.

This may be something that Haskell doesn't allow, but it's not something inherent to static-typing.


In static-typing's defense, C isn't as strictly typed as Haskell, a pointer (and therefore null) is just basically an integer.

But I totally agree, static typing is no cure for a wrong program. With power/expressiveness also comes a great ability to goof up.


Intercal is dynamically typed, but it doesn't save me any development time or make my code any shorter! (Well, maybe compared to Java :)).

On top of this, the C type system is not really about correctness at all. My understanding is that it primarily helps with performance, memory management (e.g. you know the size of stuff) and not accidentally using a non-pointer as a pointer. I'm not a C person, but C does not give off a vibe of caring about correctness.

In fact, C is particularly unsafe: you can get all sorts of fun things like bus errors and segfaults that are basically impossible in other languages. C definitely has a place, but only if correctness is much less important than performance.

You can ultimately come up with a sufficiently bad language for anything.

Also, the way Haskell avoids null errors like this is with the static type system. So while it's certainly not inherent to all static type systems (then again, nothing has to be inherent to static type systems except being verified at compile time), it is a property of the type system.


I'm just 'arguing' that pitting static typing against dynamic typing using specific languages as examples isn't necessarily the whole picture. Saying that static typing will save you from attempting to call methods on None in Python is a fallacy. Saying that Haskell's static type system will save you, is possibly correct.

My original point was that it's possible that we (programmers) focus more on issues that could be solved by static typing (of some implementation) just because they seem like a group of problems that could be 'easily' solved. I.e., 'the grass is always greener'.


Static typing can alleviate that, and that's how Haskell does.

The problem with C is that nullability is not statically typed.


Yes, that is probably my confirmation bias talking. But more than anything it is the frustration: the whole cycle of building a par file and deploying it to a production system, only to find moments later that the binary isn't up because of a typo, is frustrating in any language.


    I'm reasonably convinced that once my program has passed the typechecker, it is logically correct

Yup, this is one of my favorite things about Haskell; that's how I know that http://bpaste.net/show/32033/ is a totally correct program.


If you're hoping to catch a specification error, don't use a type like `Integer -> Integer`, which doesn't capture the specification except in a most general sense.

Just as you should write good tests, that actually test for useful properties, so you should write good types -- and get useful proofs back from the compiler as a result.


I wasn't trying to catch the error of "program author is a moron who doesn't know the difference between Fibonacci and factorial". Were I trying to catch that error I would have been aware of it, and then much less likely to write the bug in the first place. This is a truism that is well accepted by testing proponents: which tests you write are incredibly important, and you need to write your tests first in order to avoid a curve fitting problem (so to speak). Any non-trivial test would have shown my function to be very broken, what type would you have used to represent that so it wouldn't compile?
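A concrete illustration of the point in Python: both functions share the same "type" (int to int), so only a value-level test tells them apart (toy recursive implementations):

```python
# Two functions with the same shape, distinguishable only by testing values.

def factorial(n):
    return 1 if n <= 1 else n * factorial(n - 1)

def fibonacci(n):
    return n if n < 2 else fibonacci(n - 1) + fibonacci(n - 2)

# A single non-trivial assertion catches the mix-up immediately:
assert factorial(5) == 120   # fibonacci(5) is 5, so a swapped body fails here
```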


Numerical algorithms suffer from a paucity of types. So either you enrich your numerical type hierarchy, or you prove an implementation matches a model, e.g. for fibonacci http://stackoverflow.com/a/8434107/83805


I don't have a response to that, other than to say, now you know why I decided to write compilers instead of pursue a graduate degree in computer science.


If you write compilers, it would be really beneficial if you had a curiosity about the state of the art in programming languages.


… And lo, PHP-2 was born into the world.


The point about a static type system not catching all errors is well taken[1], but if a programmer has mixed up factorial and Fibonacci I would expect that their tests would reflect this as well as their code. This is more the sort of thing you should rely on code reviews to spot.

[1] Isn't the old joke in Haskell "If your program compiles it probably does something someone would find useful, but not necessarily the useful thing you want"


Very elegant way to show a point...


He qualified with "reasonably convinced", and you countered with an unreasonable example. That is the definition of attacking a strawman.


It's not unreasonable, it only seems unreasonable because of the context. I've seen several really good programmers (and overall bright people) mix the implementation of the two up.


If you have the two algorithms mixed up, unit tests aren't going to save you either. Nothing will ever save you from intending to do the wrong thing.


If you actually use the result of the function somewhere, some tests of that component should give wrong answers.


I forget whose quote this is - The computer is a wonderful machine, it does what I tell it to do, not what I want it to do.

Your point taken, but that is a problem in programming language X, which has Y type system, for all permissible values of X and Y.


If you can convert a program from one language to another (which is non-trivially different) in the time it takes to complete your masters, I'm pretty sure it wasn't a very interesting program. Further, the quality of the developers is going to play a large role in how effective any tool is (and make no mistake, static typing is a tool). This is not intended as a disparaging remark toward the authors, but in the 30 seconds I spent reviewing each of these codebases, I was totally unimpressed: none of them seemed to follow PEP 8, and several of their test files weren't even unit tests; they were just random scripts that appeared to exercise a tiny part of the codebase. I therefore conclude that the methodology used in this experiment was flawed and, consequently, that the conclusion cannot be taken as scientifically valid.


I have not looked at the programs, but programs don't have to be long to be interesting. There are many "interesting" programs under 100 lines of code - and they can be important if they form the kernel of a larger program.

Also, the author responds to a similar point in his comments: http://evanfarrer.blogspot.com/2012/06/unit-testing-isnt-eno...


The author doesn't really respond to the point, he simply says they were non-trivial in complexity, which may be true, but that doesn't mean they're non-trivial in their dynamacism (is that a word?). Moreover it in no way responds to the claim that these codebases aren't very good.


Does it matter if "these codebases aren't very good"? Have you ever actually seen a good code base? Most code I've seen has some poorly written corners; something where someone was in a hurry, or didn't know what they were doing, or someone inexperienced with the project started working on it, or the like.

The point is, these were real-world codebases with substantial unit tests, and they had type errors that weren't caught by these unit tests.

Honestly, in several groups I've worked with, I've had trouble getting people to add unit tests even to code in dynamic languages. A codebase which already has substantial unit tests is likely to be better than average, on that basis alone.

This is about asking whether, in the messy real world, unit tests actually make type safety obsolete. And the answer is no, they don't: even code with fairly good test coverage can be improved by adding typechecking. Now, there is the question of whether writing in the dynamic language allowed people to write code faster (it's generally a lot easier to translate correct code once it's written than to write it in the first place), or whether some of the more highly dynamic features of dynamic languages benefit writing or deploying code.


Dynamicity or dynamicness.


What's wrong with "dynamism"?


I considered it, but dynamism refers to personality and philosophy, while dynamicity is just the condition of being dynamic.


The Wikipedia disambiguation for Dynamism (last edited in March) includes:

"Dynamism (computing), When any process in computer is using Dynamic management methods for its processing/computing/memory management/parallelism handling for being able to give more user friendly work that are more easy to interact and modify."

... for whatever that's worth.


Consider:

  staticity
  staticness
  statism


That doesn't work because statism comes from "state", not "static" - I don't think there's a comparable derivation for "dynamism".


If most code doesn't use "dynamacism" (we'll pretend it is a word), then it isn't an important value for most people. You are suggesting people should use dynamic languages to gain no benefit because some other software might in theory benefit from it.


It seems like your two criticisms are as follows: 1. The sample size is too small. 2. The quality of the Python code isn't that great.

I (the author) would like to address both of these. First of all, I completely agree that more research needs to be done. I mention this in the paper. I have provided a data point, not a proof. It took me a couple of months of several hours a day to do the translation; I hope more people translate more programs to see if the results hold in the face of a larger data set. Second, I agree the quality of the Python code isn't that great. I wanted to see whether unit testing obviated static typing in practice. In order to avoid selection bias I chose the projects at random: I picked the first four projects that were < 2000 lines of code and that had some sort of unit testing.

I believe that neither my methodology nor my conclusions are flawed, but all should remember that a single experiment does not make a scientific proof. I hope that others will try to replicate this experiment on many more code bases. If they do, it will be interesting to see the results.


The lack of static types really comes into play not inside a library, but in the interface between the external code and a library. Note that at least a few of the bugs that were found involved invalid API inputs. From the library writer's view it's not a bug because those values are not in the domain of defined behavior. From the caller's view it's a PITA that the library doesn't yell at them when they pass it garbage. Of course, they could find the problem with unit tests but the further up the food chain you get the more scarce unit tests tend to be.
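A tiny sketch of that failure mode in Python (the `mean` function and its caller are invented for illustration): the library's own unit tests pass, so from the library's view there is no bug, yet a garbage argument from a caller only fails deep inside the library.

```python
def mean(xs):
    """Library function: assumes xs is a non-empty sequence of numbers."""
    return sum(xs) / len(xs)

# The library's own unit test passes, so from the library's view
# the domain of defined behavior is covered:
assert mean([1, 2, 3]) == 2.0

# A caller passing garbage is not rejected at the interface; the
# failure surfaces later, as a TypeError deep inside sum():
try:
    mean("123")  # a str is a sequence, so nothing complains up front
except TypeError:
    print("caller bug caught only at runtime, inside the library")
```

A static type on the parameter would have moved that complaint to the call site, at compile (or check) time.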


Static types or static analysis?

  * KLEE: Unassisted and Automatic Generation of 
    High-Coverage Tests for Complex Systems Programs  
    http://llvm.org/pubs/2008-12-OSDI-KLEE.html
  * Erlang Dialyzer 
    http://www.erlang.org/doc/man/dialyzer.html
  * Datalog based systems 
    http://www.cse.msu.edu/~cse914/Overheads/mmcgill-java-static-race-detector.pdf
If you want static analysis hard coded into your language - what feature set do you want to support? The following support different styles of programming thus have very different static type systems.

  * Standard ML / OCaml
  * Haskell
  * Scala
  * Typed Racket
  * Qi
And there's still the issue that some kinds of extremely useful programs are very difficult to write in popular languages with strong static typing. miniKanren, a flexible embedding of Prolog and constraint logic programming into Lisp, comes to mind here. I've seen versions of miniKanren written in Haskell, and they abandon the most powerful feature of miniKanren: that it can be trivially applied back to the language it is written in!


Dialyzer is not very good. We've been using it, but it rarely catches anything non-trivial. On the other hand, Haskell has better support for unit tests than many dynamic languages.


Interesting, but worth remembering, as Rich Hickey says, every bug has got past both your unit tests and your type checking.


That statement, though, is almost content-free. It's a truism. If you don't have type checking, then every bug has gotten past your unit tests; but some proportion may have been prevented with type checking. And vice versa. The statement doesn't say anything about the value or non-value of unit tests or type checking.


No, it's not content-free, at least when put back into context.

It's like coffee. If your coffee is shitty (complect), then no amount of sugar (unit tests) or cream (type checking) will make it any better.


Your coffee analogy also says nothing about whether sugar or cream are better than one another, or if either is necessary.

Though I can guarantee you that either sugar or cream with only 0.1% coffee content will still be usable as sugar or cream for other purposes, so I can't even agree with your analogy taken to its limit: that no amount will make the coffee better, because enough will make it usable for other things.

In fact, unit testing and static type checking are largely orthogonal to the complecting issue that Rich was getting at. Unit testing and static typing will both increase complexity; they are both ways of making assertions about the behaviour of the code, and for working code, they should both actually be redundant. But that doesn't mean, as a practical issue, that we should do away with either or both. We're fallible. Saying the same thing twice or three times increases the probability of finding something inconsistent if our statements are not representative of the Platonic ideal we're trying to express.


Except unlike sugar and cream, unit tests and type checking prevent the crappy coffee from being served (well, unless you're from the PHP Coffee House, where serving a unit-test-failing brew is perfectly acceptable: https://bugs.php.net/bug.php?id=55439).


Sometimes you have to drink the coffee anyway, and cream and sugar do in practice make it much better.


Speaking of Clojure, I've recently started to learn it, since I like Rich's reasoning about state, time, value and so on. His talks are nothing short of fantastic in my opinion.

Anyway, it occurred to me that with Clojure it could easily happen that I trade bugs caused by state problems for bugs caused by type problems.


> Anyway, it occurred to me that with Clojure it could easily happen that I trade bugs caused by state problems for bugs caused by type problems.

With Haskell, on the other hand, you may not need to trade :)


Yes, I know. Haskell is a very cool language as well.


It is definitely refreshing to see some actual evidence in something where arguments tend to be based on speculation, experience, or opinion. Now, we just need someone to research Emacs v. Vim, Tabs v. Spaces, etc.


> Tabs v. Spaces

Not exactly this, but there exists "Program indentation and comprehensibility" http://www.cs.umd.edu/~ben/papers/Miara1983Program.pdf which tries to tackle the "2 spaces vs 4 spaces vs 8 spaces indent" question. Not very good, though, since they seem to add superfluous indent levels, like a separately indented "then" after "if" (Pascal), which effectively doubles the actual indent of the semantic block.


Don't forget about braces versus indents, and above all, semicolons or not! :)

On second thought, let's forget about them after all...


Braces really are evil, just because of how much time is spent bikeshedding about them. In what universe will a program function better if braces are at the end of the line versus on the next?

That's one thing I really like about go. It's nearly as opinionated about its braces as Python is about its not-braces (almost...).


Any development environment where there exists a rule on Braces versus Indents is a place I'd stay well away from.

Put braces where they make the code readable. Use indents instead where they make the code more readable.

And if you do need a rule, make sure it's based on actual need. I.e. start the discussion with "the past two months we've had 4 non-trivial bugs which could have been avoided if we enforced braces.", not have a bunch of people bicker about their Personal Preferences (usually goes by the name "Best Practices").


I disagree. I prefer a development environment that settles on a coding style, even if I don't like it, rather than chaos. I find reading code that uses multiple coding styles painful.

It also ends the bikeshedding, "that's how we do it there, deal with it".


Such binary arguments bother me. Things won't degenerate into complete and utter chaos just because you don't have strict rules for everything. Good developers tend to follow the style of the code they're in, and over time converge on a consistent style within projects, or at least modules.

If they don't respect existing style, you don't have a "lack of rules" problem. You have a lack of education problem. Do code reviews. Point out to Bob that he's making the code less readable by letting the style alternate.

But you're sort of touching on what I meant when you say you don't want "chaos". If you can point at a code file and say "this is chaotic and hard to read - we could improve that by converging on style X or Y.", then we could come to some agreement.

Personally, I'd rather deal with slight aberrations in style than having a flaming row and angering half my team.


I wish more of these open questions in coding (and everything) had people making such a good effort to research them! There never is a simple answer, or it wouldn't be an open question.

Really inspiring, painstaking piece of work.


Thanks for the compliment, one of my goals is to put the science back in computer science.


How can you translate from Python to a static language, when the code is written against interfaces, not types? How will you translate a function receiving a (possibly custom) iterable, when the function doesn't care about the type, but just whether it implements a next() method?


Hindley-Milner type systems to the rescue! (With a sprinkling of Haskell style type classes.)

  f :: Iterable a => a -> b
  f x = (do stuff with x...)
f is defined to be constrained to accept only types which implement the Iterable interface (however that's defined) as its first argument, and the compiler (or interpreter) will enforce that constraint.
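For comparison, Python's own optional checkers later grew a way to express the same "has a next() method" constraint structurally. This sketch (all names invented) uses `typing.Protocol`: a static checker such as mypy verifies that callers pass something with `__next__`, with no declared inheritance.

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class HasNext(Protocol):
    """Structural interface: anything with a __next__ method conforms."""
    def __next__(self) -> int: ...

class Counter:
    """A custom iterator-like type that never mentions HasNext."""
    def __init__(self) -> None:
        self.n = 0
    def __next__(self) -> int:
        self.n += 1
        return self.n

def take_two(it: HasNext) -> list:
    # A static checker enforces the constraint on callers;
    # no subclassing or registration is needed.
    return [next(it), next(it)]

print(take_two(Counter()))  # [1, 2]
assert isinstance(Counter(), HasNext)  # the structural check also works at runtime
```

This is the same duck-typing discipline the question describes, just written down where a tool can check it.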


> Hindley-Milner type systems to the rescue!

You're implying that all type systems derived from HM have a form of bounded polymorphism. They don't; for example, OCaml does not have type classes (you could probably encode a lot into objects, though).


Type classes and the ML module system overlap in capabilities:

http://www.cse.unsw.edu.au/~chak/papers/WC06.html


OCaml's nephew, F# does have bounded polymorphism though. Although it inherits that from the OOP side of its family.


False, ocaml has type classes now! (or maybe I'm thinking of coq)


I don't think so: If you want the equivalent of Haskell type classes in OCaml then you have to either use a functors or some sort of class IIRC. I'm sure Oleg will have done something though.

I believe they're experimental in Coq.


yup, you're right, you got to use functors or go home.

Yeup, some folks did show that functors + modules are equivalent to type classes in some sense, for System F or a variant thereof. There's also an Oleg approach too, I think.


Haskell type classes are sufficiently flexible to represent things like 'does this type have a next method' without having to instrument the actual type.


But could that kind of type system be considered similar to the usual, rather inflexible, Java-like static typing? I think I would prefer a static vs. dynamic comparison implemented as Java vs. Python (since those are usually the subjects of every one of these discussions).


Although the name is confusing, type classes are very much like interfaces in Java: you define a set of operations that you want and then instantiate concrete types to that interface.

The major differences between type classes and interfaces are:

  * In Java interfaces the interfaced type is restricted to the first argument (the this). In Haskell interfaces the interfaced type can also appear on return values and arguments.
  * In Java you need to decide what interfaces to implement when you create your class. Haskell allows interfaces to be implemented for previously existing types.
  * Java allows for subtyping. You can turn a monomorphic program into a polymorphic one just by creating subclasses while in Haskell you would need to rewrite your code to be explicitly polymorphic.
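There is a loose dynamic analogue of the retroactive-implementation point in Python itself: `abc.ABCMeta.register` lets an already-existing class be declared an implementation after the fact (class names below are invented), though unlike a Haskell instance, nothing about the methods is actually checked.

```python
from abc import ABC, abstractmethod

class Pretty(ABC):
    """An 'interface' with one required operation."""
    @abstractmethod
    def pretty(self) -> str: ...

class Legacy:
    """A pre-existing class that has never heard of Pretty."""
    def pretty(self) -> str:
        return "legacy, prettily"

# Retroactive conformance, declared after both classes exist.
# Java has no equivalent: you would have to edit or wrap Legacy.
Pretty.register(Legacy)

assert isinstance(Legacy(), Pretty)
print(Legacy().pretty())
```

The Haskell version is stronger: the compiler verifies that the instance really provides every operation, at the types the class demands.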


It's identical, conceptually. A type class can be thought of as a degenerate kind of Adapter pattern.


There's a difference between the implementation type and the protocol type. Or as Java calls them: classes and interfaces. Unfortunately most classical OO languages conflate the two, causing much confusion.


Haskell's typeclasses are very close to this, but even closer is Go. You just define an interface and the functions that require it, and the compiler goes off and checks by itself whether the type you're passing in satisfies the interface - no need for you to annotate it yourself. You lose Haskell's nifty "deriving" feature, but it's even closer to Python's duck typing, with compile-time checks.


> the function doesn't care about the type, but just whether it implements a next() method?

You're describing an "existential" type.


... or a typeclass.


Scala's type system can handle this as it supports type-safe duck typing.


The advantage of statically typed languages has a lot less to do with tests and everything to do with tools. IDEs can do very little when no type information is available, and most automatic refactorings require human supervision when performed on dynamically typed languages (read this for details: http://beust.com/weblog/2006/10/01/dynamic-language-refactor... ).


Tooling and performance are the two largest advantages. Disadvantage is more verbosity, but it's a trade-off.

With dynamically typed languages I find that you still need to worry about types, but you have to trace through the code to figure out what type a particular variable is (esp. if you're not the only one working on a project). In a statically typed language that information is readily available.
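A small sketch of that tracing cost (the function and its names are invented for illustration): the untyped signature forces a reader to chase call sites, while the annotated one carries the same information a statically typed language would.

```python
def settle(invoices, rate):
    # Without annotations, a reader must trace callers to learn that
    # `invoices` is a list of (id, amount) pairs and `rate` a float.
    return [(ident, amount * rate) for ident, amount in invoices]

def settle_typed(invoices: list[tuple[str, float]],
                 rate: float) -> list[tuple[str, float]]:
    # The annotated version puts that information in the signature,
    # where both readers and a type checker can use it.
    return [(ident, amount * rate) for ident, amount in invoices]

assert settle([("a", 10.0)], 0.5) == settle_typed([("a", 10.0)], 0.5)
```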


> Disadvantage is more verbosity, but it's a trade-off.

When I was learning Haskell, I wrote a program to count the value of a cribbage hand. The code was half the length of an equivalent JavaScript program.

The expressiveness of a language has more to do with the length of a resulting program than static or dynamic typing.


The claim static typing adds verbosity is a common fallacy.

Haskell code is more concise than most dynamically typed languages.


> Disadvantage is more verbosity

Type inference cuts down the verbosity tremendously.


>Disadvantage is more verbosity

How so? Java is verbose, but that isn't due to being statically typed, it is just a verbose language. Statically typed languages with type inference are no more verbose than dynamically typed languages.


For the cases I was easily able to count, this is one bug per thousand lines of code. I realize that's a handwavy metric, but it's enough to say we're not talking about a ton of bugs.

On the other hand, successful conversion of these codebases is a very interesting result. Apparently static typing, at least typing as sophisticated as Haskell's, actually can express most of the idioms in standard Python. This surprises me, as I've never seen a C++ program of significant complexity that didn't resort to void*s somewhere.


> "frequently cited claim by proponents of dynamically typed programming languages that static typing was not needed for detecting bugs in programs"

Who says that? There's trade offs, multitudes, in choosing paradigm / language. It's never so black and white (except for academics, and twits who like to argue more than code)

I did not read the paper; I don't have time for 60 pages of pointlessness.


> "Who says that?"

http://news.ycombinator.com/item?id=4137283 comes pretty close to making that claim, and that's just on this page.

> "There's trade offs, multitudes, in choosing paradigm / language. It's never so black and white"

Sometimes, it is. Most people don't use COBOL anymore, with reason; better languages came along. Programming is still a new field in the scheme of things. It would be strange if our languages were perfectly optimized, with no room for improvement without offsetting costs.

> "twits who like to argue more than code"

Some of us like to do both :) But go program; we won't stop you...


I agree there are generally trade-offs; that's essentially the result of my "60 pages of pointlessness" paper. If you did have time to read the paper, you would notice a reference to this book http://my.safaribooksonline.com/book/software-engineering-an... that is an argument for unit testing instead of static typing. I think we need to use the scientific method in computer science and not just base our ideas on intuition, belief, or absolutes like "It's never so black and white".



The author's interpretation of the argument in favor of dynamic languages seems purposefully naive. I don't think that any proponent of dynamic languages or unit testing claimed that the mere presence of unit tests guaranteed bug free code or that it was impossible to have type related errors at run time if you have unit tests. It's a more nuanced argument asserting that the benefit to programmer productivity when using dynamic languages outweighs the cost of potential type related errors not possible in a statically typed language. Whether it has merit or not is the question I hoped this paper would answer.


On a related topic, another interesting paper: http://www.cs.washington.edu/education/courses/cse590n/10au/...


Slightly different view here.

I think the nirvana of typing is hybrid static/dynamic, and it arrived with Visual Basic 6. I don't think anyone really noticed it, though.

It supports traditional type checking by the compiler, runtime type inference and dynamic typing without boxing. Each case can be chosen at will. It supports all theoretical programs, supports unit testing and runtime assertions.

The least buggy software I've seen over the years was written in vb6 (by professionals, not the crap that haunts the web).

Now I'm not saying we should all switch to vb6 but some of the ideas may be worth investigating.


Regarding bugs, I think it's clear that static typing does detect more errors - the issue is whether it's worth the trouble.

I think a more significant difference is the inertia of the codebase, how difficult it is to change. Static types make it harder to evolve interfaces; but unit tests make it even harder.

But both the above are just fiddling: the real advantage of static typing is runtime speed; the advantage of dynamic types is development productivity. In the history of programming, the latter always wins.


> But both the above are just fiddling: the real advantage of static typing is runtime speed; the advantage of dynamic types is development productivity. In the history of programming, the latter always wins

Have you used Haskell? Speed is just a (nice) bonus. Type safety is not about speed at all.

And I am much more productive in Haskell than I could be in a dynamically typed language. I can turn the code base inside out, and trust the type system to guide me to everywhere that needs to be fixed. When I compile it, it almost always works. I barely have to test anything.


It's almost always the case that applying a new set of tests finds more bugs so this experiment doesn't actually prove its conclusion.


There is a case for "optional typing": it's speed. I guess everyone agrees on this one.

Better than "strict" typing, sometimes "hints" about the type would be enough.

ie: "ii often integer" versus "ii" or "ii integer"

From then on, those who favor strict typing would be happy, those who favor dynamic typing would be happy and those who favor "efficiency" (whatever this means) would be happy.

Let's close this silly debate.
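The "hints" idea sketched here later became mainstream as gradual typing; for instance, Python's optional annotations are ignored at runtime but usable by a static checker such as mypy (the `scale` function below is invented for illustration).

```python
def scale(ii: int, factor: float = 2.0) -> float:
    # The annotations are hints only: the interpreter ignores them,
    # so dynamic callers pay nothing, while a static checker can
    # flag mismatched calls before the program ever runs.
    return ii * factor

assert scale(21) == 42.0

# Nothing is enforced at runtime; a bad call fails only on execution,
# which is exactly what the hint would have caught ahead of time:
try:
    scale("ab")
except TypeError:
    print("caught at runtime; a checker flags this statically")
```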


The better programmer you are, the less you need static types.

But a group of ten programmers create a codebase that is only as good as the worst programmer's code.


Let me guess, you love dynamic languages, right?

I really wish we would all stop ourselves when we're about to make a statement of the form "Great programmers do X (which I happen to do)". Without any rationale to back it up, it's just ego-stroking.


No true Scotsman much?


I don't know how you could contrive what I said to be cognitive dissonance.

If you have a team of N good programmers, you can probably write without static typing. If you have a team of N-1 good programmers and 1 bad programmer, you should really use a language with static typing.


I just read the first part of your comment as "no true programmer would need no filthy static type checking". Sorry if I misunderstood.

I find it hard to correlate use of static typing with development experience, that's all.


If you are a good programmer, you don't need typing, you don't need interfaces, you don't need many things that make bad programmers better programmers.

You might still use them, and there's nothing entirely wrong with that, but you probably don't need them.

This is why so many people have problems writing JavaScript.


Even if you're a good programmer, typing and interfaces make refactoring quicker and typos easier to find.

This is why so many people have problems writing in JavaScript.


Typing makes refactoring quicker only if you're relying on it. If you're not, then it will actually make refactoring slower.

Interfaces will not make refactoring faster, just easier.


> Typing makes refactoring quicker only if you're relying on it.

As opposed to?


I wonder how long it takes until the first one picks Java to argue against typed languages.



