True, but the whole point of designing modern type systems is to catch as many common bugs as possible in the static-check phase (while still being decidable), rather than purely checking datatypes (int, string, etc.) in the classical sense. I don't think it's unreasonable to compare it to empirical bug prediction: they're a rationalist vs. an empiricist approach to statistically predicting where bugs probably lie, each with its own tradeoffs.
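To make "beyond classical datatype checks" concrete, here's a minimal sketch of my own (in TypeScript, not anything from this thread) of a modern checker catching a "missing case" bug statically via exhaustiveness checking on a tagged union:

    type Shape =
      | { kind: "circle"; radius: number }
      | { kind: "rect"; width: number; height: number };

    function area(s: Shape): number {
      switch (s.kind) {
        case "circle":
          return Math.PI * s.radius * s.radius;
        case "rect":
          return s.width * s.height;
        default: {
          // If someone adds a new Shape variant and forgets to handle it,
          // 's' no longer narrows to 'never' and this assignment fails to
          // compile -- the "missing case" bug is caught statically.
          const exhaustive: never = s;
          return exhaustive;
        }
      }
    }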
Google - like most of the software development world - has a lot of people who believe in type systems completely.
That's why they have tools like the Closure JS compiler[1], which provides type-checking for Javascript, and GWT[2], which produces (un-typesafe) Javascript/CSS/HTML from (mostly) typesafe Java.
Using methods of reducing bugs (such as type safety) is orthogonal to producing systems that predict where bugs will occur.
Your last statement's the part I disagree with, though it depends on what you mean by "orthogonal". If you mean that they're two different approaches that can coexist, then yes. But I don't think they target orthogonal classes of bugs. In both cases, the goal is to employ some algorithmic, decidable method at compile-time to predict whether a given piece of code is "correct" or "incorrect", trading off false positives against false negatives. Each approach does better or worse in different cases, but in a manner that either isn't orthogonal, or at least isn't obviously orthogonal (if "orthogonal" is meant in any strong sense, rather than just "two different approaches"). And for any given class of bugs, we can at least in principle ask whether the other approach could've caught it: e.g., if we're noticing a lot of bugs of a certain sort, could the type-checker have caught them? The ambition, at least, of static type systems is to render accurate bug-prediction impossible, because every bug statically predictable at compile-time will be a type error.
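To give a (hypothetical) example of the "could the type-checker have caught that?" question: say the predictor keeps flagging null-dereference bugs. In TypeScript with strictNullChecks enabled, that entire class becomes a compile-time type error:

    interface User { name: string }

    // With "strictNullChecks": true, absence is part of the type, so
    // forgetting to handle the missing case is a type error rather than
    // a runtime crash a predictor has to find statistically.
    function findUser(id: number): User | undefined {
      return id === 1 ? { name: "Ada" } : undefined;
    }

    const u = findUser(2);
    // console.log(u.name);   // compile error: 'u' is possibly 'undefined'
    if (u !== undefined) {
      console.log(u.name);    // OK: 'u' has been narrowed to User
    }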
I think this misunderstands the goals of bug prediction, which are typically along the lines of:
- directing engineering resources to where they're likely to be most leveraged from a quality standpoint [what Google focused on; a rough sketch of their weighting follows this list]
- estimating how many resources to devote to bug fixing and/or how long it will take an in-progress code base to stabilize [the goal of a lot of the work Microsoft did with Windows in the early 2000s]
- and predicting post-release defects in order to size (and perhaps direct) customer support and maintenance resources [more typical in the hardware world than software]
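For the first goal, the heuristic Google described was (if I'm remembering their engineering blog post correctly; the exact constants are my recollection, not gospel) just a recency-weighted count of bug-fixing commits per file, roughly:

    // Sketch of a recency-weighted bug-fix count per file, along the
    // lines of what Google described; constants are from memory.
    function bugScore(fixTimes: number[], repoStart: number, now: number): number {
      let score = 0;
      for (const ts of fixTimes) {
        const t = (ts - repoStart) / (now - repoStart); // normalize to [0, 1]
        score += 1 / (1 + Math.exp(-12 * t + 12));      // recent fixes dominate
      }
      return score;
    }
    // Files are then ranked by score to direct review and testing effort.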
And I wouldn't say that static type systems have an ambition of rendering bug-prediction impossible. If you can truly detect all errors at compile time, then prediction's trivial: no bugs remain!
>The ambition, at least, of static type systems is to render accurate bug-prediction impossible, because every bug statically predictable at compile-time will be a type error.
I might not use Haskell, but I do understand what a type system does, and why it is important.
BUT, not all bugs are statically predictable at compile-time. Take cross-browser compatibility: something like GWT goes a long way toward reducing those bugs, but no type system will protect you from a new bug in a new browser that you have to work around.
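As a concrete (made-up) illustration: the following type-checks cleanly against an ES2019 lib configuration, yet throws at runtime in any browser that shipped before Array.prototype.flat existed. The type system has no visibility into which engines your users are actually running:

    // Compiles without complaint with "lib": ["ES2019"], but an older
    // browser without Array.prototype.flat throws a TypeError here.
    const nested: number[][] = [[1, 2], [3, 4]];
    const flat: number[] = nested.flat();
    console.log(flat);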
(Edit: by orthogonal I meant "statistically independent". Given a piece of code written in a type-safe language, this method will predict bugs independently of the type system.)