Why do dynamic languages make it more difficult to maintain large codebases? (programmers.stackexchange.com)
144 points by CmonDev on Jan 27, 2014 | 209 comments



For a contrary perspective, see my blog-post "We can’t afford to write safe software" (http://reprog.wordpress.com/2010/06/05/we-cant-afford-to-wri...) in which I make the rather obvious point that all the scaffolding you have to erect in statically typed languages is itself a cost, and that ignoring that is misleading. ("You have to fill in a form in triplicate before you can throw an exception", as one of our poets has written.)

Please note (if you can't be bothered to read the actual post before responding), I am _not_ saying that static typing has no value. I'm saying it has both a value _and_ a cost, and too many arguments seem to consider only one of these.


I don't think you are really arguing against static type checking in that post, but rather the explicit type declarations that often accompany it.

These are not the same thing. See Haskell for a counterexample.

EDIT: I see that Haskell is mentioned in the comments. Still, I don't see any understanding there that static vs. dynamic is not the issue.


Haskell is inspiring. I did the interactive "Try Haskell" tutorial and thought "Oh, it's dynamically typed." Later I found out it's just really good at inferring types. From what I've read, it sounds like good Haskell practice is to explicitly declare types on functions, but otherwise never declare types. The compiler can then use the few explicit declarations you have made to infer all the other types.


I'm admittedly new to the language, but my convention has been as follows:

Declare types of all named functions (i.e. not lambdas). This serves two purposes.

First, it makes your code more self-documenting. When I look at a function, probably the first thing I want to know is: "What are its parameters?" Type declarations on functions give you a trustworthy and highly readable answer to that question.

Second, it makes type errors easier to understand. If you don't explicitly declare types, the compiler will perform type inference. When your code has a type error, the compiler can infer a nonsensical type for a function, yielding a hard-to-understand error message. If your function has a type declaration, the error message is more likely to point you to where your actual mistake is.

Rarely declare types on anything that's not a top-level function definition. For example, if I want to call (foo (bar baz)), I don't need to add separate type declarations to baz, (bar baz), and (foo (bar baz)), even though each quite possibly has a different type. On the other hand, if you want to remind yourself of the type of some inner expression, or you think it'll make the type errors more readable, you can add the declaration. It won't hurt you.


I've gotten in the habit of explicitly declaring the types of everything I can. As you already mentioned, it's a form of (excellent) documentation that I find myself missing in many languages - when writing Python I get frustrated because I have to go dig for or debug the type of a value, when in Haskell I can simply look for the type declaration (happily, I think Python is considering type annotations). If you use Emacs, haskell-mode and ghc-mod + ag or grep have some awesome features for finding type definitions in source, querying hoogle, and lambdabot-style de-sugaring and queries.

While type inference is cool, it's only really useful when I'm playing inside GHCi. Also, you can often improve the quality of a program a lot by thinking about your types first, declaring them, then filling out the function definitions (that is often how I do it).
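As an aside, a minimal sketch of what optional type annotations in Python look like as documentation (assuming a Python 3 interpreter; the function and names here are hypothetical, not from the thread):

```python
from typing import List

def mean(xs: List[float]) -> float:
    # The annotations document the expected types. Plain Python ignores
    # them at runtime, but they are introspectable and tool-checkable,
    # much like the "look for the type declaration" habit described above.
    return sum(xs) / len(xs)

print(mean([1.0, 2.0, 3.0]))          # 2.0
print(mean.__annotations__["return"])  # <class 'float'>
```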


On the flip side, I tend to declare types mostly at the top level. It's nice to be able to change just a few types and have everything else work itself out. There are times it doesn't work and I don't hesitate to add more type annotations then, and when I get weird type errors my first response is go through and decorate it piece by piece.


Most of the time the compiler can do without most top-level declarations too. They're a form of documentation enforced by the compiler.


The compiler can already infer all the types anyways, you don't need to declare types. It is considered good practice (and -Wall warns for it) to provide types for top level definitions, as they are usually important enough to warrant this form of machine checked documentation.

note: Yes, I know the compiler can't actually infer all types all the time, but the exceptions are obvious and require type information in dynamically typed languages too (i.e. convert a string to an int: you have to tell it you want an int; you can't just say "convert this string" and have it guess what you want it converted to).


It's also pretty easy and useful to step out of the HM-inferable fragment using any number of popular language extensions. I'm not arguing that it's a problem, but it's disingenuous to suggest that anything resembling all code can be type-inferred.


I don't think so. We are talking about haskell after all, not haskell + experimental ghc extensions. And which popular extensions break type inference anyways?


RankNTypes are pretty popular in libraries and break total inference. You don't need to enable the extension personally to suffer it either.


But the cool thing is that in many (most?) cases you can actually infer the type of reading from a string. You are going to do something with that value right? So you can say 1 + (read "1") and the compiler will know that you must want to read "1" as a number and does that!


Yeah, I most commonly hit "oh, I need to annotate that" when I'm building some expression piece by piece in ghci, and so effectively what I'm doing with that value is "print it, as appropriate to the type" - which doesn't tell us much about what type it is.


That kind of inference only works because + is defined to work on Nums only (importing only the Prelude). You don't even need the initial 1. A nice trick, but these are rather small details.


With the caveat that the compiler can come up with A consistent inferred typing, but it may not be the exact type you had in mind, especially when dealing with numerics. This is very rarely a problem, but in some cases may result in less optimized code than might be possible with a stricter annotation.


You're right that it has a cost (although type inference can significantly reduce that cost). But...

From TFA:

"They write test cases for every identifier ever used in the program. In a world where misspellings are silently ignored, this is necessary. This is a cost."

So in a dynamic language the cost is still there. In fact, it's probably higher than in a static language because you have to build at least some of the infrastructure yourself. Well, I imagine some people would argue that in a dynamic language the cost is optional. And again, I point to type inference where sophisticated static languages achieve the same thing.


> In a world where misspellings are silently ignored, this is necessary.

How does "dynamic typing"="misspellings are silently ignored"? Maybe in some languages but that's not a corollary of dynamic typing.

Disclaimer: Haven't read TFA so shoot me down if the context makes this clear


TFA was talking specifically about Javascript there. But I think the overall point about the cost of testing applies to dynamic typing across the board.


How would one consider JS to silently ignore misspellings?


Except that you'll write those same tests in statically typed languages (except Haskell), because their type system isn't powerful enough to detect most bugs.

And yes, you'll need to build a little bit of infrastructure for the test cases, but you'll gain the extra power that comes with dynamic types, which makes everything (even writing tests) much easier. And, of course, your codebase can be much less "huge" with that extra power.

Anyway, this whole thing is a big red herring. Nobody decides to create a huge codebase.


> but you'll gain the extra power that comes with dynamic types

What "extra power" are we talking about here? If we're talking about the power to pretend a bool is a string or vice versa, then no thanks, I'll pass. I suspect that a number of the things you are referring to that give you "extra power" are actually things that don't have anything to do with the type system like first class functions, tuples, etc. Static languages can have those things too, you know.


> If we're talking about the power to pretend a bool is a string or vice versa, then no thanks

That's weak typing. JavaScript does this, Python and Ruby don't.

Typically when people bring up the power of dynamic languages it's in the ability to alter definitions. The ability to extend existing classes/objects is pretty powerful, and isn't always trivially duplicated in static languages.

Many concepts that can be painful in statically typed languages (creating/programming against new interfaces to existing types, reflection, delegation, etc) are easy as breathing with dynamic types. It's not a matter of can vs. can't so much as easy vs hard.


Oh right, my mistake.

s/pretend a bool is a string or vice versa/not know whether an object even has a certain method because it isn't added until runtime/

I'll grant you that this type of thing is more difficult in static languages. But in my experience, the need for this kind of dynamic behavior is quite rare in terms of lines of code. So I'll trade it away any day of the week for the ability to reason about my code with more guarantees. At their core, all bugs are situations where the programmer's expectations didn't match reality. In my experience programmers who are good at debugging are good at being able to question as much of what could be going wrong as possible. Strong static types are about reducing the amount of things we have to question.
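A minimal Python sketch of the dynamic behavior being traded away here (hypothetical class and method names): a method that doesn't exist on the class until runtime, which is exactly what a static checker cannot see.

```python
class Greeter:
    pass

def say_hello(self):
    return "hello"

# Before this assignment, Greeter has no say_hello at all;
# the attachment only happens when this line executes.
Greeter.say_hello = say_hello

print(Greeter().say_hello())  # hello
```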


"Except that'll write those same tests on staticaly typed languages (except Haskell), because their typing system isn't powerfull enough to detect most bugs."

Agda is offended.


The counter argument is the main argument for clean code and documentation -- code is consumed much more than it is produced.

You can hack code together and forgo documenting it, but it's a form of technical debt. When creating a minimum viable product for a startup, taking on some forms of technical debt probably makes sense, but by the time you're talking about huge code bases, you're probably past the startup phase (I hope) and you're already up to your neck in technical debt.


Related: static typing is itself documentation (low-cost, machine-checked documentation).


Yes, but mostly this type of documentation: http://photos1.blogger.com/blogger2/1715/1669/1600/larson-oc...


When the static type checking system has an inference engine, there is a strong incentive on the part of programmers not to document types. Conversely, a requirement for explicit type declarations makes the use of generics more cumbersome.


Fortunately, your IDE can use inference to add a type signature with a keystroke.


And now you've added the cost of mandating an IDE, and someone has to write that feature for the IDE in the first place.


The feature exists in emacs and vim, and is nice but not mandatory.

It should be easy to integrate into any editor.


"It should be easy to integrate" is the downfall of many a software developer.

Anyway, my point is not that it's difficult or bad: it's that it has _some_ cost.


> When the static type checking system has an inference engine there is a strong incentive not to document types on the part of programmers.

Not the case in Haskell, at least. Most Haskell code out there uses type signatures.

> Conversely, a requirement that for explicit type declarations makes the use of generics more cumbersome.

True enough, but you can't have your cake and eat it. Either you want explicit documentation-by-types, which has a cost, or you don't (which also has a cost in terms of maintenance).


> When the static type checking system has an inference engine there is a strong incentive not to document types on the part of programmers.

But then the compiler has the capacity to write/emit the type declarations.


A contrary perspective? Even the original post is a dubious argument to the effect of "it's not that the language is dynamic, it's that dynamic languages lack X or Y!".

For me, I've looked inside one of the biggest Python codebases in existence (the EVE Online client). It's a freaking mess. A simulation of static typing, but mostly they're just passing freaking tuples around. Everything is a tuple in a tuple in a dictionary in a tuple in a homegrown table.


Any modern type system barely requires scaffolding, lowering the cost and providing way more value (see Haskell or ML, or even compare Java with Scala).


You're confusing static typing with explicit typing. This is static typing's version of the "dynamic typing doesn't have to be weak typing" argument.

There are a variety of languages that don't impose explicit typing on you to enable static typing. OCaml and Haskell are popular options. Scala makes the compromise choice of having explicit function signatures with inferred typing elsewhere.

If the only thing you're using the type system for is to make sure arguments line up, you're barely using it at all. When your types can encode (and thus, ensure) aspects of your program logic, that is when static typing becomes useful.


I view the extra scaffolding as writing unit tests. The type definitions are just a way to specify unit tests in a formal way. The compiler does the rest of the unit testing, using the type definitions, at compile time. With the correct tools a lot of the scaffolding is automated and kept to a minimum, so not much is lost.


The op is arguing that static typing is correlated with project scalability (due to language support for a number of other features that help encapsulation, modularity, etc). Not that static typing by itself is what makes it easier to scale project complexity.

For C++, note the now accepted use of `auto` and `decltype` which together remedy a number of issues from C++03 and prior.


That might be one of the reasons that type information is optional in Erlang, and what makes it fairly expressive at the cost of some static analysis.


>in which I make the rather obvious point that all the scaffolding you have to erect in statically typed languages

You mean "in which I pose the rather obvious strawman". What scaffolding? What is this mysterious "extra stuff" people give vague names like "scaffolding" to that I've never encountered despite programming in a statically typed language all the time?


I think he means having to implement toString() ten times to handle all the various types that need to be converted to strings. toString() is a trivial example because most languages have that built-in to some degree. But there have been several times where I would have to write the same C/C++ function again and again to support a new type.


In Python, you have to reimplement __repr__ for every class whereas in Haskell you use deriving Show. This has nothing to do with static vs dynamic.


Python's `__dict__` is probably a better example of duplicating Haskell's `derive Show` behavior.


Why? __repr__ (or __str__) is pretty much the "show" method, in Python. It isn't derivable (or rather, the default derivable one is useless), though.
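For concreteness, here is the hand-written per-class work being discussed (a hypothetical Point class), which is roughly what Haskell's deriving Show would generate for you:

```python
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __repr__(self):
        # Written by hand for every class; Python's default __repr__
        # only shows the class name and object address, which is useless.
        return "Point(x=%r, y=%r)" % (self.x, self.y)

print(repr(Point(1, 2)))  # Point(x=1, y=2)
```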


Try "deriving (Show)". And stop using the C/C++/Java straw man as the basis for arguing against static type systems.


But if we exclude C/C++/Java then static typing is just an obscure feature of some academic languages.


I don't know what your first sentence means. As for the second, I was explaining what I thought the author meant by "scaffolding" not making an argument for or against anything. Now here is an argument for you: shouldn't two of most widely used languages be included in an argument about static vs dynamic typing?


The first sentence is a hint toward Haskell's wonderful derivation system, which tells the compiler to write the "obvious" code for all kinds of common scaffolding. It's almost always all you need for equality, showing, ordering, hashing. There's also GeneralizedNewtypeDeriving, which lets you inherit typeclass instances for all kinds of structurally equivalent types.

And while it's valuable to talk about popular languages in terms of the kind of static typing people are likely to encounter... they also have really outdated static type systems. While you can argue that it's likely that you'll bump into Java/C/C++ when working with static types, it's pretty invalid to argue generally about static type systems using them as examples. Things have just come a really long way.


Fair enough.


And my first sentence was explaining why even that interpretation of the author's meaning is an invalid argument. It demonstrates the equivalent of writing toString() in Haskell. And no, the two most widely used languages should most definitely NOT be used in an argument about static typing because the state of the art in static typing has evolved significantly since those languages were created. The fact that you didn't know the meaning of my first sentence tells me that you're arguing from ignorance about what is possible today in a fairly well-established language.


He was referring to Haskell's syntax for derived instances - http://www.haskell.org/tutorial/stdclasses.html#sect8.4


> shouldn't two of most widely used languages be included in an argument about static vs dynamic typing?

No, because the affordances and limitations of static typing are not the same as the affordances and limitations of static typing as implemented in Java/C++. You can't infer very much about static typing from statements made about those languages, and you can't infer much about those languages from statements made about static typing, so it follows that you can't freely substitute one for the other in these discussions.


stop using JavaScript as the basis for arguing against dynamic type systems.

(not necessarily directed at you personally)


Who does that? Dynamic type systems don't actually exist, it is simply a lack of a static type system. Nobody argues about dynamic type systems, they argue about whether or not to have a static type system.


> there have been several times where I would have to write the same C/C++ function again and again to support a new type.

If the function has to handle different types in different ways, you'd have to write it again anyway. Otherwise, you could have used templates.


Why did you have to write the same function again and again? I guess polymorphism would have helped, would it not? Not knowing too much c++ I can't really comment so I guess I am asking for a clarification.


C does not have proper support for polymorphism. It can be simulated with structs in certain cases though. I can't remember a specific instance with C++ off the top of my head. It has been a several years since I touched it. Sorry.


In what language do you not have to implement a conversion-to-string for each type? It's not like the compiler can know what string representation you have in mind for some arbitrary object.


Haskell. It's often not about a specific representation, but rather just some representation that allows you to see a type's internals.


For debug purpose (like you'd use Show in Haskell), Perl gives use Data::Dumper to dump your data structures. Not quite the same as deriving Show, but for most purposes, entirely sufficient.


That is an argument against particular languages that happen to be statically typed. That is not an argument against static typing, which requires nothing of the sort.


One example in the article was the difference between def qsort(a) and public static ArrayList<Integer> qsort(ArrayList<Integer> a)


The cost is significantly reduced by type inference (see Go and Vala).


Go does not do type inference, it just does type deduction, a much weaker thing that at least relieves some basic tedium, but still leaves the source code fairly full of type declarations.


The core of the argument is that the features of static code bases are something you need anyway when programming in the large, and I basically agree with that.

I'm not totally convinced, though. Take the simple case of "tell me what range of values to expect here". Java / C++ deal with this with their types, and you can be reasonably confident that you're getting the object you expect. As I'm sure somebody will mention, those types are a bit weak, and can't guarantee the object you get is exactly what you expect. What if you don't test with the right sub-class? What if it's an int, but it's out of the range you expect?

"No, that's not what types guarantee!" you may say. And it's true.

I think one of the problems is that language features aren't perfect. Forcing everybody to admit they're dealing with integers is more desirable than no contract at all, but would it be better if you dealt with Scores, which were integers in the range of 0-100?

In a big enough codebase, the language is never powerful enough to handle all your checking for you, so you need some level of discipline. I worry that by having something "good enough" when your code base is moderate, it allows teams to slip by into "large" without ever considering what their tools, conventions, and limitations should be.


>What if you don't test with the right sub-class? What if it's an int, but it's out of the range you expect?

Type systems can do all this and more.

https://github.com/milessabin/shapeless/wiki/Feature-overvie...

They can assure you handle all possible control flow cases with ADTs and pattern matching, they can facilitate automated testing, etc.

Just because it's not how java/c# do it, doesn't mean it's a limitation of 'typing'.


The problem is that your argument is basically boiling down to "this hyper-experimental language that only three people on earth use can do it, therefore you should use Java/C++/C#".


> this hyper-experimental language that only three people on earth use can do it

That's dishonest. That's the kind of stuff Ada or Haskell do. The linked repository is a Scala library.


Shapeless is a Scala library, not a language.


How very dishonest of you. It actually boils down to "a bunch of common languages do it right, so use one of those instead of java".


Well, no, the dishonesty is basically in arguing that statically-typed languages have all sorts of wonderful features... while omitting that the set of statically-typed languages which have those features, and the set of statically-typed languages people actually use, do not overlap much.


LinkedIn, Twitter, FourSquare, Netflix, Rackspace, Firebase, Hulu and a ton of others use Scala. How is that dishonest?


It's dishonest because no matter how many hot HN-echo-chamber companies you can find using Scala, it is still a language that nobody uses, in terms of the broader industry.


No. Discussing what static type systems do means discussing what static type systems do. Not pretending static type systems can't do anything java doesn't do. We don't judge dynamic typing by how shitty PHP is for the same reason. Dynamic typing is not what made PHP shitty, it just happens to be both dynamically typed and shitty.


I largely agree with this, but also consider the following:

1. I have a function 'saveScore' that has a parameter 'score' that is an int.

2. Users of my function frequently misuse my function, passing in negative scores, so my team decides a range-checked Score parameter makes more sense.

3a. In a static language, users of my new code get compilation errors saying 'saveScore' takes a 'Score', not an int.

3b. In a dynamic language, at best, devs that use my code get an easy-to-read exception thrown at runtime. This assumes I'm thorough enough to check my preconditions and throw an understandable error message. This also assumes all code paths are covered before the code is released to customers.

In short, static languages provide facilities for library and framework authors to enforce a little discipline in the consumers of their code.
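A sketch of branch 3b in Python (hypothetical Score and save_score names): the checks are there, but they only fire when the code path actually runs, not at compile time.

```python
class Score:
    """Range-checked value object; misuse surfaces at runtime only."""
    def __init__(self, value):
        if not 0 <= value <= 100:
            raise ValueError("score must be in 0..100, got %r" % (value,))
        self.value = value

def save_score(score):
    # The precondition must be checked by hand, and the error is only
    # seen if this path is exercised before release.
    if not isinstance(score, Score):
        raise TypeError("save_score takes a Score, not a bare int")
    return score.value

print(save_score(Score(95)))  # 95
```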


Now what if you have a function that returns an int, which you want to use as the input argument to saveScore? You'd probably use a conversion function that returns a Maybe Score...


Better than that, you can enforce that a Score is a Score, not just any integer between 0 and 100. You can put units in there, so a Score must be in Points. This is all normal in Ada and off the top of my head, F# has it too.


Yep! Another huge benefit of static typing, particularly for systems that deal with lots of numbers that mean different things. (Some are lengths, some are durations, some are speeds, etc.) Don't want your airplane to crash because you passed in square feet instead of meters per second? Have the type system check your units.
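A rough sketch of unit-tagged wrapper types (hypothetical names; in plain Python the mismatch only surfaces at runtime, whereas a static type system would reject it before the program runs):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MetersPerSecond:
    value: float

@dataclass(frozen=True)
class SquareFeet:
    value: float

def set_descent_rate(rate):
    # A bare float would silently accept the wrong unit;
    # the wrapper type at least fails loudly.
    if not isinstance(rate, MetersPerSecond):
        raise TypeError("descent rate must be MetersPerSecond")
    return rate.value

print(set_descent_rate(MetersPerSecond(3.2)))  # 3.2
```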


In an application at work where we deal with mapping and geo-coordinate systems and such, we frequently hit all sorts of fun little bugs because a point on the map doesn't know whether it's in geographic coordinates, the current transformation system, or screen coordinates. They all use the same class and can be freely mixed. Alas, my suggestion of using different types to encode different coordinate systems was more or less ignored (too much effort cleaning up a 15-year-old codebase).


> Don't want your airplane to crash because you passed in square feet instead of meters per second?

Airplane? Hah, think bigger: http://en.wikipedia.org/wiki/Mars_Climate_Orbiter#Cause_of_f...


Just create a type in any language and you can do it there too


Well, yes and no. You could in Java create a class called Score with get() and set() methods, set() taking another class called a Point as its input, and throw an exception if the value of Point is outside the range, but the overhead of that would be huge both at development and at runtime. With expressive types in the language, there's minimal extra effort in development (in many cases much less since now the compiler can do things that otherwise you'd need unit tests for) and none at all at runtime.


> ... would it be better if you dealt with Scores, which were integers in the range of 0-100?

That's what value objects are all about, and yes - you can do that in a mainstream language such as Java/C#. http://en.wikipedia.org/wiki/Value_object

I strongly recommend that pattern.


Well you could subclass the Integer class and take that as a param (except you can't in Java because Integer is final, so you would need a wrapper class) then throw an exception in the constructor.


Any decent programmer working in a dynamically typed language will implement pre-checks (and maybe even post-checks) to validate types at run-time.

Some dynamic languages even allow for type specifications and static analysis, optionally or through an external tool.
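A minimal sketch of such run-time pre- and post-checks in Python (hypothetical function; the shape of the checks, not any particular library):

```python
def divide(numerator, denominator):
    # Pre-checks: validate argument types and values up front.
    if not isinstance(numerator, (int, float)):
        raise TypeError("numerator must be numeric")
    if not isinstance(denominator, (int, float)):
        raise TypeError("denominator must be numeric")
    if denominator == 0:
        raise ValueError("denominator must be nonzero")
    result = numerator / denominator
    # Post-check: the result itself should have the expected type.
    assert isinstance(result, float)
    return result

print(divide(6, 3))  # 2.0
```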


That just doesn't happen. People don't check preconditions and, frankly, they're right not to: checking all the ways your input might be wrong is often very wordy (much more so than static types), and usually slow. Not to mention that it's not at all trivial to check preconditions; if your program has any kind of duck typing or dynamic dispatch it's often very hard to check more than the immediately obvious (can it quack?), and even that check is often redundant because you're about to use the function anyway. Worst of all, this is extra code that can itself have bugs (and it's code that's likely to see almost no use in production, since you don't make a habit of passing invalid input). Of course, you could check that extra code with extra unit tests...

If you want to check preconditions rigorously and pervasively, use a statically typed language (and be a little realistic in what you can check). If you want short code without ceremony, use a dynamic language and don't uglify it with more boilerplate than the worst statically typed language. I've seen that kind of code in ruby, and it's really not a joy.

Real life in a dynamic language: people introduce some checks in some high-risk functions, and call it a day. And that's... Just Fine.


It's also harder to establish checking boundaries in a dynamic language. If you honestly claim that someone both (a) writes pre-/post-condition checks on all functions and (b) compartmentalizes functionality sanely then you've got an exponential blowup in type verification.

So in practice both (a) and (b) are weakened. Usually it's a whole lot of (a) and a little bit of (b). You can reduce the problem by establishing domains, but that limits reusability and requires its own kind of stringent documentation and discipline to handle.


I am strongly anti-dynamically typed for this reason. I spent 5 years in a dynamic language writing so many asserts/unit tests to handle basic compiler type cases that it more than invalidated any argument for conciseness or time to market.


Did you consider not doing that, then? Plenty of code-bases don't and the sky doesn't fall in.


> Any decent programmer ...

That is something that you won't find that often in the enterprise space.

Which is actually where most large codebases tend to exist.


Yes, but that means that no language can save you.


True, but some help better than others to control the herd. :)


>> That is something that you won't find that often in the enterprise space.

What utter rubbish.

What you won't find that often in the enterprise space is dynamic languages, so your statement is irrelevant as well.


> Any decent programmer

Your "decent programmer" is about as common as a Sufficiently Smart Compiler.


Unfortunately, everyone uses statically-typed languages wrong, so they get about the same protection with their compiler that you get with Ruby or Perl or Python.

Consider this common mistake:

   interface Greet {
       void sayHello(String firstName, String lastName);
   }

   class PrintGreeter implements Greet {
       public void sayHello(String lastName, String firstName) {
           ...
       }
   }
Where's your compiler NOW?
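For comparison, one dynamic-language guard against exactly this swap is keyword-only parameters (a Python sketch with hypothetical names, distinct from the type-synonym fix Haskellers use):

```python
def say_hello(*, first_name, last_name):
    # The bare * makes both parameters keyword-only, so callers
    # must name each argument and cannot swap them positionally.
    return "Hello, %s %s" % (first_name, last_name)

print(say_hello(first_name="Ada", last_name="Lovelace"))  # Hello, Ada Lovelace
```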


Haskell libraries and programmers are good about using type synonyms to avoid this problem.

Notice http://hackage.haskell.org/package/happstack-server-7.3.2/do... isn't a string like "GET" or "POST".


Yes. newtype and GeneralizedNewtypeDeriving eliminate pretty much every excuse for not writing type-safe code.


GeneralizedNewtypeDeriving is kinda terrifying though, you're better off dumping the derivations from GHC and inserting them yourself after having a look-over :)

-ddump-* is great for this kind of thing.


What, autogenerated code never has strange bugs! :)

That said, it works well enough for deriving monad from a transformer stack. Do people use monad transformer stacks? It's been a while since I've written any real Haskell code.


Monad transformers and stacks of them are pretty standard though it's something library writers engage with more often than "users".

There are more adventurous and specific ways being explored to handle things like "effects", but that exists mostly in SHE or Idris.

mtl is still the practical way to go for now.

I keep wanting an excuse to futz around with this: http://hackage.haskell.org/package/layers


It should also be noted that my specific example doesn't even use or need newtype. It's a sum type that derives a bunch of standard typeclasses.


And that's how to piss off staff engineers, bring down production bigtables, and get to write your first regression test :).

(I swapped key and value in a string, string API and ended up writing 5+ MB "keys".)


This is a classic example of when to use the Value type pattern. You are in fact arguing for stronger typing (String is too limited here).

If your language makes that problematic, it is the fault of the language implementation, not a problem with static typing.
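
Concretely, the value-type fix for the Greet example upthread might look like this (a sketch; the wrapper names are illustrative, and sayHello returns the greeting here so the behaviour is visible):

```java
// Distinct wrapper types make argument order part of the signature,
// so swapping first and last name becomes a compile-time error.
final class FirstName {
    final String value;
    FirstName(String value) { this.value = value; }
}

final class LastName {
    final String value;
    LastName(String value) { this.value = value; }
}

interface Greet {
    String sayHello(FirstName first, LastName last);
}

class PrintGreeter implements Greet {
    public String sayHello(FirstName first, LastName last) {
        return "Hello, " + first.value + " " + last.value;
    }
    // sayHello(new LastName("Doe"), new FirstName("Jan")) would not compile.

    public static void main(String[] args) {
        Greet g = new PrintGreeter();
        System.out.println(g.sayHello(new FirstName("Jan"), new LastName("Doe")));
        // prints "Hello, Jan Doe"
    }
}
```

The cost is a few lines of wrapper boilerplate; the payoff is that the swapped-parameter bug simply stops compiling.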


The people writing "dynamic languages suck" blog posts are merely unaware of how much dynamic typing they do every day in their statically-typed languages. And yes, I am arguing for stronger typing. And for getting rid of null.

(I've seen plenty of Haskell programs that are not compile-time checked for, say, XSS problems, even though it's trivial to do that with the type system. The problem is that nothing is ever automatic; if you want to write the best possible code, you have to make it a goal and carefully execute that goal. There is no silver bullet.)


Most of my mistakes are elaborate versions of this:

  double :: Int -> Int
  double = (3*)
The only amount of typing that is going to help with that is an amount nobody is willing to use.


Ah yes, the other common misconception. You still have to have test coverage to be confident that your code actually works.
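
For instance, the tripling bug above slips past the type checker but is caught by a one-line behavioural test (a sketch in Java for concreteness; names are illustrative):

```java
public class DoubleCheck {
    // Type-correct but behaviourally wrong: int -> int is satisfied
    // just as well by tripling as by doubling.
    static int doubleBroken(int x) { return 3 * x; }

    // The intended implementation.
    static int doubleFixed(int x) { return 2 * x; }

    public static void main(String[] args) {
        // Only a test that checks actual values can tell the two apart.
        System.out.println(doubleBroken(7) == 14); // prints false
        System.out.println(doubleFixed(7) == 14);  // prints true
    }
}
```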


At least you won't be trying to add 15 to lastName...


That is a problem of weak typing, not dynamic typing.


In Javascript or Ruby, we'd just pass in a hash--named arguments lol.


I have a pretty different experience here. We have 30+ million lines of JS in production (2k+ devs and tens of thousands of files -- it isn't as scary as it sounds). Prior to converting app development to JS in 2005, development was mostly done in C/C++, and the boost in productivity from writing things in JS has been clear as day.

This isn't to say that it is easy to maintain; in our case, a core infrastructure team maintains the guts reasonably well (+dev/test tools), which allows higher-level teams to build the product. The best feature of the script when it comes to higher-level product is that large portions (if not the entire thing) can be scrapped and started anew in a different way without a significant loss of investment.

I suppose a distinction could be made between referring to a single large product/codebase versus a much larger amount of physical code split between many teams/products. There are probably only a handful of apps that wind up being "large" (in the 10k+ LoC range).


There are several things about this post that make me really question its validity.

"The best feature of the script when it comes to higher-level product is that large portions (if not the entire thing) can be scrapped and started anew in a different way without a significant loss of investment."

Large portions can be scrapped and started anew without significant loss of investment?!? Is this code being auto-generated somehow? If it's legit actual lines being written by your developers, then that is the definition of a significant investment. And if you're redoing it, then that's by definition loss of the original investment.

"There are probably only a handful of apps that wind up being 'large' (in the 10k+ LoC range)."

What are you smoking? I've got more than 10k LoC in an app that has only been under development by < 2 developers for a few months. My company has another app built by one guy that's more than 30k LoC. And you say you have 30+ million LoC in production?!? If 10k is "large" for you, then you must have a shitload of apps out there.

Also, there's a huge difference between maintaining 3000 apps with 10k LoC and maintaining even one app with >1m LoC.


Lots of time is spent understanding what the product needs to do and building a suitable solution that is intuitive for users. The resulting JS code does not always represent a large portion of the investment in delivering the end-to-end product. In my subjective experience, the JS is written more quickly, requiring less overall time to get something working. An idea might change course in significant ways a few times in the span of time it would have taken to write it in C/C++ once.

In our setup, apps leverage key lower level libraries that are maintained by a small team and represent a much smaller, more manageable native codebase. A very complex app can be written in 10k LoC. Yes, there are a lot of "apps".

> Also, there's a huge difference between maintaining 3000 apps with 10k LoC and maintaining even one app with >1m LoC.

I said exactly this at the end of my post, but I don't feel it is always black and white. I said "apps" in quotes above because many times they act more like modules than standalone apps. Many of them interact in non-trivial ways, making the comparison more muddled. It's definitely a lot of code either way and in our situation JS has definitely made evolving the product much easier from an end-user perspective.

edit: ^once


Wow, that's quite an unusual profile. Good for you guys that you've been able to decompose things into thousands of relatively self-contained apps. It's a really interesting situation. Part of me wants to say that this doesn't really qualify as a "large codebase" as referred to by the OP. I tend to think of "large" as hundreds of thousands of lines and up. But I certainly don't want to marginalize the experience of an organization maintaining 30 million lines of code in 10k chunks. Your experience definitely suggests there is great value in decomposing things into small self-contained pieces. But I would say that strong static type systems are better at helping you do that than weak dynamic type systems. It can certainly be done in dynamic languages, but it requires more discipline.


They have thousands of apps. Maybe 10s of thousands by now. It's been many years since I worked there. It's a pretty unique setup.


I think C and C++ are pretty terrible languages for most application developers, and they are also from an earlier generation of languages than JavaScript, so I don't see how we can ascribe the difference in productivity to dynamic versus static type systems.

The most damning thing is perhaps that C and C++ require additional testing that JavaScript does not require, and that is testing for memory safety. In C++ you can kind of stay safe by using references but there is still a lot of risk and you always have to keep object lifetimes in mind. C and C++ are probably among the least productive languages in common use today for other reasons as well—such as for their header files.


> I don't see how we can ascribe the difference in productivity to dynamic versus static type systems

My personal opinion/observation/"gut feeling" is that dynamic typing has helped, but I haven't done any kind of concrete study to figure out its impact vs, say, removing the compile/link phase. I observe some of the code being written and definitely see programmers taking advantage of the dynamic nature of the language.


What are you doing in JS now that required C/C++ before? That's the most interesting question. I don't think you needed C/C++ in the first place.


I agree, which is why the scripted approach works much better. Why were UI toolkits such as Gtk+/Qt written in C/C++? What kind of scripting language would have been suitable to use as a replacement at the time?


Maybe 100K LOC in a dynamic language is more difficult to maintain than 100K LOC in a statically typed language, but what if the same functionality requires 500K LOC in the statically typed language? It is easier to maintain a small code base, other things being equal.


Why would a dynamic language be five times shorter, given a modern statically typed language such as Haskell, OCaml, Scala, or even Go? Even C++11 has powerful abstractions that can cut down the size of a codebase significantly.


Because often 'static language' is used in place of 'Java'; I'm pretty sure those languages are much more succinct and readable than Java, and thus not as likely to become 'large' codebases. I'm probably wrong though.

Whenever I recall programming in Java, though, I remember highly verbose, boilerplate-ridden codebases, and bits of code that are little more than abstractions over the bits of code underneath them (with unit tests alongside that verify that bit of code A calls functions B and C, which of course has no value whatsoever).

Nowadays I do Javascript, much better. Today I frowned upon a colleague who is used to Objective-C who proposed writing documentation. Pfff.


But this is a very big problem. It seems to me that the recent popularity of dynamic languages is almost entirely driven by Java backlash.

So developers with limited experience encounter some big "enterprise" Java codebase and recoil in horror. Instead of deducing that enterprise Java is terrible, they jump all the way to compile-time types being horrible.


Honestly, "statically typed language" often means C#/C++/Java, all languages that trend verbose.

While Scala/Haskell/F#/etc. are great at concision, they aren't that common in traditional industrial software applications.


Modern C# and C++ both have facilities that make the languages much more concise (lambdas in combination with std::algorithm resp. LINQ, type inference and the like). Java 8 will also have some of that and I can see it becoming a much nicer language to work with because of those changes.
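
As a rough illustration of the direction Java 8 is heading, the lambda/stream additions collapse a classic filter-map-collect loop into one expression (a sketch; method and variable names are illustrative):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class Concision {
    // Pre-Java-8 this needs an explicit loop, an accumulator list,
    // and anonymous classes for anything fancier.
    static List<String> shortNamesUpper(List<String> names) {
        return names.stream()
                    .filter(n -> n.length() <= 4)
                    .map(String::toUpperCase)
                    .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(shortNamesUpper(Arrays.asList("ada", "grace", "alan")));
        // prints [ADA, ALAN]
    }
}
```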


Quite correct. But historically - and worse, codebases with a Standard Style that is perhaps older - these languages trend verbose.

IME, I've tried to make super concise modern C#, and it simply isn't concise compared to Haskell or even Python. YMMV, of course.


1. C# is not as verbose as Java and C++. Latest versions especially.

2. C# supports dynamic typing as well as static.

3. C# and F# are the easiest to mix and match within one solution.


> C# supports dynamic typing as well as static.

Are you talking about automatic types? Because that's the only thing resembling dynamic typing I've ever found in C#. And it misses just about all the reflection available in most dynamically typed languages, so even if it's somehow possible not to declare types, it's useless.


C# fully supports dynamic types [1] and has had very good reflection support [2] since version one.

[1] http://msdn.microsoft.com/en-us/library/dd264736.aspx

[2] http://msdn.microsoft.com/en-us/library/system.type.aspx


Those are 'anonymous' types. I wasn't talking about them.

    ExampleClass ec = new ExampleClass();
    //ec.exampleMethod1(10, 4); // compiler error

    dynamic dynamic_ec = new ExampleClass();
    dynamic_ec.exampleMethod1(10, 4); // runtime error
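
Java's nearest built-in analogue to C#'s dynamic is reflection: method resolution is likewise deferred until runtime, and a bad call surfaces as an exception rather than a compile error (a sketch; the class and method names mirror the C# example above):

```java
import java.lang.reflect.Method;

public class DynamicSketch {
    public static class ExampleClass {
        public int exampleMethod1(int a) { return a; } // takes ONE int
    }

    static String tryInvoke() {
        ExampleClass ec = new ExampleClass();
        try {
            // As with `dynamic` in C#, the lookup happens at runtime...
            Method m = ExampleClass.class.getMethod(
                    "exampleMethod1", int.class, int.class); // two ints: wrong
            return String.valueOf(m.invoke(ec, 10, 4));
        } catch (ReflectiveOperationException e) {
            // ...so the arity mismatch only shows up here.
            return "runtime error: " + e.getClass().getSimpleName();
        }
    }

    public static void main(String[] args) {
        System.out.println(tryInvoke()); // prints "runtime error: NoSuchMethodException"
    }
}
```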


Well, for Haskell and OCaml, it's more the type inference and functional nature than the static typing that saves code.

As for Scala, it's not quite as functional or as well type-inferred as the others... I'd be interested in what the LOC savings would be.

Realistically, though, what's being compared is Ruby/Python vs. Java. It's much more than just the language; it's the culture: Java is just plain bureaucratic, a lot of routine paperwork for not a lot of functionality.


I'd say it's because it's easier to get on with human-friendly abstractions in a dynamic language. For example, you spend less time packing structs into something that can be used by a customer in a dynamic language. Things happen faster if all you've got is a number, a string, and a table type. So you end up with abstractions that do more.


LoC is already a poor measure, but if a good effort at solving a problem meant you ended up with 500K LoC in a static language, and you could somehow know that solving the same problem in a dynamic language would have produced just 100K, the takeaway is that in that instance you probably chose the wrong tool. I don't think it says much about static vs. dynamic languages and maintaining a given-size program over time.


Maybe 100K LOC in a static language is more difficult to maintain than 100K LOC in a dynamically typed language, but what if the same functionality requires 500K LOC in the dynamically typed language?


Oftentimes I find large static codebases to be lacking tests. When you have type safety, the tendency is to test less, and in many organizations, to forgo testing entirely. A dynamic language will force you to write tests a lot sooner, before the whole codebase turns into a bad game of Jenga. In my experience, testing seems to be a lot easier in a dynamic language as well.

I would say that the most stable codebase would be a static one with lots of tests, but I would also guess that this would be the slowest to develop, and would be the most difficult to implement in a real world team of developers.

I know most of us would balk at a huge codebase with zero tests, but it happens all the time. I would definitely prefer a dynamic code base in which I could get my team to write decent tests, but give up type safety, than to have type safety at the cost of decent tests.


Static typing lets you focus on testing behaviour rather than "forcing" you to test something that the machine can figure out for you.


The type checking is a form of unit test. The compiler just does it for you automatically.


At least in Haskell, which encourages writing pure functions, testing is easier. Also, QuickCheck can use static types to make tests more concise and cover more ground more quickly.
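
The property-based idea generalizes beyond Haskell; here is a hand-rolled miniature of it (not real QuickCheck, just the shape of it, sketched in Java with illustrative names):

```java
import java.util.Random;

public class PropertySketch {
    static int doubleIt(int x) { return 2 * x; }

    // QuickCheck in miniature: generate many inputs and check that
    // an invariant holds for all of them.
    static boolean doublingIsInvertible() {
        Random rng = new Random(42); // fixed seed for reproducibility
        for (int i = 0; i < 1000; i++) {
            int x = rng.nextInt(10_000) - 5_000; // small enough not to overflow
            if (doubleIt(x) / 2 != x) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(doublingIsInvertible()); // prints true
    }
}
```

Real QuickCheck adds shrinking of counterexamples and derives generators from the types, which is where the static type system earns its keep.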


What research exists showing that testing is actually worth it? I feel like people take it for granted and speak from personal experience and personal values, but I have never read any convincing research showing TDD to actually be beneficial. It's more like a cargo cult: write tests and things will be well.


What alternative do you propose, and what "actual convincing research" shows that your approach is actually beneficial?

In the context of software engineering approaches, "What research exists?" is basically a conversation killer of a question. The scientific method, when applied to medium-number, complex-interaction systems like most business settings, offers very little in the way of predictive power. At best, it may illuminate some dynamics and raise some questions to ask, but rarely predict the future in the way that hard sciences research does. This is why, if you are predisposed to label something like TDD as "cargo cult", you will likely never find any published research to be "convincing".

But that said, a simple search for "published tdd research" turns up quite a few hits, including this one from Microsoft right at the top: http://research.microsoft.com/en-us/groups/ese/nagappan_tdd....


What do you need research for? If you don't test your code, how do you know it works, at all? Obviously you do test it, so why not have those tests written in code, so they can be repeated at any time and catch regressions? The benefits are entirely obvious.


It seems you are conflating testing and TDD; they are different things.

However the benefits of testing can seem rather obvious if you are in the software industry. We hire people to do QA, and if the developers write tests, we can hire fewer people to do QA and still meet reliability targets. Software development is a loop: design -> code -> test, over and over again, and automated testing means that the code is still fresh in your mind when you fix it.

But it sounds like you're wondering, "why test at all?" Well, would you sell a customer a car or a table saw that had never been tested?


> What research exists that testing actually is worth it?

It's absolutely not worth it. By the way, I have a codebase of 240k lines of early-2000s, guaranteed-untested PHP, coded by the best and brightest and complete with best-of-breed programming practices such as global variables, that I could maybe interest you in. I also have a full complement of bridges available.

> I never read about any actual convincing research that shows TDD as actually being beneficial.

TDD? That's probably good when you know what you are doing, but I must admit I'm a "test after the fact" man most of the time. Probably because I don't know what I'm doing most of the time :)


> I would say that the most stable codebase would be…

Why guess, when you can look at real examples?

Look at standards like MISRA C, or projects like the computers on the STS or Mars rovers. Using a static type system and testing things is but a small part of the process typically used for high-reliability systems.


> the process typically used for high-reliability systems.

And that's the key. Producing high-quality systems emerges from a process designed to produce high-quality systems (and in fact, you need to have a process for improving the process). What is your process? Has it come under rigorous examination with an eye toward output quality? If not, then any quality that results is a happy accident of competent engineers at a particular time and place.

Unit tests and types are simply a small part of how quality is created.


Given the lack of evidence to support the claim, I find the argument specious. There are large codebases written in dynamic languages that are well maintained by thousands of contributors, which could weaken such a claim without strong data backing it up (OpenStack comes to mind). The author fails to link to a single study and relies purely on intuition for eir argument.

I only see one link on that SO answer that points to a single study. It was provided by another commentator asking the OP to provide evidence for the "strong correlation" claim. Not very good.

Though I'd hate to work with a team that used a statically typed language and tooling but didn't write tests for their software. It's not magic soya-sauce that frees you from ever introducing bugs into your software. Most static analyzers I've seen for C-like languages involve computing a fixed point from a graph (i.e., looking for convergence). Generics make things a little trickier. Tests are as much about specification as they are about correctness.

In my experience there are some things you will only ever know at run-time and the trade-off in flexibility for static analysis is not very beneficial in most cases.

Some interesting areas in program analysis are, imho, the intersection of logic programming, decomposition methods, and constraint programming as applied to whole-program analysis. Projects like kibit in Clojure-land are neat, and it would be cool to see them applied more generally to other problems such as "correctness" and the like.


My experience with Python has been that scaling to large codebases is possible, but the burden of late runtime errors and missing type documentation accumulates.

The beginning of the project is wonderfully productive. By the time you regret having chosen a dynamic language for the project, it's too late to switch.


This is why I've been describing untyped/uni-typed languages as tantamount to taking out a payday loan against your own project.


From the link:

"Rather it is also everything else [correctness facilities besides static typing] that is frequently missing from a dynamic language that increases costs in a large codebase. Dynamic languages which also include facilities for good testing, for modularization, reuse, encapsulation, and so on, can indeed decrease costs when programming in the large, but many frequently-used dynamic languages do not have these facilities built in. Someone has to build them, and that adds cost."

...I think that's a little generous to statically typed languages. C and C++, both heavy-hitters in the static language world, have anemic modularization, reuse, encapsulation, mocking frameworks, etc.


Why does this need to be discussed again and again? It is such an obvious thing. Use a statically typed language and you get, for free, a proof that your program does not contain bugs of a certain class. Use a dynamically typed language and you lose this proof, or you have to do it on your own. Does this proof come with a price tag attached? Yes, but a very small one: sometimes type inference fails and you are forced to specify the type manually. I don't see why this is worth any discussion.


My theory: many proponents of various dynamic languages simply cannot find a statically typed language that appeals to them. You say the price tag of statically typed languages is small, and while that may be true in theory, in practice it is not if it means the language is more verbose (Java, C#), proprietary (C#), complex/obscure (Scala, Haskell), and/or immature (Scala, Go).


Static and dynamic are nearly orthogonal to strong and weak typing.

A language with strong typing facilitates communication about the intent of code as well as its function. Strong typing is easier to build into a static compilation phase, but this is not an absolute requirement.

Granted, three of the four quadrants are well covered. Dynamic, strongly-typed languages will take a speed hit unless designed by a wizard.


Strong and weak typing don't even have commonly accepted definitions...


This aligns well with my experience. Maintaining even medium-sized projects in Python, Javascript and PHP has been a very frustrating experience. In practice, nobody ever writes detailed comments or tests (well, not tests that test for type errors anyway. If you're lucky, you'll inherit something that tests the basic functionality of the system and that's about it.), and types are sometimes about the only thing that can help you navigate through the code. Just being able to see what types of arguments a function receives and what it can do with them has saved me tons of time.


In my experience this all comes down to proper decomposition of services. Don't have large codebases. Break it down into individual services, libraries, etc. Version your APIs. Use proper semantic versioning in your libraries. Most of the trouble I've found has been in changing code that is coupled to a lot of other code. That sucks in every language. It's worse in dynamic languages only because detection is harder and tooling is weaker.

Sounds great on paper, I know, and is much harder in practice. But if done correctly then dynamic languages are awesome boosts to productivity.


I maintain a large set of codebases right now, and the easiest to maintain are certainly the components written in a dynamic language (Lua). It's so darned pleasant to return to that module, after dire straits suffered in Java- and C#-realms, that I'm seriously considering just moving everything to Lua from here onwards.

So I don't know if I have an answer to this question, but certainly it appears, to me, that the more dynamic the language is, the easier it is to maintain... surely the mere fact of lighter tooling means this is true?


> I maintain a large set of codebases right now...

Does that mean you have a large set of small-to-medium-size code bases? Because, though that's also a hard problem, it's different than maintaining one huge code base.


The Java codebase is 170k lines. C#, about the same. Lua, about 45k. I don't know if that's a lot of little codebases, but sometimes it feels seriously dystopic moving around the Java stuff.


Well, huge is relative to many things. Thanks for the ballpark numbers though. Pain points change and intensify over different orders of magnitude.


So how do you refactor a method signature to swap argument order in Lua? Please don't answer "manually".


What's wrong with doing it manually, especially if you have good tests?


It's error-prone and time-consuming. How do you guarantee that your tests cover every call to that method? How do you make sure you are only changing the correct method (especially when it has a common name)?


Umm... I don't? Seriously, who needs to do this? I've never had to do it in Lua. Same with Java; although I could see a case where it might be 'necessary', I still don't agree that it's a good thing that the language promotes this activity.

Needing to do this indicates worse problems with the developer than the fact of a codebase being too unwieldy.


The language just needs to be easy to parse so that tools can be written to automatically transform source code in certain ways. Eclipse has a lot more refactorings than just reordering method arguments. Lua should theoretically fall into the same bucket of languages; there are probably just fewer refactoring tools for it so far.

Compare that to languages like C++, where despite its age and wide use there are still very few working refactoring tools.

(All that being said, I usually reached for argument reordering when removing or adding arguments to a method: after changing what the method does, I'd notice that a different order makes more sense when reading.)


In 15 years of coding, I've needed to do that exactly zero times.

Is this really something people are doing?


It's an extremely common refactoring in modern Java/C# IDE's. Swapping arguments isn't necessarily a big deal (but being able to do it safely and automatically means that there is also no cost). A bigger one is adding/deleting parameters or renaming methods.


I don't understand why it would be necessary to refactor the argument order of methods that are extensively used. Can you provide a productive example of when this might be an appropriate action for a developer to take? I'm having difficulty understanding why on Earth this would be necessary, but I'd be happy to learn...


"Necessary" might be a hard barrier to get over, but I can think of times when it might be nice. For instance in C# optional parameters must be at the end of the parameter list (that is all required parameters must come before any optional ones).

So for our highly used method we realize that there is actually a reasonable default for the first parameter (maybe because we notice tons of usages of that value). In order to change this from a required to an optional field, we must move it to the back of the list.

Without an automated and safe way to do this, I'm unlikely to make that refactoring, leaving us with a code base that is worse than it could be. Again, is it necessary? No, but if it is trivial my code will get better, if it isn't my code will stay worse.


Do you have complete unit test coverage for that Lua code base?

I'm starting to build a pretty large piece of software and am struggling with this question right now. I'd much rather work in Lua or Python, but I've seen and worked on so many large Python projects that are absolutely unmaintainable and poor-performing that I hesitate. However, those projects all share a gratuitous use of threads, which I think is a poor idea in any language, but particularly in Python.


Almost a complete unit-test set. It's something that's being worked on.

But... I really do think I have a future as a 100%-Lua user, across the full stack, from mobile/desktop client to backend and persistence. I'm just not finding much energy to deal with all the cruft of the other languages, while Lua just gives me all I need and leaves me alone. I don't know if I'm becoming myopic in my old age (possibly), but the more I think about where I can place that sticky little VM, the more I think I just don't care about much else but doing that, a lot...


Plus, LuaJIT is a seriously nice piece of engineering, and its easy integration with C is just so damn nice. :)


Exactly! C for the systems-layer, Lua for everything else, LuaJIT for the customers! :)


So after reading through the comments, a lot of devs apparently do not make the distinction between static typing and Java.


Would you elaborate?


Do we have actual studies about this? Does it take into account the cost of initial development? The cost of fixing bugs? The maintainability and the learning curve?


Not many that I'm aware of. One commentator on the SO answer asked for references for the claim of a "strong correlation," but none was given by the OP. The commentator produced one link that they're aware of.


Not saying it's not true, by the way. Even as a programmer who primarily uses a dynamic language (Ruby), I can totally see how types, a good compiler, enforced interfaces, and more control mechanisms can benefit the development cycle, but in general I tend to prefer hard cold facts and numbers over anything else.


Talk about begging the question. It would be far more interesting to talk about the characteristics which make large codebases easier or harder to work with and how to encourage those practices in different environments.


I thought the linked answer did pretty much what you suggest. Was it missing something?


The first paragraph started out reasonably well but then it detoured into a bunch of discussion about details which were specific to JavaScript rather than the class of dynamic languages as a whole. It felt a lot like saying static languages were hard to write maintainable, secure programs using C as the only example.


    # randomly placed in an initialization file
    Object.class_eval do
      def to_s
        "a"
      end
    end

    # in some other context, perhaps a controller action...
    def lookup_user
      user = User.find_by_name(params[:name].to_s)
      if user.blank?
        flash[:error] = "No user found by that name"
        redirect_to "/users"
        return
      end
      # good stuff goes here
    end

    # now try debugging it... it's tricky... use grep?

The thing is, this is actually not too bad... in C++, try debugging a memory leak...


> C++ try debugging a memory leak...

On a codebase written by a team scattered around the globe, with the customer shouting at technical support and the on-site support team to know when the fix will be ready.

I like C++, but I don't miss manual memory management.


Static typing is a blanket of testing that covers the entire project. Likely not thick enough to be sufficiently warm, but still a better base layer than a patchy sprinkling of straw.


(Unit) tests are mostly procedural, while types are declarative (they describe intentions), so a good IDE can reason about those intentions and enable better refactoring.


I guess we should just understand the goal of scripting versus the goal of statically typed languages.

Statically typed languages should provide the code that needs to be reused often. They are also needed when you need performance or low power consumption.

Scripting should be about wiring those libraries together.

People should look at how Gmail works: JavaScript is just used for presenting data in the browser, while Gmail's servers do the heavy lifting.


It's difficult to maintain large codebases, period.


From the link:

"Let me begin by saying that it is hard to maintain a large codebase, period.... That said, there are reasons why the effort expended in maintaining a large codebase in a dynamic language is somewhat larger than the effort expended for statically typed languages. I'll explore a few of those in this post."


> So for that reason alone, dynamically typed languages make it harder to maintain a large codebase, because the work that is done by the compiler "for free" is now work that you must do in the form of writing test suites.

Static types are not enough to ensure that your algorithm receives correct input; to ensure that, you will need to write tests anyway. So with static languages you get one check "for free" out of the many that you will have to write yourself.

If you rigorously write tests for JavaScript code as you would for Java code, it becomes just as reliable.


Why, that's simply an indication that you haven't chosen the right tool for the job! Remember, using the right tool for the job is paramount. That's why every blog post on HN about libraries/frameworks/languages ultimately boils down to "use the best tool for the job". So if dynamic languages are making it harder to maintain your code, that means you chose wrong, tool-wise and/or job-wise.

Remember, great software can be made with any programming language! This is why all your favorite apps are written in Visual Basic. Hope this helps!


Isn't the question about choosing the right tool for the job?

> So if dynamic languages are making it harder to maintain your code, that means you chose wrong, tool-wise and/or job-wise.

The issue with large code bases is that rewriting everything is too expensive, so by the time you have data and experience showing your code base is difficult to maintain, it's already too late.


I'm curious what the experience is of maintaining larger codebases in functional languages.

I ask this because it seems like a lot of "big code" issues stem from poor development practices polluting the codebase over a long period of time, destroying any sort of design that once existed.

Does it help if the language takes a hardline stance against mutable state and side effects? I'm certain you can screw those up, but it's confined to a single place.


Maybe the kind of people who like standards and practices tend to shy away from dynamic languages. Maybe C coders are just more likely to be Style Guide people.

Maybe the idea that you can avoid side effects is a fiction that the horror of manual memory management cures you of.


TL;DR - most dynamic languages allow most things to change at any time. So with a large codebase, it's hard to know where a particular something is getting changed (or not). IOW, it makes errors in the codebase harder to trace.
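A quick sketch of what the parent means (my own toy example, a hypothetical `logger` object): in a dynamic language any module can rewrite behavior at runtime, so the site of the change can be arbitrarily far from the call site you're debugging.

```typescript
// A shared object with a method defined in one module...
const logger = {
  log(msg: string): string { return `INFO: ${msg}`; },
};

// ...thousands of lines away, some other module "helpfully" patches it:
logger.log = (msg: string) => `DEBUG: ${msg}`;

// The original call site now behaves differently, and nothing in its
// own file tells you why -- you have to find the mutation by searching.
logger.log("server started"); // now yields "DEBUG: server started"
```

In a statically checked codebase, sealing or rebinding restrictions would at least narrow down where such a change could legally happen.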


Codebase size is the wrong metric. (Good) Dynamic languages enable smaller codebases.

And you really, really should try harder to make small isolated components (services, daemons, sites, commands, whatever) and avoid large codebases.


Managing a large codebase has nothing to do with whether the language is typed. Untyped languages just give you more freedom (see the lambda calculus), but you can shoot yourself in the foot if you are not ready to use it.



Wow, what disinformation. My background is in static languages like Java, and my experience has been that moving to dynamic makes it easier to do large.


This question is begged.


In my own experience, the problem has been debugging.


Really? I don't think so.


Wow, that author has not been keeping up with changes in the JavaScript scene.


I find that hard to believe.


Static and dynamic typing tend to fall down in different ways at Big Code. Dynamic typing can make some kinds of versioning hell in distributed systems less of a problem (which is why languages designed for distribution, like Erlang, tend to be dynamic), and compile speed can be a massive PITA in a language with a static type system worth using (like Scala).

If you're using Java or C++ or mainstream OOP, you're not getting the benefits out of static typing anyway and a Python or Clojure is a big improvement. That said, there are domains where compile-time static typing is extremely useful (and justifies the associated costs due to, e.g., the loss of a Lisp's intuitive macro system and first-class REPL) but those are going to call for a Hindley-Milner type system, not Java's weird mess of one.


If you're programming in Java, C++ or another OOP language, you are still getting benefits from static typing. It's not as strong as if you were programming in a language with a more robust type system, but I cannot agree that there is no benefit.


> If you're using Java or C++ or mainstream OOP, you're not getting the benefits out of static typing anyway

What is it that about Java or C++ or OOP that you think makes their static typing useless?

> but those are going to call for a Hindley-Milner type system, not Java's weird mess of one.

Hindley-Milner is actually a poor type system. It trades away powerful features like overloading and subtyping in exchange for global type inference. Scala, like C++ and C#, does not use Hindley-Milner but a kind of local type inference.
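To make the distinction concrete, here is a rough sketch (my own example) of the kind of "local" inference the parent describes, as TypeScript does it too: parameter types must be annotated, but locals and return types are inferred from the body, which coexists with overloading and subtyping.

```typescript
// Local inference: `words` needs an annotation, but the return type
// (number[]) and the lambda parameters (string) are inferred.
function lengths(words: string[]) {
  const upper = words.map(w => w.toUpperCase()); // w inferred as string
  return upper.map(w => w.length);               // inferred as number[]
}
// Under global Hindley-Milner inference, even the `words` annotation
// could be omitted and derived from how `lengths` is used elsewhere --
// the trade-off being restrictions on subtyping and overloading.
```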


> Hindley-Milner is actually a poor type system. It trades away powerful features like overloading and subtyping in exchange for global type inference. Scala, like C++ and C#, does not use Hindley-Milner but a kind of local type inference.

On the other hand, subtyping does add a large amount of complexity and is misused about 90% of the time.


> What is it that about Java or C++ or OOP that you think makes their static typing useless?

He's not saying their static typing is useless. He's saying it's not an accurate representation of the state-of-the-art in static type systems.

> Hindley-Milner is actually a poor type system. It trades away powerful features like overloading and subtyping in exchange for global type inference.

Something that you may think is a good idea in one context may not be such a good idea in another context. I'll take Haskell's type classes over subtyping any day.
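For readers unfamiliar with the distinction, here is one common encoding (my own sketch, not from the comment) of a Haskell-style type class as an explicit "evidence" record. The constraint travels alongside the value instead of requiring the value's type to be a subtype of some interface.

```typescript
// The "class": evidence that T supports equality.
interface Eq<T> {
  equals(a: T, b: T): boolean;
}

// An "instance" for numbers, defined separately from the number type
// itself -- no inheritance relationship is needed.
const eqNumber: Eq<number> = { equals: (a, b) => a === b };

// A generic function constrained by the class, not by a subtype bound.
function contains<T>(eq: Eq<T>, xs: T[], x: T): boolean {
  return xs.some(y => eq.equals(y, x));
}
```

Usage: `contains(eqNumber, [1, 2, 3], 2)` yields `true`. In Haskell the compiler passes this dictionary implicitly; the point is that the mechanism is orthogonal to subtyping.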


> He's not saying their static typing is useless. He's saying it's not an accurate representation of the state-of-the-art in static type systems.

Perhaps he meant to say "full benefits" and it was only a typo.

> I'll take Haskell's type classes over subtyping any day.

Type classes and subtyping are not incompatible.


Correct, dynamic languages are very good for scripting and as glue between components.



