Ask HN: Why choose a dynamically typed language?
38 points by jeswin on Aug 7, 2010 | 113 comments
A few years back, mainstream statically typed languages were more verbose than Python or Ruby. But today, I find that I can make F# or even C# code (insert other lang here) as concise as any dynamically typed language.

And on the plus side, you get all the benefits of compiler type checking, better IDE support, and great refactoring support.

What are your reasons for choosing a dynamic language?




For exploratory programming, I like the ability to trial-run code that is clearly incomplete or even wrong - maybe I'm trying out a change in one part to see if it works there first and haven't updated the rest of the code, or I haven't even written a bunch of the cases yet. At least with my approach to exploratory programming, it's common for me to have large parts of the code at any given time that are broken as I try out things. If the things I try out work, I go back and fix the rest.

In statically typed languages, at least the ones I've used, doing that requires a lot of overhead before the language will even let you run the damn thing. Yes I know lots of stuff is broken, but let me try out this one code path that I'm pretty sure works! Or if it doesn't work, fail at runtime so I know where it breaks! Instead, it won't compile at all, unless I go through an exercise of moving broken code out of the way (lots of block-commenting), supplying stub functions to take its place and the place of unimplemented stuff, keeping both an old and a new version of a function around so I don't break code that's looking for the interface with the old version (even though I'm not testing that code right now anyway), etc.

In Lisp, I don't have any of that commenting/stubbing/shutting-up-the-compiler hassle. If I want to try out one particular code path amidst an in-transition code base, Lisp will happily execute that code path. To me at least, it feels a lot easier to try out new ideas that way, because it doesn't feel like the code base is this ball and chain I'm dragging with me every time I try to make a change; I can choose to temporarily leave it behind for a bit.


While that's definitely a practical benefit, it's not limited to dynamically typed languages. It's completely possible for IDEs and/or JIT frameworks to offer this same functionality.


No, not really. It's not just about starting out fast - more importantly, it's that you aren't dependent on other types and don't have to hook into the existing code base to pull something off.


Your claim that F# or even C# are as concise as a dynamically typed language is the crux of your argument, and is the weakest part of your post.

I haven't worked with F#, but I have read my share of C# code. A non-trivial program is likely to need some methods to respond to many types; in this circumstance C# lets you write many methods, use generics, or (now, I believe) use dynamically typed variables. Writing many methods is hard to maintain, and not at all concise. Using generics is theoretically great, but the syntax in C++/C# is just awkward, and writing type-agnostic code with generics is nigh unreadable.

In Ruby or Python you have duck-typing, and you can easily be type-agnostic and concise; in fact if you are writing code in one of these languages, you /should/ be both of these things.

Duck typing is fantastic for true object orientation: "If it walks like a duck and it quacks like a duck, we can safely treat it like a duck." This reliance upon support for responding to messages, instead of reliance upon types, is often the key to being concise and readable.

In Ruby, for example, I can use object.respond_to?(:method_name) and be assured that the object will support this piece of the interface before I attempt to use it. This is an elegant solution for keeping code type-agnostic, and it emphasizes the best sort of object-oriented methodology.

This is why I love Ruby.


> in this circumstance C# lets you write many methods, use generics, or (now I believe) use dynamically typed variables.

Or you could be explicit about the interface you are writing the duck-typed code against. The problem here is that you can't assign an interface to an existing type, and adding interface declarations results in more code. It will be interesting to see whether this functionality (extension interfaces and inferred interfaces) makes it into C# in the next 5 years. My guess is that it's functionality which is possible to add, but we're not there yet.


I believe the Go language gets most of the effects of duck typing in a statically typed language. The compiler is smart enough to figure out if an object adheres to a specific interface without having to explicitly tag that object as implementing said interface. So this basically means if you have a method which just calls 'read' on its argument, you can define a 'read' interface and it will accept any existing type that responds to 'read'.


How is a compiler supposed to figure out if I've added a method at run time?


hm... I think I didn't explain properly (or I misunderstood your point).

Let's say you have a method whose sole task is to call .read() on its argument and return a single char:

    char readch(obj) { return obj.read(); }

Now, since I have not declared any explicit type for obj, the compiler would generate code like this:

    interface readch_obj {
        char read();
    }
    
    char readch(readch_obj obj) { return obj.read(); }
So the function basically takes in an interface readch_obj which defines a single read() method that returns a char.

Now, the trick is, the compiler can intelligently recognize that the FILE object has a method with the required signature (char read()) and that the STRINGIO object also has a method with that required signature, and so the compiler automatically tags the FILE object and STRINGIO object as implementing readch_obj; so either of those objects can be passed in as an argument to readch().

Note: I'm not saying this is how the Go compiler actually works but this is the basic concept from what I understood.

The language doesn't need to allow for adding methods at runtime for this to work.

If the language did have the scaffolding needed to add methods at runtime, then yes, this technique wouldn't work, and that's probably the fundamental difference between duck typing and this (structural subtyping? not sure what it's called).


Sorry, it's that I was reading on my phone and missed your 'most.' Go's structural typing stuff seems pretty cool, but what I was trying to point out was that it can only get most of the way there, not the whole way. In Ruby, for example, I can define singleton methods, and method names based on a dynamically generated string. I'm not sure how a compiler would be able to do stuff like this. At least, not easily.

Then again, I haven't been keeping up with the more recent advances in compiler stuff, so maybe someone has already done this.


object.respond_to?(:method_name) is just a run-time variation on generics constraints. C# could use a bit more work on its syntax there, but something like Go's inline interface definitions would make it just as nice as Ruby.


This is entirely fair -- my intended point is that C# is /not/ yet concise for operations that are sugar-sweet in Ruby, and that's why I don't love to use it.


But statically typed languages have duck typing too. Except it's called type inference. And F# has it.


This is patently false.

Duck typing: http://en.wikipedia.org/wiki/Duck_typing

Type inference: http://en.wikipedia.org/wiki/Type_inference

Type inference relies upon known, preset interfaces and inheritance from other types. Duck typing relies solely upon the methods being used.

In duck typing, this means that the user finds out at the time of use whether or not a method is supported by an object. Here, the "interface" to a type is not set in stone. It is inherently dynamic and as long as the message you pass is supported, (duck->quack), there isn't a problem.

The mental jump which many people fail to make is to understand that Object Orientation is about message passing, not types. The key questions are "what message are you sending?" and "does the receiver have a response?". Somewhere in small-memory static-land long ago it got dogmatized that the best, or perhaps only, way to tell whether an object will respond is through static definitions of complete interfaces, which came to be very strongly associated with our conception of a "type". Once the compiler has used that information to check that interface constraints are unbroken, it becomes useless and unchangeable.

Ruby is an example of a language that does not use that means of answering the questions. Instead, an object has dynamic metadata about what messages it supports (specifically a hash of the method names), and uses that (malleable) data to generate an answer. It is fundamentally different than type inference in a statically typed language.


While duck typing is not type inference, you can still get the benefits of duck typing and real OO with structural typing, which can be done statically.
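For example, here is a minimal sketch of that idea in Scala, whose structural types are checked at compile time (the class and method names are invented for illustration; dispatch happens via reflection, hence the language import):

    import scala.language.reflectiveCalls

    class Duck  { def quack(): String = "Quack!" }
    class Robot { def quack(): String = "beep: quack" }

    // Accepts anything with a matching quack() method; neither class
    // declares or implements a shared interface.
    def makeItQuack(q: { def quack(): String }): String = q.quack()

    makeItQuack(new Duck)   // "Quack!"
    makeItQuack(new Robot)  // "beep: quack"

A wrong argument - say, makeItQuack(42) - is rejected by the compiler rather than failing at run time.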


Duck typing is a problem when it's done by message name and those names aren't distinguished by namespaces. We're already starting to hear horror stories of Ruby libraries that want the same name to mean different things, especially with monkey-patching.


Heh, I've never run into or even thought of this - it seems like a very rare and very hard problem to solve. It's almost a semantic problem of language and meanings as opposed to a technical one -- I guess the solution would be to find some technical means of adding specificity...

hmm, if you have two objects which both support the method 'run', one of which is a jogger that will move into a 'running' state, and the other is a program which will actually begin executing code, and you have them both in say, an array that gets iterated over and every object is sent a 'run' message.. and they both take one argument and return nothing distinctive, well, that's extremely contrived. But such things happen. Scary.

I'm gonna start thinking about this. Thank you :)


Monkey patching has nothing to do with duck typing. Monkey patching is both incredibly powerful and exceedingly dangerous at the same time, but there is nothing about monkey patching that says you need duck typing. You can do static typing AND have open classes. To pretend that problems related to open classes are somehow connected to duck typing is simply false.


I wouldn't say that it has _nothing_ to do with duck typing. Monkey patching is the ability to update a class's members dynamically. Duck typing is the ability to update an object's members dynamically. Since the overall point is to dynamically create a new member of some object, I think the previous poster can be forgiven.


Duck typing isn't the ability to update an object's members dynamically. There is nothing inherent in the meaning of duck typing that says anything about updating. All it says is: if I ask walk and talk of it, and it can walk and talk, then as far as I'm concerned it's a duck, because that is what a duck was expected to do.

Duck typing is polymorphism w/o inheritance.


> if I ask walk and talk of it, and it can walk and talk, then as far as I'm concerned it's a duck, because that is what a duck was expected to do.

Duck-typing is the idea that an object's type at a particular time is made up of the set of fields and methods it has _at that time_. I suppose that doesn't strictly require updating, but it would be a pretty lame duck without it.

Polymorphism w/o inheritance is a fine description. How do you think that can be accomplished without dynamic updating?


I don't believe duck typing and type inference are really the same thing. For duck typing the type is unimportant; it only cares about the operation. Types matter for type inference. The behavior of structural typing in languages such as Scala is closer to duck typing, IMHO.


The reason most people choose dynamic languages is because the most popular static languages have fairly weak and unhelpful static type systems. The benefit of confirming that a variable is an int isn't worth the effort needed to support that analysis in most static languages, and where it is we can shove the trouble into unit tests.

C# is a little better than the big 3 (C, C++ and Java), but you still trade off between helpfulness and verbosity. F# is a lot better IMO, but it's still fairly obscure, so of course not many people are going to be using it.


Whether it's worth the effort is debatable. When you (not you specifically, in general) design a database, do you use types - do you mark a field as int, or let everything be a string? Yes, when you code you can make sure to only put "integers" (as strings) in that field and everything will work. Maybe it's not the perfect analogy, but I think it touches an important point. From a statically-typed point of view it "feels" dirty. When I look at a method declaration, I want to know that it only works with integers, the same way that when I look at a field in the database I want to be sure to expect only integers, and nothing else.


> When I look at a method declaration, I want to know that it only works with integers, the same way I look at a field in the database I want to be sure to expect only integers, and nothing else.

That's fine, but it doesn't help me with the errors that I make.

I don't make representation errors. I make kind errors. The difference is that the number of apples and the number of bananas are both non-negative integers, but a check for non-negative integers doesn't keep me from adding apples to my bananas inventory. Remember that I do want to add both apples and bananas to my fruit inventory.

Yes, I care about representation, but things shouldn't go south if I happen to have half an apple.


In some statically typed languages, it is possible to have an "Apples" and a "Bananas" type. And it's good to hear that you don't make "representation errors", because I do make these errors - which then blow up precisely after 3 hours of running the Python program.
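As a minimal sketch of that idea in Scala (the type names just mirror the fruit example above):

    case class Apples(count: Double)
    case class Bananas(count: Double)

    // Both wrap a plain number, but the compiler now tells the kinds apart.
    def restockApples(inventory: Apples, delivery: Apples): Apples =
      Apples(inventory.count + delivery.count)

    restockApples(Apples(3), Apples(0.5))    // fine - half an apple is allowed
    // restockApples(Apples(3), Bananas(2))  // rejected at compile time

The representation stays a plain number, so half an apple still works, while adding apples to bananas becomes a compile error.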


Not quite sure what you mean; if you write something like this:

    void DoSomething(Fruit fruit, uint quantity) { .. }

Inside that method you will be guaranteed to receive only apples or bananas and a non-negative integer for quantity.


But merely noting types is not the same as the specifics of a type system. Marking what types a function accepts can be accomplished just as easily with a comment like "# function foo : (int, int) -> int".

But anyway, in many cases, structural typing can make more sense than name-based typing.


And the point is, with a static type system you get automatic verification that the "comment" is accurate.


Indeed, that is the definition of a static type system. And that's a useful thing to have. But it's not infinitely useful. In some cases, the burden of pleasing the type system can be bigger than the burden of checking types yourself where necessary. Static typing can actually get in the way of useful algorithms if (for example) your type system doesn't support generics.

Due to these tradeoffs, in some cases, people can be more productive in a language without the help of a typechecker than they can in certain languages' static type systems.


If you're a new programmer, remember that most programmers aren't trying to figure this out. They're already substantially invested in the language they chose years ago, with a large base of knowledge and already written code.

Static vs. dynamic typing is nearly orthogonal to my programming language choice. Much more important to me are functional features. And all other things being equal, I'd rather have the flexibility of the dynamic languages than the performance and compiler-checking of static ones.

Haskell has some killer features that make me want to use it more. Static types are not on the list.

Maybe it's because all of my projects are small and I don't have to program defensively, but I've never been bitten by a type error that took anywhere near as long to fix as I would have spent using a static type system.

I'm sure in some circumstances there are dynamic language projects that are plagued by type errors, but in my case static typing is a solution looking for a problem.


>Maybe it's because all of my projects are small and I don't have to program defensively, but I've never been bitten by a type error that took anywhere near as long to fix as I would have spent using a static type system.

One of my projects for school turned into a 9k LOC Ruby project. I had precisely zero errors during the entire development cycle that static typing would've detected. Meanwhile, attempting a bit of dynamic code in VB.Net recently resulted in easily a couple hours worth of total waste dodging types while not going totally type-less. And then there's the crappy lambdas and total lack of anonymous or inner functions, so you end up polluting your namespace massively in many cases. (stuck in 3.5 + VS 2008, 2010 adds those) Gah!

I do occasionally miss IDE hinting, which is drop-dead easy in statically-typed languages. But dynamically typed languages allow such ridiculously simple APIs I find it's rarely necessary.


I had precisely zero errors during the entire development cycle that static typing would've detected.

You never get a

  NoMethodError: undefined method `[]' for :foo:Symbol
or similar? Then you are either a much better programmer than me, or we are dealing with confirmation bias[1]. I get such errors regularly. They are trivial to fix and are caught early by tests/specs, but they are still there on a first iteration.

[1] http://en.wikipedia.org/wiki/Confirmation_bias


No doubt that accounts for some of it, yes. But I'm extremely careful what I put into my variables, especially in larger projects, and especially in this one because everything was extremely highly connected. Chuck something in wrong once, and it could take me hours of hunting to find it; it wasn't worth being tricky.


Not to argue. 9K is comparatively small.


As compared to a lot of commercial products, absolutely. But it was still a significantly larger project than I'd attempted before, and I probably ended up writing a good 30k or 40k through the whole time (attempting lots of new things).

It's also still over 10x larger than the .Net project with which I've been fighting. And given my patterns of code, I see no reason why making my previous project 10x larger would have any more difficulties with typing than I had before. In a few years of Ruby coding, I can only recall a few immediately-identified type problems. Static typing seems to me to cost significantly more than its value.


You can get a lot done in 9 kLOC Ruby/Python/etc. Possibly 10x what you could get done in as much Java.


I wonder what features you like from Haskell, as everything seems to be tied to its type system.


I've only written a few small programs in Haskell; one was a basic parser.

The pattern matching syntax made a lot of things trivial to express that would require switch statements or if/else chains in just about any other language, so I loved that. Actually, once I got used to the function definition syntax, I liked that too and felt it was very concise.
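For readers who haven't seen pattern matching, here is a minimal sketch of the same style in Scala (which appears elsewhere in this thread) - a toy expression evaluator with invented names:

    sealed trait Expr
    case class Num(n: Int) extends Expr
    case class Add(l: Expr, r: Expr) extends Expr

    // One case per shape of the data, checked for exhaustiveness;
    // no if/else chain needed.
    def eval(e: Expr): Int = e match {
      case Num(n)    => n
      case Add(l, r) => eval(l) + eval(r)
    }

    eval(Add(Num(1), Add(Num(2), Num(3))))  // 6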

I like the flexibility that strings are lists (arrays?) of characters, which was incredibly useful for implementing a parser. I'm sure that would come in handy in just about any project.

The type system...eh. I did a bit of fighting with it which apparently is normal for newbies, and I can see the advantage of having it during the compilation stage from a performance perspective. It just didn't grab me by the collar like the other features did.


+1 for "static typing is a solution looking for a problem".


I don't think I can completely agree with this. There are tradeoffs and the issue isn't that simple. Dynamic typing is better than bad static typing but so far I'm finding my Scala code is every bit as expressive as Ruby and a lot easier to maintain.


Why is it easier to maintain? What specifically about static typing makes it easier to maintain?


I think it's three things:

First, it's a lot easier to quickly figure out what a function does when it has a type signature. It's strongly self-documenting.

Second, it's easier to figure out what kinds of datastructures are used in a program, because you can see the types of the variables and functions you're working with. For me understanding the underlying datatypes is a big help in getting to grips with a codebase.

Third, the compiler catches a lot of dumb mistakes early. It's no substitute for testing, but a quick compile cycle will rule out a common class of errors and helps me make changes with more confidence.

It's not all roses and there is a price to be paid, but I do think there's a strong benefit.


> First, it's a lot easier to quickly figure out what a function does when it has a type signature. It's strongly self-documenting.

I submit to you that I can show just as many examples where the type signature is of no help. Self-documenting functions exist in all paradigms, and it is a stylistic choice rather than one enforced by any technical considerations.

> Second, it's easier to figure out what kinds of datastructures are used in a program, because you can see the types of the variables and functions you're working with. For me understanding the underlying datatypes is a big help in getting to grips with a codebase.

Interesting point. In languages like Ruby and Clojure, you tend to use a standardized set of highly-optimized structures as opposed to using generics to spin out a lot of them. They tend to conform to high-level protocols (interfaces) much like statically typed languages do.

The flipside to this is that dynamic languages tend to have a lot more ease changing data structures down the road.

> Third, the compiler catches a lot of dumb mistakes early. It's no substitute for testing, but a quick compile cycle will rule out a common class of errors and helps me make changes with more confidence.

This is true, but I'd like to point out that lots of Lisp compilers already do this. They use optional static type inference for optimization and error checking but do not enforce those constraints, merely observe them and react to them.

Personally, I think that's the best of both worlds.


Interesting point. In languages like Ruby and Clojure, you tend to use a standardized set of highly-optimized structures as opposed to using generics to spin out a lot of them. They tend to conform to high-level protocols (interfaces) much like statically typed languages do.

True, but what happens when you decide you really need a list of maps instead of a list of lists? You better hope you have good test coverage when you do this in a dynamic language.

I made a similar change in a fundamental datatype in a Scala program from a list to a symbol -> string map and once I fixed all the compiler errors the whole thing worked correctly on the very first run.


Dynamic languages win in the worse-is-better department. They're easy to implement and easy to learn. You can get a beginner up to speed in a language like Ruby or Python very quickly. This creates a grassroots effect that builds libraries and community around the language because many of those beginners gradually become experts. Dynamic languages are also generally more expressive.

There are definitely some downsides too though, and static languages have gotten a lot better in the last ten years. After many years of working almost exclusively in dynamic langs I'm enjoying having the help of the compiler again in the Scala hacking I've been doing.


Hopefully more concise statically-typed languages will have the benefits of being easy to learn and build up their own grassroots communities.


Maybe so, but the kind of type system that it takes to make a static language as expressive as a dynamic language has some inherent complexity you're not going to be able to eliminate. Static languages can be a bit of a harder sell too, because the benefits of static typing aren't as apparent on smaller codebases.


While the type system might be more complicated, programs are simpler. Dynamic languages often need runtime checks dealing with types, and /that/ complexity is there in either case. Static languages just make a pretty interface for it.


Short Answer: Python idioms are intuitive to me, whereas many statically typed idioms are not -- so it is easier for me to learn new concepts using Python.

Long Answer: For me, part of it is syntax. I have found that using Python has enabled me to learn some programming concepts that I struggled with in .Net. .Net supported them just fine, but somehow the Python approach was intuitive for me, so once I learned it, I could abstract it to a different language. Polymorphism being the primary example with Python.

More recently, I've gravitated toward languages that provide support with both dynamic types and static types (and arguably C# has heavily moved in that direction, but still I find the syntax not intuitive). The hybrid is nice, as you can worry about the higher level algorithm more easily with the dynamic feature, and then shift over to static to take advantage of the features you cite for static.

Earlier this year, I discovered the Cobra language (http://www.cobra-language.com), which boasts a python-like syntax on top of .Net (and supports both dynamic/static typing), and is optionally more strict than standard static typed languages in that it adds the benefits of contracts, nil-checking, and embedded unit tests.


I'm going to speak in generalities here... If you do a lot of work with polymorphism then dynamic languages tend to be a much better fit. With polymorphic designs, you don't care what the type of your instance is, you don't care about its entire interface; you just care: does it respond to this message (or set of messages)?

I have used and continue to use both dynamically and statically typed languages and I find value in both. I do find little value in a static type system that can't do type inference.

As to a couple of your generalities, a tour through Smalltalk would do you well if you think that excellent IDE support and great refactoring are something that springs from static typing.

EDIT:

People talk a good deal about the plus of compile-time type checking. I have done tons of coding in statically typed languages like C, C++ and more, and I think the compiler has caught fewer than a dozen type errors for me, ever - and those were usually because I got lost in pointer hell. Most errors that a compiler would catch because of type have been, in my experience, something there should be a unit test for, and the unit test would have caught them.


C and C++'s type systems aren't nearly as useful as they could be. Haskell's type system, for instance, makes nullable types non-default, and forces you to catch any nulls while still being easy to read - the Maybe monad.

This kind of thing - a complete absence of NullPointerExceptions and segfaults - is what's great about static type checking.
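Scala's Option type is a rough analogue of the same discipline; a minimal sketch (findUser is invented for illustration):

    // A lookup that can fail returns Option[String], never null.
    def findUser(id: Int): Option[String] =
      if (id == 42) Some("jeswin") else None

    // The result cannot be used as a plain String; both cases must be handled.
    val greeting = findUser(42) match {
      case Some(name) => "hello, " + name
      case None       => "no such user"
    }

Forgetting the null check becomes a type error instead of a runtime crash.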


I agree that Haskell's type system is far more useful than C/C++'s. No argument there. I don't find the type systems of C# or Java to be anywhere near as useful as Haskell's.

That is part of the problem with the question... there are many different type systems out there that people would call 'static'.


I do both C# and Ruby on a regular basis, and even with all the sugar, for most of the job C# is nowhere near as concise as Ruby.

I started using C# in 2001 and Ruby in 2004, and even after several years, I find I'm a lot more productive using Ruby.

This is my main reason - I can tackle projects with 2x or 3x fewer people on the team (which means I can also tackle projects alone vs. building a team of 2 or 3).

As well, I find dynamic languages more "fluid" to use.


As of right now, Python, Perl, Ruby and Tcl have larger libraries that are more relevant to everyday work, better documentation, more sample code, etc., than Haskell, OCaml et al.

This is changing. We'll see Haskell and OCaml in the early 2010s where Python and Ruby were in the early '00s. Still a little off the beaten track (as per the Python Paradox) but definitely usable. And if you're deploying compiled code, you don't even need any runtime on your target...


What do you think about the libraries, documentation, and sample code available for Scala or F#? I think they might have a better chance than Haskell or OCaml, no?


I'm excited by the possibility that these languages will make functional programming mainstream. Clojure's another exciting language. But I had to choose - there are only so many hours in the day - and for better or worse, Haskell is where I am focusing my efforts.


So your answer to why you choose dynamic languages has nothing to do with the type system? I.e., the question isn't relevant to you?

Question: what does compiled code have to do with dynamic vs. static typing? You can compile dynamic code, and you can interpret static code. The typing system, or lack thereof, has nothing to do with compilation.


No, I love type inference.

I remember those days when Python was yet to be accepted as a mainstream language. I wanted to use it, but I was writing software for our clients, who would deploy my code on 10,000 Unix boxes. This is back when an estate like that was a huge and unwieldy thing to manage. There was simply no way that our customers were going to roll out and commit to maintaining a whole new runtime across their estate just for me (where "me" is "my project"). So our realistic choices were Perl or compiled code. Nowadays Python has become a "normal" language and no-one bats an eyelid when I list it as a prereq. Took, all told, maybe 7-8 years from when I first thought "Python can give me a serious productivity boost relative to C" to that point.

I have used OCaml "for real" and I am still getting up to speed on Haskell (I love it!), but this time, I can skip all the struggle for acceptance. I can just ship 'em a binary. Is the type system going to give me as big a boost in real productivity (incl. debugging and support overheads) as Python did over C? Well, we shall see...


I still don't see how much of this has anything to do with dynamic vs. static. You seem primarily to be addressing off-topic points which have nothing to do with why you choose dynamic over static.


Your question was:

What are your reasons for choosing a dynamic language?

And my answer was, to get a productivity boost over C. At the time, there was no strongly-typed language that it was feasible to use that offered as much leverage. Now there is, and one of the reasons for that is the new crop of functional type-inferred languages that "play nicely" with existing estates by deploying as compiled code (native, or .NET, or JVM).

Very few people have the luxury of coding in a vacuum. Language choice is very rarely a purely technical one.


It wasn't my question. My question could be boiled down to: since you spent most of your time talking about things that have nothing to do with dynamic vs. static, I take it they aren't really relevant to you. You said no. But your answer still seems to have little to do with static vs. dynamic, and more to do with concrete comparisons that are, at best, orthogonal to the question at hand.


Ermm, it was, it's right there on the page! I answered in great depth, explaining that "dynamic vs static" (again your words) is in fact a question with many variables.

Perhaps you meant to ask about "weakly vs strongly" typed?


I only use dynamic languages for reasons other than their dynamism. Lua is good because it's extremely small and easy to embed, ActionScript because it's the language of the Flash platform, etc.

Static typing can be just as concise as dynamic typing, and sometimes more. For example, Haskell's Maybe monad combined with do-notation lets you write code that has null checks where required without uglifying the code.
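Scala's for-comprehensions over Option give a rough feel for the same pattern (a minimal sketch; parseAge is invented):

    def parseAge(s: String): Option[Int] =
      try Some(s.trim.toInt) catch { case _: NumberFormatException => None }

    // Each step short-circuits to None on failure; no explicit null checks.
    val adult: Option[Int] = for {
      age <- parseAge("42")
      if age >= 18
    } yield age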

The benefits of static type checking haven't really been fully realized in popular languages. Type inference is almost mainstream, but you can go a lot farther with non-nullable types, structural typing, dependent typing, etc.


I think dynamically typed languages represent a kind of history of thought about programming, while statically typed languages have developed largely along their own path. Dynamically typed languages declared victory over "types" decades ago. That hardly signaled the end of language development; it was just one of many decisions made by a group of practitioners. Purely along the axis of dynamic-or-not you can make a bunch of arguments, speculate that a statically typed language can meet any particular feature or benefit a dynamic language has, etc.; but those statically typed languages have had an opportunity cost, and the opportunity they missed wasn't one of types but of all the other aspects of software development - because in any community of practitioners people will hit a wall; the question is just which wall, and how they get past it. I personally think people using dynamically typed languages have spent their time banging their heads against more interesting walls.


I personally think people using dynamically typed languages have spent their time banging their head against more interesting walls.

I'd agree but I think that's because designing a static typing system rigid enough to be useful but flexible enough to be expressive is a lot harder than building a dynamic language. That doesn't mean though that there's nothing to be gained from that hard work. It took a long time to design operating systems that allowed truly robust, compartmentalized multitasking too, but they clearly turned out to be superior for most things in the end.

My instinct is that we're going to see a return of static typing at the cutting edge of production coding but I'm not sure if this current crop of options is going to effect this. Scala seems to me to be the only one with a shot at the mainstream for this round.


I used to use Python for its flexibility during experimentation, interactive prompt (REPL), and conciseness.

Nowadays I use Haskell for the same reasons, and Haskell is as statically typed as they come.

The one single thing that really made dynamic typing obsolete for me was type inference.
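For anyone who hasn't seen it, a minimal sketch of what that looks like (shown in Scala here; Haskell's inference goes even further):

    val xs    = List(1, 2, 3)   // inferred: List[Int]
    val ys    = xs.map(_ * 2.0) // inferred: List[Double]
    val pairs = xs.zip(ys)      // inferred: List[(Int, Double)]

You write no annotations, as in Python, yet every expression has a fully static type.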


It's not a purely technical issue. Some languages are more popular than others, and popularity means more libraries, frameworks and examples. Languages also attract different audiences: those who code C++, Java and ActionScript 3.0 just love piling on boilerplate code and long names like FactoryBuilderFacadeInterface, ApplicationFacadeDependencyHandler.

Other languages, like Ruby or JavaScript, attract people who shorten document.querySelector(id) to jQuery(id), come up with things like Processing (JVM but short syntax school), CoffeeScript, Rails, NodeJS, etc..

AS 3.0 is an interesting example. It's the language of the Flash plugin, which used to be popular among non-programmers as it allowed them to hack things together concisely, without having to learn about types or classes, since old AS 2.0 looked like JS. This was an audience the makers of Flash wanted to keep, but when AS 3.0 came out it chased them away. Online examples became polluted with design patterns, commands, factories, optimizations, complex best practices, long names. Libraries were targeted at advanced coders, there was nothing like jQuery for AS 3.0. When non-programmers see obj.addEventListener(MouseEvent.CLICK, clickHandler) they start shaking. JS frameworks do obj.on("click", onClick) or obj.click(onClick), few irrelevant details.

The language attracted serious developers who were used to big projects. Adobe released Flex, a gigantic framework reminiscent of the Java world, 300-600KB zipped for web deployment. Years later efforts like http://www.hypeframework.org/ tried to rekindle some of the old flame but the non-programmers are still using AS 2.0 or switched to JS.

Did AS 3.0 have to be as verbose? Did it need giant frameworks and Java developers? Did it need types? No. The performance improvements from AS 3.0 types were negligible, they can be left out in a non-strict mode and the difference is 2 FPS, maybe. For typical Flash projects, debugging with types isn't necessary. Supposedly the situation improved performance wise for types, but that's the wrong priority.

JS/HTML5 picked the smarter approach. There will be typed arrays for WebGL, they're ugly, which discourages their use unless actually necessary. Type strict mode applied to an entire app means example and open source code will have it, it pushes everyone to use it.

When subtle psychological differences are applied to large groups, they create obvious effects. The current division between typed verbose and dynamic concise emerged from social forces more than technical issues. There are efforts like Google's Go, typed Ruby, dynamic C# 4.0, typed arrays in JS, but they're all recent and it will take time for developers to discover new preferences.


  > those who code C++, Java and ActionScript 3.0 just love piling on
  > boilerplate code and long names like FactoryBuilderFacadeInterface
Boilerplate code is not caused by the language but by the programmer; I know more than a few examples of compact/clean/well-named code in C++, Java, AS3, etc.

  > since old AS 2.0 looked like JS
No, that was AS 1.0; AS 2.0 was statically checked at compile time.

  > [...] but the non-programmers are still using AS 2.0 or switched to JS.
It's not a problem of being a non-programmer vs. being a programmer; it's a problem of a group of people who learned a certain way to program with AS 2.0 and, when an update occurs to the language, are too lazy to change their habits.

  > Did AS 3.0 have to be as verbose? Did it need giant frameworks and
  > Java developers? Did it need types? No.
You're mixing a bit of everything there. AS 3.0 is not verbose; you can still compile it "AS 1.0 style", just use the compiler MXMLC -ES, and yes, by doing that nothing is statically checked and all built-in classes default to "prototype" mode. (For example: http://hg.mozilla.org/tamarin-central/file/fbecf6c8a86f/core...)

For the giant framework part, yes the Flex framework is big, but it does not define what I would call an official AS 3.0 framework, in fact AS 3.0 at the language level is pretty compact, the "real" framework is the Flash Platform API (or if you prefer the native classes available in the Flash Player or AIR).

So, again, a problem of understanding types. Let's be clear about something here: you seem to think that because JavaScript or AS 1.0 does not require you to assign types to variables, and because they are not statically type checked at compilation or interpretation, there are no types. Sorry, they have types.

Read these articles from Eric Lippert. The JScript Type System, Part One: http://blogs.msdn.com/b/ericlippert/archive/2003/11/05/53336...

"JScript Classic is a weakly typed language. This means that any value of any type may be assigned to any variable without restriction. It is often said -- inaccurately -- that "JScript has only one type". This is true only in the sense that JScript has no restrictions on what data may be assigned to any variable, and in that sense every variable is "the same type" – namely, the "any possible value" type. However, the statement is misleading because it implies that JScript supports no types at all, when in fact it supports six built-in types. The "weak" in "weakly typed" refers to the weak restrictions on variables, not on the type system per se."

There are 8 parts to "The JScript Type System" and IMHO it is a must-read if your language of choice is based on ECMAScript.

  > The performance improvements from AS 3.0 types were negligible, they can be left out in
  > a non-strict mode and the difference is 2 FPS, maybe. For typical Flash projects,
  > debugging with types isn't necessary. Supposedly the situation improved performance
  > wise for types, but that's the wrong priority.
Negligible?

OK, first things first: when you compile AS 1.0/2.0 code, the generated bytecode (in the SWF) is run by the AVM1 (ActionScript Virtual Machine).

The Flash Player supports 2 VMs: AVM1 and AVM2. AVM2 is the Tamarin VM (http://www.mozilla.org/projects/tamarin/).

So when you compile AS 3.0 code, the generated bytecode is run by the AVM2, which supports runtime type annotations and is much, much faster because of that.

But let's look at hard data:

http://www.masonchang.com/blog/2008/4/28/tamarin-benchmarks-...


See the difference with and without type annotations?

Now if you want all the gory details of why the AVM2 is much faster, you can look at http://www.adobe.com/devnet/actionscript/articles/avm2overvi...


"boilerplate code is not caused by the language, but by the programmer, I know more than few examples of compact/clean/well-named code in C++, Java, AS3, etc."

That's right - I said the language attracts a certain type of programmer, who codes with boilerplate. There are quick-and-dirty AS 3.0 libraries, but there is an abundance of frameworks that help you organize code, like PureMVC and Robotlegs, which add a healthy amount of complexity. The JS world has a higher proportion of simple libraries like jQuery than big and heavy ones like Google's Closure, a visitor from Java land.

Flex framework is what Adobe thought would help developers. They spent some effort promoting it and integrating it into their IDE. This means they're not focused on graphics heavy projects that Flash is famous for, but are expanding into the big utilitarian app territory. Such moves attract verbose programmers, who write complex example code etc..

The standard API in AS 3.0 is also more verbose than the AS 1.0, 2.0 or canvas equivalents. This gives advanced programmers better control, while neglecting the needs of novice coders. Adobe is pretty worried about the situation actually, very few novice coders bother with AS 3.0 because they judge syntax visually and AS 3.0 looks longer.

"you can still compile it "AS 1.0 style", just use the compiler MXMLC -ES"

I mentioned that online examples and libraries would use types, which pushes everyone to use them. Using non-strict mode when everyone is using strict is less convenient than just using strict.

I don't know what VM non-strict mode in AS3.0 uses, I thought it uses AVM2, but I shouldn't need to know. I've tried 2D AS 3.0 physics simulations with types and without and the difference was difficult to measure. Most of the time was spent on rendering visuals, not on calculations, so there was no real world speed advantage in that situation. The cases where there would be, like encoding images, need threading anyway.

For a language that isn't a speed demon in the first place, it makes no sense to worry about silly optimizations that are noticeable only in unrealistic benchmarks. Knowing that you need to use int instead of uint to access indexed arrays slightly faster is distracting.

"it's not a problem of being a non-programmer vs being a programmer, it's a problem of a group of people learning a certain way to program with AS 2.0, and when an update occurs to the language are too lazy to change their habits"

I picked up programming a few years ago so I still remember how painful it was to switch languages when I was a novice. Now that I'm not a novice, switching to similar languages is barely noticeable because I understand the fundamentals. Keep in mind, lots of these novices code a simple thing here and there, they don't write large programs so they don't really learn programming. When things change drastically, it makes more sense for these people to stick to AS 2.0 if they can or let a developer handle it.

This is not so much a problem for novices, because they have options, this is a problem for Adobe since novices have options and are choosing Processing and HTML5.

"let be clear about something here, you seem to think that because JavaScript or AS 1.0 does not need to assign types to variables and because they are not statically type checked at compilation or interpretation, that there is no types, sorry they have types."

Yes there are types, but I'm talking about perceived language complexity and boilerplate code psychology. There are few instances where you have to worry about types in JS, but it's a constant time drain in AS 3.0.


I see no reason to use a language that eliminates run-time types. Having type information available at run-time means that I can develop, debug, and analyze a living system. The IDE and refactoring support you can gain from this is in no way inferior to what you can do with a dead, static blueprint system.

My preferred language has dynamic typing, but its common implementations also do type inference at compile time, in order to optimize generic operations and to warn about things that cannot work. Since during development I only recompile the snippet I am working on, while the rest of the system keeps running, the compiler can use all the type information available from the running system.


Maybe I'm missing something, but it seems like having type information available at runtime is orthogonal to static type checking.


> I see no reason to use a language that eliminates run-time types

Euh... even if a language is considered dynamically typed and/or weakly typed, the language still has types.


You can keep type information at runtime and still be statically typed, a la the CLR.


For me, it's all about the interactive prompt. If I can't experiment with my code interactively my productivity falls through the floor. That doesn't rule out statically typed languages - Scala has a reasonable interactive prompt for example - but in my experience such tools are more common (and more mature) with dynamic languages such as Python and JavaScript.


That's probably just a historic accident. Most modern languages have an interactive prompt now. For example Scala, as you mention.

Another factor might be the availability of type inferencing, which makes interactive prompts perhaps more viable, as you can for example type "1 + 1" into the prompt, and its type is automatically deduced.
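For example, in the Scala REPL the inferred type is printed straight back at you:

    scala> 1 + 1
    res0: Int = 2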


When it comes to prototyping nothing beats Ruby or Python. The initial solution won't be pretty or maybe even correct and it will contain tons of bugs but it will give you an indication of the viability of your solution. Nobody starts with a ready made program in their head ready to be coded and dynamic languages usually help you get started with a solution much faster than static ones. At least that's been my experience.


> When it comes to prototyping nothing beats Ruby or Python.

Have you ever tried Prolog? While it has a lot of quirks*, it's an excellent prototyping language. (It's also usually dynamically typed, though some implementations have type annotations.)

Instead of the usual function(arguments) -> result model, it treats everything like a search query, incrementally suggesting every possible variable instantiation that fits the known rules. Since search, backtracking, and unification are built in, a lot of code becomes implicit, and you can prototype things quite rapidly.

* Briefly: I think it would be better as an embedded library (like, say, SQLite or Lua) rather than a standalone language. It's excellent for some things, but IO / side-effects are awkward in its model.


Wish I could upvote you more. I love scala (a lot more than python/ruby), but it means slow prototyping since you have to put good thought into the type system before you write a library. (If done correctly however, you won't need to put any thought into using said library).


Seeing statically typed variants of Python such as ShedSkin or PyPy's RPython, I think that one major annoyance is gone.

You still need polymorphism for some things, and the possibility of "implicit interfaces" (you just write x.foo() without declaring that x implements the foo interface, or that y and z also implement it) - this is also useful when y or z are library classes you cannot change.

Often, the type inference that is needed slows down the compile times quite a bit; and (both with Haskell and with C++) you get incomprehensible error messages when something is wrong but it could have gone wrong in half a dozen places. (C++: 6 lines with "in ...", then "no such method: foo", Haskell: expected type bla->foo->Z x, found a->B c->d)

IMO, static type analysis on top of a dynamic language (e.g. PyDev) gives you most of the benefits of IDE and refactoring support - with the additional coolness that it can just skip the places that are not type-able. (Optional type declarations to put PyDev back on track in the most obvious cases would be great, though).


I can get away with simple dynamic typing in small Erlang projects. But building a large project in Erlang/OTP without static type analysis is hard. You will spend hours debugging stupid bugs which C++/C#/Java compilers usually find for you.

I started using type specs and dialyzer - it finds typing errors and makes building complex systems in Erlang much easier. The only problem is that dialyzer requires the intermediate step of building a PLT, which can sometimes take 20 minutes or more, but only once per project.


Static typing can have the benefits of implicit interfaces without the danger of runtime errors - look at Go's structural typing.


Having to think about types sometimes slows you down (because you have to think about them), sometimes speeds you up (for the reasons you mention).

Sometimes the type system won't type check a perfectly valid program. One thing that I bump into with nearly every program I write in C# or F# is that I want to add a method to a generic type when the type supports some method. Suppose you have a type Bag<T>. Now if T supports an interface IValue with method ComputeValue() I want to add a ComputeValue method to Bag<T> that computes the sum of the values of the things in the Bag. Of course the method would only be available when T supports IValue. This can be done in languages with more expressive type systems, but there are (provably) other perfectly valid programs that they will reject.
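For what it's worth, a minimal sketch of one way Scala expresses that example, using a bounded implicit class (the names just mirror the Bag<T>/IValue example above):

    trait Value { def computeValue: Int }
    class Bag[T](val items: List[T])

    object BagSyntax {
      // computeValue exists on Bag[T] only when T is a Value.
      implicit class ValueBag[T <: Value](bag: Bag[T]) {
        def computeValue: Int = bag.items.map(_.computeValue).sum
      }
    }

    import BagSyntax._
    case class Coin(computeValue: Int) extends Value
    new Bag(List(Coin(1), Coin(2))).computeValue  // 3
    // A Bag[String] simply doesn't have the method.

C# extension methods with a generic constraint can get close, though less directly.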

Another reason against statically typed languages is that the compilers are often slow. Compile+Run takes several seconds (if you're lucky).


Compiling in a language with static typing is a lot faster than running (not to mention writing!) unit tests to achieve 100% path coverage in a language without static typing.

It's also fast enough that I stopped caring years ago. Don't get me wrong, Eclipse is agonizing when it locks up while "helping" me, but it's not due to compiling (it happens just as often when I haven't made any changes).


Working with static types feels like wrestling the code around. Why should the programmer have to care about internal dynamics of a language such as the difference between double, long, int, Integer, Long, short, etc.? The compiler or run-time environment should figure it out. That's one more thing that can break, that I'll have to fix.

It gets really irritating when it's an attribute of an object that's returned by a getter that's wrong and cannot be cast; then I need to figure out if this thing is used anywhere other than the place I'm at and, if it is, create a new method or... at this point the code is just getting in my way.


Yeah, that type (heh) of typing is boring. What is really, really useful, tho', is a type system that can stop you doing things that make no sense in your problem domain. This was the original idea behind Hungarian notation btw: the C compiler can trap things like multiplying a char by a double; it can't trap you adding a weight to a height.


Would be nice to have a mixed dynamic/static typing system. So you can static type when you need the solidity and dynamic type when you want ease of use.


In my experience, no matter how sophisticated a static type system is, at some point it ends up getting in my way. This goes double for languages with very simplistic type systems, such as C#, as they have all of the disadvantages of a static type system with few of its advantages.

Statically typed languages tend to lack good REPLs, a tool I consider indispensable. I'm unaware of any statically typed language that has something as complete and integrated as SLIME, for instance. Even a properly setup IRB or IPython don't seem to have anything that matches them in the statically typed world.

Another problem is that statically typed languages can have long compile cycles. One C# project I worked on made heavy use of PostSharp, the use of which slowed the compile time to nearly a minute. Additionally, whilst Visual Studio updated the ASP templates in real time, there didn't seem to be a way to reload new DLLs without restarting the development server.

Conversely, when I'm hacking in Clojure, updating the entire system can be done without restarting the server, and takes a fraction of a second.

Macros, too, I guess are easier with dynamic typing. When I write a macro in Clojure, it seems a lot easier than when I was using Template Haskell a few years back. Perhaps there's a better system for macros in a statically typed language, but I haven't run across anything like that yet.


Another problem is that statically typed languages can have long compile cycles.

That only applies if you compile all classes at once. NetBeans (and probably others as well) compiles Java classes instantly because it only compiles what's changed.

there didn't seem to be a way to reload new DLLs without restarting the development server.

I have seen environments for Java that could reload JAR-files. Perhaps DLLs are different, but that has little to do with a statically typed language.


That only applies if you compile all classes at once.

Unfortunately, a lot of post-processors like PostSharp aren't smart enough to do that. Even though Visual Studio was only compiling once, PostSharp seemed to require a lot of time to do its thing.

I have seen environments for Java that could reload JAR-files. Perhaps DLLs are different, but that has little to do with a statically typed language.

I'd be interested if anyone knows of an environment for a statically typed language that could do this quickly and without losing run-time data.

In a dynamically typed language, altering the server environment at runtime incurs only the cost of evaluating the code. I can inject code at any point, redefine functions, query the state of the current data structures, and so forth. The entire environment is mutable; something that seems like it would be difficult to do with static types.


For Java reloading, I'd recommend you check out http://www.zeroturnaround.com/jrebel/


To sum it up very quickly:

  statically typed: is-a, has-a
  dynamically typed: is-like-a, has-like-a
Read the introduction of "Dynamic Typing in a Statically Typed Language" (http://lucacardelli.name/Papers/Dyn.ps): "Even in a statically typed language there is often the need to deal with data whose type cannot be determined at compile time."

There are many more interesting papers at http://lucacardelli.name/indexPapers.html#Types and Semantics

If you find that today's C# can be more concise, it's simply because from C# 1.0 to C# 4.0 they added a lot of extensions that make the language more dynamic (see http://en.wikipedia.org/wiki/C_Sharp_4.0 with the use of 'dynamic').

In comparison, if you look at the ActionScript language, which has its roots in ECMA-262 (ECMAScript), from AS 1.0 to AS 3.0 they evolved the language to make it statically typed, but you still keep dynamic functionality like the * (any) type or the option to declare a class dynamic and add members to the prototype.

Beyond that, it can be a long debate to try to prove which one is "better".

For example, combine unit tests and a dynamic language and you don't really feel you're missing anything without type checking. Counter-example: you can have a statically type checked language that compiles and still throws errors at runtime.


You've assumed that statically typed languages are somehow "better" than dynamically typed languages and then asked "Why doesn't everyone use the better languages?" Dynamically typed vs. statically typed is still an open question and is a lot more complicated than "which is better?"

> And on the plus side, you get all the benefits of compiler type checking, better IDE support, and great refactoring support.

Since when? Compile-time type checking is only a benefit if you're worried about type errors, and in my experience the people most concerned with type errors are the proponents of statically typed languages. As for IDEs and refactoring support, my experience has been more along the lines of "C# and Java have good IDEs and refactoring tools; everything else, not so much." I personally find Scala's and Haskell's tool support to be far behind Python's, just for example. Also, Smalltalk arguably has far better tools available than any statically typed language (or any other language, for that matter).

Nobody has proven that statically typed languages are better than dynamically typed ones, nor that applications built with them are less error prone. I would argue that the amazingly widespread use of Ruby and Python, and the average quality of projects written in those languages, is evidence to the contrary.


> I would argue that the amazingly widespread use of Ruby and Python and the average quality of projects written in those languages is evidence to the contrary.

Projects where people are able to choose the language will probably be of higher quality than projects where they're stuck using Java, C#, etc.

To some extent, this is probably independent of the languages themselves - there are a lot of terrible Java programmers because it's a default language, rather than one sought out by curious and motivated programmers.


Regarding tooling, Java has seen massive investment in this area compared to Ruby, Python, etc., which is one of the reasons for its good IDEs.


This is not the only reason. Sometimes you just can't do it in a dynamic language. For example, it is impossible for the IDE to help you with autocomplete in this case:

    def some_function(arg1, arg2):
        arg1.

The IDE can't show you a list of possible completions, because arg1 could be anything.
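
To make that concrete, consider this hypothetical sketch: nothing in the source pins down arg1's type, so no completion list can be right for both call sites.

    def some_function(arg1, arg2):
        return arg1.lower() + str(arg2)   # assumes arg1 is a string

    some_function("Hello", 1)             # fine: arg1 is a str here
    some_function(["a", "b"], 2)          # fails at runtime: lists have no lower()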


After reading through this thread I still haven't seen convincing arguments why statically typed languages are not "better" (EDIT: "better" for typical web/business applications).

I have been programming Python for many years now, along with a mix of C#, Scala and some F#. I find that the programming idioms that I care about are supported in all of them.

There are some valid problems, but IMHO they are not major:

1. Compilation time (But do we have to compile that often? If anything, I find myself re-running code more often in Python, just to make sure I didn't make a spelling mistake!)

2. Duck typing (Yes, good to have. In the future we might see more structural subtyping in statically typed languages.)

But what tipped the scales is my experience with refactoring large projects. In dynamically typed languages, it is scary to make even simple changes in code you haven't touched in a while, unless you have full test coverage, plus insurance. In statically typed languages, this is much easier.

We run a software services company with a policy of refactoring suboptimal code on sight. We try to do it in all projects, but I always see worried looks when we refactor dynamically typed code, and not so much when we are modifying, say, C#.


I think that the dynamic vs. static debate also depends a lot on how you program. For example, there are people who like to write a few lines, then test, write a few more lines, test again, and so on. For them, dynamic languages are perfect.

On the other hand, I am the complete opposite. I often work for hours or days, sometimes even weeks, on a program without ever running it. This is only possible when 99.9% of all errors are immediately caught by the compiler (or IDE). I also document everything I do extensively with Javadocs or similar mechanisms. When I write in dynamic languages, I usually end up writing static type annotations ("// array of string") as comments anyway, without getting the benefit of code completion and refactoring. So effectively, for me, dynamic languages mean more typing rather than less.
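
For instance, a minimal sketch of that habit (the function name is made up): the type information lives in a comment, so no tool checks it or completes against it.

    def total_length(names):
        # names: array of string -> returns int
        # nothing enforces this comment; it's documentation only
        return sum(len(n) for n in names)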


A big advantage of Python over C# is not having to think about types unless you need to; less verbose code is a side benefit. (And honestly, I'd be surprised to find much C# code that couldn't be written more concisely in Python, if performance weren't an issue.)

    def foo(lst):
        if type(lst) is str:
            return do_string_stuff_to(lst)
        else:
            return do_list_stuff_to(lst)
And I'm done. There's a good chance I'll throw the function away or re-write it before I ever care about the sort of edge cases that would get caught by a static compiler. If the algorithm works on a dictionary or set just as well as the list I had in mind, so much the better. In the meantime it's doing its work and helping me get other things done.


> If the algorithm works on a dictionary or set just as well as the list I had in mind, so much the better.

This is the key difference in attitude. I don't want a language that frictionlessly helps me screw up. If what I'm doing might work but nobody actually knows, I want some kind of warning. At the very least I need to add your code to my test coverage.


That example, in a statically typed language, could be handled with overloading or polymorphism, which are both far more concise IMO.

C#'s standard library, for example, has common interfaces for collections so the code will still work on all kinds of lists, arrays, etc.


I believe you meant

    if isinstance(lst, str):


If you really want to be pedantic, it should be:

    if isinstance(lst, basestring):

so that it also works with unicode strings in Python >= 2.3.
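
A quick illustration of the difference in a Python 2 session:

    >>> isinstance(u"abc", str)         # a unicode string is not a str...
    False
    >>> isinstance(u"abc", basestring)  # ...but str and unicode are both basestrings
    True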


As a non-programmer, in the sense that I use programming for math/engineering problems, the main advantage of a dynamic language like Python is that you do not have to compile it.

I'm not a good programmer, so I often have tiny errors and such, and being able to "vim exec.py", fix the mistake, then run "./exec.py" again helps tons.

I do a good amount of work in Matlab and must say I use the command line (REPL?) constantly alongside my programs. It is just so much easier for me to find solutions that way compared to something like C#, and for someone who doesn't always know how a function works, trying it on the command line with the documentation handy is great.


You are confusing dynamic typing with dynamic code interpretation.

There ARE statically typed languages with a REPL: Haskell, for example.

And there are dynamically typed languages without a REPL: Perl, for example (in the core distribution).

Static typing means that types are checked at compile time; for example, in many statically typed languages a list can only contain elements of a single type. Because of these constraints, the compiler can find some errors that would otherwise go undetected until runtime.


And there's the bad-programmer part :) Thanks for the clarification.


I choose Python; it just happens to be dynamically typed.

Not having used a statically typed Python I can't say for sure, but I believe the significant indentation, the standard library, and how the language generally follows "the Zen of Python" matter at least as much as, if not more than, static typing would.


Because you can iterate quickly, and you always have a REPL. You can also change function definitions on the fly, re-evaluate vars, and even more: in Common Lisp, for example, you can change classes on the fly (and I'm sure other languages allow it too).
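
The same trick in Python, as a minimal sketch (the names are made up): replace a method on a live class, and existing instances pick up the change immediately.

    class Greeter:
        def greet(self):
            return "hello"

    g = Greeter()
    print(g.greet())                 # hello

    def excited_greet(self):
        return "HELLO!"

    Greeter.greet = excited_greet    # swap the method at runtime
    print(g.greet())                 # HELLO! -- g sees the new definition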


I'm confused by your statement that statically typed languages are concise. Every time I program in Java, the code size balloons, and a good 30-40% of the code is boilerplate rather than anything to do with the task at hand. Now, this isn't a direct consequence of static typing, but a very high portion of statically typed languages do seem to lean towards a low content-to-code ratio.


Java is not the only statically typed language, and some statically typed languages are almost phenomenally concise.


If you're referring to Haskell and the like, sadly they're not nearly as prevalent.


See Scala for a concise statically typed language.


Or others. A lot of people immediately think of Java and assume statically typed code has to be very verbose. Much of Java's verbosity comes from deliberate design decisions: Java does a lot to limit the programmer's power, for better or worse (pretty much entirely worse when working with competent programmers).


That is most definitely Java's fault for lacking good abstractions. Try Haskell or C#: a much higher content-to-code ratio.



