
This claim: "Python/Ruby/Javascript: my experience is that large systems are difficult to maintain in these languages, as the dynamic typing makes it difficult to refactor large codebases without introducing errors in poorly tested areas of a repository"

-- is unfounded. Even though there may be some good reasons why one might use Go instead of C/C++, I find it hard to justify using Go instead of Python/Ruby/JS/Java (the only criticism of Java - that it's verbose and hard to tune - is questionable as well). I've said this before. I like Go. I contributed to Go. I've used Go and I still use it from time to time. But much like D, I don't think it has a niche.

There are magnitudes more libraries and resources available for Python/Ruby/JS/Java -- thus far, it's been more than enough to sway me into using those languages (mostly JS/Java) instead of Go.




I share the sentiment. People often mention that Go is less verbose than Java as if they were talking about Clojure or something. Go is just a little less verbose than Java (other than in a hello-world example), and targets the same conceptual "level" (same order of abstractions, same "distance" from the metal, although Java can get closer to the metal than Go). So, sure, it feels a little more modern in some respects (and less modern in others), but when considering both languages carefully, I find I really need a magnifying glass to tell the two apart. Even if Go had offered everything that Java does, I still wouldn't have had a compelling reason to switch, because the differences are just too small.

But Go doesn't offer everything Java does. Like you said, Java's ecosystem dwarfs Go's. Java has dynamic linking, runtime code instrumentation, unparalleled tooling, and better performance than Go. The only advantage I see Go has over Java is a shorter startup time, which makes it a reasonable choice for writing command-line programs. As for concurrency constructs, Java is far more flexible than Go, and because goroutines and channels are easy, I've ported them to Java[1] (and Clojure).

Go sure is easy to get started with, but it would have to be 20 times better than it is to make me give up the JVM. In reality, it's just a recent, beginner-friendly Java without the awesomeness of the JVM.

(P.S. I suspect Java's often-mocked factory-factories are simply a result of the huge number of multi-million-LOC programs that have been written in Java. It's just experience, and Go sure doesn't have the abstractions necessary to make engineering large systems any easier. Other recent languages -- sure -- but not Go.)

[1] https://github.com/puniverse/quasar


Me too; that is what made me stop playing with Go and look toward D and Rust.

Initially Go attracted me, mainly because of the Oberon-2 influences (method declarations) and being compiled by default to native code. Java and .NET have AOT compilers, but don't tend to be used that much.

I even tried to make some initial contributions before the 1.0 release, but with time I got a Java 1.0 feeling. The language just throws too much away at a time when enterprises are adopting Scala, Clojure, and F# into their ecosystems.

I wish the Go developers all the best, but personally I don't think the language would be getting all this attention if it weren't being done at Google.

I mean, how often was Limbo discussed here, if ever?


Yes. Rust is an entirely different beast. A truly modern C-level language is something to look for.

There's no doubt Go wouldn't have been discussed here if it hadn't been done at Google. It's a really nice language, and the tooling is nice as well (far better than that of other new non-corporate-sponsored languages), but it doesn't address a need like Rust does, or offer a way to tackle the hardest modern software-development problems like Erlang or Clojure do. As a language, it's not interesting either (say, like Haskell is). It's not even a modern Java (like Kotlin). It's just Java (only severely handicapped, but made a little friendlier).

Then again, all this might not matter. Go is well executed, it's easier for Python devs to adopt than Java, and it's made at Google. And Google is known for making popular Java flavors, so, if Go's particular (few) strengths appeal to some -- why the hell not? There are smart people behind it, and I'm sure we can learn from Go, too.


I don't think it's fair to say that Go doesn't address a need. I volunteer on the Rust project, and the compilation time of the compiler itself is one of the biggest problems at the moment. Take a look at the turnaround times on our automated testing bots:

http://huonw.github.io/isrustfastyet/buildbot/

That's 24 minutes spent compiling each and every pull request! Granted, this is triply exacerbated by the fact that as a self-hosted compiler we have to compile three times, but I'd kill for sub-minute turnaround times for even a single stage (on a beefy dev machine we're down to maybe four minutes per stage, so 12 minutes total). We're putting a lot of focus on reducing this burden for our next release, but that's still time that could have been spent on features.

As I understand it, Go is intended to address this need for systems at Google's scale, where compilation presents enormous time overhead. I'm not a Go user so I can't comment on how well it achieves this, but one way or the other I think it's a really fascinating thing to optimize a language for.


Rust suffers from having to compile LLVM as build dependency.

This will surely improve when Rust no longer requires it.

Go compile times are sweet, true. But Modula-2 and Pascal-dialect compilers were already achieving similar compile speeds on 16-bit systems a few decades ago.

Young developers get impressed by Go compilation times because they never experienced those systems.


Those compile times don't include building LLVM, which, on the rare occasion that a recompile is necessary, will add about five minutes on a beefy desktop and about an hour on our buildbots.


Sure, Go compiles a lot faster than C++, but unlike Rust, Go doesn't operate at the same level as C++, but at the same level as Java. How much faster does Go compile than Java?


I'm pretty sure the compilation time has less to do with the "level" at which it operates. From presentations on the topic, most of the time savings came from making the syntax completely unambiguous. Apparently, in C/C++ and similar languages, some of the longest parts of the compilation process are lexing, parsing, AST-building, etc.

(citations needed)


Go pretty much compiles and runs within the time the JVM needs to even start.


Which JVM?


> Go sure is easy to get started with, but it would have to be 20 times better than it is to make me give up the JVM.

I like Rust because it gives you control (like C++) but does so carefully (unlike C++). In ten years, Go should be a great alternative to Java and Python. In ten years, Rust should be a solid alternative to C and C++. (That is, assuming they both succeed.)

A language like Rust, which is reasonably expressive and safe but fast and with little overhead, should become a very desirable language as hardware improvements become more marginal (adding more cores eventually brings diminishing returns), and as battery life becomes more important with mobile devices.

But the portability problem is solved really well by the JVM. I'm hoping Rust will make writing cross-platform, native code easier.


You have "ported" a green threads implementation for Java? I'd really like to know more about that.


I call them fibers. Or lightweight threads. They're like Erlang processes or goroutines. You can read more about it here: http://blog.paralleluniverse.co/post/49445260575/quasar-puls...

The library is very much in active development.


You mention Kilim (but perhaps not in a very good light)! Very cool.

To be honest, I was surprised by what you could do with Kilim (and by the awesome robustness of the system). Unfortunately, I don't think Kilim has been updated for ASM 4.0 -- your library looks interesting, though; I will certainly take a look at it.


I have nothing but admiration for Kilim, and I considered using it, but I needed something more modular.


It's not easy, but certainly possible. I contributed to Kilim[1], a microthreading Java library built by the brilliant Sriram Srinivasan.

[1]http://www.malhar.net/sriram/kilim/


With cgo, Go can get "just as close" to the metal too. But I don't even think a cgo vs. JNI benchmark would be substantive. Most people don't use a "medium-level" programming language to write low-level code, anyway.

Like you say, Java is barely any more verbose and arguably just as powerful as Go, with many times more documentation and resources.


Oh, I would say Java is much, much more powerful than Go. Other than dynamic code loading and runtime instrumentation, even when it comes to concurrency you have your choice of schedulers and control over OS threads.


I agree, but I think fans of Go would not :P Hence my tentative "arguably."


This claim: [...] -- is unfounded.

Actually, most of that enumeration consists of shallow clichés. For instance, take the description of Java:

> Java: too verbose,

Having written a fair share of Go and Java code, I have to say the difference is not all that profound. The usual boilerplate that people point to is the construction of a BufferedReader/Writer, but Go has its share of boilerplate as well (e.g. error handling). Java currently has the advantage that IDEs can quickly generate whatever boilerplate is necessary. Given that Go is easy to parse and has a simple module system, there will probably be fairly complete IDEs for Go as well.

> too many FactoryFactories

That very much depends on what libraries or frameworks you use. I have written lots of ML and NLP code in Java over the last half year or so, and I can count on one hand the number of insane 'FactoryBuilderProxy'-like classes that I encountered. Wait a bit and the architecture astronauts will be writing Go packages too ;).


> Actually, most of that enumeration consists of shallow clichés.

You can apply this to most of the writing about programming languages out there. My view might be skewed on HN, but it seems like the majority of this stuff is put out by programmers trying to promote themselves.


I was about to post the exact same comment, with the above quote from OP in my clipboard.

In my experience, this claim is made by (often really good, experienced, intelligent) coders who have a lot of previous experience with static typing and aren't comfortable -- whether because of a lack of time and sheer LOC, or because of some mental block they've doubled down on ("where's my 'extract method' menu option???") -- with the techniques and peculiar challenges of dynamically typed refactoring.

I work on a very large application/system of applications in a dynamic environment, and I've never encountered refactoring or maintenance issues that I thought would have been any easier in my previous life, 10 yrs in static typing.

The other thing to keep in mind is that -- at least when comparing something like Rails to Java -- there will be a lot less code, and this is a significant contribution to code management. That factor, plus experience and acclimation to the techniques of dynamic-language editing/replacing/finding/grepping and so on, plus solid TDD practices -- maintenance has never been an issue for me, and I've had a decent amount of time in large codebases in both dynamic and static environments.


With a static language, you get more with less. If you didn't have any testing framework, the compiler would tell you where you messed up. So, there's that, barring "well you should have had testing code to begin with".

Grepping is ghetto, second-class. An IDE with built-in refactorings has a much higher guarantee of hitting the right artifacts, especially when names collide on shared substrings. Again, barring "well, you should have named things better".

Having spent the majority of my career with dynamic languages, and now as a recent convert to static ones, I fail to see the allure of dynamic languages at a certain scale. You can shoot yourself in the foot in any language, but I think it is easier and safer to crawl out of a static mess than out of a dynamic one.

And, I would argue the majority of applications do not demand the level of dynamism that dynamic languages are capable of, making it a waste.


> Grepping is ghetto, second-class. Whereas an IDE with built-in refactorings has a much higher guarantee of hitting the right artifacts, especially with shared substring collision.

Of course that only works when the tooling exists, which for instance it does not for Go but does for Python. Doesn't for Haskell but does for Ruby (Jetbrains has done a pretty good job there).


Well, you also don't have to rely on the IDE or grep since your incorrect code just won't compile.


This claim: "Python/Ruby/Javascript: my experience is that large systems are difficult to maintain in these languages, as the dynamic typing makes it difficult to refactor large codebases without introducing errors in poorly tested areas of a repository"

-- is unfounded.

Not necessarily unfounded; you simply don't have access to data on it.

I am not at liberty to share details, but I have in fact seen data from a large company based on many internal projects that found that initial development was faster but long-term maintenance costs were much higher for stuff written in dynamic languages like Python and Ruby than in static languages like Java and C++.

The cost difference was both large and real. As someone who mostly does dynamic languages I didn't like the conclusion, but I couldn't argue with the numbers.


> initial development was faster but long-term maintenance costs were much higher for stuff written in dynamic languages like Python and Ruby than in static languages like Java and C++.

I don't have any hard data, but this feels right. I wonder what would be the results with Clojure, which certainly isn't statically typed, but doesn't do duck-typed function dispatch (like Ruby and Python, and even Go) either.


It's a misnomer to say that Go uses duck typing, because an object must statically implement a complete interface, not just the subset that's hopefully sufficient for what happens at runtime.


Sorry, you're right.


> I have in fact seen data from a large company based on many internal projects that found that initial development was faster but long-term maintenance costs were much higher for stuff written in dynamic languages like Python and Ruby than in static languages like Java and C++.

A big part of this is the shortsightedness of the companies themselves. There are steps you can take to ensure that you can manage types in dynamic languages. Very often, companies simply don't take these steps. It's not programmers that need babysitting by tools. It's big enterprise.


> It's not programmers that need babysitting by tools. It's big enterprise.

If one language is easier for "big enterprise" to maintain than another, then what's the problem? Obviously different languages have different strengths.


The claim 'big enterprise is better served by a statically typed language' is a different claim than 'dynamically typed languages cannot be used for large projects.'


I only make this claim for the "typical" big enterprise.


> If one language is easier for "big enterprise" to maintain than another, then what's the problem?

I never said there is a problem. There is a lot of inefficiency in big enterprise, but since they're basically sitting on a formula for printing money, it's often not a problem. (Until they start to get outcompeted.)


> A big part of this is the shortsightedness of the companies themselves. There are steps you can take to ensure that you can manage types in dynamic languages. Very often, companies simply don't take these steps. It's not programmers that need babysitting by tools. It's big enterprise.

I am not at liberty to discuss details, but I would be shocked if your theory was an accurate explanation of the data that I saw.


> This claim: "Python/Ruby/Javascript: my experience is that large systems are difficult to maintain in these languages, as the dynamic typing makes it difficult to refactor large codebases without introducing errors in poorly tested areas of a repository" -- is unfounded.

Why is it unfounded? The author clearly qualifies his claim with "my experience". That's my experience too. I wouldn't extrapolate it to all programmers and in all scenarios, but dynamic typing just has never worked that well for me in larger projects.

It could be because I'm not as diligent about writing unit tests, or because I'm not intelligent enough to reason about large amounts of code without static types. Regardless of which it is, it's my experience, and it certainly isn't "unfounded."


I don't think reasoning about types is a matter of intelligence. I happen to share your experience. It is more of a cognitive burden, exacerbated by less diligent coworkers. No amount of "convention within dynamism" can compete with first class compile time type safety.

If I were to babysit a codebase by myself, maybe I would not feel this way. But with regards to professional team development on a large code base, the ship has sailed on dynamic languages.


I fully agree.

Dynamic languages don't scale on the typical enterprise Fortune 500 with three development sites and 50+ developers, as an example of the typical project sizes I work on.

The main reasons tend to be:

- Lack of unit testing; yes, even in 2013, most enterprise managers would rather see that time spent on "real" coding

- Massive code size, hard to navigate with just grep/find

- Skill sets vary too much across teams, especially if developers are seen as cogs


It's weird. You mention that the claim is unfounded, and then you have all this text that follows, but none of it seems related to supporting your point.

How is the claim that [X at large is difficult to maintain] unfounded?

I happen to agree. I am (was?) a Perl/JavaScript developer. I find both to be sorely lacking at a certain size. And by extension I assume Python and Ruby to be the exact same (under the shared umbrella of lacking static type safety).


@dvt, this is the OP here.

I was hoping that those who already agree with me about dynamic languages would come to understand that Go is different in this respect. I did a lousy (i.e., nonexistent) job making a case to those who don't agree with me [yet! :)] about dynamic languages, though.

I will write a followup post later this week about the long-term maintenance problems associated with languages in the python/ruby/javascript family. I don't think they're "bad" (I was known to advocate for python in certain situations when I was at Google), but they're often inappropriate, and it is my sense that many developers haven't had the requisite large-dynamic-language-project trauma yet to understand that from firsthand experience. (The toughest part about those traumas is that they happen so late in a project's lifecycle that there's no quick way back to safety...)

So I will try to make that case in a future post. Thanks for your thoughts.


Dynamic languages work out great when code coverage is ~100% at every run, and runs are short.

Virtually all programs start out that way, so dynamic languages feel great.

As they grow, the pain creeps in very slowly. As you said, by the time the programmer realizes he's in hell, it's too late to fix it.


> by the time the programmer realizes he's in hell, it's too late to fix it.

From what I've seen, it's more that management doesn't want to take resources from fighting fires to move some gasoline. A disciplined group can even take rat's nest code and whip it into shape: but only if management is clueful enough to make that a priority. Usually, they're making decisions on a short-term basis.


Exactly. With proper discipline and true 100% test coverage, dynamic languages work well. But over time, that ideal is a challenge for most software organizations to actually live by. Or that's been my experience... it's one of those theory vs. practice situations.


Maybe you'll convince the rest of us! If you do write another blog post, try to include the recent indie nightmare that was purportedly caused by Go[1]. I think it's relevant here (since we're discussing large projects, after all).

[1]http://forums.thedailywtf.com/forums/t/27755.aspx -- it was also on HN but I'm too lazy to dig out the thread.


Ah, I hadn't seen that, thanks!

For what it's worth, "Go" as a language is not really implicated in that; it's more the `go` command-line suite that was causing trouble. I would also contend that the devs were being foolish to do what they did... assuming everything is in a git repo, the toolchain makes proper use of submodules, and to my mind this sounds like a case of developers fundamentally misunderstanding git, not Go per se.

But my "railing on Rails" (and, to a lesser extent, Node, Django, etc.) will not focus on Go... it's more of a general critique of the lifecycle of large software projects written in dynamic languages.


> There are magnitudes more libraries and resources available for Python/Ruby/JS/Java

Those languages have been out longer. I think Go will get there.

EDIT: cgo dramatically expands the available libraries. I hear a lot of people say "Go doesn't have X", but if X is already implemented in C (assuming some conditions are met), then you need not reinvent the wheel. I've been using Go as a concurrent parent over C functions for my geospatial research... a really nice fit when you need to do 100 gigs' worth of XYZ coordinate transforms.


> I think Go will get there.

Possibly, but the language has very little to add over the incumbents. I think many Python programmers are attracted to Go because it's easy and so much faster than Python. But for veteran Java programmers, Go feels like a handicapped Java.

I will admit that it's probably faster to write a short Go program than a Java one. But on the JVM I'd use Clojure for short-and-sweet stuff anyway, and use Java when I need the big guns.


As a veteran Java programmer, I like that Go is a handicapped Java. Please force all of my coworkers to use delegation rather than implementation inheritance and make CSP-style concurrency easier to code than synchronized-style.


In that case, wouldn't you also want a more thorough and thought-out solution to concurrency, like Erlang and Clojure offer? And if you prefer the familiarity of a C-like syntax, Kotlin is just what you're looking for. It won't force you to use delegation, but Go also leaves the door open to a whole class of problems (especially concerning concurrency).


Communicating sequential processes is pretty well thought out. I haven't used much Erlang, but I was under the impression it uses a similar model to Go (green threads, message passing between them).

Can you explain what you mean by 'more thought out'? If you're talking about STM or pmap and friends, then I'm very, very unimpressed by them (see Amdahl's law for why). I can do shit really slowly in one thread just fine.


Then why not both CSP and immutability? You can't really have serious concurrency without giving serious thought to managing state.

Erlang is really all about managing, and isolating, state. Clojure, too, has great support for CSP (with care for state) in core.async[1] and Pulsar[2].

[1] http://clojure.com/blog/2013/06/28/clojure-core-async-channe...

[2] http://puniverse.github.io/pulsar/


Immutability's great; I pass around Callables of final variables in Java all the time to accomplish CSP and immutability.

That's not to say you can only have immutability, though; we're humans here, capable of reasoning about happens-before relationships. I wouldn't implement a parallel linear-algebra library, for example, that didn't use mutable just-plain-arrays of floating-point values.


Clojure doesn't *just* have immutability. It has really good reference types for managing mutable state when you need it, as well as easy interop to use plain arrays if you need them. Immutability is strongly encouraged, but mostly at the interface level: if a function is referentially transparent, most people don't mind if it's internally bashing on a mutable array.

It's great to have the vast majority of your stuff be immutable, and to have it be clear when you're using mutation. Personally, I think avoiding mutable state is more important for maintenance than dynamic vs. static typing (though I've recently become more interested in static typing).


Yeah, I read the Clojure book back in 2009 and used it a little. I thought it had a lot of neat ideas, but when it comes to just getting a job done, if it's not a problem where pure functional programming brings a lot of value (things that really benefit from code-as-data), I'd rather just write boring old procedural code. Easier to read, easier to reason about.


I strongly disagree with procedural code being easier to read and particularly easier to reason about. I haven't run into many (any) problems in the last few years that I thought functional programming didn't bring a lot of value. The code-as-data is kind of an orthogonal issue, most functional languages aren't homoiconic.

Edit: also Clojure isn't pure.


That's fine, you're just in a tiny minority.

Code-as-data is actually a win that you can't get from procedural languages, which is why I brought it up. It makes solutions to complex configuration spaces possible that aren't even conceivable in procedural languages. Everything else is just syntax.


> But for veteran Java programmers, Go feels like a handicapped Java.

Assuming that the Go toolset evolves to a point where it matches Java performance, does Go's license (and non-association with Oracle) offer any advantage?


For some, perhaps.


Would you consider Scala for any use case?


Not personally, but only because I really don't like Scala. Some would say it has many use-cases. Scala is a chameleon of a language, which some may consider a strength. I think it makes Scala a non-language: a wonderful compiler, but zero coherence and a level of complexity which few organizations will tolerate in a language.

My guess is that most Scala users simply like it as a better Java, and Kotlin does a better job at that. Kotlin is what Scala should have been (and wanted to be) before it was overcome with a desire to prove that a compiler can do all sorts of crazy stuff. It's hard for me to understand what problem Scala is trying to solve (other than the challenge of writing a compiler with lots and lots of features), but whatever it is, the language is constantly becoming harder to understand, so its mysterious goal had better be really good.

So, for me, Kotlin is the true Scala. Problem is, Kotlin is very, very young, and it's way too early to tell if it will ever take off.


  > what Scala should have been (and wanted to be) before it
  > was overcome with a desire to prove that a compiler can 
  > do all sorts of crazy stuff. It's hard for me to 
  > understand what problem Scala is trying to solve (other 
  > than the challenge of writing a compiler with lots and 
  > lots of features), but whatever it is, the language is 
  > constantly becoming harder to understand, so its 
  > mysterious goal should better be really good.
This is a bit hard to follow ... I'd love to see some examples.


I can give plenty, but I'll try to keep it short. First, what is the problem Scala is trying to solve? I know that Erlang and Clojure try to solve the problem of writing concurrent code (and fault-tolerant code in Erlang's case). Haskell tries to make writing correct code easier. Ruby and Python were made for ease and productivity, and both Ruby and Clojure are great for DSLs. Java and C are used nowadays for performance, and Java is relatively good for architecting huge software systems.

What is Scala for? If I'm hard-pressed to give an answer, I'd say, "better productivity than Java, in a statically typed language, with good performance". Now that's great, and Kotlin is all that, too.

Why the immutable data structures, then? To make concurrency better? In that case, why is mutability just as easy? And what are implicits and these new cringe-inducing macros for? DSLs? Why would a high-performance, statically typed language make it easy to write DSLs? Is it to introduce developers to the wonderful high-level abstractions of FP? Why all the OOP, then? Oh, it's to combine the two; in that case, why do they feel so strenuously glued together (classes vs. case classes; an entire collections package replicated twice, once for the mutable case and once for the immutable)?

So the language offers a powerful compiler but absolutely no guidance on how a program should be written. If only Scala had somehow provided all of these features and stayed elegant; but man, it would take you weeks just to understand how a freaking Scala collection works, just because the language designers wanted to be so clever and prove that you could transform collections and still never require casts. It seems that at every turn Scala favors cleverness over clarity, and features over a cohesive philosophy. Scala chooses, over and over, to address more goals (most of them really unimportant), and in the process has lost the most important thing in a language: coherence.

Scala sees something nice in another language and immediately adopts it. And I gotta say, writing a compiler that compiles code that's both JavaScript and Haskell is an impressive feat of engineering. But it comes at such a high price...


This is quite an incredibly confused rant. :-)

I think it is totally OK to dislike a language for pretty much any reason, but that wall of text reads a lot like “I never actually used the language, but here are some things I read on the internet which sounded plausible to hate”, which is quite disappointing, imho.


I've had a long history with Scala, but have only written about 500 lines of code using it. My feelings towards it had this trajectory: great hope, realistic hope, caution, suspicion, confusion, disappointment, pity.

Around 2006 I was working at a pretty large Java shop, and had hoped to convince the whole organization to gradually switch to Scala. There was one thing that really bothered me at the time, which was the inclusion of XML in the language. I wasn't too fond of XML, didn't think it would last, and thought it a sign of thoughtless trend-following on the part of the language designers, but I liked pretty much everything else. I really liked traits, I liked pattern matching; I really liked lambdas. I thought the language would never win any points for elegance and grace, but at least it was powerful. In any case, the language was young, and I knew I would have to let it mature before there was a real chance of it being adopted in such a large organization, so I kept close tabs.

Shortly after, implicits were introduced (if I have my chronology straight), and I then noticed that scaladoc only made an API even harder to understand, but I thought this could be resolved. Then structural types were introduced, and a big red warning light went off in my head. By 2009-2010 it was clear that a large organization like the one I was working at would never adopt Scala; it was too unwieldy. Then collections were revamped, and Scala became the only language in existence whose automatically generated documentation ensured that an API could never be understood. The designers' taste, or lack thereof (taste means choice; preference), was clear. I was then introduced to Clojure and learned that an extremely powerful language can be extremely elegant at the same time, and that a language can really help you program (rather than confuse you with "constructs") by adopting a coherent philosophy. I pretty much abandoned any hope of ever liking Scala again (or recommending it for a large organization), but I swear to you that I still thought, "the Scala guys haven't adopted macros yet in spite of their lispy awesomeness; perhaps there's some hope for them yet; maybe they finally realized that mixing ice cream, steak, and pizza in a bowl does not make a good salad". We all know how that turned out.

I think I'm a pragmatist, but leaving aside the total incoherence of Scala, it has become so inelegant, so ungraceful, that I wouldn't use it for that reason alone, especially considering that most modern (and non-modern) languages value elegance. It's as if C++ hasn't taught us anything; as if programmers need to make a binary choice between power and beauty.

In the meantime, Scaladoc has actually improved, but that's just too little too late.


PART TWO

  There was one thing that really bothered me at the time, 
  which was the inclusion of XML in the language. I wasn't 
  too fond of XML, didn't think it would last, and thought 
  it a sign of thoughtless trend-following on the part of 
  the language designers, but I liked pretty much 
  everything else.
Good news: You'll be able to delete the scala-xml.jar file. Done. No XML support in the language.

  Then structural types were introduced, and a big red 
  warning light went off in my head.
They are a simple generalization and remove arbitrary restrictions on what can and can't be a type. A win for consistency. They become crucial if you want to interoperate with prototype-based languages (JavaScript, for instance), so given the hype around JavaScript today, I think the language designers made the right bet back then. I don't use structural types much, but a lot of people seem to be so excited about them that they designed a whole language around the concept (Golang).

  I then noticed that scaladoc only made an API even harder 
  to understand, but I thought this could be resolved. 
  Scala became the only language in existence whose 
  automatically-generated documentation ensured that an API 
  could never be understood.
I don't understand what you mean. Could you explain?

  I think I'm a pragmatist, but leaving aside the total 
  incoherence of Scala, it has become so inelegant, so 
  ungraceful, that I wouldn't use it for that reason alone, 
  especially considering that most modern (and non-modern) 
  languages value elegance.
As someone who actually uses the language and undertakes a lot of comparisons with other languages to better understand the state of the art and existing solutions before designing APIs, I totally disagree with that. There are not many languages out there which consider consistency and elegance to be as important as Scala does. In Scala, things can and will be rejected or removed for failing to live up to these standards alone.


Your effort is very much appreciated. There, I upvoted both well thought-out answers.

I would never say that Scala's designers are stupid. Far from it. The Scala compiler is a work of brilliance. And, obviously, every feature, as you so meticulously presented, has a purpose; it tries to solve a problem. But your explanations, I feel, only prove my point. Many of them are along the lines of "because sometimes you need that" ("Some algorithms work best", "a good tool to solve some problems", "for some things it makes sense"...). While that is absolutely true, and every practical language, be it a programming language or a spoken language, needs versatility and irregular forms, it seems like Scala tries to take these problems on one at a time rather than spending most of the intellectual effort on defining the common case.

For example -- and this is an important philosophical point of disagreement -- you say of macros: "... instead of having to resort to such terrible things as annotation processors, bytecode rewriting and Java agents." This, imho, is the WRONG answer. The problem areas that Java addresses by what you call "terrible" means are highly irregular; highly uncommon. They should be addressed by "terrible" means if your goal is a simple language. Yet Scala seems to want to address every problem with a language feature, and in this case it's a huge one.

On the other hand, when I look at Erlang and Clojure (or Ruby, though I'm less familiar with it), I see languages that were designed by people who sat down and thought long and hard about which are the top one, two or three most burning problems of software development, and then tackled those and only those. Everything else would be solved possibly "terribly" (though it would be solved). Rich Hickey thought long and hard and came to the conclusion that while OOP might be the right solution sometimes, in the end it's more trouble than it's worth, and people should not generally use it to write most programs. He may be wrong, and he may be right, but he made a decision. He thinks (I guess) that if your particular problem absolutely requires OOP, then you're better off using a different language, one that's been lovingly crafted for that purpose.

This is extremely important. A coherent language says, "for my problem domain these are the tools you should use". A general-purpose coherent language adds "... and most problems fall in this problem domain". For whatever is left, use other, better suited languages. Scala never says this. For every problem, big and small, it tries to find a solution in Scala. I mean, it's running on the JVM for god's sake, and interoperability on the JVM is particularly easy. Why not come out and say, "DSLs are great; we absolutely love them; if you want to use them, write them in, say, Groovy"?

I did not intend to ask why would you ever need this feature or that? What I asked was, why must they all be in the same language? If you had said, "look, Scala just tried to do this, but because of sheer genius it just happens to do that, too", that would have been a good answer. But you didn't. Each feature is there to solve a different problem. That's why Scala lacks coherence.

A non-coherent language says, "here are the tools you can use". It says, "in your programming endeavors you might some day encounter this byzantine problem, and guess what? We've got a tool just for that!" It lays out a huge set of tools, all usable at some point or another, all serving a purpose, but it doesn't say "I think you should rarely use this tool or that, so I'm leaving them out of the toolbox; when you need them, buy them separately". (Worst of all, it gives you a bulldozer when all you need is a hammer. That's why it's unwieldy.)

These are two competing philosophies, but for modern software development, the latter is the wrong choice and the former is the right one. Software systems are getting bigger and more complicated as it is, while programmers aren't getting smarter. Some challenges are much more important than others. Scala chooses to be Jack of all trades and master of none[1] in the very discipline that needs the opposite approach.

[1] What is the one or two things Scala is better at than any other language? For Clojure it's managing state; for Erlang it's fault tolerance. Both are at the very top in some other aspects as well. But what does Scala do better than anyone else? (and is that thing important? You might say it's best at marrying OOP and FP -- though even if that's the case I'd say being best doesn't mean you're good enough -- but I don't think that anyone would say that combining these two paradigms is what the software industry needs most. Or, you might say, typed OOP. But typed OOP is, again, a compiler feature, not a solution to a burning problem)


I think we have a major philosophical difference and talking a bit past each other.

Here are the two things why I am using Scala:

1. Confidence

Scala gives me the confidence that I can build software the way I imagine, I can focus on the user of my code, not on making the compiler happy.

While there are plenty of languages which make easy and medium problems nice to solve (Clojure and Erlang certainly belong to this group), Scala is one of the few languages which keeps supporting me regardless of whether my problem is easy, medium or incredibly hard.

In my experience, the work on making hard problems easier has had a huge trickle-down effect which in turn improved Scala's problem-solving capabilities for simple and medium issues.

I think the language is better for that and certainly ahead of Erlang and Clojure here.

While most hard problems are not common, they are often fundamental. Not being able to solve some issue in the best possible way can have a huge negative impact on the whole application and library design. That's why, for instance, making it easier to create macros wasn't the first problem to concentrate on. Instead, developers made sure that users of macros had the best possible experience and focused on having one unified API for reflection, the macros and the compiler, hugely simplifying semantics while re-using battle-proven code. Macros make it possible to pull more functionality out of the language and the compiler, into libraries. For instance, C# 5 introduced huge language changes by adding async/await to the language. In Scala, no language change is necessary: support for async/await is just a library.

Unlike macros in most other languages and inferior approaches like Java's, Scala macros are type-checked as regular Scala code at the definition site, as is the macro expansion at the call site, removing huge amounts of tricky issues all at once.

Great care is taken to make and keep Scala a highly regular language with only the minimal amount of hardcoding necessary to make things work. Unlike Clojure, it doesn't have special-cased syntax for collections and a few “blessed collection types” shipping with the language. Unlike Erlang, Actors are not built into the language.

In both cases, Scala avoids irregularity by enabling users to build libraries which can be improved and be replaced without much trouble.

  Yet Scala seems to want to address every problem with
  a language feature, and in this case it's a huge one.
This, for instance, is something I would call blatantly wrong. You are confusing Scala with a language like C# or C++, which add tons of features to address every fashionable problem; Scala keeps its feature count low and orthogonal but manages to solve a lot of those problems by just being a better designed, more expressive language.

  On the other hand, when I look at Erlang and Clojure 
  (or Ruby, though I'm less familiar with it), I see 
  languages that were designed by people who sat down 
  and thought long and hard about which are the top one, 
  two or three most burning problems of software 
  development, and then tackled those and only those.
Well, that's nice, but I think it is even better that some people decided to bite the bullet and improve a lot of things instead of building yet another language which improves on the parts its creator found subjectively important and regresses on dozens of others.

Is it hard to build a language with these intentions? Sure! Is that a reason not to do it? Absolutely not. I think one area where Scala has proven tons of people wrong is OO/FP. People had been saying for decades that OO and FP are fundamentally opposed to each other. Scala just went ahead and proved them wrong, showing that just because some earlier approaches like OCaml or F# are not that good doesn't mean it is impossible. People have also been claiming that there will always be an impedance mismatch between languages and databases. Scala went ahead and showed that that need not be the case.

I want the best OO functionality combined with the best FP functionality. I want to be able to use higher-order abstractions combined with the best performance and efficiency. I want libraries written in the best possible way, not in the way the language decided it was convenient. I want to use the right tools for the right job without having to migrate from one language to another.

  He may be wrong, and he may be right, but he made a decision.
It's 2013. Let's stop forcing people to make pointless decisions. I just won't choose between things if I can have both, combined into a consistent library.

Clojure and Erlang just don't deliver here, and the claim that Clojure is the best language for managing state is highly debatable, too.

  I think you should rarely use this tool or that, so I'm 
  leaving them out of the toolbox. When you need them, buy 
  them separately
This is by the way exactly what Scala says. The language ships with the tools to enable people to build libraries. By default, everything is left out.

Don't get me wrong, a language should be as easy as possible — but not easier.

2. Community

It is pretty nonsensical to ask "What are the one or two things Scala is better at than any other language?". There are plenty of things it does better, because "good enough" is just not good enough for people in the Scala community.

In general, the Scala community is highly critical of every aspect and tends to push things to the current state of the art, or beyond it, if they feel something can be solved in a better way. This has led to a huge increase in consistency and quality throughout the ecosystem, so that having a few good parts and a lot of mediocre parts is just not acceptable to most Scala developers anymore. They demand the best tools one can possibly build.

Anyway, your use of "coherent" is getting clearer, but imho it makes less and less sense. You are basically asking for a silver bullet and are unhappy that Scala tells you that for many problems there isn't one. I think this is one of the core advantages of the community: it doesn't try to sell you some "ultimate solution", avoids ideological bullshit and treats people as grown-ups.

For instance, Scala's Akka team (those who work on concurrency-related libraries) gave an interesting talk recently where they demonstrated something like 9 different approaches/techniques for tackling concurrency, all of them with different benefits and drawbacks, with the main conclusion being "pick your poison".

I think this is one of the core distinctions between Scala's diverse community and other, more anglo-saxon-centric communities: people who have grown up in the US just love to swallow shallow marketing nonsense and respond extremely well to claims about "one true way" or "silver bullets".

If somebody came with that approach to the Scala world, people would tell him/her that he/she is either lacking experience, has poor judgement, or probably both and show him/her why he/she is wrong.

The way people carefully evaluate different approaches and document their pros and cons instead of following the next hype is exactly why I'm using Scala.

Scala's strength is shipping efficient, reliable and fault-tolerant software at a rapid pace.


Well, best of luck with Scala, then. I am aware that there are people out there who like Scala, some of them even like it for the reasons you mention, and some of those even seem to find it elegant (BTW, I watched a talk[1] by Martin Odersky in which he tries to explain why he thinks all those Scala features should be crammed into a single language; even he didn't seem half as convinced as you are :)). It's good to have choices in the JVM ecosystem.

[1] https://www.youtube.com/watch?v=iPitDNUNyR0


I still can't see what you mean with "all those Scala features [...] crammed into a single language", is there anything more specific?


I'm curious what you find it is about Scala that lets you solve hard problems that you don't find in Clojure. Is it static typing and/or OO support? I'm also curious what you don't like about having syntax literals for vectors/maps/sets?


For instance, Scala's support for composition and modularization of library fragments, which allows you to separate even heavily interwoven concerns into tidy parts and put them together whenever and wherever you like (or exchange some parts completely).

Static typing is certainly a factor, too. Scala allows me not only to design APIs which are hard to abuse or misuse, it makes it possible to encode many things I care about into types, so that "wrong" code won't even compile.

With macros, there is now a whole new breed of libraries which add type support to tasks that were easy to get wrong before, for instance:

- the whole type-provider business, where one points to some data source (like a database) and tells the compiler to figure out the right types on its own

- compile-time checked and SQL-injection-safe string interpolation like sql"""SELECT * FROM PERSONS WHERE $predicate"""

- sqlτyped (github.com/jonifreeman/sqltyped) which can compute types from SQL strings

- macros which transpile Scala code to JavaScript, inline (jscala.org)

- macros which can synthesize typeclass instances on-the-fly, like used in Scala's new serialization library (speakerdeck.com/heathermiller/on-pickles-and-spores-improving-support-for-distributed-programming-in-scala)

- Scala's async library (github.com/scala/async)

Regarding collection literals ... they certainly aren't that important in languages like Clojure, where performance is not of great concern, because everyone just picks the collections which come with the language and hopes they won't be too bad. Implementing new collection types is just not common there (as in many other untyped languages like PHP, Ruby, Python, JavaScript, etc.).

In Scala, blessing a few chosen collection types with special rules and syntax just won't fly. Developers demand first-class support for all collection types including the ones they define themselves.

Reserving some special rights which no one except the language creators are able to use just gives them an unfair advantage. All implementations should compete on the same ground so that the best one can win, and not the one which benefits from special-cased, hard-coded syntax rules.


Let's just say that I have a bit more, up-to-date experience with the language and I value its consistency, coherence and elegance.

One thing I really like is that Scala pushes for more general, generic solutions, instead of ad-hoc additions and hacks: implicits instead of extension methods, traits (instead of abstract classes + defender methods), objects (instead of “static” members), types (without arbitrary rules about what is allowed as a type and what not), pattern matching via apply/unapply, for comprehensions via map/flatMap/withFilter, methods instead of methods and properties.

That tons of languages (Java, Kotlin, Ceylon, ...) are copying Scala's design decisions (often badly, but nevertheless) is another sign that Scala got a lot right.

Ok ... whatever, if I have already written so much, I can just answer to your claims one by one (I hope that the time I spend on this will at least be slightly appreciated):

PART ONE (Hackernews complains that it is too long)

  what is the problem Scala is trying to solve
Being a modern, typed language which gives people the right tools to solve today's and tomorrow's engineering requirements.

  I know that Erlang and Clojure try to solve the problem 
  of writing concurrent code (and fault-tolerant code in 
  Erlang's case).
Scala fixes some issues with Erlang's design and improves on it in a few substantial areas (which can't be fixed in Erlang itself anymore due to backward compatibility). It has better performance and better monitoring support.

Additionally, it offers better and more diverse tools to tackle concurrency than Clojure.

  Haskell tries to make writing correct code easier.
While Scala does not enforce purity by default (there is an effect system plugin for that) it gets you a long way towards Haskell's “if it compiles it is most likely correct” guarantees.

  Ruby and Python were made for ease and productivity
Apart from the "batteries included" approach (Scala prefers a minimal standard library instead¹), my experience is that it can easily match or beat Ruby's or Python's productivity. ¹ It also provides better tools for fetching additional dependencies than the languages mentioned above.

  both Ruby and Clojure are great for DSLs
Well, people say that about Scala, too. I don't see the big deal about DSLs, I just try to design and implement the best API a library can possibly have and Scala gives me the right tools to make that happen.

  Java and C are used nowadays for performance
Scala can match and beat Java's performance (looping seems to be faster than in Java, though I never understood why: optimization, specialization, macros, ...).

  Java is relatively good for architecting huge software 
  systems
Scala's better OO and module support improves on that.

  Now that's great, and Kotlin is all that, too.
Kotlin is a train-wreck. They promised a lot of things, but failed to deliver on pretty much everything. Sadly, those parts which were not just direct copies of Scala's design show the lack of experience in language design.

I think it is pretty ironic how their beloved talking points about "why not Scala?" have been reduced to almost nothing as they have continued to learn why Scala did things in a certain way. Just compare one of their first presentations with one of their latest ones.

They should really stop talking and start shipping if they want to be taken seriously, because as a paying JetBrains customer I'm getting really tired of their vaporware and FUD.

  Why the immutable data-structures, then? To make 
  concurrency better?
Partially. It makes reasoning about the program much easier in general and allows safe, structural sharing of data.

  In that case, why is mutability just as easy?
Because Scala is not Haskell. Scala gives you tools to get your job done, it doesn't require you to adopt some ideology or religion. Sometimes, a mutable algorithm/data structure fits a requirement exactly and Scala won't annoy you for picking it.

  And what are implicits
Generally speaking, implicits are a generic way to guide the compiler towards closing a gap. What's such a gap? They can make existing types implement new interfaces (think arrays, java.lang.String, ...), they wire up type class instances with the methods which require them, and they can make incompatible things compatible (e.g. types which come from different third-party Java libraries).

Have a look at how String is made to support Scala's collection API. Have a look at how the `to` method in Scala's collection library can work with arbitrary collection types (which don't even need to be known to the standard library).

They wouldn't be necessary in a perfect world, but Scala is a pragmatic language and its designers acknowledge that we are not living in a perfect world. The cost/benefit ratio of implicits compared to things like extension methods is magnitudes better.

  and these new cringe-inducing macros for?
They provide a general way to make APIs safer and implementations more efficient. They can be used to report more specialized errors right at compile time, they can be used to make sure that your closures don't close over things you don't want, they can be used to implement LINQ to query databases while using the bog-standard collection API, and they can be used to implement F#'s type providers. This can all be done with full type-checking and refactoring support from the IDE/compiler instead of having to resort to such terrible things as annotation processors, bytecode rewriting and Java agents.

They are a huge improvement over Java's approach and Oracle is now copying parts of it.

  DSLs? Why would a high-performance, statically typed 
  language make it easy to write DSLs?
Why not? Just because it is a DSL doesn't mean it has to suck on the performance/safety front.

  Why all the OOP, then?
Because OO is a good tool to solve some problems, just like FP is a tool to solve some other requirements.

  Oh, it's to combine the two; in that case why do they 
  feel so strenuously glued together
I think you have to be more precise here. Even people coming from OCaml or F# concede that Scala has done an incredibly good job at combining OO and FP, so I'm happy to hear what issues you have found.

  (classes vs. case classes,
Well, for some things it makes sense to have the additional methods of a case class; for some use-cases it doesn't.

  an entire collection package replicated twice, once for 
  the mutable case and once for the immutable).
Pick the best tool for your job. Some algorithms work best with immutable data structures, some with mutable ones. Scala spells out explicitly which guarantees are made, and people can safely rely on them. Experience has shown that Java's approach had good intentions but just didn't work. Even the designers of Java agree with that these days. Scala has learned from those mistakes and doesn't repeat them (unlike Kotlin).

  So the language offers a powerful compiler but absolutely 
  no guidance on how a program should be written.
That has not been my experience. There is some local immutable-OO-with-FP-with-typeclasses optimum, and people, regardless of where they come from, are almost magically converging towards it.

  If at least Scala had somehow provided all of these 
  features and stayed elegant,
I think it does.

  but man, it would take you weeks just to understand how a 
  freaking Scala collection works, just because the 
  language designers wanted to be so clever and prove that 
  you could transform collections and still never require 
  casts.
Well, it's a bit more than that, right? Anyway, I think everyone in the Scala space is open to a better solution, but frankly, even after years, no other language has come up with an approach which comes even close to the collection API's ease of use.

  It seems that at every turn Scala favors cleverness over 
  clarity
In my experience, readability and clarity are considered more important these days. Cleverness is deemed to be OK if it is used to improve the lives of people using that piece of API. It's just like mutability: It's ok as long as you keep it localized, confined and don't unnecessarily expose your users to it.

  features over a cohesive philosophy
I think I disagree with that. Consistency is still one of the most important requirements, and I haven't seen many features make it in the last versions.

Anyway, Scala has far fewer features than Java 8, C#, F# and many other “competitors” in that space, so I think we are fine here.

  Scala chooses, over and over, to try and address more 
  goals (most are really unimportant), and in the process 
  has lost the most important thing in a language: 
  coherence.
As mentioned, this has not been my experience, but I'd love to see an example.

  Scala sees something nice in another language and 
  immediately adopts it.
No, absolutely not.

  And I gotta say, writing a compiler that compiles code
  that's both javascript and Haskell is an impressive feat 
  of engineering. But it comes at such high a price...
Huh? That doesn't make sense.


We might as well debate religion.

JVM... you can keep it.


I promise not to argue, then, but I'm seriously interested to know why would anyone not like the JVM? (this isn't the first time I've heard this view, but I never asked before) I've been writing software for some 20 years now, and have never seen a more impressive environment (in terms of performance, pluggability, monitoring). Is it the startup time? The classpath? The monolithic JRE? (the last two annoy me, too, but they should be resolved in Java 9. I hope)


I guess the JVM is just too sophisticated for my taste.

I think Mies van der Rohe, if he were a coder, would like Go for all the same reasons that Java programmers don't like it.

Like I said, this is religion.

If Java works for you, phenomenal.

I'm not a big fan of languages that become corporate standards, or languages that are now controlled by companies like Oracle.

Perhaps this is just rebellion without cause... but I'm pretty happy worshiping my gods.


Fair enough.

I think Mies van der Rohe, if he were a coder, would like Go

Really? Not Clojure? :)


He would rather design chairs than use Clojure.


I guess my extended argument is -- okay, so Go will get there eventually (I'll concede that) -- but then what? IMO, Go will still not be attractive enough to warrant a major migration. cgo is interesting, but we already have a Java analog - the JNI.


> Go will still not be attractive enough to warrant a major migration

One of pron's concessions above is that Go is much easier to learn. That could be its killer feature: a new generation of developers and founders may choose to use Go for their projects because of the lower barrier to entry.

A common theme with these write-ups is that the developer felt comfortable with Go after about a week. That's pretty incredible.

I had the same experience. Coming from a scientific computing background, I wanted to get my feet wet with an application-level or system-level language. I chose Go, and I felt like I was off and running in about a week. I've since looked at Java, and I'd rather not get involved with it unless absolutely necessary.

If that turns out to be the experience of many others, it won't be a matter of migration. It'll be a matter of growth from the ground level.


That is a good point, and it just might be that way, but I'm not so sure for several reasons.

First, languages like Python and Ruby (and I think Clojure, too) are as easy to learn as Go, and most would say are more productive.

And if you want a system-level language, then you're probably not a beginner. Yes, Go is easier to learn than Java, but certainly not by much (it is, granted, much easier to get a small program running in Go). On the other hand, Java (or Kotlin) is more versatile.

Someone once said that easy always wins, and this might be the case with a system level language, too, but maybe when people specifically look for power then easy is not their top priority? I don't know.


> Those languages have been out longer. I think Go will get there.

Maybe so, but that doesn't help today.


What exactly is Go missing for you?


I think Go does have a niche, and it is "Python/Ruby devs". Yes, its niche is a set of people, not a particular role. Because Go is so simple for a typeless developer to learn, and has essentially no downsides and a massive upside (performance) compared to Python and Ruby, I see a lot of Ruby and Python devs upgrading to Go.

I also think you are being unfair to D. D is basically what Go initially claimed to be: a new systems language. D is a fantastic replacement for C++; Go doesn't really enter the world C++ lives in.


> Because go is so simple for a typeless developer to learn, and has essentially no downsides and a massive upside (performance) compared to python and ruby

I think that's a brilliant observation! It's all of the duck typing goodness with far less of the static typing overhead. It's static types "lite" for duck typing lovers.


Dynamic typing has advantages and disadvantages. E.g., static typing violates the DRY ("Don't repeat yourself") principle concerning code reuse, whereas compile-time type checking reduces the effort needed for testing (but only for very simple cases).


>Static typing violates the DRY ("Don't repeat yourself") principle concerning code reuse

How did you come to that conclusion? I can't see any way that static typing "violates" DRY.


I think it's because you're declaring the type, which is obvious from context, when you could save characters and have the runtime system figure it out for you.


You are conflating explicit type declaration with static typing. While often found together, they are different things.


You're right. Further discussion in sibling thread.


That isn't an issue of static typing, it is an issue of explicit typing. I program in a statically typed language. When I don't feel something warrants a type signature, I simply don't write one. Type signatures are compiler enforced documentation.


Sometimes type declarations are required for disambiguation in statically typed languages, no? The 'auto' keyword in C++ doesn't always work.

Anyway, I believe this is the source of the DRY claim.


In Haskell, for example, you only need type declarations for disambiguation when the information is actually missing from the program. That is, if the program were dynamic, you'd also need to write that down.

For example, the type of:

  show . read
is ambiguous, because "read" parses a String into some "Readable" type, and "show" converts a "Showable" type into a String. Which type? It could be anything, so the compiler complains that the type is ambiguous.

You could say something like:

  (show :: MyType -> String) . read
To resolve the ambiguity.

In a dynamic language, a function like "read" is not possible at all, since it uses static types to determine which type to parse into. So in Python, this would look like:

  lambda x: MyType.parse(x).show()
Same information, same DRY violation/non-violation.

Note that in some other cases, where the type can be determined, I can write:

  [1, 2, read "3", 4]
Whereas in a dynamic language I'd need:

  [1, 2, Int.parse("3"), 4]
So it is actually dynamic typing that violates DRY here.


In your example with the arrays, what were you trying to illustrate? I don't understand.


The notation [x, y, z] denotes a homogeneously-typed list.

So I used it as an example where type inference can figure out the type of an expression from its use, rather than from its intrinsic value.

Dynamic types only work based on the intrinsic value, so whenever a statically-typed language can figure out the correct types from context, a dynamically-typed language is going to have to require redundant type hints.

So [1, read "2", 3] is a list of ints, which means type inference knows the read call there must return an int, and so the parser chosen is the one for ints. In Python, even if you had some value that is required to be an integer, and you wanted to parse a string into that value, you'd need to say Int.parse("1"), which is redundant.


That you don't need a type signature, or any explicit type information at all. Because the compiler already knows it has to be a list of Ints, you can just call read and it knows it has to be converting that string to an int. In a unityped language you still need to supply the information about what type to convert to.


There are different degrees of type inference. Go's `:=` operator and C++'s `auto` keyword use the simplest kind, where the right-hand side must evaluate to a concrete type. More powerful than that are systems that can infer the type of all variables in a function scope depending on how they are used throughout the function (such as Scala and Rust, though Scala's is waaaay more advanced than Rust's afaict), but require functions themselves to always be explicitly typed. More powerful still are languages with whole-program type inference such as ML, which can infer even function signatures.

(Please note that my knowledge of type-inference approaches is rough; corrections to the above are welcome!)



