It's interesting to see that more and more internal projects at Google are getting redone in Go. However, this sentence stands out to me in the article:
"Before doing the rewrite, we realized we needed only a small subset of the functionality of the original system -- perhaps 20% (or less) of what the other projects were doing with it."
I'm guessing that a lot of the benefit of the rewrite came simply from the simplification of the core logic and dropping extraneous functionality. That said, having written a little bit of both C++ and Go, I can completely see why the author found that Go was both far more readable and more maintainable than C++.
OP here. It is absolutely true that we did not rewrite the entire original system in Go; I tried to be very explicit about that in the blog post. But, I feel confident that it could be done, in much less code, with greater clarity and modularization. The Go language by itself does not force good software design. A rewrite in any language would have been better than the original system, but our decision to use Go turned out to be fortuitous in that we managed to do so in record time and with much greater programmer productivity.
Was there anything about the language that you found particularly forced clarity and modularity? I'm giving a talk comparing Go and Ruby next month and I'm curious as to what people with experience with larger Go programs find to be most helpful.
The Go module system for sure. Also I really like Go's interface model (as opposed to Java or C++ classes) as it is more flexible and in some ways more precise. Note that I don't know Ruby so I can't compare Go to that...
Thanks for the article, it was a very interesting read! Since this always comes up in Go discussions: did you miss generics in this project or was the lack of generics a non-issue?
Hmmn. I think there's a fundamental advantage in any rewrite that's allowed to drop functionality - even a C++ to C++ rewrite.
It's not just the language. It's the fact that (a) you already have an instantiation of the idea to look at as a reference and (b) you're happily cutting bits off the original. The latter suggests that you're not only not supporting all of the original use cases, you're probably in a nice state of organization where you're free from the temptation to "astronaut" up a more general system than you really need.
It's always nice to have this power, but it shouldn't be confused with the wonderfulness of the language.
ironically, that's not the common experience; the term 'second system syndrome' was coined for a reason.
"allowed to drop functionality", as used by you, is a strawman -- the team _added_ functionality to the part they were rewriting without incurring the wrath of the second system. sure, the line counts are off, but your argument does not stand in the general.
If I remember Brooks correctly, second system syndrome does not refer to all successor systems, and frequently complex/over-built systems spawn elegant ones in turn (oversimplifying a bit, but MULTICS -> UNIX springs to mind). I seem to recall Brooks regarding this as cyclical, with leaner third systems spawning bloated fourth systems, and so on.
i was referring to recent examples such as python 3.0, perl 6, or even, say, apache 2.0. eventually they were usable, but the blood sweat and tears involved did not make for cute blog posts such as the one we're commenting on.
I think a component in a service oriented architecture would lend itself more cleanly to reimplementation, with less chance for "Second System Syndrome", because you still have an api contract you are responsible for.
I think a "released" software, like those you mentioned, would be somewhat more likely to become a stereotypical "second system".
> It's interesting to see that more and more internal projects at Google are getting redone in Go.
I think one of the key drivers of this which hasn't come up yet in the comments is this one:
"The #1 benefit we get from Go is the lightweight concurrency provided by goroutines. Instead of a messy chain of dozens of asynchronous callbacks spread over tens of source files, the core logic of the system fits in a couple hundred lines of code, all in the same file."
I think it would be quite rare to see a project of any significance at Google which didn't have an async aspect to it. My guess is that this is particularly the case when it was decided to be written in C++ over Java or Python: speed for this project matters, and the way to get the fastest execution is a distributed C++ program.
When you combine that with Rob Pike's assertion that most of the programmers moving to Go are from Python rather than C++, and you can see why rewrites are starting to gain momentum. I think there are a handful of people who truly do love C/C++, but I think most programmers (Google or no) see it as a necessary evil to meet the desired speed requirements. Now Go is stable and proven to work at Google, I can see lots of teams getting internal pressure to start trading out components from C++ to Go. I can see managers greenlighting it (perhaps as a 20% project) if only for the readability argument.
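For readers who haven't written any Go, here is a rough sketch of what that quoted benefit looks like in practice (the fetch helper below is hypothetical): each concurrent task is ordinary sequential code running in a goroutine, with a channel collecting results, instead of a chain of callbacks.

package main

import (
    "fmt"
    "time"
)

// fetch stands in for some blocking RPC or disk read.
func fetch(name string) string {
    time.Sleep(10 * time.Millisecond)
    return "result for " + name
}

func main() {
    names := []string{"a", "b", "c"}
    results := make(chan string)

    // Each request runs as plain sequential code in its own goroutine;
    // no callback chaining is needed.
    for _, n := range names {
        go func(n string) { results <- fetch(n) }(n)
    }

    // Collect the results as they arrive.
    for range names {
        fmt.Println(<-results)
    }
}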
This is why a "go" rewrite story is not that appealing to me. You can rewrite a subset of a system and gain many improvements simply by benefits of hindsight and clearer requirements regardless of languages.
Original production code is messy, especially the ones that have been maintained over the years through multiple requirement changes. A rewrite will make those messiness go away, for now, regardless on languages used.
The number of strongly typed garbage collected compiled languages is surely shrinking. I can understand someone's disgust towards Java considering dwindling velocity in adding next-gen features and now sleeping in same bed as lawyer run Oracle. That pretty much had left C# in the arena until Go came on the scene. I was honestly hoping Go would give us head-to-head battle with C# but
from initial looks Go pretty much feels like C# 1.0. No generics, no partial classes, no linq... C# 5.0 even has async and dynamic types. I like minimalism but these features are something we take for granted in languages that were created post-2000 era. So it all boils down to people choosing Go for singular reason: It's not from Microsoft! If licensing for C# wasn't a barrier and let's stretch our imagination by thinking C# was created by some other group in Google, would Go have a chance?
I'd take a second look at Go. It has a number of innovations over C# - and of course also a number of shortcomings. It's quite a different language, much more so than Java compared to C# (1.0 even).
A couple of things of the top of my head:
* goroutines and efficient multithreading, both in terms of syntactical constructs and language implementation. Go doesn't have await/async because it fundamentally doesn't need them.
* a type system that works without inheritance or subtypes but rather static duck typing, quite different from any other language (see the sketch after this list)
* very memory efficient layout of data structures, internal pointers
* small set of builtins with maps, slices, channels (that do have 'generic' types for their members) to reuse, rather than re-implement
* comes with a well designed build and packaging system
* comes with a very convincing standard library
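To make the "static duck typing" bullet concrete, a minimal sketch (the types here are made up for illustration): a type satisfies an interface simply by having the right methods, with no "implements" declaration anywhere.

package main

import "fmt"

// Any type with an Area() float64 method satisfies Shape implicitly.
type Shape interface {
    Area() float64
}

type Square struct{ Side float64 }

func (s Square) Area() float64 { return s.Side * s.Side }

func printArea(s Shape) { fmt.Println("area:", s.Area()) }

func main() {
    // Square never mentions Shape, yet it can be passed as one.
    printArea(Square{Side: 2})
}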
It's worth taking a look. C# surely is a nice language, but it inherited many of the design flaws of Java, in particular with respect to the runtime and memory layout. Google does use Java internally a lot, Go gets uptake in spite of that competition.
Regardless of language merits, licensing C# can indeed be a problem, as can be the operational complexity of managing Windows running on hundreds, thousands, ten thousands etc of machines.
* Their implementation sucks so much that most decently built concurrency implementations will beat Go-routines by a wide margin. Just have a look at all those people on the Go mailing list whining about the scheduler. Not even talking about the fact that building concurrency mechanisms into the language is plain stupid. We all saw how well that worked out in plenty of other languages before.
* You know why pretty much no one uses structural typing? It's not because the “brilliant” designers of Go “invented” it, it's because most language designers realized that it is a poor idea before shipping the language. I think it is pretty telling that in languages which support both nominal and structural typing pretty much nobody is using structural typing.
* I'm not seeing how a language which requires passing around void* pointers in every data-structure and casting at pretty much every corner can be considered memory efficient.
* Hard-coding collection classes with special cased syntax into the language, so that everyone who needs to have something slightly different is completely fucked ... what is this? 1980?
* Their packaging system is an unmitigated failure. Nuff' said.
* Do they have a working Unicode implementation yet (I mean more than the “We use UTF8 for strings ... which is like 0.5% of what Unicode is about”)? What about (Big)Decimal? A working date/calendar abstraction which isn't a terrible joke?
Take his criticism (not sure it even merits that title) with a grain of salt. Many people on HN and on the mailing list have sung the praises of Go's packaging system.
Yet another Go fanboy getting a bit defensive without having to add anything constructive?
If you actually checked your “claims” you would see that people are singing so many praises that the mailing list is continually filled with proposals to fix the worst parts of Go's packaging system.
Your post has no substance, other than to say some people on the mailing list don't like it. But other people on the mailing list do like it. So your post is meaningless and useless.
If you actually have problems with Go's packaging, and care about it enough to write as much as you have, why didn't you simply point out the flaws?
> Because there are already dozens of people who did that already?
Do you really expect people to scour a newsgroup to find the meaning of your vague and unsubstantiated claims?
Someone could similarly say "Scala is an unmitigated failure of a language. If you want to know why, just track down its detractors on the Internet. There are dozens." Would you be impressed by that?
The C# and the Go cultures are very different. Culture is a big part of what makes programming languages different.
Python and Ruby are basically the same language, but the culture of the users is very different and so the libraries and tools are quite different.
Go is from the unixy C culture, C# is from the Java/win32 culture and the libraries and tools reflect that.
This isn't a fair statement. A better statement would be "Go would be hardly mentioned here if it weren't for all of the effort and documentation put into it." There is no doubt though that Go would not be moving along so quickly if it weren't for Google, but I think this is a case of 'correlation is not causation.' Dart comes from Google also, and while I see it here from time to time, nowhere near as often as I do Go.
Myself, I tend to trust and prefer Mozilla as a company more. That said, I find Rust not only unusable, but really too big of a language. I find Go the perfect mix of features I desire, and I cannot be the only one. So, I like Go as a community and as language and certainly not because Google is behind it.
Go seems to be gaining traction. Does it matter if that's because of where it came from? Maybe in a cultural, I-want-to-think-about-why-some-languages-get-adopted sense, sure, then it can matter. And I think it's worth having those discussions. But I get the impression that people think there's some absolute injustice in the idea that people are adopting a language, and it's partly because it's from Google. Consider:
"Yes, C would be hardly mentioned here if it wasn't being developed at Bell Labs."
Java as a platform is absolutely not dwindling in velocity. All that is happening is people are looking into alternative JVM languages like Scala, Clojure and Groovy. This trend has been happening for a while and is as much to do with Servlets/Spring as it is with Java the language. It has absolutely nothing to do with Oracle which has been a fantastic steward for the platform.
I don't think you actually have much understanding of enterprise environments, otherwise you would know how ridiculous the idea is that Go or C# is going to take over from Java anytime soon.
Something similar happened to me years ago when I started picking up Perl. All the BS C++ and Java made me go through to get anything done seemed like such a huge waste of time I ended just writing lots of stuff in Perl until I had pretty much forgotten how to C++ and how to Java.
Go might be a new Perl in that sense.
I'd also add that as Google moves more and more production systems onto Go, it will literally become responsible for handling billions of dollars of business. That, plus the fact that Google is using it (and everybody wants to work at Google, so knowing some Go ahead of time will be useful), will lead universities to start covering it, and before long this virtuous cycle will result in Go everywhere.
I've also noticed a number of posts here over the last few months extolling the virtues of moving off of slow frameworks built around slow scripting languages in terms of huge reductions in infrastructure costs. For each system that Google brings on-line with Go, even if it took a team of 10 six months, they've likely saved themselves millions of dollars in new capex infrastructure acquisition. Imagine serving 40x the requests off of the same old hardware. Alternatively, you might be able to use slower, but more energy efficient hardware to serve the same number of users and save on power and heat management. All this combined with Moore's law means this is a fantastic idea at this kind of scale.
"For each system that Google brings on-line with Go, even if it took a team of 10 6 months, they've likely saved themselves millions of dollars in new capex infrastructure acquisition"
I thought the old systems were in c++. So, how did the infrastructure savings accrue? In fact, I don't recall anyone doing analysis on that front.
Matt linked to my course, but also: I did a tiny analysis of this with my pi searcher (http://da-data.blogspot.com/2013/05/improving-pi-searchers-s... ) -- it's a toy compared to the kind of system that Matt described, but my experience was extremely positive. Rewriting it in Go made it easier to architect the system right to take advantage of persistence. It's hugely faster.
We also have a paper accepted to SOSP this year (the academic weenies will understand that) where the system - a Paxos variant - is implemented in Go. We also implemented about four other Paxos variants in Go to compare against them, a task that would have been absolutely grueling in C++. In Go it wasn't too bad. Our performance isn't super-blazing, but it turns out to be one of the faster publicly available Paxos implementations anyway. I'll have to write up a bit about our experience with it -- I'm with Matt 100% on this one.
We've done lots of load testing and the CPU and memory footprint of the Go version is better than the C++ version. Not surprising since we reduced the code size so much, but at least using Go did not involve significant bloat.
I have no doubt it must have been easier programming in Go. My question was on resource efficiency. I doubt anyone programs in C++ for fun anymore :-). The issue with toy problems is that you can write a vanilla Python or Perl program that processes text faster than C++ (without extensive fiddling); however, the memory and CPU utilization can be 5-10 times higher. This doesn't matter on a single box for toy problems, but it matters when you have infrastructure costs like Google's that run into the billions.
I think you'd have to agree that C++ does not make it easy to learn how to do it properly, and nor does it make it easy to actually do it properly. Go at least makes it far easier for most programmers to do an acceptable job which has to count for something.
> I think you'd have to agree that C++ does not make it easy to learn how to do it properly, and nor does it make it easy to actually do it properly.
I have known C++ since 1993. I have used it alongside many other programming languages and followed the standardization process quite closely in "The C++ Report" and "The C Users Journal", later renamed "The C/C++ Users Journal".
I only used C instead of C++ when forced to do so.
Yes it is a complex language, requiring a good background in programming languages to use it properly, but so are quite a few other languages that provide a similar set of abstractions.
> Go at least makes it far easier for most programmers to do an acceptable job which has to count for something.
True, although in a similar way as Java 1.0 was a better C++, however we are no longer in 1995.
You're right. I think I got ahead of myself in what I was thinking, conflating the rewrite of the C++ system into Go with some of the other speedups I've been reading about coming from other languages/frameworks.
I'd be interested to know more from Matt about speedups or slowdowns in specific subsystems that are feature-for-feature complete (as opposed to comparisons of Go to C++ where the Go rewrite is simply doing less stuff because lots of cruft was tossed out).
> If I could get out of the 1970s and use an editor other than vi, maybe I would get some help from an IDE in this regard, but I staunchly refuse to edit code with any tool that requires using a mouse.
So you've shown that Go is appealing to rigid curmudgeons. Personally I'm still hung up on the "every function (edit: that does anything which might itself return an error code, which in large scale code is quite a lot) must return an error code" thing. Just like not understanding the purpose and usefulness of graphical editors, the creators of Go (despite their legendary status...) seemingly not understanding the purpose and usefulness of exceptions (waving them off as "they result in convoluted code" with no explanation) is keeping me pretty skeptical.
It would just require that I take in some well written, idiomatic Go code so that I can finally "get it" and see how I'd live without exceptions, but I'm too busy being productive in Python (and enjoying Pypy's ever increasing speedups) to get that interested.
It is not the case that every function must return an error.
The Go creators do understand the purpose and usefulness of exceptions. They chose not to use exceptions with this knowledge. See http://golang.org/doc/faq#exceptions for their reasons.
I don't know. The quality of this FAQ entry is in my opinion weakened by their inclusion of FileNotFoundException, which it seems they did not understand. The use case for this exception is not to allow lazy programmers to use it instead of checking that a file exists, but to signal to a programmer that his world view of the state of his program may be wrong despite his best efforts, e.g.
if (file.exists()) {
    // do something
}
There's a race condition between the if and do something which invalidates the programmer's world view (someone can delete the file between these two statements). And this is an exceptional situation the program has to deal with. Error codes may tell the programmer this, but it is quite likely that the programmer will just ignore it, because "I've already checked that it exists - what could possibly go wrong?". Exceptions, especially checked exceptions (in my opinion the only good exceptions for "normal" program code)(1), force the programmer to deal with this problem. Or to say - deliberately - "Hey, program, I don't care for the stability of my software. Just explode if this happens!".
(1) Languages which have only unchecked exceptions do, in my opinion, cave in to the laziness of programmers: "but, but, it is so much WORK to deal with all of this. Can't it just go away? Please?" - the result is code which can explode everywhere. This is even worse than no exceptions. With return codes you know at least that you have to check the code yourself very, very carefully all the time.
> There's a race condition between the if and do something which invalidates the programmer's world view (someone can delete the file between these two statements). And this is an exceptional situation the program has to deal with.
It's not an exceptional situation. It's a bug in your program. And it's not lazy to open() and check for existence. It's actually the _only_ sane way to check that the file exists, if you plan to open it.
> Error codes may tell the programmer this, but it is quite likely that the programmer will just ignore it, because "I've already checked that it exists - what could possibly go wrong?"
The following code is obviously wrong because of the race condition you mention. Nobody in their right mind would write code like this.
if exists(file) {
    f = open(file)
    // do something with f
} else {
    // handle "file not found" case
}
This code is correct, to some degree:
try {
    f = open(file)
    // do something with f
} catch (file not found error) {
    // handle "file not found" case
}
As is this Go code:
f, err := os.Open(file)
if os.IsNotExist(err) {
    // handle "file not found" case
} else if err != nil {
    // handle any other error that might arise
}
// do something with f
The thing is, the above code is not me being extra careful about checking errors. It's just bog standard Go code. Checking error values is the only way to write even half-decent Go code, so everyone does it all the time.
No reasonable programmer would write Go code like this:
_, err := os.Stat(file)
if err != nil {
    // handle "file not found" case
} else {
    f, _ := os.Open(file)
    // do something with f
    // (or explode if file doesn't exist)
}
To my Go programmer eyes, the underscore (where I'm ignoring the error) in the os.Open line sticks out like a sore thumb. You wouldn't write it, and when reading you would certainly notice it as bad code (or at the very least extraordinary code).
Go basically does have exceptions, though they call it something else, it is slightly less powerful than you would normally expect, and the idiomatic usage is very different.
Honestly I think that Go's idiomatic usage of panic/recover is a good idea even in other languages. Exceptions crossing package boundaries are typically a pain in the ass in practice; it makes for ugly code.
(My "day job" is typically Java these days, some Scala. I am not speaking from inexperience like many Go detractors like to assume. There seems to be a perverse notion that anyone who dislikes exceptions must not understand them. Silly.)
Go doesn't require all functions to return error codes, and Go has exceptions (called panics).
In Go, conventionally, the publicly exposed functions in a module shouldn't usually panic but should use error returns to report unusual conditions. But that's a convention to keep the behavior of functions clear from the interfaces, not a language limitation.
Of course you are supposed to use them when appropriate; they have their uses just as goto can have its valid uses. The implementation of the standard library contains some examples. But they should be used judiciously and not for mere error handling and not across package boundaries.
> The language has panics. But you aren't supposed to use them. So...
Not true. Panic/defer/recover is there to be used. But panic/recover should typically be internal to a package and/or used in a situation where the program itself suffers an error; not because of faulty user input/validation/etc.
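A rough sketch of that convention, with made-up names (the standard library's encoding/json does something similar internally): helpers deep inside the package panic, and the single exported entry point recovers and converts the panic back into an ordinary error, so no panic ever crosses the package boundary.

// Package parse is a hypothetical example of the panic-internally,
// return-errors-externally convention.
package parse

import "fmt"

type parseError struct{ msg string }

// fail aborts parsing from deep inside helper functions.
func fail(format string, args ...interface{}) {
    panic(parseError{fmt.Sprintf(format, args...)})
}

// Parse is the exported entry point: it turns internal panics back into
// ordinary error values before they can escape the package.
func Parse(input string) (result string, err error) {
    defer func() {
        if r := recover(); r != nil {
            if pe, ok := r.(parseError); ok {
                err = fmt.Errorf("parse: %s", pe.msg)
                return
            }
            panic(r) // not one of ours; re-panic
        }
    }()
    return doParse(input), nil
}

func doParse(input string) string {
    if input == "" {
        fail("empty input")
    }
    return input
}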
A function isn't required to return an error code, but if you plan to deal with errors, it's the easiest way to do that.
I think the Go creators understand the purpose and usefulness of exceptions; they just chose not to implement them in favor of other approaches which work fine. Of course, it's a matter of taste, as with almost any "language difference" argument. If you get really hung up on these differences, one might argue that you are the curmudgeon.
I didn't find learning/implementing a service in Go to be terribly challenging, even though I translated an existing Python service to Go. There were frustrating caveats that I had to work around because of some of the great things Python does, but overall the benefits of doing it in Go paid off, and it's about getting something for your time/effort.
Your last argument is totally legitimate though, but it's hard to say what would help you "get it" by throwing random bits of code at you, it's really something that you have to care about, spend the time to dig into, and make the realization for yourself. It may never happen, and as long as Python does everything you need, of course you'd have no incentive to use Go.
There seems to be a common theme (generally speaking) on HN that after the front page reaches some saturation on a particular subject, people start being very critical of it, not on its merit, but because it is taking up space where they expect to see a diverse set of content. I can sympathize with this, but I think it's best to try to dig for the value that others seem to be getting out of things, rather than sighing at the constant cheers of others, just my opinion though.
Don't confuse not understanding something with understanding it and thinking it is a poor idea. The Go authors understand exceptions perfectly -- they just thought that it was a bad idea: http://golang.org/doc/faq#exceptions
Yes, I've read that, and their reason is "it results in convoluted code" - which is not at all my experience programming in Java and Python for many years. It's worse in Java for sure due to the heavy emphasis on checked exceptions, but in Python they are a dream. "It also tends to encourage programmers to label too many ordinary errors, such as failing to open a file, as exceptional." is also not true in my experience. I really disagree with the notion of hobbling a language just to prevent beginners from writing bad code. Beginners will always write bad code no matter what. I'm not a beginner, and I really don't need to be denied useful tools just because beginners will misuse them - I mean, we're talking about improving on C/C++ for chrissakes; in the hands of a beginner those languages are like nuclear weapons. This particular FAQ entry makes it seem very much like the authors have just not seen exceptions used effectively, which I find kind of astonishing.
I watched Bruce Eckel's talk at Pycon, "Rethinking Errors - Learning from Scala and Go" (http://us.pycon.org/2013/schedule/presentation/52/) and I was so ready to be converted. But his arguments were pretty unconvincing.
Go's error handling model has nothing to do with schooling beginners. In fact there are several rough edges in Go that make sense for experienced programmers, but are difficult for newcomers to understand. (The distinction between the new and make functions is one example.)
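(For anyone unfamiliar with that particular rough edge, a quick sketch of the new/make distinction:)

package main

import "fmt"

func main() {
    // new(T) allocates a zeroed T and returns a *T. For a map, that is a
    // pointer to a nil map: you cannot store anything in it yet.
    mp := new(map[string]int)
    fmt.Println(*mp == nil) // true; (*mp)["k"] = 1 would panic

    // make allocates *and initializes* slices, maps, and channels,
    // returning a ready-to-use value of type T rather than *T.
    m := make(map[string]int)
    m["k"] = 1
    s := make([]int, 0, 10)
    ch := make(chan int, 4)
    fmt.Println(m, len(s), cap(s), cap(ch))
}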
The reason Go does not have exceptions is because, on balance, they make the language more complex without actually improving the code you write. I have seen countless A/B comparisons, and it seems that if you want robust error handling, it's going to be relatively verbose regardless of the approach you take.
To handle errors well requires your attention. Error handling is at the core of what most programs do, and it should be visible. In Go, errors are ordinary values that you handle using the same control flow as any other value in the system. Error handling shouldn't be relegated to a side channel that must be managed using separate, often invisible, control flow mechanisms.
I think a lot of people HAVE made the case convincingly, at least well enough for me. I have worked in major C++ shops that ban the use of exceptions (and enforce it).
Just to add another person who regrets exceptions to the big pile, http://250bpm.com/blog:4 (The ZMQ Author)
"Thus, the error handling was of utmost importance. It had to be very explicit and unforgiving.
C++ exceptions just didn't fill the bill. They are great for guaranteeing that program doesn't fail — just wrap the main function in try/catch block and you can handle all the errors in a single place.
However, what's great for avoiding straightforward failures becomes a nightmare when your goal is to guarantee that no undefined behaviour happens. The decoupling between raising of the exception and handling it, that makes avoiding failures so easy in C++, makes it virtually impossible to guarantee that the program never runs into undefined behaviour.
With C, the raising of the error and handling it are tightly coupled and reside at the same place in the source code. This makes it easy to understand what happens if an error happens..."
That is a problem with the way exceptions work in C++, not with the exceptions as concept.
The design of C++ exceptions suffers from their being added late in the language's evolution and from having to support the possibility of being turned off if desired.
This, coupled with the language's manual resource management, is what leads to some of the issues with exceptions in C++.
Not all languages with exception support suffer from the same problems.
If you think exceptions are the right way, you can use Go's exceptions (called panics). Whatever they thought about the desirability of using exceptions, it's not like the creators of Go didn't build them into the language.
fine, but I've learned enough languages to know that the absolute worst thing you can do when you start out with language Y is make it act just like your previous language X. It's a very natural instinct for almost everyone (just read "Python is not Java" for an example), but for at least the first year or two of using a new language I think you have to do it as idiomatically as possible, before you have any insight into how to challenge the designer's idioms.
> fine, but I've learned enough languages to know that the absolute worst thing you can do when you start out with language Y is make it act just like your previous language X.
Sure, if you want to learn the idiomatic Go way of doing things, you do things the idiomatic way. Once you've reached the point where you have familiarity with the idiomatic way and have a reasoned analysis of why you believe the idiomatic way is wrong (at least for you doing the things you want to do with the language), that's no longer the case.
If you've reached the point where you feel comfortable arguing that the idiom is wrong, you should also have reached the point where you can use the language constrained by its features, not by conventional idiom.
Most of the arguments against exceptions presented in that document apply specifically to C++, particularly in a codebase with a large amount of exception-unsafe code (which is a C++-specific problem).
"On their face, the benefits of using exceptions outweigh the costs, especially in new projects...Things would probably be different if we had to do it all over again from scratch."
No, we can just see the "if err != nil". It's still there in Java or Ruby or Python, lurking in the notional space between the lines and waiting to branch to an exception handler you installed in the caller's caller's caller.
Same in Go, because of panic and recover. For example, "foo.Bar", if foo is a nil pointer, can invisibly branch to an exception handler you installed in the caller's caller's caller.
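A small self-contained illustration of that point: a nil-pointer dereference is a run-time panic, and a recover() installed by a deferred function will catch it, much like an exception handler would.

package main

import "fmt"

type T struct{ Bar int }

func caller() {
    defer func() {
        // The run-time panic from the nil dereference below unwinds to
        // this deferred handler rather than crashing the program.
        if r := recover(); r != nil {
            fmt.Println("recovered:", r)
        }
    }()
    var foo *T
    fmt.Println(foo.Bar) // panics: nil pointer dereference
}

func main() {
    caller()
}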
Basically he goes on to show how many people still code like it's the 70's instead of adopting languages and paradigms that were already possible in late-60's systems.
i hate watching videos but yes, this is really interesting - von Neumann: "I don't see why anyone would need anything other than machine code". I deal with the resistance thing a lot in my work with ORMs, I should integrate some of this into my writing.
I was hung up on errors as return values instead of exceptions but came to terms with it eventually. Now I think that, compared to exceptions, error handling is better off with Go's approach. Errors are local, and require more conscious effort. Especially while coding for critical systems this helps, as you are not passing the buck to some other section of the code.
There's no requirement at all that functions return error codes. If a function can't fail, or if you don't care about if it fails, don't return an error.
Exceptions are just glorified gotos. The longest single stretch of my life spent chasing a bug was because an exception was firing somewhere down under and nobody noticed, because it was kind of part of the logic but a fringe case.
this is a thoroughly debunked argument that Joel tried to make. Unlike goto, exceptions have stack traces, so when used correctly, their source and propagation are immediately obvious and traceable. An unreported exception in your buggy program is certainly no worse than an ignored error code.
The exception was caught somewhere. It formed an unexpected, rare execution path that nobody could have thought of. It was not detectable by reading the local code. A missing error return would have been.
inappropriately catching and ignoring an exception is the same as ignoring an error return code, deeper down the call stack and not in the code you're looking at. the difference is, catching and ignoring the exception requires that it be actively done, whereas ignoring an error code is the default if no action is taken.
You know, every time I see some Googler shocked at the effectiveness and various advantages of coding in Go, I wonder why Google never adopted Erlang. They could have been getting all these same advantages (and then some) a decade ago :)
(full disclosure: I work at google and also like erlang)
Erlang has fantastic facilities for robustness and concurrency. What it does not have is type safety, and it's terrible at handling text in a performant fashion. So if you don't care about either of those things and only care about robustness and concurrency then Erlang is great. There were internal discussions about Erlang here, but the upshot was: we had already basically duplicated Erlang's supervision model in our infrastructure, only we did it for all languages, and Erlang didn't offer any benefits in performance for us. Its only benefit would have been the concurrency model. That's much less benefit than Go gives.
Go gives you Erlang's concurrency model, a similar philosophy of robustness, type safety, all batteries included, and performance. Equating the two languages works in 1 or 2 dimensions but not on all the dimensions Google cares about.
Interesting, thanks for that; it's pretty much what I guessed (especially the bit about the supervision tree and hot-code-upgrade advantages being mooted by your infrastructure.)
On a tangent, though:
> What it does not have is type safety
I've tried to work this out before (I'm designing a new language for Erlang's VM), but as far as I can tell, type safety is in practice incompatible with VM-supported hot code upgrade.
If you have two services, A and B, and you need to upgrade them both, but you can't "stop the world" to do an atomic upgrade of both A and B together (because you're running a distributed soft real-time system, after all), then you need to switch out A, and then switch out B.
So, at some point, on some nodes, A will be running a version with an ABI incompatible with B. In a strongly-typed system, the VM wouldn't allow A's new code to load, since it refers to functions in B with type signatures that don't exist.
On the other hand, in a system with pattern-matching and a "let it crash" philosophy, you just let A's new code start up and repeatedly try-and-fail to communicate with B for a while, until B's code gets upgraded as well--and now the types are compatible again.
> type safety is in practice incompatible with VM-supported hot code upgrade.
That's not true.
First, it's very easy to hot reload changes that have been made to the code that are backward compatible. The JVM spec describes in very specific details what that means (adding or removing a method is not backward compatible, modifying a body is, etc...).
This is how Hotswap works, the JVM has been using it for years.
As for changes that are backward incompatible, you can still manage them with application level techniques, such as rolling out servers or simply allowing two different versions of the class to exist at the same time (JRebel does that, as do a few other products in the JVM ecosystem).
Erlang doesn't really have any advantages over statically typed systems in the hot reload area, and its lack of static typing is a deal breaker for pretty much any serious production deployment.
> lack of static typing is a deal breaker for pretty much any serious production deployment.
Are you talking about Google only where they made it a mandate or in general? There are serious production deployments on Python, Ruby, Erlang and Javascript.
I will take the expressiveness and fewer lines of code of a strongly but dynamically typed language plus tests over a statically typed language with more lines of code, all else being equal.
Or to put it another way: if strong typing is the main thing protecting you against faults and crashes in production, there is a serious issue that needs to be addressed (just my 2 cents).
> As for changes that are backward incompatible, you can still manage them with application level techniques, such as rolling out servers or simply allow two different versions of the class to exist at the same time (JRebel does that, as do other a few other products in the JVM ecosystem).
Neither of these allow for the whole reason Erlang has hot code upgrade in the first place: allowing to upgrade the code on one side of a TCP connection without dropping the connection to the other side. Tell me how to do that with a static type system :)
Tomcat (and other app servers) has support for doing hot reloads of Java web apps while not reloading the HTTP layer (and not dropping TCP connections).
I have implemented a similar system for JRuby apps running inside a Servlet container. There are many caveats. I don't actually recommend it because for a while you're using nearly twice the memory (and JRuby is particularly memory hungry). Also there are many ways to leak the old class definitions such that they are not GC'd (e.g. thread locals). But it's certainly possible.
I suspect that Erlang, Java, and all languages are in the same boat: some parts can be upgraded live in the VM while other parts require a full restart (maybe coordinating with multiple nodes and a load balancer to achieve zero-downtime).
Out of curiosity, where/why would such an exotic feature be needed in today's internet architectures where you always front a group of servers with a load balancer ?
Not all Internet protocols are HTTP. If you're running a service where long-lived connections are the norm, "simply fronting a bunch of servers with a load balancer" can require a pretty smart load balancer. E.g. IMAP connections often last hours or even days, and are required to maintain a degree of statefulness.
Not everything is a website! Also, not everything is stateless. Consider writing a chat application for the web for example and letting users on one page communicate with another one.
There are a number of significant differences between Erlang's and Go's concurrency models: Asynchronous vs synchronous communication, per-thread vs per-process heaps, send to process vs send to channel.
Go has asynchronous communication and synchronous communication.
And the other things you mention are in practice not significant differences. The model both use is based on Hoare's CSP and the same general ways of using it apply. Some of the specifics must accommodate differences, but those are differences of implementation, not of the general model.
No, they are different models [1]. Go does not have asynchronous communication. Bounded channels are still synchronous, because there is still synchronization happening; the consumer can't get too far behind the producer.
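For concreteness, a minimal sketch of the distinction being argued about here: an unbuffered Go channel synchronizes sender and receiver on every send, while a buffered channel only lets the sender run ahead by at most the buffer's capacity.

package main

import "fmt"

func main() {
    // Unbuffered channel: a send blocks until a receiver is ready,
    // i.e. communication is synchronous (classic CSP).
    unbuffered := make(chan int)
    go func() { unbuffered <- 1 }() // would deadlock without the goroutine

    // Buffered channel: sends complete without a waiting receiver until
    // the buffer fills, so the producer can only get cap(ch) ahead.
    buffered := make(chan int, 2)
    buffered <- 2
    buffered <- 3 // still fine; a third send would block

    fmt.Println(<-unbuffered, <-buffered, <-buffered)
}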
it wasn't accidental -- it was written on purpose by a programmer (a conversion from Writer to WriteCloser). it was immediately acknowledged as an error and eventually may be caught by the standard code examining tool "vet".
eventually, within the context of the `go vet` tool, the http://godoc.org/code.google.com/p/go.tools/go/types package may be used to analyze interface conversions ("I said I expected interface type A, but I'm using it as interface type B", which is unusual for go programs). i think that answers "yes" to your second question, but I'm not on the go team, so take my opinion with a grain of salt.
short of disallowing interface-to-interface casts, it is indicative of an error and should be vetted as such. the particular case I described earlier was covered by the Go 1.0 guarantee, so it had to be documented rather than fixed.
Hmm. Seems hard to do soundly in the presence of higher order control flow (e.g. pass the interface to a closure in a global variable—will the closure downcast it to another interface you didn't expect?)
Benefits in "performance" is somewhat vague. What performance criteria was measured? Throughput? Memory consumption? Latency? Long tails? Standard deviation? Mean response time?
It's highly unlikely that any language can possibly provide all the above and Erlang makes certain tradeoffs (as does Go). Low level languages like C++ let the user choose the tradeoff at any given point with enough effort.
What was Google's criteria in making such a decision?
When people ask «Why not Erlang instead of Go, Erlang is X and Y and Z...» they seem to be oblivious to the fact that Go is C-like and Erlang has a pretty weird Prolog-like syntax.
I've had some experience with Prolog before touching any Erlang. It's not a syntax or a programming style that I liked, not at all. When I came to do some Erlang [1], I found the same style and it was not a pleasant surprise.
I learned C-like languages first, so maybe that's why my view is flawed. But most people also learn C-like languages as their first language, so it might be that Erlang looks like an ugly beast when they come across it. And thus, as a concurrent language, Go seems to be the first of its kind.
Meanwhile, Go has a very simple style that pretty much everybody can read out of the box.
[1]: CS Games in Canada, one challenge was a 'debugging' competition where programs in ~10 languages had bugs we had to find and fix in 90 mins. One of them was in Go, another one in Erlang. Out of about 20 teams participating, mine and one other (2/20) managed the Erlang one, while the vast majority of the other teams managed to do the Go one. FWIW, this can speak to people's ability to understand Go vs. Erlang.
I guess I learned Erlang as my 15th or 16th programming language, so "syntax" wasn't really a concern; I really was oblivious to it. What's that Matrix quote?--all I see is the AST :)
Still, when I say "Erlang", I don't mean the syntax, I mean mostly the VM and stdlib (the platform semantics, in other words.) You can get those with Elixir or LFE or any number of growing projects, just like you can get Java's platform semantics with Scala or Clojure. Everyone agrees Erlang's syntax is hideous, after all--even its developers.
> they seem to be oblivious to the fact that Go is C-like and Erlang has a pretty weird Prolog-like syntax.
Yeah, it has Prolog-like syntax. However, when designing and working on a large distributed system, looking at just the syntax is kind of shortsighted. The problem is not syntax (which is different, but actually pretty simple, a lot less ambiguous than, say, Javascript, and with fewer "features" than C++); the problem is _semantics_. And by that I mean structuring your programs as a set of multiple concurrent actors. That is the hard part.
Another way to put it. Erlang is probably getting looked at because someone wants to either scale, build a distributed system or a highly fault tolerant system. At that point, if dots vs semicolons is a major stumbling block what are they going to do when they hit a netsplit.
Now, Erlang like any tool has trade-offs. But those are about isolation and private heaps vs raw sequential performance. Hot code upgrades are not going to work well with statically compiled code and pointers referenced everywhere in the code. Stuff like that. Single assignment is also another common one.
(And if syntax is a major stumbling block, there is Elixir, or Lisp-like languages such as LFE and Joxa, all of which take advantage of the actor model and run on the BEAM VM).
One silly yet simple reason could be the Not Invented Here Syndrome. For a company that size relying on Ericsson or Erlang Solutions for support could have been seen as something they didn't want to deal with. So they just wrote their own.
It is too bad though. Erlang, I think, has some features I like better such as hot code reloading, better supervision strategies, separate heap per lightweight process (hence no stop-the-world GC), and is battle tested for longer. Also, at least to me, actors with pattern matched receive, as central building blocks, makes more sense than channels. So I'll keep Erlang as my concurrent server side go-to language for now.
Yes but intentionally so. There is an intrinsic tension between latency and throughput. Erlang chooses willfully to optimize for the former rather than the latter. This works when the majority of the tasks occurring concurrently are typically small and lightweight (aka, a web server).
More than Erlang, I think what Google really wanted was Ada, since speed (Ada can be very fast with low memory usage) and programming at scale were as much concerns as concurrency (both languages take inspiration from CSP). Ada trades verbosity for clarity and rarely matched safety (design by contract, modules and extensive runtime checks). While I've never written a line of it before, proponents of Ada always have interesting things to say, wistfully, about how ahead of its time and slept-on it was (and is).
While touted as a complex and big language when it appeared in the early 80's, it is actually smaller than C++.
The main problems related to its adoption had to do with the price of the compiler systems back in the day and its verbosity for the curly-bracket fans.
Nowadays there is GNAT, but the language ecosystem is very different.
> You know, every time I see some Googler shocked at the effectiveness and various advantages of coding in Go
Me too, but for other reasons.
Many of the Go nice features were already available in other languages back in the 80's and got lost with C and later C++ becoming mainstream.
That normal developers don't know them is understandable, but given the requirements to be part of the Chocolate Factory, it surprises me every time that Googlers don't.
For just one example, Go produces static native binaries, while Erlang produces bytecode for a virtual machine. But the Erlang virtual machine is tiny and it's standard practice (with tool support) to ship it with your application as a "release", so either way you get the effects of having one self-sufficient blob of code in a folder that you can "just run" without having to think about runtimes or libraries.
What I would say is that, for every IO-bound highly-concurrent C++ project Google is rewriting into Go, the same project could have been rewritten into Erlang, and they'd see most of the same advantages relative to C++: better, more transparent concurrency; "batteries included" for solving various client-server and distribution problems; being able to just drop a code package on a server and run it; etc.
You're assuming IO-bound highly-concurrent C++ servers don't have other requirements besides those two. Maybe it's IO-bound highly concurrent text processing. Erlang will suck at this despite the two pieces it's excellent at. Go is pretty fast at processing text, and Google does a lot of text processing.
What exactly does "text processing" mean, by the way? Erlang is very good at processing streams of bytes--you can pattern match on binaries to get new sub-binaries (which are basically equivalent to Go's array slices) to pass around, etc. It just gets awkward when you have to convert those streams into codepoints to do case-insensitive comparisons and such.
But to reply more directly, "IO-bound" means something specific--that the problem will be using a negligible amount of CPU, no matter what constant coefficient of overhead the language adds, and so scaling will never be a problem of "oops it's using too much CPU to do the text processing" but rather "oops the gigabit links are saturated we need to add more boxes."
You answered your own question when you acknowledged the areas Erlang gets awkward in. Google supports more than 50 different languages. Doing that requires performant analysis not just of bytes but of Unicode codepoints.
Collectively, google believes they are right and the world is wrong. Anything pre-existing is dirty and unworthy of their genius if they didn't invent it themselves.
So, even though we have 20 years of Erlang and production concurrency experience out there in one solid language, it's just ignored (except for parts they want to get "inspired by").
All of that is fine in isolation, but then the fadsters jump in. You know who they are. They're the people who live to consume fads and only do the latest thing without any consideration to what came before. Soon you'll have thousands of blog posts about how Go is changing the future of programming because they invented VM-scheduled lightweight green threads load balanced over your CPU topology.
> Collectively, google believes they are right and the world is wrong. Anything pre-existing is dirty and unworthy of their genius if they didn't invent it themselves.
Someone a bit more objective might say "Google believes that it's quite hard to integrate third-party software into their huge existing infrastructure and make it work at their enormous scale."
They might also say "Google believes the infrastructure they use is the right choice for their business needs, and Googlers like to tell the world about some of it, in case it's the right choice for them, too."
But I'm a Googler and you clearly have an axe to grind, so it's unlikely we're going to agree.
I used to work at a place that tried to reinvent everything from the wheel up, too. Over time I came to realize that avoiding third party libraries and technology stacks was more about long term support and hackability than anything else.
Unfortunately, newer recruits never realize this and accept the internal stack as religion. They never bother to learn third party alternatives.
We had a joke that went: "If you were ever fired and had to find a job elsewhere, you'll have to start by implementing your own X", X being a heavily used internal library that allowed an entire generation of developers to do certain things without ever knowing the underlying system calls.
Google is heavily dependent on Java, Linux, Python, C++ etc. About two seconds of thought is all it takes to realize what an absurd claim this is.
Google is almost alone in building internet services at its scale. The people calling for adoption of exotic tech like Erlang without understanding their unique requirements are the fadsters.
"Exotic tech" is the most amusing ad-hominem insult I've heard in a while. Erlang has more users than Go, at least, and more companies you could name off the top of your head have Erlang deployed somewhere (Github and Heroku, for just two.)
Also, Google doesn't have "unique requirements." They have a unique set of overlapping, pretty common requirements.
Some requirements in that set (e.g. serving data on dl.google.com) could be solved perfectly well by Erlang, and could have been for years, but can also now be solved perfectly well by Go. That they are now being solved by Go is likely an effect of the "Golang advocacy group" that has formed at Google.
Others can't be solved by Erlang, but can be solved by Go (e.g. CPU-bound matrix-multiplications for PageRank index calculation), or vice-versa (e.g. deploying new Google Talk daemon code without dropping the XMPP connection.)
I'm not saying Google could have used Erlang for everything. I'm not even saying there's not a place for Go. Just that Erlang has been around for a long time filling nearly the same niche as Go, and if Google are really rewriting all this software for the sake of switching to a language that is more apt for the problem-domain, then they could have done that years ago, without having to develop their own little language to do it with first.
Plenty of sophisticated users have taken a long hard look at Erlang and chosen other tech, for a variety of good reasons. I guess Twitter is a bunch of NIH bumblers for passing on it as well?
Go clearly has a pretty different set of design priorities, not the least of which are static typing and native code compilation. I'm inclined to give the people running Google's infrastructure the benefit of the doubt in making these choices.
Hey, don't group me in together with the downvoted guy here. I agree that Google have their own reasons to pick their own tech.
I'm just saying that Erlang was probably a better solution than C++ for some of the things they were doing, and they could have switched to it years and years ago. They might then have created Go, and switch from Erlang to Go for those same projects. There'd be nothing wrong with that. I'm just surprised they were using C++ of all things to begin with, before rewriting in Go.
I would say Google has a couple of fairly uncommon requirements: ridiculous scale and the fact that even a brief outage is world news.
In terms of "rewriting all this software", I wouldn't say it's at all for the sake of switching to Go. It would be more accurate to say "well, we need to rewrite this thing anyway because it's no longer scalable or maintainable. Let's give Go a shot instead of C++/Java)"
Fairly uncommon compared to your everyday website sure. But I can think of hundreds of companies that have ridiculous scale and brief outages would be newsworthy:
Airline bookings, Stock exchanges, Betting markets, Postal services, Major websites (Facebook, Twitter, Pinterest, LinkedIn), Online Games, Video services, Payment gateways, Banks etc.
Scale in and of itself isn't the basis of a sufficient argument though. You can write software at that scale in Visual Basic if you have billions of hexcore machines at your disposal.
Well, they wrote their own infringing Java VM for mobile, tried to fix Python (but it didn't work out), essentially run Google Linux internally with various levels of contribution back upstream, and their C++ is (I'm just guessing here) nigh unreadable by non-übernerds.
"and their C++ is (I'm just guessing here) nigh unreadable by non-übernerds."
Actually Google's C++ codebase is some of the most carefully written and extensively commented code I've ever had the pleasure to work on. Of course, a large amount of Google's C++ code is open source (Chromium, leveldb etc.) and you're free to read it and form informed opinions instead of guessing.
I did, and to be honest it is not much different than Microsoft with Visual J++.
Without the "Google is cool glasses" on, I came to the conclusion that Google just took enough care to avoid all the legal traps that could make them loose a suit like it happened to Microsoft.
> Google totally slimed Sun. We were all really disturbed, even Jonathan: he just decided to put on a happy face and tried to turn lemons into lemonade, which annoyed a lot of folks at Sun.
Disclosure: I work for Microsoft but that is a very new thing (the J++ dispute was many years ago)
IIRC, the J++ suit had to do with the specific terms of a license agreement that Sun and Microsoft had entered into. I don't think it was much like this where Google claimed to have done a non-infringing clean-room reimplementation claiming this didn't require a license from Sun/Oracle at all.
Google did a clean room implementation, while making a clear distinction between Java the language and Java the VM, and while avoiding any kind of public statement that could violate the Java trademark licensing.
So now you have an environment where Java the language version 6 can be used, while Java the language version 8 is going to appear next year without any signs of ever appearing in Android.
Now, quite possibly as consequence of the litigation, Java developers targeting Android have to live with Java the language version 6 forever.
The end result is no different than the fragmentation Microsoft attempted with J++, but since it is Google, it is ok to do so.
Yeah. My sense is that Android developers aren't nearly as interested in 'Java the write once run everywhere platform' as developers who were adopting Java in the 1990s. So, like Apple and Objective-C, mobile developers are just willing to go where the platform takes them rather than push for evolution of the language per se.
Maybe it's just that mobile apps are much smaller than the monsters enterprise devs need language help to manage.
Speaking as a sometime Android developer, I use Java for Android because I have to, not because I'm particularly fond of Java. The biggest upside of a JVM on Android in this respect is that I have alternative languages like Kotlin available.
Dart is excellent in a few cases on the client: first, if you can forget about older browsers; second, if you don't need to use a lot of pre-existing JavaScript libs in your app; and third, if you don't need to use the web control on iOS. Dart as a Chrome Packaged App should shine. I really enjoy developing in it. I don't have any experience with it on the server, so couldn't say there.
> Go's type inference makes for lean code, but requires you to dig a little to figure out what the type of a given variable is if it's not explicit. So given code like:
foo, bar := someFunc(baz)
> You'd really like to know what foo and bar actually are, in case you want to add some new code to operate on them.
The gocode utility (which works with several editors) does exactly this.
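And if you want the types spelled out in the source rather than recovered by tooling, nothing stops you from declaring them explicitly; the signature assumed for someFunc below is hypothetical, just to make the example concrete.

// Instead of: foo, bar := someFunc(baz)
var (
    foo *Widget // assuming someFunc returns (*Widget, error)
    bar error
)
foo, bar = someFunc(baz)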
It disheartens me a bit when people compare Go with C++. It gave me the impression that Go was created to be a better C++ (or Java), mostly to make life easier for the thousands of developers at Google who are stuck with those two languages.
That is at least partially the case. The other part of the story is that no existing languages were specifically designed for programming at a large scale.
If you use Scala in a way that has not a whole lot of extra cognitive load, it's a slightly prettier re-skin of Java. If you use it like it's Haskell, it has a cognitive load that's as bad as Haskell (or worse). And if you inherit a codebase from someone else, you can bet they've been tempted to be oh-so-clever. That's why I'd never let Scala near production.
Go has almost no cognitive load beyond the complexity of the algorithm itself.
It sounds to me like you're lumping things like writing for loops and mutating state in with the inherent complexity of the algorithm. I think reducing the ability to create abstractions like generic map/filter increases cognitive load. Reducing cognitive load is the whole reason I'm learning Haskell and currently use Clojure.
As with most programming languages, there is a small Scala following internally, but it's not a production language.
I don't know anything about any discussions about Scala. If I was thinking of deploying Scala, I'd be worried that Scala just doesn't offer enough over Java to warrant writing in a new language. Writing in Scala means not just that you have to know it, but that everyone who inherits it, and everyone who has to interoperate with it, has to as well. The overhead is just too great to make a good case for writing something in Scala.
Ken Thompson (a Go lead) has held hostility towards C++ that is well-known and of long standing. (To wit, at the ACM Turing Centennial last year: 'I'm not sure what OO is supposed to look like, but I do know for sure it's not anything like C++.' (not verbatim, but accurate in the sentiment he expressed.)) As noted below, Go was indeed designed to address perceived shortcomings of C++, among other goals.
Go is a nice language with awesome features except that it doesn't fucking allow unused imports/variables. It makes exploratory programming with Go annoying as hell.
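For what it's worth, the usual workaround during exploratory hacking is the blank identifier, which keeps the compiler happy until you wire the code back up. A small sketch (the helper function is made up):

package main

import (
    "fmt"
    _ "net/http" // blank import: keeps an import you aren't using yet
)

func main() {
    debug := computeSomething()
    _ = debug // blank assignment: silences "declared and not used" while experimenting
    fmt.Println("still exploring")
}

func computeSomething() int { return 42 }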
Interesting that Go's lack of HM type inference is that problematic. In Rust or Haskell, one could simply use that value further on with some type and have it inferred correctly.
OP here. I do actually use vim but I have yet to adopt most of the fancy plugins that vim provides -- put me in a time machine back to 1977 and I would be very capable of programming on any UNIX system that you'd drop me in front of, provided it had vi installed. I'm not defending this choice of lifestyle; it's just how I learned to program :-)
Even in the 2010s I've found proponents of the UNIX console style of programming because, as you say, they can be dropped in front of any similar system and off they go.
However, tools improve and you're denying yourself productivity benefits. One of the advantages of a statically typed language is that the computer knows the type of the arguments. I love environments that allow me to 'mouse over' a function or variable etc. and tell me exactly what it is (and allow me to navigate to its definition). You don't have to use a mouse but what's the disadvantage in having alternatives to keyboard shortcuts where appropriate?
I would love to open source this system, but even if we did I'm not sure we'd be able to convince you that using Go was better than some other language in terms of developer productivity. How would having access to the source help?
You mentioned in the article that you read a section of code by one of the lead engineers and immediately you became convinced that using Go is the "way-to-go". Well, I would like to experience that eureka moment as well.
Good point. Maybe you could check out one of the many other open source Go projects out there as an alternative. I'm not sure reading our code would give you that eureka moment, since it only made sense to me since I was so familiar with the old code :-)
I'm not sure anyone is asking you to "validate the claims". If you'd like some reassurance that OP knows what he's talking about, you might consider that Harvard tends not to hire stupid professors and then give them tenure.
I am well aware of Matt Welsh's reputation and what institution he was from. However, even Einstein or Feynman's scientific papers are subjected to peer review.
"Before doing the rewrite, we realized we needed only a small subset of the functionality of the original system -- perhaps 20% (or less) of what the other projects were doing with it."
I'm guessing that a lot of the benefit of the rewrite came simply from the simplification of the core logic and dropping extraneous functionality. That said, having written a little bit of both C++ and Go, I can completely see why the author found that Go was both far more readable and more maintainable than C++.