
You know, every time I see some Googler shocked at the effectiveness and various advantages of coding in Go, I wonder why Google never adopted Erlang. They could have been getting all these same advantages (and then some) a decade ago :)



(full disclosure: I work at google and also like erlang)

Erlang has fantastic facilities for robustness and concurrency. What it does not have is type safety, and it's terrible at handling text in a performant fashion. So if you don't care about either of those things and only care about robustness and concurrency, then Erlang is great. There were internal discussions about Erlang here, but the upshot was: we had already basically duplicated Erlang's supervision model in our infrastructure, only we did it for all languages, and Erlang didn't offer any performance benefits for us. Its only benefit would have been the concurrency model. That's much less benefit than Go gives.

Go gives you Erlang's concurrency model, a similar philosophy of robustness, type safety, batteries included, and performance. Equating the two languages works in one or two dimensions, but not on all the dimensions Google cares about.


Interesting, thanks for that; it's pretty much what I guessed (especially the bit about the supervision tree and hot-code-upgrade advantages being mooted by your infrastructure.)

On a tangent, though:

> What it does not have is type safety

I've tried to work this out before (I'm designing a new language for Erlang's VM), but as far as I can tell, type safety is in practice incompatible with VM-supported hot code upgrade.

If you have two services, A and B, and you need to upgrade them both, but you can't "stop the world" to do an atomic upgrade of both A and B together (because you're running a distributed soft real-time system, after all), then you need to switch out A, and then switch out B.

So, at some point, on some nodes, A will be running a version with an ABI incompatible with B. In a strongly-typed system, the VM wouldn't allow A's new code to load, since it refers to functions in B with type signatures that don't exist.

On the other hand, in a system with pattern-matching and a "let it crash" philosophy, you just let A's new code start up and repeatedly try-and-fail to communicate with B for a while, until B's code gets upgraded as well--and now the types are compatible again.

It's an interesting problem.
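
For what it's worth, here's a rough Go-flavored sketch of that "just keep retrying until the other side catches up" shape. Everything here (callB, errBadVersion, the simulated upgrade after a few attempts) is made up purely for illustration; the point is only the retry loop:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // Hypothetical sentinel: "the peer doesn't understand this request format yet".
    var errBadVersion = errors.New("peer speaks an incompatible version")

    // callB stands in for whatever call A makes to B. Here we pretend B
    // gets upgraded after a couple of attempts.
    var attempts int

    func callB(req string) (string, error) {
        attempts++
        if attempts < 3 {
            return "", errBadVersion
        }
        return "ok: " + req, nil
    }

    // The "let it fail and retry" shape: fail, back off, try again until
    // B has been upgraded and the two sides are compatible again.
    func callUntilCompatible(req string) string {
        for {
            resp, err := callB(req)
            if err == nil {
                return resp
            }
            if err == errBadVersion {
                fmt.Println("B not upgraded yet, retrying...")
                time.Sleep(100 * time.Millisecond)
                continue
            }
            panic(err) // anything else really is unexpected
        }
    }

    func main() {
        fmt.Println(callUntilCompatible("hello"))
    }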


> type safety is in practice incompatible with VM-supported hot code upgrade.

That's not true.

First, it's very easy to hot reload code changes that are backward compatible. The JVM spec describes in very specific detail what that means (adding or removing a method is not backward compatible, modifying a method body is, etc.).

This is how HotSwap works; the JVM has supported it for years.

As for changes that are backward incompatible, you can still manage them with application-level techniques, such as rolling out servers or simply allowing two different versions of the class to exist at the same time (JRebel does that, as do a few other products in the JVM ecosystem).

Erlang doesn't really have any advantages over statically typed systems in the hot reload area, and its lack of static typing is a deal breaker for pretty much any serious production deployment.


> lack of static typing is a deal breaker for pretty much any serious production deployment.

Are you talking about Google only, where they made it a mandate, or in general? There are serious production deployments on Python, Ruby, Erlang and JavaScript.

I will take the expressiveness and fewer lines of code of a strongly but dynamically typed language plus tests over a statically typed language with more lines of code, all else being equal.

Or to put it another way: if static typing is the main thing protecting against faults and crashes in production, there is a more serious issue that needs to be addressed (just my 2 cents).


> As for changes that are backward incompatible, you can still manage them with application-level techniques, such as rolling out servers or simply allowing two different versions of the class to exist at the same time (JRebel does that, as do a few other products in the JVM ecosystem).

Neither of these allows for the whole reason Erlang has hot code upgrade in the first place: being able to upgrade the code on one side of a TCP connection without dropping the connection to the other side. Tell me how to do that with a static type system :)


Tomcat (and other app servers) has support for doing hot reloads of Java web apps while not reloading the HTTP layer (and not dropping TCP connections).

http://www.javacodegeeks.com/2011/06/zero-downtime-deploymen...

I have implemented a similar system for JRuby apps running inside a Servlet container. There are many caveats. I don't actually recommend it because for a while you're using nearly twice the memory (and JRuby is particularly memory hungry). Also there are many ways to leak the old class definitions such that they are not GC'd (e.g. thread locals). But it's certainly possible.

I suspect that Erlang, Java, and all languages are in the same boat: some parts can be upgraded live in the VM while other parts require a full restart (maybe coordinating with multiple nodes and a load balancer to achieve zero-downtime).


Erlang is not in that boat. Generally, you can upgrade the entire thing (except the lowest level libraries obviously) without too much fuss.


Out of curiosity, where/why would such an exotic feature be needed in today's internet architectures, where you always front a group of servers with a load balancer?


Not all Internet protocols are HTTP. If you're running a service where long-lived connections are the norm, "simply fronting a bunch of servers with a load balancer" can require a pretty smart load balancer. E.g. IMAP connections often last hours or even days, and are required to maintain a degree of statefulness.


Not everything is a website! Also, not everything is stateless. Consider writing a chat application for the web, for example, and letting users on one page communicate with users on another.


> Go gives you Erlang's concurrency model

There are a number of significant differences between Erlang's and Go's concurrency models: Asynchronous vs synchronous communication, per-thread vs per-process heaps, send to process vs send to channel.


Go has asynchronous communication and synchronous communication.

And the other things you mention are in practice not significant differences. The model both use is based on Hoare's CSP, and the same general ways of using it apply. Some of the specifics must accommodate differences, but those are differences of implementation, not of the general model.
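
To make that concrete, a trivial sketch of the two kinds of channel in Go: an unbuffered channel forces the sender and receiver to rendezvous, while a buffered one lets the sender run ahead, but only until the buffer fills:

    package main

    import "fmt"

    func main() {
        // Unbuffered: a send blocks until someone is ready to receive,
        // so the two goroutines rendezvous (synchronous, CSP-style).
        sync := make(chan string)
        go func() { sync <- "hello" }() // blocks until main receives
        fmt.Println(<-sync)

        // Buffered: sends complete immediately while there is room,
        // which behaves like asynchronous, mailbox-ish communication...
        async := make(chan int, 2)
        async <- 1
        async <- 2
        // ...but a third send here would block until a receive frees a
        // slot, so the producer can never get more than 2 items ahead.
        fmt.Println(<-async, <-async)
    }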


No, they are different models [1]. Go does not have asynchronous communication. Bounded channels are still synchronous, because there is still synchronization happening; the consumer can't get too far behind the producer.

[1] https://en.wikipedia.org/wiki/Communicating_sequential_proce....


Go doesn't have strong type safety either; I remember a recent story about a Go stdlib function "accidentally" converting to an interface it shouldn't have.


It wasn't accidental -- it was written on purpose by a programmer (a conversion from Writer to WriteCloser). It was immediately acknowledged as an error and may eventually be caught by the standard code-examining tool "vet".
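
For anyone who hasn't seen this failure mode, here's an illustrative sketch (not the actual stdlib code) of what such a conversion looks like: the compiler happily accepts an interface-to-interface assertion, and it only falls over at runtime when the concrete value doesn't actually implement the larger interface:

    package main

    import (
        "bytes"
        "fmt"
        "io"
        "os"
    )

    // closeIfPossible takes an io.Writer and then assumes at runtime that
    // it is "really" an io.WriteCloser.
    func closeIfPossible(w io.Writer) {
        wc, ok := w.(io.WriteCloser) // interface-to-interface conversion
        if !ok {
            fmt.Println("not a WriteCloser, leaving it alone")
            return
        }
        fmt.Println("got a WriteCloser, closing it")
        wc.Close()
    }

    func main() {
        closeIfPossible(&bytes.Buffer{}) // has Write but no Close: ok == false
        closeIfPossible(os.Stdout)       // *os.File has Close: this one gets closed
        // The dangerous variant is w.(io.WriteCloser) without the ", ok",
        // which panics at runtime instead -- the compiler can't catch it.
    }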


What would the static analysis that "vet" is performing enforce to stop this? No interface-to-interface downcasts?


Eventually, within the context of the `go vet` tool, the http://godoc.org/code.google.com/p/go.tools/go/types package may be used to analyze interface conversions ("I said I expected interface type A, but I'm using it as interface type B", which is unusual for Go programs). I think that answers "yes" to your second question, but I'm not on the Go team, so take my opinion with a grain of salt.

Short of disallowing interface-to-interface casts, it is indicative of an error and should be vetted as such. The particular case I described earlier was covered by the Go 1.0 guarantee, so it had to be documented rather than fixed.


Hmm. Seems hard to do soundly in the presence of higher-order control flow (e.g. pass the interface to a closure in a global variable -- will the closure downcast it to another interface you didn't expect?)
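
Right -- something like this contrived sketch (the global sink and its closure are invented just to show the shape of the problem): locally, handOff only appears to use its argument as a Writer, but the closure it passes the value to downcasts it to something else entirely:

    package main

    import (
        "io"
        "os"
    )

    // A global holding a closure; whoever assigned it decides what it does.
    var sink func(io.Writer)

    func init() {
        sink = func(w io.Writer) {
            // The surprise downcast: nothing at handOff's call site hints at this.
            if wc, ok := w.(io.WriteCloser); ok {
                wc.Close()
            }
        }
    }

    func handOff(w io.Writer) {
        // This function never names io.WriteCloser, so a purely local
        // analysis can't see that w may get closed via sink.
        sink(w)
    }

    func main() {
        handOff(os.Stdout)
    }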


"Benefits in performance" is somewhat vague. What performance criteria were measured? Throughput? Memory consumption? Latency? Long tails? Standard deviation? Mean response time?

It's highly unlikely that any language can possibly provide all of the above, and Erlang makes certain tradeoffs (as does Go). Low-level languages like C++ let the user choose the tradeoff at any given point, with enough effort.

What were Google's criteria in making such a decision?


When people ask «Why not Erlang instead of Go? Erlang is X and Y and Z...», they seem to be oblivious to the fact that Go is C-like and Erlang has a pretty weird Prolog-like syntax.

I've had some experience with Prolog before touching any Erlang. It's not a syntax or a programming style that I liked, not at all. When I came to do some Erlang [1], I found the same style and it was not a pleasant surprise.

I learned C-like languages first, so maybe that's why my view is flawed. But most people also learn C-like languages as their first language, so it might be that Erlang looks like an ugly beast when they come across it. And thus, as a concurrent language, Go seems like the first of its kind.

Meanwhile, Go has a very simple style that pretty much everybody can read out of the box.

[1]: CS Games in Canada, one challenge was a 'debugging' competition where programs in ~10 languages had bugs we had to find and fix in 90 minutes. One of them was in Go, another one in Erlang. Out of about 20 teams participating, only mine and one other (2/20) managed the Erlang one, while the vast majority of the other teams managed the Go one. FWIW, this may speak to people's ability to understand Go vs. Erlang.


I guess I learned Erlang as my 15th or 16th programming language, so "syntax" wasn't really a concern; I really was oblivious to it. What's that Matrix quote?--all I see is the AST :)

Still, when I say "Erlang", I don't mean the syntax, I mean mostly the VM and stdlib (the platform semantics, in other words.) You can get those with Elixir or LFE or any number of growing projects, just like you can get Java's platform semantics with Scala or Clojure. Everyone agrees Erlang's syntax is hideous, after all--even its developers.


> they seem to be oblivious to the fact that Go is C-like and Erlang has a pretty weird Prolog-like syntax.

Yeah, it has Prolog-like syntax. However, when designing and working on a large distributed system, looking at just the syntax is kind of shortsighted. The problem is not syntax (which is different, but actually pretty simple, and a lot less ambiguous than, say, JavaScript, with fewer "features" than C++); the problem is _semantics_. And by that I mean structuring your programs as a set of multiple concurrent actors. That is the hard part.

Another way to put it: Erlang is probably getting looked at because someone wants to scale, build a distributed system, or build a highly fault-tolerant system. At that point, if dots vs. semicolons is a major stumbling block, what are they going to do when they hit a netsplit?

Now, Erlang, like any tool, has trade-offs. But those are about things like isolation and private heaps vs. raw sequential performance. Hot code upgrades are not going to work well alongside statically compiled code with pointers referenced everywhere. Stuff like that. Single assignment is another common one.

(And if syntax is a major stumbling block, there are Elixir or Lisp-like languages such as LFE and Joxa, all of which take advantage of the actor model and run on the BEAM VM.)


One silly yet simple reason could be Not Invented Here syndrome. For a company that size, relying on Ericsson or Erlang Solutions for support could have been seen as something they didn't want to deal with. So they just wrote their own.

It is too bad though. Erlang, I think, has some features I like better, such as hot code reloading, better supervision strategies, a separate heap per lightweight process (hence no stop-the-world GC), and it has been battle-tested for longer. Also, at least to me, actors with pattern-matched receive as central building blocks make more sense than channels. So I'll keep Erlang as my concurrent server-side go-to language for now.


The author did say CPU efficiency is key, and Erlang isn't exactly known for its raw speed.

In general search is very CPU intensive, maybe that's why Google never adopted Erlang.


Yes, but intentionally so. There is an intrinsic tension between latency and throughput, and Erlang willfully chooses to optimize for the former rather than the latter. This works when the majority of the tasks occurring concurrently are typically small and lightweight (e.g. a web server).


More than Erlang, I think what Google really wanted was Ada, since speed (Ada can be very fast with low memory usage) and programming at scale were as much concerns as concurrency (both languages take inspiration from CSP). Ada trades verbosity for clarity and rarely-matched safety (design by contract, modules, and extensive runtime checks). While I've never written a line of it, proponents of Ada always have interesting things to say, wistfully, about how ahead of its time and slept-on it was/is.


While touted as a complex and big language when it appeared in the early 80s, it is actually smaller than C++.

The main problems related to its adoption had to do with the price of the compiler systems back in the day and its verbosity for the curly-bracket fans.

Nowadays there is GNAT, but the language ecosystem is very different.


One of the big projects they're working on in the next version of Go is to make the internal scheduler more Erlang-esque =)


Yep, full preemptive scheduler, I am very excited about it!


> You know, every time I see some Googler shocked at the effectiveness and various advantages of coding in Go

Me too, but for other reasons.

Many of Go's nice features were already available in other languages back in the 80s and got lost as C, and later C++, became mainstream.

That ordinary developers don't know them is understandable, but given the requirements to become part of the Chocolate Factory, it surprises me every time that Googlers don't.


Are Go and Erlang that fungible with each other? Honest question.


Not in theory, but yes in practice.

For just one example, Go produces static native binaries, while Erlang produces bytecode for a virtual machine. But the Erlang virtual machine is tiny and it's standard practice (with tool support) to ship it with your application as a "release", so either way you get the effects of having one self-sufficient blob of code in a folder that you can "just run" without having to think about runtimes or libraries.

What I would say is that, for every IO-bound highly-concurrent C++ project Google is rewriting into Go, the same project could have been rewritten into Erlang, and they'd see most of the same advantages relative to C++: better, more transparent concurrency; "batteries included" for solving various client-server and distribution problems; being able to just drop a code package on a server and run it; etc.


You're assuming IO-bound, highly-concurrent C++ servers don't have other requirements besides those two. Maybe it's IO-bound, highly concurrent text processing. Erlang will suck at this despite the two pieces it's excellent at. Go is pretty fast at processing text, and Google does a lot of text processing.


What exactly does "text processing" mean, by the way? Erlang is very good at processing streams of bytes--you can pattern match on binaries to get new sub-binaries (which are basically equivalent to Go's array slices) to pass around, etc. It just gets awkward when you have to convert those streams into codepoints to do case-insensitive comparisons and such.
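
To illustrate the Go side of that comparison (a toy sketch, nothing more): sub-slicing a []byte just creates a new view over the same backing array, much like an Erlang sub-binary, and the byte-level operations stay cheap; the extra machinery only shows up once you need codepoint-aware behaviour:

    package main

    import (
        "bytes"
        "fmt"
    )

    func main() {
        packet := []byte("HEADERpayload-bytes")

        // Sub-slices share the original backing array: no bytes are copied,
        // which is the same property Erlang's sub-binaries give you.
        header, body := packet[:6], packet[6:]
        fmt.Printf("%s | %s\n", header, body)

        // Byte-oriented checks stay simple and fast...
        fmt.Println(bytes.HasPrefix(packet, []byte("HEADER")))

        // ...while Unicode-aware work (case folding here) is where
        // converting to codepoints comes in.
        fmt.Println(bytes.EqualFold([]byte("HEADER"), []byte("header")))
    }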

But to reply more directly, "IO-bound" means something specific--that the problem will be using a negligible amount of CPU, no matter what constant coefficient of overhead the language adds, and so scaling will never be a problem of "oops it's using too much CPU to do the text processing" but rather "oops the gigabit links are saturated we need to add more boxes."


Google has both CPU-bound and IO-bound problems at massive scales. A language that is not highly performant in either area would be insufficient.


You answered your own question when you acknowledged the areas Erlang gets awkward in. Google supports more than 50 different languages. Doing that requires performant analysis not just of bytes but of Unicode codepoints.


Not particularly. They are basically just both languages that were inspired by CSP; the similarities don't go much deeper.


A minor correction. Erlang was inspired by actors while Go is inspired by CSP. Both models share a lot but have a few distinguishing factors in how they handle communication. http://en.wikipedia.org/wiki/Communicating_sequential_proces..., http://en.wikipedia.org/wiki/Actor_model_and_process_calculi


Erlang is slow


> I wonder why Google never adopted Erlang.

Collectively, Google believes they are right and the world is wrong. Anything pre-existing is dirty and unworthy of their genius if they didn't invent it themselves.

So, even though we have 20 years of Erlang and production concurrency experience out there in one solid language, it's just ignored (except for parts they want to get "inspired by").

All of that is fine in isolation, but then the fadsters jump in. You know who they are. They're the people who live to consume fads and only do the latest thing, without any consideration of what came before. Soon you'll have thousands of blog posts about how Go is changing the future of programming because they invented VM-scheduled lightweight green threads load-balanced over your CPU topology.


> Collectively, Google believes they are right and the world is wrong. Anything pre-existing is dirty and unworthy of their genius if they didn't invent it themselves.

Someone a bit more objective might say "Google believes that it's quite hard to integrate third-party software into their huge existing infrastructure and make it work at their enormous scale."

They might also say "Google believes the infrastructure they use is the right choice for their business needs, and Googlers like to tell the world about some of it, in case it's the right choice for them, too."

But I'm a Googler and you clearly have an axe to grind, so it's unlikely we're going to agree.


I used to work at a place that tried to reinvent everything from the wheel up, too. Over time I came to realize that avoiding third party libraries and technology stacks was more about long term support and hackability than anything else.

Unfortunately, newer recruits never realize this and accept the internal stack as religion. They never bother to learn third party alternatives.

We had a joke that went: "If you were ever fired and had to find a job elsewhere, you'll have to start by implementing your own X", X being a heavily used internal library that allowed an entire generation of developers to do certain things without ever knowing the underlying system calls.


Google is heavily dependent on Java, Linux, Python, C++ etc. About two seconds of thought is all it takes to realize what an absurd claim this is.

Google is almost alone in building internet services at its scale. The people calling for adoption of exotic tech like Erlang without understanding their unique requirements are the fadsters.


"Exotic tech" is the most amusing ad-hominem insult I've heard in a while. Erlang has more users than Go, at least, and more companies you could name off the top of your head have Erlang deployed somewhere (Github and Heroku, for just two.)

Also, Google doesn't have "unique requirements." They have a unique set of overlapping, pretty common requirements.

Some requirements in that set (e.g. serving data on dl.google.com) could be solved perfectly well by Erlang, and could have been for years, but can also now be solved perfectly well by Go. That they are now being solved by Go is likely an effect of the "Golang advocacy group" that has formed at Google.

Others can't be solved by Erlang, but can be solved by Go (e.g. CPU-bound matrix-multiplications for PageRank index calculation), or vice-versa (e.g. deploying new Google Talk daemon code without dropping the XMPP connection.)

I'm not saying Google could have used Erlang for everything. I'm not even saying there's not a place for Go. Just that Erlang has been around for a long time filling nearly the same niche as Go, and if Google are really rewriting all this software for the sake of switching to a language that is more apt for the problem-domain, then they could have done that years ago, without having to develop their own little language to do it with first.


Which hominem am I ad-ing exactly here?

Plenty of sophisticated users have taken a long hard look at Erlang and chosen other tech, for a variety of good reasons. I guess Twitter is a bunch of NIH bumblers for passing on it as well?

Go clearly has a pretty different set of design priorities, not the least of which are static typing and native code compilation. I'm inclined to give the people running Google's infrastructure the benefit of the doubt in making these choices.


Hey, don't group me in together with the downvoted guy here. I agree that Google have their own reasons to pick their own tech.

I'm just saying that Erlang was probably a better solution than C++ for some of the things they were doing, and they could have switched to it years and years ago. They might then have created Go, and switched from Erlang to Go for those same projects. There'd be nothing wrong with that. I'm just surprised they were using C++ of all things to begin with, before rewriting in Go.


I would say Google has a couple of fairly uncommon requirements: ridiculous scale and the fact that even a brief outage is world news.

In terms of "rewriting all this software", I wouldn't say it's at all for the sake of switching to Go. It would be more accurate to say "well, we need to rewrite this thing anyway because it's no longer scalable or maintainable; let's give Go a shot instead of C++/Java."


Fairly uncommon compared to your everyday website, sure. But I can think of hundreds of companies that have ridiculous scale and where a brief outage would be newsworthy:

Airline bookings, Stock exchanges, Betting markets, Postal services, Major websites (Facebook, Twitter, Pinterest, LinkedIn), Online Games, Video services, Payment gateways, Banks etc.


Scale in and of itself isn't the basis of a sufficient argument though. You can write software at that scale in Visual Basic if you have billions of hexcore machines at your disposal.


Well, they wrote their own infringing Java VM for mobile, tried to fix Python (but it didn't work out), essentially run Google Linux internally with various levels of contribution back upstream, and their C++ is (I'm just guessing here) nigh unreadable by non-übernerds.


"and their C++ is (I'm just guessing here) nigh unreadable by non-übernerds."

Actually, Google's C++ codebase is some of the most carefully written and extensively commented code I've ever had the pleasure to work on. Of course, a large amount of Google's C++ code is open source (Chromium, LevelDB, etc.) and you're free to read it and form informed opinions instead of guessing.


Wrote an "infringing Java VM" now did they?

Did you actually read the court's ruling on that?


I did, and to be honest it is not much different than Microsoft with Visual J++.

Without the "Google is cool" glasses on, I came to the conclusion that Google just took enough care to avoid all the legal traps that could make them lose a suit like the one that happened to Microsoft.


The president of Sun himself praised Google's Dalvik initiative. What legal trap were they trying to avoid here?

http://www.techdirt.com/articles/20110724/11263315224/oracle...


Not having enough money to sue them, so the alternative was praise.

From James Gosling himself,

http://nighthacks.com/roller/jag/entry/my_attitude_on_oracle...

> Google totally slimed Sun. We were all really disturbed, even Jonathan: he just decided to put on a happy face and tried to turn lemons into lemonade, which annoyed a lot of folks at Sun.


Disclosure: I work for Microsoft but that is a very new thing (the J++ dispute was many years ago)

IIRC, the J++ suit had to do with the specific terms of a license agreement that Sun and Microsoft had entered into. I don't think it was much like this case, where Google claimed to have done a non-infringing clean-room reimplementation that didn't require a license from Sun/Oracle at all.


Yes, this is what I mean by taking care.

Google did a clean-room implementation, making a clear distinction between Java the language and Java the VM, while avoiding any kind of public statement that could violate the Java trademark licensing.

So now you have an environment where version 6 of Java the language can be used, while version 8 of Java the language is going to appear next year with no sign of it ever appearing in Android.

Now, quite possibly as consequence of the litigation, Java developers targeting Android have to live with Java the language version 6 forever.

The end result is no different than the fragmentation Microsoft attempted with J++, but since it is Google, it is ok to do so.


Yeah. My sense is that Android developers aren't nearly as interested in 'Java the write-once-run-everywhere platform' as the developers who were adopting Java in the 1990s. So, like Apple and Objective-C, mobile developers are just willing to go where the platform takes them rather than push for evolution of the language per se.

Maybe it's just that mobile apps are much smaller than the monsters enterprise devs need language help to manage.


Speaking as a sometime Android developer, I use Java for Android because I have to, not because I'm particularly fond of Java. The biggest upside of a JVM on Android in this respect is that I have alternative languages like Kotlin available.


> Now, quite possibly as consequence of the litigation, Java developers targeting Android have to live with Java the language version 6 forever.

I suspect this was one of the motivations behind Dart.


Actually, I did wonder about that at this year's Google I/O, as it was hinted that Dart is being used for some projects that would be revealed in due time.

That is actually the only way I see any future for Dart.


Dart is excellent in a few cases on the client: first, if you can forget about older browsers; second, if you don't need to use a lot of pre-existing JavaScript libs in your app; and third, if you don't need to use the web control on iOS. Dart as a Chrome Packaged App should shine. I really enjoy developing in it. I don't have any experience with it on the server, so I couldn't say there.


> their C++ is (I'm just guessing here) nigh unreadable by non-übernerds.

Judging by their coding guidelines, the C++ they use is a smallish subset of the full C++ language, hence very bearable.


Why guess if you can look for yourself? Here's the Chromium source for instance: http://src.chromium.org/viewvc/chrome/trunk/src/ In my opinion, Google's C++ code is exceptionally readable.


> except for parts they want to get "inspired by"

Go's concurrency was not "inspired by" Erlang. It was inspired by Hoare's CSP. Pike and Luca Cardelli created a CSP-based language called Squeak in 1985: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.26.8...


I was just ranting from a base of https://news.ycombinator.com/item?id=6235030. I don't actually know anything.



