Less is Exponentially More (2012) (commandcenter.blogspot.com)
145 points by korethr on March 8, 2018 | 75 comments



Rob Pike's confusion about why C++ programmers didn't want to move to Go is partially explained by his focus on a particular problem domain at the time Go was being developed. Google in those days was primarily concerned with writing highly concurrent request-processing software, and at that time the vast majority of that software was written in C++. Writing highly concurrent C++ in the days before the language had lambdas, std::function, std::future, etc. was definitely gross. In practice, a lot of the concurrent C++ code from that period took on the callback-hell flavor of Node.js code, but without the concise nature of JS.

Go is obviously a superior language to C++ for writing concurrent programs in every way, except possibly when you need to achieve maximum possible throughput (as in Google's search clusters) or minimum possible latency (are HFT shops switching to Go? I honestly don't know). However, outside of a few companies with huge capacity problems, people weren't writing concurrent request-serving programs in C++. They were using Python, Java, etc., and later Node.js. So it's no surprise that people flocked to Go from those languages and not from C++: there wasn't a huge population of C++ programmers in this problem domain looking for a better language in the first place.

The other C++ programmers were down the hall, working on Chrome and Android (which has a ton of C/C++, despite Java being the primary app development language). The reason they didn't, can't, and won't switch is that managed languages like Go are not well suited to highly resource-constrained environments like phones, or browsers that are supposed to run on Windows XP machines. (To say nothing of games.)

The advantage of C++ is not, as Pike pompously suggests, that C++ programmers have too much fun playing around with templates and other navel-gazing pursuits. The advantage is that it is a high-level language that lets you do what you need to do with very, very close to the minimum possible resource usage. And there are programming environments where that really matters, and those environments are never going away.


Being a C++ dev, I think you're on point about why people still choose C++, but my suspicion is that it's part truth, part groupthink, and it won't really hold out.

I could be totally wrong, but have you ever had to optimize a tight loop in C/C++? It sucks, and a lot of stuff is missing: SIMD, likely/unlikely branch hints, forced inlining when you know it gives better performance. The compiler has a lot of trouble knowing when lines of code are independent and can be done in parallel (because const != immutable). If you use GCC extensions you can alleviate a lot of this, but that ties you to one compiler. Yeah, C++ is generally faster than Go, but there is typically a lot of performance left on the table. The language kinda starts to work against you when you start to dig down. So you slap some shit together and the compiler does a decent job, but most C++ devs aren't even looking at the compiler output.

But the elephant in the room is that more and more of our available FLOPS are on the GPU, and C++ isn't helping you there at all. Not only that, but the GPU gives you way more operations per watt (and that's what a lot of those people care about). And finally, when you throw stuff onto the GPU, you also leave the CPU available to do other things. So there are a lot of "wins" there. As you illustrate, the areas of C++'s relevance are shrinking, and shrinking into territory that is very GPU friendly.

So the way I see it, C++ folks will start writing more OpenCL kernels for the performance-critical pieces, and the rest won't matter (Go or Clojure or whatever). GLSL is kinda lame and too C99-ish, so maybe someone will write a better language that compiles to SPIR-V. It's not exactly write-once-run-everywhere, but it could be much better than writing optimized C++, and it can run everywhere. It's more of the cross-platform assembly that C/C++ wants to be.


> SIMD

Intrinsics are directly callable from C++.

> likely/unlikely branches

Most compilers have extensions that will allow you to do this (__builtin_expect and so on).

> in-lining can't be forced when you know it gives better performance

Again, most compilers have this, not just GCC, e.g. __forceinline.

> the compiler has a lot of trouble knowing when lines of code are independent and can be done in parallel (b/c const =/= immutable)

This is true, as aliasing is a real issue. The hardware itself has some say over this anyway, depending on its instruction scheduling and out-of-order execution capabilities.

What you don't mention, however, is the fact that almost no other languages offer any of these, let alone all of them. Rust may be the exception here, although some of this is still in the works (SIMD; I'm not sure about the status of likely/unlikely intrinsics).

For GPU programming, if you're using CUDA, you're almost certainly using C or C++, or calling something that wraps C/C++ code. Not everything is suited to GPU processing anyway; there's still a lot of code that's not moving off the CPU any time soon and that needs to be performant.


Right, so things that are not part of the language, not cross-platform, and not cross-compiler. That's called fighting the language in my book :)

I'm not saying you can't get C++ to output the assembly you want - it just sucks trying to coerce it to do things that are honestly not that complicated. And even when you do get what you want you find you can't use the code anywhere else. To me that feels like a language failure...

> is the fact that almost no other languages offer any of these

I guess you missed my point. It seems to me that we're at a point where you no longer need these features as part of your core application language. The idea is that with OpenCL/SPIR-V we'll be able to:

1. Be more explicit and not fight the language (so even if you're 100% on the CPU, it makes sense)

2. Target every platform (you can finally write code for your GPU)

3. Be callable from any parent language

You're right that not all performance critical problems boil down to tight shared-memory loops that can be thrown onto an OpenCL kernel - but my experience so far tells me that that's the vast majority of performance problems. So C++'s usefulness will shrink. But maybe my experience is biased and I'm off base. I haven't done much OpenCL myself - but I'm definitely planning to use it more in the future


> right, so things that are not part of the language, not crossplatform and not crosscompiler

You just have a header with different #defines for the different platforms you are going to ship on, or use a premade open source one.

If you want to ship on everything, you won't get full optimization stuff everywhere. It would be better if some of these features were in the standard, but in practice it isn't such a big issue for those two in particular.


These are all good points, but I'd say two things:

1. Whatever C++'s weaknesses in this area are, it's superior to Go, so C++ programmers aren't going to switch to Go because of this.

2. Not everything is about raw throughput. You can't do anything latency sensitive on a GPU. Consider a game: the pixels get drawn on the GPU, and the physics might happen on the GPU, but you still have a ton of highly latency sensitive things that are going to have to be done by the CPU, such as input handling and networking. Also, even with low driver overhead APIs like Vulkan, you still have to have something on the CPU telling the GPU what to do. Finally, GPUs aren't good at branch-heavy workloads in general.


I agree with your points regarding C++, but it's not true that Pike was focused on the request-processing niche.

Go was specifically designed to be suitable for programs like web browsers, compilers, IDEs, maybe even OSes. We know because Pike said so - see the "Systems Language" page at https://web.stanford.edu/class/ee380/Abstracts/100428-pike-s... (warning, PDF).


I know Pike _said_ that, but I think the differing level of success of the language in these different niches speaks louder.

In particular, I think the idea that anyone serious is going to build a large desktop application like a web browser in a language that doesn't expose OS-level threads to the programmer is pretty laughable. Similarly for the idea of building an OS (or at least an OS kernel) in a garbage collected language.

edit: I realize I didn't really respond to your point. My point is that regardless of what Pike said, request processing permeated the atmosphere at Google, and deeply influenced the design of Go, whether Pike realized it or not at the time.


That's really interesting! Go gravitated towards request-processing services in part because that was the largest audience in the immediate environment, which in turn gelled Go's future.

We can imagine Go with the same engineers but at Microsoft or Apple or Mozilla, and its evolution would have been different because its first clients would have been different. The times make the man, and the language!


And because that's just about the only thing it's good for. Go has all the capabilities of Pascal circa 1990. No one in their right mind is going to use it to write a modern web browser.


No one in their right mind should have written high-performance backend servers in JavaScript, desktop apps in Electron, or databases in Java, but here we are.


> In particular, I think the idea that anyone serious is going to build a large desktop application like a web browser in a language that doesn't expose OS-level threads to the programmer is pretty laughable.

Why? It doesn't seem inherently laughable to me, at least. Why couldn't someone write a browser in Go? I anticipate that it'd be easier to understand & easier to extend than one written in C++ or C.


I don't think I would say that's the advantage of C++, but rather the perceived advantage of C++. Any time I've met a great C++ programmer, they start off by rattling off a list of C++ features they don't use.


At my work, we don't use the standard library and we don't use references. And it's the only C++ code base I've ever liked working on.


How can you write efficient programs without references? And why are references so painful for you anyway?


A C++ reference is basically a pointer that pretends to not be a pointer. If they have any efficiency advantage over const-correct pointers, I've never heard of it (though I'm at best an intermediate C++ programmer).

Points often cited as disadvantages of references include:

- There's nothing that clearly distinguishes them from regular variables at the point of use. They can therefore obscure the fact that you're not passing by value or mutating a local/member variable (particularly if you don't have good IDE support, and good IDE support for C++ is damn hard for some environments/toolchains/targets)

- There is no concept of a null reference, which can create friction with interfaces that use NULL for optional parameters

These points are also often cited as crucial advantages of references. So it goes.


The second is absolutely an advantage of references, and other than having to use (if I'm being honest, legacy and/or bad) interfaces, there is absolutely no downside in this regard.

The first can be a legitimate complaint, although I find it's rarely an issue in practice.


Contrast with Rust references, which are very much their own type (they more resemble pointers in their behaviour, with explicit dereferencing and address-of needed). As a C++ dev, this took some getting used to! But I never was a fan of how passing-by-reference was invisible at the call site.


References being non-nullable is a huge advantage of references.

If you need the equivalent of a nullable reference, you can just use std::optional<std::reference_wrapper<T>> instead and make the nullability explicit.


If we used std and wanted every type name to be 50+ characters long to type, sure. Or we could just use a pointer, which happens to be exactly a nullable reference.


We have to use pointers for various reasons. Legacy interfaces use them all over the place, so we continue to do so, and they don't really work like pointers at all, so they're horribly inconsistent. Boilerplate things like conditional & partial initialization require jumping through hoops to get working for references. We also have lots of shared & cached objects, so everything needs to be validated anyway, you don't really gain much by knowing a pointer isn't null.


Out of curiosity, what do you use? I hope it isn't back to raw pointers and size_t offsets for lengths. What do you do if you want to pass a container of ... anything around? What's the replacement for:

   U do_something(const std::vector<T>& vec);


Raw pointers can still be used for pass-by-reference without size.

    U do_something(std::vector<T> const *vec) {
        T item = (*vec)[1];
        // ...
    }
We can get away with this because the type `std::vector<T>` is known at compile time. It goes back to C and passing pointers to structs all over the place.

The const on the right side tells the compiler that the memory at the pointed-to location can't be modified through vec, just like the const in your initial question.


Sure, you can pass by pointer instead of reference, but the post also mentioned that they didn't use the STL (hence no std::vector) and I'm curious if that means they used a replacement library or something else.


My guess: they effectively turned C++ into Python with everything allocated from heap and wrapped with shared pointers ... and custom iterators and collections like old-style RogueWave Tools.h++.


Not OP but...

    output_type do_something(buffer_view input);


Huge caveat: this article was written in 2012. So do yourself and your fellow readers a favor and don't argue over content that is now six years old.

Obviously, Go has not replaced C++ usage. And, these days, I would see Rust as the more likely step from C++.

(Although, I still feel that it remains to be seen whether Rust will make a huge dent there; is memory-safety the killer-app feature that makes people want to use Rust? Do enough people feel that they Need To Use Rust to make it stick? I'm interested to see how that plays out.)

What Go has done, I think, is replace interpreted-language use (PHP, Python, Ruby) in backend code. Which makes sense to me--those are already GC languages, so you're pretty familiar with the lay of that land. Generics may not be a huge deal for you because there were no generics to use in those other languages. And Go is quite a bit faster than any of the aforementioned interpreted languages.


> What Go has done, I think, is replaced interpreted language use (PHP, Python, Ruby) in backend code. Which makes sense, to me--those are already GC languages, so you're pretty familiar with the lay of that land.

One of the less talked about reasons Go is successful at replacing these languages is the devops story - essentially no runtime dependencies.


This can easily be done in C and C++ by static linking, and in Java by building custom jars. It is in fact common practice in large companies, for exactly the reason you mention.


It cannot be done as easily with interpreted languages (Python, PHP, Ruby, Node.js), though--and those are precisely the languages from which Go has been stealing users.


Vendoring in your dependencies is definitely not hard in a dynamic language, just for some reason most projects never bother. I doubt it's a significant reason why they've been moving to Go.

I think it's more prosaic reasons, like that's the language their job uses. People who use dynamic languages tend not to think of themselves as specialists in their stack.


> Vendoring in your dependencies is definitely not hard in a dynamic language, just for some reason most projects never bother.

Not to take a side in static-vs-dynamic linking or the language argument, but that is absolutely incorrect. A single static binary is a very significant reason lots of people are moving to Go: that, plus a good cross-compilation story, makes a lot of problems go away.

Vendoring dependencies is hard in general. It's much harder in dynamic languages.

Some evidence/examples:

- Look at how many different ways of packaging things that Python has.

- Ruby, which most people consider to be one of the scripting languages that got vendoring right out of the gate, still struggles with system libraries used to bootstrap the Bundler process on deployment targets.

- Node.js, another one considered to have gotten vendoring right out of the gate, has massive problems with its implementation: package assets in node_modules take forever to fetch/inflate, bloat deployment times and artifact sizes, and put strain on filesystems. People argue that the difference between "my node_modules directories have so many files I ran out of inodes" and "my golang binary is really big" is just a difference of degree, but it's a big difference regardless.

- Vendoring/deploying compiled/native dependencies is a massive hassle in dynamic languages as well: better make sure you compiled those deps in a way compatible with your target system (a big hassle if you are, say, building an old Perl C/XS extension on OSX and targeting Linux for deployment), make sure they all link correctly once there, and, if they link, hope they link against system libraries that don't have behavior differences from wherever you tested the code. And a lot of popular libraries have a native component.

- There's also the problem of dependency resolution. Several dynamic languages have hard-coded system library paths, which means that if your vendoring misses a spot, you might be loading an unexpected version of something, or failing to start. The "just put everything in the system lib path" approach ignores the reality of multitenant/multi-use systems, and is a whole 'nother piece of expertise.

- The popularity of Docker/containers is largely driven by the fact that they let you "statically link" your whole stack. That demand indicates that some folks, at least, found the vendoring story for dynamic languages difficult.

> People who use dynamic languages tend not to think of themselves as specialists in their stack.

This sounds suspiciously like "if you use $language you're an idiot/inferior". Spare me your arrogance and language elitism, please. There are specialists, generalists, experts, and idiots on every platform ever invented--in very, very similar proportions.


The argument was made from a devops perspective, which means the lack of a GC is a showstopper. So that eliminates C/C++. Heck, the whole point of languages like Perl was that C/C++ had huge practical limitations when one wants to get work done quickly.

Also, Java never really worked in devops. You need a lot of ceremony just to open files and do simple regex work. Unfortunately, Python went down the same path.


In Go it is straightforward to write some code directly on servers and deploy it. Java can't realistically be written without its famed heavy-duty IDEs and other tools, which need a full desktop environment, not just a bunch of command-line tools.


You can use static linking in C/C++, although you may find that the library you're statically linking tends to dynamically load the underlying DLL anyway, which defeats the purpose.

In most of the C/C++ world, you're better off dynamically linking anyway, because prior to Visual Studio 2012 the toolchain would determine the service pack/dependencies of your host operating system and bake them into the exe manifest.


Go has replaced C/C++ in some circles.

Go look at the most recent Go projects. Kubernetes would never have been done in PHP/Python/Ruby; it would have been a Java or C++ project. Same with CockroachDB, Docker, etcd, Fleet, Lime, InfluxDB, Prometheus, etc.


I think Rust will stick. Until its metaprogramming abilities can match or outpace C++'s while being just as fast, it won't replace C++. I do expect it to replace a lot of Java use cases, and perhaps some Go.


> Generics may not make a huge deal for you because there were no generics to use in those other languages.

While this is technically true, in practice it is a good deal easier to create generic data types in those languages because you don't have to switch back and forth between type-world and no-type-world.


Sort of--you get dynamic types, so you can stuff whatever you want into your collections. You can do that in Go, too--just use interface{}. Sadly, that has not quieted the Generics Brigade.

If you used arrays in PHP, or Ruby, or Python, you can get those--with static typing!--in Go, either with slices for sequential arrays, or maps for associative arrays. I think that satisfies the vast majority of collection use-cases that arise in practical applications of those three languages.

(Note: I think generics would be a Good Thing for Go, and I think they'll probably happen at some point. They keep doing user surveys, and the surveys keep bringing up the lack of generics as one of, if not the, top issues users would like to see addressed.)


> You can do that in Go, too--just use interface{}.

This is what I mean by switching back and forth between type-world and no-type-world. If you implement a data type this way, I need to convert your no-type-world (interface{}) data to type-world data at some point after it pops out of your library.

> If you used arrays in PHP, or Ruby, or Python, you can get those--with static typing!--in Go, either with slices for sequential arrays, or maps for associative arrays. I think that satisfies the vast majority of collection use-cases that arise in practical applications of those three languages.

You do often (though not always) see "primitive obsession" in those languages, and Go encourages it even more so due to its only generic containers being the primitives provided by the language.

I don't mean to come off as a Go hater at all. I think it takes the pragmatic side of a ton of trade offs. But I do think that results in some weaknesses that people should be aware of.


Can you expand on what you mean by "primitive obsession"?


Here's an article on C2: http://wiki.c2.com/?PrimitiveObsession. There's a bunch more stuff out there too, if you search for it.


> Sort of--you get dynamic types, so you can stuff whatever you want into your collections. You can do that in Go, too--just use interface{}. Sadly, that has not quieted the Generics Brigade.

This argument, made often by the Go team, contradicts other arguments made by the Go team. Generics done like this have no type safety, which is the central reason for Go.

> If you used arrays in PHP, or Ruby, or Python, you can get those--with static typing!--in Go, either with slices for sequential arrays, or maps for associative arrays. I think that satisfies the vast majority of collection use-cases that arise in practical applications of those three languages.

Of course what everybody wants is trees, sorted maps, sets, ... WITH static typing.

> (Note: I think generics would be a Good Thing for Go, and I think they'll probably happen at some point. They keep doing user surveys, and the user surveys keep bringing up the lack of generics as one of if not the number-one issue that users would like to see addressed.)

No they won't. The real issue is that implementing them is pretty difficult in the compiler. Go's compiler is extremely, extremely simplistic, even to the point that it's badly written. It needs a LOT of cleaning up before anyone can reasonably contemplate adding generics.


> Generics done like this have no type safety, which is the central reason for Go.

Type safety is not the central reason for Go.


I couldn't find the initial announcement, but here is one of the very first presentations by Rob Pike:

https://web.stanford.edu/class/ee380/Abstracts/100428-pike-s...

"The target

Go aims to combine the safety and performance of a statically typed compiled language with the expressiveness and convenience of a dynamically typed interpreted language."


I suppose time will tell whether generics are added or not. I'm--not exactly buying your argument that Go's compiler is badly written, or itself the reason that generics can't be added. But god bless ya for having an opinion.


> is memory-safety the killer-app feature that makes people want to use Rust?

There are algebraic types with pattern matching, sane generics, actually good type inference, strong types that will help you declare behavior only once (as in normal vs. modular arithmetic), and a huge push for not having undefined behavior (that is mostly but not completely successful).

Any of those (and yes, memory safety) would be enough of a killer feature in my view. There are still some things I dislike in Rust, but compared with C it's a complete no-brainer.


> C++ programmers don't come to Go because they have fought hard to gain exquisite control of their programming domain, and don't want to surrender any of it. To them, software isn't just about getting the job done, it's about doing it a certain way.

For the first time, I think I'm motivated to learn Go. My big beef with C++ is that there's no simple mode. Every single decision ever made in the syntax and libraries is geared toward only the most complicated, highest-performance situations.

My problem is that 90% of my C++ code doesn't need performance at all, and could be Python for all I care.

To be fair, some of the modern changes are bringing some simplicity back. But it still feels like wading through mud to write what should be easy things, like file I/O, compared to Python or JavaScript. Using hashes, and mixing hashes with other containers, is so verbose and nit-picky in C++ compared to today's scripting languages that I long to mix languages most of the time I'm writing general C++.


There is an easy mode. It's called C. I'm not trying to be flippant. A lot of people started in C++ by writing essentially C code and then moving on to classes, inheritance, templates, and wider usage of the standard library and Boost, little by little.


I'd agree with that, and I've done lots of object-oriented C. But FWIW, that's not quite what I meant. You can't get away from pointers in C, and easy mode should be as pointerless/loopless/functional as possible, like what the STL is doing now but simpler and even less verbose. Arduino-style C is super fun though...


But you're not supposed to program that way in C++ at all. Ask a veteran C++ programmer, and they'll tell you that buck n00bs should start out using the STL, smart pointers, and generic algorithms, make sparing use of inheritance, and never touch bare pointers.


This former buck noob took about three months to get to that exact place, because C++ is literally a messy archaeology of random CS ideas, and the guiding philosophy seems to be "shovel these funky features in, maybe someone will use them."

Beginner books meanwhile will be five, ten, or fifteen years out of date, and reliably give you a correspondingly wrong view of current best practices.

At this point C++ is almost an oral tradition. You need someone who knows what they’re doing to tell you which books to read and which parts of the language to ignore.

It’s horrendously difficult to learn if you do the usual thing and try to self-teach - and needlessly so, because the core of the language isn’t that complex.


C's not really easy mode, either. When I look at all the Unix utilities written in C, I want to weep, because it made things like buffer overflows & other security flaws such attractive nuisances. C is easier than C++, certainly, but it's still a pretty rotten language to actually write code in.

And I write that as someone whose second language was C, and who loved C with a passion, for many many years. I didn't realise how much I was fighting the language, and I didn't realise how much in any C program the trees of incredibly verbose C-isms obscured the forest of what I was actually trying to do.


>>Every single decision ever made in the syntax and libraries is geared for only the most complicated, high performance needed situations.

This.

The very first C++ code base I worked on was a monolithic pile with something like 50 features. When you assemble something like that into a monolith, your abstraction hunger kicks in and you run into all this crazy inheritance- or template-based work.

These days people don't seem to write stuff as such huge monoliths. And Microservices architecture has become quite a thing now.

Absence of many features in Go feel more like a positive than a negative. You get to eliminate a lot of complexity by default.


If you don't need performance, just use Java where everything works fine and is easy to understand.


A counterpoint is that more can be simpler when it allows better abstraction.

For example, calculus is more complicated than arithmetic and algebra. However, calculus can unify and show relations between different algebraic representations. When I learned physics in middle school, I had no trouble remembering

delta position = velocity * time

but struggled with

delta position = initial velocity * time + 1/2 (acceleration * time^2)

When I learned about derivatives and integrals, the formulas all made sense.

In my opinion Go suffers from a limited ability to abstract, which is most apparent in its lack of generics. Instead of being able to represent operations in a generic way, you are forced to cut and paste, and cannot express the relationship accurately, much as I considered the relationship between acceleration and position opaque before I understood calculus.


That formula makes sense when you draw a velocity vs time graph and realise the area under the curve is the distance travelled. With linear acceleration, the area is a rectangle with a triangle on top, and everything starts becoming clear, even without calculus.


You can also visualize the meaning of d/dx x^n = n x^(n-1) geometrically: There are n faces being extruded which contribute a volume of x^(n-1) dx each.


I'm one of the people who never understood the absence of generics in Go and, to be honest, I don't find his reasons here very convincing.

> Early in the rollout of Go I was told by someone that he could not imagine working in a language without generic types. As I have reported elsewhere, I found that an odd remark. [...] What it says is that he finds writing containers like lists of ints and maps of strings an unbearable burden. I find that an odd claim. I spend very little of my programming time struggling with those issues, even in languages without generic types.

Does he suggest we write containers from scratch every time we need them? There is a lot of nontrivial logic in them, which I'd prefer not to have to get right again every time. (Not to mention that doing so would feel very much like the opposite of "programming in the large".)

> But more important, what it says is that types are the way to lift that burden. Types. Not polymorphic functions or language primitives or helpers of other kinds, but types. [... long rant why types are for mediocre people and interfaces are awesome... ]

I very much agree that interfaces are awesome and composition is way more useful than inheritance, but what does that have to do with generics?

If you have a function that takes a list of things and returns another list of the same things in sorted order, wouldn't you still want to have a way to keep track that the returned list contains the same things as the list you passed to it? That seems independent of whether the things are specified through types or interfaces.

Language primitives for containers are a good idea, but the way Go eventually implemented them (hardwiring the few most-used containers into the language and making custom containers really cumbersome to use) seems very unsatisfactory.


> C++ programmers don't come to Go because they have fought hard to gain exquisite control of their programming domain, and don't want to surrender any of it. To them, software isn't just about getting the job done, it's about doing it a certain way.

Yeah, because it cannot possibly be that many C++ programmers saw Go as a step down.

Pike thought that C++ programmers would flock to Go in large numbers. That didn't happen, which means that Pike didn't really understand what motivated these C++ programmers. I don't think this summary gets that much closer to the truth.


> Yeah, because it cannot possibly be that many C++ programmers saw Go as a step down.

You just paraphrased his point.


I realize that it might sound like that. But I would paraphrase his point like this:

"C++ programmers spent a ton of time learning C++ so they're not going to switch to a language that isn't C++"

Which is basically saying that none of the design decisions that went into Go matter to C++ programmers because, at the end of the day, Go isn't C++. Which in turn is pretty damn uncharitable towards C++ programmers. A lot of technical criticism was offered, but it's easier to ignore that and focus on something you cannot change.


What is interesting is that Rust has seen a lot of interest from C++ programmers.

Many C++ programmers like the fact that it takes their informal rules of safe memory/thread management (use RAII, don't use raw pointers, don't use mutable shared state, be careful of iterator invalidation in containers) and formalizes them in the type system so the compiler can check them.

In addition, Rust has some nice features like pattern matching, algebraic data types, and Traits that are not yet available in C++.

Go on the other hand, was not as compelling to C++ programmers.



I guess I don't get it. I agree with the abstract idea that a simpler solution is usually preferable. But I don't see how this translates to a language like C++ or Go.

I understand the backlash against "magic," but this is more a concern about "spooky action at a distance" than about overhead.

Could somebody who understands the sentiment help articulate what kinds of solutions can be more easily expressed in Go than in, say, Scala (short of performance-centric cases)? Or is this solely about performance-centric cases?


The claim isn't that it helps specific things, but that the more detail work you have to do, the less attention you can pay in the large. This is why, for example, Go has the proverb "a little copying is better than a little dependency"; you make a downpayment in the small to make your architecture cleaner.

This is something C++ makes exceedingly difficult (Linus Torvalds's famous rant is half on this topic). This is also something Scala doesn't do well at, opting to provide a lot of features that make bits of code "prettier" at the cost of being able to see what is being built.


"Programming in the large" is mentioned a few times in the article but we're still thinking programming one OS process at a time, fiddling with low level data structures - is this really 'in the large'?


programming in the large in the sense of many people being involved in a large program


So what are the best examples of large programs written in go?


etcd, kubernetes, docker, rkt, grafana, cockroachdb, elastic beats



To add to the other comments: syncthing is another project that I use all the time, and it is written in Go.


not sure if it's large but some pentesting framework went from Ruby to Go. The devs seem to like the language enough to rewrite it all, and the performance benefits have inherent value when handling lots of network events.


kubernetes, docker


Is it coincidence that this is up on the front page at the same time as "C++ Core Guidelines" (rules for modern C++) and I started reading this just after glancing at that? The churn and complexity introduced into C++ in the name of simplicity is simply mind-boggling.



