Hacker News

Rob Pike's confusion about why C++ programmers didn't want to move to Go is partially explained by his focus on a particular problem domain at the time Go was being developed. Google in those days was primarily concerned with writing highly concurrent request-processing software, and at that time the vast majority of that software was written in C++. Writing highly concurrent C++ in the days before the language had things like lambdas, std::function, and std::future was definitely gross. In practice, a lot of the concurrent C++ code from that period took on the callback-hell flavor of Node.js code, but without the concise nature of JS.

Go is obviously a superior language to C++ for writing concurrent programs in every way except possibly when you need to achieve maximum possible throughput (as in Google's search clusters) or minimum possible latency (are HFT shops switching to Go? I honestly don't know). However, outside of a few companies with huge capacity problems, people weren't writing concurrent request-serving programs in C++. They were using Python, Java, etc., and later, Node.js. So it's no surprise that people flocked to Go from those languages and not from C++, because there wasn't a huge population of C++ programmers in this problem domain looking for a better language in the first place.

The other C++ programmers were down the hall, working on Chrome and Android (which has a ton of C/C++, despite Java being the primary app development language). The reason they didn't, can't, and won't switch is that managed languages like Go are not well suited to highly resource-constrained environments like phones, or browsers that are supposed to run on Windows XP machines. (To say nothing of games.)

The advantage of C++ is not, as Pike pompously suggests, that C++ programmers have too much fun playing around with templates and other navel-gazing pursuits. The advantage is that it is a high-level language that lets you do what you need to do with very, very close to the minimum possible resource usage. And there are programming environments where that really matters, and those environments are never going away.




Being a C++ dev, I think you're on point about why people still choose C++, but my suspicion is that it's part truth, part group-think, and it won't really hold out.

I could be totally wrong, but have you ever had to optimize a tight loop in C/C++? It sucks, and a lot of stuff is missing: SIMD, likely/unlikely branch hints, and the compiler has a lot of trouble knowing when lines of code are independent and can be done in parallel (because const != immutable). Inlining can't be forced even when you know it gives better performance. (If you use GCC extensions you can alleviate a lot of this, but that ties you to one compiler.) Yeah, C++ is generally faster than Go, but there is typically a lot of performance left on the table. The language kind of starts to work against you when you start to dig down. So you slap some shit together and the compiler does a decent job, but most C++ devs aren't even looking at the compiler output.

But the elephant in the room is that more and more of our available FLOPS are on the GPU, and C++ isn't helping you there at all. Not only that, but the GPU is giving you way more operations per watt (and that's what a lot of those people care about). And finally, when you throw stuff onto the GPU you are also leaving the CPU available to do other things. So there are a lot of "wins" there. As you illustrate, the area of C++'s relevance is shrinking, and shrinking into territory that is very GPU friendly.

So the way I see it, C++ folks will start to write more OpenCL kernels for the performance-critical pieces, and the rest won't matter (Go or Clojure or whatever). GLSL is kinda lame and too C99ish... so maybe someone will write a better lang that compiles to SPIR-V. It's not exactly write-once-run-everywhere, but it could be much better than writing optimized C++, and it can run everywhere. It's more of the cross-platform assembly C/C++ wants to be.


> SIMD

Intrinsics are directly callable from C++.

> likely/unlikely branches

Most compilers have extensions that will allow you to do this (__builtin_expect and so on).

> in-lining can't be forced when you know it gives better performance

Again, most compilers have this, not just GCC, e.g. __forceinline.

> the compiler has a lot of trouble knowing when lines of code are independent and can be done in parallel (b/c const =/= immutable)

This is true, as aliasing is a real issue. The hardware itself has some say over this anyway, dependent on its instruction scheduling and OOE capabilities.

What you don't mention, however, is the fact that almost no other languages offer any of these, let alone all of them. Rust may be the exception here, although some of this is still in the works (stable SIMD, for instance; I'm not sure about the status of likely/unlikely intrinsics).

For GPU programming, if you're using CUDA, you're almost certainly using C or C++, or calling something that wraps C/C++ code. Not everything is suited to GPU processing anyway, there's still a lot of code that's not moving off the CPU any time soon that needs to be performant.


Right, so these are things that are not part of the language, not cross-platform, and not cross-compiler. That's called fighting the language in my book :)

I'm not saying you can't get C++ to output the assembly you want - it just sucks trying to coerce it to do things that are honestly not that complicated. And even when you do get what you want you find you can't use the code anywhere else. To me that feels like a language failure...

> is the fact that almost no other languages offer any of these

I guess you missed my point. It seems to me that we're at a point where you no longer need these features as part of your core application language. The idea is that with OpenCL/SPIR-V we'll be able to:

1- be more explicit and not fight the language (so even if you're 100% on the CPU it makes sense)

2- target every platform (you can finally write code for your GPU)

3- be callable from any parent language

You're right that not all performance critical problems boil down to tight shared-memory loops that can be thrown onto an OpenCL kernel - but my experience so far tells me that that's the vast majority of performance problems. So C++'s usefulness will shrink. But maybe my experience is biased and I'm off base. I haven't done much OpenCL myself - but I'm definitely planning to use it more in the future


> right, so things that are not part of the language, not crossplatform and not crosscompiler

You just have a header with different #defines for the different platforms you are going to ship on, or use a premade open source one.

If you want to ship on everything, you won't get full optimization stuff everywhere. It would be better if some of these features were in the standard, but in practice it isn't such a big issue for those two in particular.


These are all good points, but I'd say two things:

1. Whatever C++'s weaknesses in this area are, it's superior to Go, so C++ programmers aren't going to switch to Go because of this.

2. Not everything is about raw throughput. You can't do anything latency sensitive on a GPU. Consider a game: the pixels get drawn on the GPU, and the physics might happen on the GPU, but you still have a ton of highly latency sensitive things that are going to have to be done by the CPU, such as input handling and networking. Also, even with low driver overhead APIs like Vulkan, you still have to have something on the CPU telling the GPU what to do. Finally, GPUs aren't good at branch-heavy workloads in general.


I agree with your points regarding C++, but it's not true that Pike was focused on the request-processing niche.

Go was specifically designed to be suitable for programs like web browsers, compilers, IDEs, maybe even OSes. We know because Pike said so - see the "Systems Language" page at https://web.stanford.edu/class/ee380/Abstracts/100428-pike-s... (warning, PDF).


I know Pike _said_ that, but I think the differing level of success of the language in these different niches speaks louder.

In particular, I think the idea that anyone serious is going to build a large desktop application like a web browser in a language that doesn't expose OS-level threads to the programmer is pretty laughable. Similarly for the idea of building an OS (or at least an OS kernel) in a garbage collected language.

edit: I realize I didn't really respond to your point. My point is that regardless of what Pike said, request processing permeated the atmosphere at Google, and deeply influenced the design of Go, whether Pike realized it or not at the time.


That's really interesting! Go gravitated towards request-processing services in part because that was the largest audience in the immediate environment, which in turn gelled Go's future.

We can imagine Go with the same engineers but at Microsoft or Apple or Mozilla, and its evolution would have been different because its first clients would have been different. The times make the man, and the language!


And because that's just about the only thing it's good for. Go has all the capabilities of Pascal circa 1990. No one in their right mind is going to use it to write a modern web browser.


No one in their right mind should have written high-perf backend servers in JavaScript, desktop apps in Electron, or databases in Java, but here we are.


> In particular, I think the idea that anyone serious is going to build a large desktop application like a web browser in a language that doesn't expose OS-level threads to the programmer is pretty laughable.

Why? It doesn't seem inherently laughable to me, at least. Why couldn't someone write a browser in Go? I anticipate that it'd be easier to understand & easier to extend than one written in C++ or C.


I wouldn't say that's the advantage of C++, but rather the perceived advantage of C++. Any time I've met a great C++ programmer, they start off by rattling off a list of C++ features they don't use.


At my work, we don't use the standard library and we don't use references. And it's the only C++ code base I've ever liked working on.


How can you write efficient programs without references? And why are references so painful for you anyway?


A C++ reference is basically a pointer that pretends to not be a pointer. If they have any efficiency advantage over const-correct pointers, I've never heard of it (though I'm at best an intermediate C++ programmer).

Points often cited as disadvantages of references include:

- There's nothing that clearly distinguishes them from regular variables at the point of use. They can therefore obscure the fact that you're not passing by value or mutating a local/member variable (particularly if you don't have good IDE support, and good IDE support for C++ is damn hard for some environments/toolchains/targets)

- There is no concept of a null reference, which can create friction with interfaces that use NULL for optional parameters

These points are also often cited as crucial advantages of references. So it goes.


The second is absolutely an advantage of references, and other than having to use (if I'm being honest) legacy and/or bad interfaces, there is no downside in this regard.

The first can be a legitimate complaint, although I find it's rarely an issue in practice.


Contrast with Rust references, which are very much their own type (they more resemble pointers in their behaviour, with explicit dereferencing and address-of needed). As a C++ dev, this took some getting used to! But I never was a fan of how passing-by-reference was invisible at the call site.


References being non-nullable is a huge advantage of references.

If you need the equivalent of a nullable reference, you can just use std::optional<std::reference_wrapper<T>> instead and make the nullability explicit.


If we used std and wanted every type to be 50+ chars long to type, sure. Or we could just use a pointer, which just so happens is a nullable reference.


We have to use pointers for various reasons. Legacy interfaces use them all over the place, so we continue to do so, and they don't really work like pointers at all, so they're horribly inconsistent. Boilerplate things like conditional & partial initialization require jumping through hoops to get working for references. We also have lots of shared & cached objects, so everything needs to be validated anyway, you don't really gain much by knowing a pointer isn't null.


Out of curiosity, what do you use? I hope it isn't back to raw pointers and size_t offsets for lengths. What do you do if you want to pass a container of ... anything around? What's the replacement for:

   U do_something(const std::vector<T>& vec);


Raw pointers can still be used for pass-by-reference without size.

    U do_something(std::vector<T> const *vec) {
        T item = (*vec)[1];
        // ...
    }

We can get away with this because the type `std::vector<T>` is known at compile time. It goes back to C and passing pointers to structs all over the place.

The const on the right side makes the pointed-to memory read-only through `vec`, just like the const reference in your initial question.


Sure, you can pass by pointer instead of reference, but the post also mentioned that they didn't use the STL (hence no std::vector) and I'm curious if that means they used a replacement library or something else.


My guess: they effectively turned C++ into Python with everything allocated from heap and wrapped with shared pointers ... and custom iterators and collections like old-style RogueWave Tools.h++.


Not OP but...

    output_type do_something(buffer_view input);



