“Modern” C++ Lamentations (2018) (aras-p.info)
146 points by 0xedb on Jan 11, 2022 | 194 comments



I concur.

All programming languages have their intrinsic features, but they also have a culture that surrounds them.

C++ has a rather odd culture... it's a very fundamentalist/extremist one, all about providing the optimal way to do things with no or little extra runtime overhead, but with no regard at all for compile times, readability, etc. It's not necessarily a bad thing, and it has its uses for games, ultra-low-latency software, all that stuff.

I am proficient in C++, I've worked in it for a bunch of years and can get stuff done in it and solve problems and ship software. C++ job interviews tend not to pay much regard to that, and focus on the arcane finer points of move semantics, rvalue references, ranges, concepts, all this new stuff... which has its place, but there is a lot of that to learn, and in many cases it just doesn't matter. After all, the language existed without it for a bunch of years.

Compare this with Java, or python or whatever, where the culture and focus is very much about "can you solve a problem / implement a solution using this language <or not>?" rather than "of the dozens of different ways to do this... what are the differences between each one".

For me, I really don't mind using and programming in C++, and I would probably even choose it for many / most projects, but I can do without the C++ people and culture.


"C++ has a rather odd culture..."

I have been writing C++ professionally for 15 years and agree. It has an excellent ecosystem in several domains. On the other hand, in some areas the user needs to deal with incomprehensible and user-hostile tooling and ecosystem glitches.

The weird thing is that, more often than not, the latter pathologies are considered "just fine", which just baffles me. Modern setups like Rust and its tooling have demonstrated clearly that a high-performance native language has no intrinsic requirement to be user hostile.

Some people like to think that dealing with C++'s unnecessary complexities is somehow virtuous, when in fact they are a bug of the platform (but one you need to deal with, since if you need to write C++, you really need to write C++).


> Compare this with Java, or python or whatever, where the culture and focus is very much about "can you solve a problem / implement a solution using this language <or not>?" rather than "of the dozens of different ways to do this... what are the differences between each one".

We cannot optimize much in Java or Python, so of course we do not focus on the different ways of doing the same thing. C++ is needed in cases where the differences matter, so of course we care about the differences.


I would phrase this slightly differently.

We can't push Java or Python to be maximally efficient (not wasting a single cycle or byte over the absolute optimum), so we don't use them for use cases where that would matter. Therefore, we don't try to push Java or Python optimisation as far as possible. Being faster is always nicer, but it can never be essential to be optimal, because you wouldn't be using those languages if it was.

Whereas we can push C++ that far, so we do use it for use cases where absolute optimality matters. Hence the community around it values the ability to do that, and we end up with all sorts of weird and awkward stuff.

C++ also has a lot of weird and awkward stuff that has nothing to do with this, I should add. Most of that is ultimately due to its age and C heritage, I think.


Firstly, let's dismiss this idea that C++ can avoid wasting a single cycle or byte; it doesn't aim for that. It aims for a much lower bar, albeit still a significant achievement - it says you won't do better by hand. It will very occasionally not meet this lower bar because some people like perverse challenges. That ludicrously fast Fizz Buzz implementation (so fast that inter-core CPU bandwidth alters how fast it seems) in raw machine code can't be duplicated in C++, for example. But on real problems with real programmers you won't do better by hand and should stop trying.

OK, so there are limits. Furthermore, the fact that you sometimes want to be able to do something does not require that C++ insist upon you doing it all the time.

There are places where they didn't. HN recently discussed structure layout and padding. You can write a C++ program with explicit structure layout and padding (I think this may not be portable though) which can significantly alter performance - but you don't have to. If you're OK not worrying about it, C++ won't optimise layout for you (as Rust would), but it also won't ask you questions about alignment and layout; they're just left alone.
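
Concretely, a minimal sketch (the struct names are made up, and the sizes assume a typical 64-bit target):

    #include <cstdint>

    struct Sloppy {                // C++ keeps your declaration order
        std::uint8_t  a;           // 1 byte, then 7 bytes of padding
        std::uint64_t b;           // needs 8-byte alignment
        std::uint8_t  c;           // 1 byte, then 7 bytes of tail padding
    };
    static_assert(sizeof(Sloppy) == 24, "typical 64-bit target");

    struct Tight {                 // manual layout: largest members first
        std::uint64_t b;
        std::uint8_t  a;
        std::uint8_t  c;           // only 6 bytes of tail padding remain
    };
    static_assert(sizeof(Tight) == 16, "typical 64-bit target");

Rust's default repr is free to reorder fields into the second layout for you; C++ leaves your declaration order alone either way.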

But for some things they insist. Manual memory management for example. The language doesn't (in practice) provide garbage collection, so you are responsible for picking up the trash even if you don't care about that problem. Arithmetic is another, even if it's not performance critical for you, C++ insists you need to cope with the vagaries of machine arithmetic anyway and care about things like overflow and Not-a-Number.

And this burden imposes a real cost on programmers which, at the end of the day, means they actually produce slower/ bigger code than they might have.


> Firstly, let's dismiss this idea that C++ can avoid wasting a single cycle or byte, it doesn't aim for that. It aims for a much lower bar, albeit still a significant achievement - it says you won't do better by hand.

You're referring to the "zero-overhead" goal of new C++ features. I think the above poster was talking more generally about one's ability when using a language to implement some critical subset as fast as possible or with as little memory use as possible. I suppose it's arguable if you're still doing C++ when you use intrinsics. Though I think pragmas are certainly fair game. At least C++ (and C considered as a subset) is the biggest high-level language with such a possibility.


Zero-overhead abstraction is often discussed, but the explicit design goal of C++ according to its author is that there shouldn't be a language between C++ and ASM. I.e. while you can do better in asm, it shouldn't be possible to do better in another higher-level language than C++. If you can, it means that C++ is missing some feature.


Most optimization techniques work just as well in Python as they do in C++ or Brainfuck.


> C++ has a rather odd culture... it's very a very fundamentalist/extremist one, all providing the optimal way to do things with no / little extra runtime overhead, but with no regard at all to compile times, readability, etc.

I think this is attributable to the fundamental rule of not breaking legacy language features. That is the number one rule (much like how Linus is iron-fisted about not breaking user space in the Linux environment). However, there is a lot of wiggle room in the way binaries are generated, so compiler authors can get really creative with their optimisation efforts.


> I think this is attributable to the fundamental rule of not breaking legacy language features. That is the number one rule

No. The number one rule in C++ is "don't pay for what you don't use".


To be more specific, it's "don't pay in runtime cost for what you don't use". As the OP says, that courtesy isn't extended to compile times - simply enabling C++17/20 can balloon your compile times even if you don't touch a single new feature, because the standard headers get more and more bloated with each new version.

https://build-bench.com/b/FW3EPgB1t0fmpIr1TB_vJbbb0Vw

Which is exacerbated even more by the leaky #include system, where you can easily end up pulling enormous standard headers into translation units that don't even reference them directly. The only reprieve is to ban most of the standard library from your project and write your own leaner version from scratch, as most big C++ projects end up doing.


There are ways to speed up compilation. It can be done in parallel, and you can get a lot of cores into a developer desktop these days, plus unlimited amounts of distributed cores in the cloud. Compilation can also be sped up with pre-compiled headers, which will eventually be replaced by C++20 modules; these should help more and more as compilers improve their support. With incremental compilation, you do not need to recompile the whole project after every change. Linking large statically linked binaries is a big bottleneck, but incremental linking is possible in some cases. You can also break your project into multiple shared libraries (DLLs) and link dynamically at run-time for debugging, then re-build with static linking for a release build. Continuous Integration systems can also help hide the latency of compilation from developers by continuously running build and test processes in the background on a cluster.


ccache can considerably cut down compile times. Simple to install and minimal config, no change to tooling or workflow...

https://ccache.dev/


> There are ways to speed up compilation. It can be done in parallel, and you can get a lot of cores into a developer desktop these days.

I'd start with the very basics, such as using forward declarations, encapsulation, and the pimpl idiom to not drag unnecessary #includes into your translation units.
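
For example, a minimal pimpl sketch (the names are illustrative):

    // widget.h - nothing heavy included, just a forward declaration
    #include <memory>

    class WidgetImpl;  // forward declaration instead of an #include

    class Widget {
    public:
        Widget();
        ~Widget();     // defined in widget.cpp, where WidgetImpl is complete
        void draw();
    private:
        std::unique_ptr<WidgetImpl> impl;  // implementation details stay out of the header
    };

Everything that includes widget.h is now insulated from whatever headers the implementation drags in.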

Also, the compilation bottleneck sometimes lies in IO, thus moving your build folder to a RAM drive can speed up things significantly with zero effort.


It's possible - I know first-hand that it is. It requires quite a bit of work though.

It might be easier to just have quick compile times though!


Number two rule is staying similar to C and not breaking legacy features. That's the fundamental difference with Rust (although you can argue that when languages get bigger you are always bound to become a slave of the language)


I'd argue not breaking old code has precedence over zero cost abstraction.


It's not a contest. They're both fundamental requirements. Mom doesn't like one of them best.


But indeed, when the option is between maintaining backward compatibility or further reducing overhead, the C++ committee usually chooses the former, at least on the library side. See for example the less than ideal unique_ptr constructor or the invalidation guarantees of the unordered containers.


> I think this is attributable to the fundamental rule of not breaking legacy language features.

Is this really all that unique to C++? Or are you saying that it's just particularly painful for C++ due to where it started from?


It's not that unique to C++. It's a property of useful things that have been successful for a long time.


Not unique to C++, but strongly enforced. The working groups that standardise new C++ features are very reluctant to make any breaking changes.


Stick to non-modern c++ imo.

Also, in Java everything has to be under the constraints of an object before you can do the stuff you want to do. It does seem more extremist to me because it's literally encoded in the compiler ruleset.


> it's very a very fundamentalist/extremist one

This is C++'s motto and core feature.

Don't think it's for you? Then it's not for you.


> and focus on the arcane finer points of move semantics, rvalue references, ranges, concepts, all this new stuff...

This is new stuff to learn but once you do you realize it's super helpful.


    inline constexpr auto for_each =
      []<Range R,
         Iterator I = iterator_t<R>,
         IndirectUnaryInvocable<I> Fun>(R&& r, Fun fun)
        requires Range<indirect_result_t<Fun, I>> {
          return std::forward<R>(r)
            | view::transform(std::move(fun))
            | view::join;
      };
Unless it solves world hunger or COVID permanently, I am not typing this abomination, much less trying to understand it. Very seriously, whatever "team" is working on C++{20,21,..} should be disbanded and sent someplace where they don't have access to computers. Nothing modern, including the Kubernetes ecosystem, induces this much rage in me.


That's the pedantically correct, fully reusable version. Most of the time I would type something like this:

  template<Range R, class Fun>
  auto for_each(R&& r, Fun fun) {
      return std::forward<R>(r)
        | view::transform(std::move(fun))
        | view::join;
  }
I.e. I wouldn't bother to type check Fun and I would use a function instead of a function object. Both features are important for library code though.

Forward and move are still pointlessly noisy; I wish there was a keyword for those (especially forward). And I wish we had proper UFCS instead of abusing pipes.


Absolute rubbish readability


I don't see the problem. It's a lambda function template using the std::ranges library's pipe style for algorithms. You don't have to use the pipe style if you don't want to.


To be fair, you could write this in a more comprehensible way. I wouldn't say "Rust sucks" based on reading

    Err(Err::Error(Error::new("c1", ErrorKind::Digit)))
either, since I just assume it's not the language's fault.


I am a bit more willing to put up with Rust syntax monstrosities as at least the compiler tries to help me not shoot myself in the foot, and when I try, the error messages seem to be more helpful than C++ compilers'.

Also I don't have as much of an issue with sane subset of C++ as a language - it's the modern crap they keep adding pompously without regard to complexity and usability that irks me.


That's not idiomatic code.


I think people vastly undervalue static linking when it comes to compile times. If every file in a project changes every minute then yeah - you're hooped... But usually certain groups will work in certain areas, with those changes being synced only so often. If you examine the dependency tree and manage to split up the codebase into modules that you compile into .a files - then have a final step where you whip together any overly broad files along with just extracting the .o files from the archives[1] - you'll live a happy life.

This can terribly fail if you have header files depended on by everything that constantly change - but honestly... that's a pretty bad code smell so either make an archive that's a dependency of all those others (and occasionally feel real pain when the header file actually gets updated) or else refactor things for sanity's sake.

A year and a half into my first job I was tasked with revising the build system and there are some really effective things you can do there without very much training. You want to lean on `make` a lot - no seriously, a lot - but if you do you can make some amazing things happen. You've also got a lot of cross platform tooling if you need to compile on different architectures - though scripting how those interact does get progressively more difficult in those cases.

1. Just in case you're unfamiliar - linux static libraries are essentially just an archive of a bunch of object files... so you can freely use ar to manipulate them in all sorts of ways https://tldp.org/HOWTO/Program-Library-HOWTO/static-librarie...


I don't completely disagree, but templates become a big trap with respect to header files and trying to get static linked files precompiled.


I agree a bunch with this. Templates can actually get really complicated quickly... In C++ if you want to generate the bytes to execute FooBar<string> then you actually need to reference FooBar<string> somewhere in the code to signal its necessary inclusion; later compilation steps can't build this for other types on the fly.

If you have very generic and flexible templates defined in a static library the library needs to have actually forced the compilation for all the types you intend to use. So I do agree that templates remain a big pain!


If you know your types then you can define the templates' bodies into a static library and keep the definitions there.
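
A minimal sketch of that, reusing the hypothetical FooBar template from upthread:

    // foobar.h - declarations only, no definitions
    template <typename T>
    class FooBar {
    public:
        void frob(const T& value);
    };

    // foobar.cpp - compiled once, into the static library
    #include <string>
    #include "foobar.h"

    template <typename T>
    void FooBar<T>::frob(const T& value) { /* ... */ }

    // explicit instantiations for the types we know we'll need:
    template class FooBar<int>;
    template class FooBar<std::string>;

Any instantiation you didn't list fails at link time, which is exactly the "know your types" constraint the parent describes.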


The problems being:

* External libraries don't do this. See: All those "header-only" libraries you see, starting back with boost

* It's a total PITA once the types get complicated

Source: Have worked for 10 years on a large codebase with "template auto instantiation" off.


Eigen is the worst for this. :(


Eh, of all the things to criticize C++ over, compile times are somewhere near the bottom of the list. Compile times matter in extreme cases, but 3 seconds is not really something I would worry about. I have seen Lisp compilers take longer than that just to start the REPL.

The fact that we are still dealing with weird problems with pointers, the fact that C++ has lambdas without garbage collection (explanation of the problem is too long for this comment), and the incoherent type system are much bigger issues than weird syntax or long compile times. The C++ feature set is a bunch of semi-compatible, sometimes outright incompatible (ahem destructors vs. exceptions and coroutines), ideas that keep getting extended further from compatibility by the standards committee.


I emphatically disagree -- compile times are definitely on my short-list of worst things about C++. Long compile times disrupt flow, and it requires great ongoing mental effort to work around slow compilation.

Here's a poignant anecdote. At one point while working on HHVM at Facebook I finally snapped and spent 2-3 days doing nothing but optimizing the build system to speed up trivial incremental compilation (e.g. a whitespace change). My efforts resulted in... 24 seconds best case. I spent years on that project pipelining my coding, that is, making a small change, asynchronously launching a build, fixing issues from the previous build attempt, ad nauseam, and the pipelining was oftentimes 3+ deep due to latency. That cognitive load severely impacted productivity.

That said, I agree that C++ has a lot of other terrible problems too!


> Long compile times disrupt flow, and it requires great ongoing mental effort to work around slow compilation.

We once reimplemented automake in KDE land (into a tool 'unsermake' by Stephan Kulow), for a few reasons but foremost among them was that it reduced compile times, sometimes drastically.


What build system do you use at Facebook? Changing whitespace in a .cc should always cause just that file to be recompiled. Changing whitespace in a header is trickier, but smart build systems can infer that you didn't change code as compared to whatever was cached.


Many setups combine multiple .cpp files into one per module. I think it's for making debug builds faster, and because the linker is the slowest part, due to not being parallelizable until recently.
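
Typically something like this (the file names are hypothetical):

    // module_unity.cpp - a "unity build" TU that swallows the whole module
    #include "foo.cpp"
    #include "bar.cpp"
    #include "baz.cpp"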


24 seconds is awful? I have a 500kloc codebase which uses boost, Qt, and templates & modern features very liberally, and incremental compilation is ~1 second


Is it open source, and can you post a link? I have never seen this type of project even link in 1 second.


sure, it's https://ossia.io. Here's a video of my edit-compile-run cycle: https://streamable.com/az397y

To get to something that fast for incremental builds of course requires some tweaks from the buildsystem defaults:

- clang instead of gcc (I use clang on mac, windows, linux)

- ninja instead of make

- -gsplit-dwarf for debug info

- mold instead of ld (I used lld before which was already nice but mold is bewildering)

- PCH (very easy with cmake thankfully !)

- split in shared libraries of adequate granularity, e.g. the software is split in ~40 plug-ins (although with mold I'm not sure this is even relevant anymore, the complete link step if I don't use shared libraries is not that slow).

I've encoded most of these in my cmake toolchain-generator, cninja: https://github.com/jcelerier/cninja/

To give some reference, my hardware is an 8c/16t Intel 6900K.

Edit: I did a complete build with everything statically linked instead of through shared libraries. To give a reference: lld (which is already fast compared to GNU ld and gold) links the entire software in 0.47 seconds ; mold links it in 0.3


Is there anything left out of the "standard" setup? :)

> split in shared libraries of adequate granularity, e.g. the software is split in ~40 plug-ins

So you had to alter your project structure to improve build times?


> Is there anything left out of the "standard" setup? :)

I don't believe in accepting things just because some engineer chose a default under some deadline 25 years ago

> So you had to alter your project structure to improve build times?

no, the software was designed as-is from the very beginning (and it's an architecture I'd recommend for any software which is supposed to be extensible from its very inception, it worked out very well)


> I don't believe in accepting things just because some engineer choose a default under some deadline 25 years ago

That's not the point. The point is the "standard" is suboptimal enough that you basically have to change it. Don't think that applies to much of the competition (with the exception of Java that isn't really competition?).


> The point is the "standard" is suboptimal enough that you basically have to change it.

but there's no standard, just CMake defaults that I change ? It's not more standard to call /usr/bin/clang++ than /usr/bin/g++ (and if I was running freebsd instead of linux, as far as I know that's what would happen by default) ; likewise, other build systems like Meson use ninja by default (and that does not make ninja any more of a standard than make is when using cmake under GNU/Linux). Those are just tools in a toolbox.


> but 3 seconds is not really something I would worry about

Except it's 3 seconds per file.

The project I'm currently working on has 1.5 million lines of code spanning ~5,000 c++ source files.

At 3 seconds per file it would take over 4 hours to do a clean compile.

Luckily for this project it's not 3 seconds per file, and a full rebuild only takes about 2 hours - which is still a major pain.

The concerns raised by the OP are completely valid, and cause issues for any medium-large c++ project.

I'd love to have faster C++ compile times.


My project is around 600k lines of C++ code, with around 1000 source files.

Not even half as big as yours, but for what it's worth my incremental build time is a bit under 3 seconds (and is almost entirely taken up by linking).

A full rebuild for me (on a Ryzen 5950x, doing the build in parallel across all 32 logical cores) takes about 80 seconds.

I feel that for the creative projects I work on, having an iteration time under five seconds is the most important thing for me. Once I've written the code, if that code isn't compiled, linked, launched, and visibly running on screen within five seconds of when I finished typing the code, then I'm going to be tempted to check email or otherwise context switch while I wait for it, and it might as well have taken half an hour.

I spend a lot of time early in development trying to figure out optimal approaches to getting that iteration time down, optimising for incremental build times, link times, and debug build launch times.

I find it pays huge dividends, and quickly, for me.


> I'd love to have faster C++ compile times.

It's not an inherent property of the language. With some care, it is possible to write C++ code with compilation speed comparable to good old C, even in large projects.

The worst offender is usually templates. Especially third-party libraries which use them heavily, like boost. Ideally, don't use these dependencies. Second best option, only include these libraries in *.cpp files which actually use them, and keep that number to a minimum. When absolutely necessary, note that C++ allows you to split templates across h/cpp files; just because the standard library is header-only doesn't mean non-standard templates need to follow the convention.
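
A minimal sketch of that h/cpp split, assuming the instantiations are known up front (names are made up):

    // sum.h
    #include <vector>

    template <typename T>
    T sum(const std::vector<T>& v);

    // suppress instantiation in every includer:
    extern template int sum<int>(const std::vector<int>&);

    // sum.cpp - the definition plus the one instantiation, compiled here
    #include "sum.h"

    template <typename T>
    T sum(const std::vector<T>& v) {
        T total{};
        for (const T& x : v) total += x;
        return total;
    }

    template int sum<int>(const std::vector<int>&);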

Another typical reason is insufficient modularity of the code. It can cause some of the source files (especially higher level ones, like the one containing the program's main function) to include ~all headers in the projects. The fix is better API design between different components of the software. A good pattern for complicated data structures is pure abstract interfaces, this way the implementation stays private, the consuming code only needs to include the (presumably tiny) interface definition. Another good pattern is FP-style. Regardless on the style, I sometimes write components with thousands of lines of code split across dozens of source/header files, with the complete API of that component being a header with 1-2 pages of code and no dependencies.
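
A tiny sketch of the pure-abstract-interface pattern (names are illustrative):

    // tree.h - the entire public API of the component
    #include <memory>

    class ITree {
    public:
        virtual ~ITree() = default;
        virtual void insert(int key) = 0;
        virtual bool contains(int key) const = 0;
    };

    std::unique_ptr<ITree> makeTree();  // factory; the concrete class lives in tree.cpp

Consumers include one small header with no dependencies; the thousands of lines behind it can change without triggering recompiles.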

And of course you want all the help from the toolset you can get: precompiled headers, incremental builds, incremental linker, parallel compilation, etc. Most of these are disabled by default, but can be enabled in the build system and/or IDE.


Appreciate the suggestions, and these are all things I aim for in my code; however this is a legacy project that has not generally taken these things into account, and sorting them out has not previously been a priority.

Luckily there is buy-in not just from developers but also senior management to fix things, but it's also in an extremely risk-averse industry so changes need to be slow and careful.


> and a full rebuild only takes about 2 hours

The fact that there is an "only" in that sentence, tells me everything that is wrong about C++ compile times, and makes me happy I work with Golang.


Do you have a Golang project with 5000+ source files? If no, your project is not on the same level at all.


No I don't, and part of the reason why is that it's much easier to organise Go code (no header files).

And even if a project was that size, it would compile faster than the C++ equivalent.


I would suggest using parallel builds and dropping your Pentium 4 for a modern multi core system.


Thanks. I’m sure that will fix the problem.


Not my fault your calculation for a 4 hour recompile depends on having zero parallelism, in a year where 16 core processors are easily available and blocking IO is still a thing. So yes it will probably turn several hours of waiting into a short coffee break, the few times you need a full recompile.


The math for my 4 hour compile didn't take into account parallelism; the 2+ hour builds however do, and were built on a 16 core CPU with parallel builds enabled. It's not turning into a short coffee break.


Anybody complaining about C++ compile time but not using ccache, mold, or ninja has lost all griping rights. Building a 5000 file project using only a single core is beyond silly.

Splitting the project into libraries that don't need to be rebuilt for normal development changes eliminates 90-99% of your build time.

We can get into separate dwarf files after you get the basics down.


> Splitting the project into libraries that don't need to be rebuilt for normal development changes eliminates 90-99% of your build time.

...for your particular use case*

* your particular use case may not be representative


The whole point of using C++ in 2018 is to have control over memory layout and CPU instructions emitted.

If you think lambdas with GC are a good idea then you would not use C++. You would maybe use Go.

> Compile times matter in extreme cases, but 3 seconds is not really something I would worry about.

I think a lot of people would be very happy if their projects compiled in mere 3 seconds.


> The whole point of using C++ in 2018 is to have control over memory layout and CPU instructions emitted.

Memory layout, yes.

CPU instructions emitted? Not so much.


I really disagree; on some of my use cases I measured -O3 -march=native (avx2) to be 30-to-50-ish percent faster than plain -O3 (sse2)


That's as may be, but nowhere in your program does it say that it should use either sse2 or avx2, which was the parent poster's point. At least that's what I understood it to be.

Technically you could have different code paths and inline asm, but that's not really specified by the C++ standard either.


I've found that `-march=native` with _any_ optimization level will almost always result in faster code. However, that faster code isn't always backwards-compatible to older generation hardware. And, where it is backwards-compatible, it can actually be slower.


> If you think lambdas with GC are a good idea then you would not use C++

Lambdas with GC are an extremely good idea, because of the silently shared environments and what not. Whether C++ should have lambdas if it can't guarantee GC is a completely different thing. C++ doesn't have to include everything to be fashionable, after all.


Some programming ideas might be "good ideas" in the abstract sense. But in the context of "I want to have control over memory layout and CPU instructions" they impose unavoidable costs. If lambdas must be accompanied by a GC, then they are a non-starter for someone who wants to use C++ in 2018.

In 2022, this applies just the same to someone who wants to use Zig or Odin.

If you want to be really high level and not worry about memory, you would not be using these languages to begin with.


I like the MOP in Common Lisp but I would never suggest that C++ should have a MOP, too. No language has to have everything.


Lambda GC doesn't matter if they're just syntactic sugar for plain old functions, which is how it is with G++.


C++ Lambdas are capturing, so they aren't plain old functions. They evaluate to closures: function + environment. In C++ the captured environment is explicitly written in the [] section of the lambda.

The issue that I think the top comment is alluding to is that lambdas cannot capture by reference and then outlive the enclosing scope, due to that stack frame becoming invalidated, which is a fairly large limitation that prevents swaths of lambda-heavy code from working.
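
A minimal sketch of that limitation (using std::function as the return type, just for illustration):

    #include <functional>

    std::function<int()> makeCounter() {
        int n = 0;
        return [&n] { return ++n; };  // dangling: n dies when makeCounter returns
    }

    std::function<int()> makeCounterByValue() {
        int n = 0;
        return [n]() mutable { return ++n; };  // fine: the closure owns its own copy of n
    }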

I personally think that this is a fine tradeoff for what C++ is: there's still plenty of utility in lambdas for simple higher order functions like maps and filters.


Although this kind of code is also exactly where Rust shines because ownership is explicit throughout, so you can't just accidentally forget that you captured something byref.


But hold up - capturing a stack variable by reference in a lambda and then using it after that frame is gone isn't defined behavior - the reference will be garbage when you try to read from it.


I feel like some people may be assuming that a capturing lambda must necessarily dynamically allocate memory, and that's just not always true.

The large majority of lambdas that I have written in c++ didn't need to do any dynamic allocation because they were used in (and only in) the scope where they were defined. Most of the time the compiler probably just inlined them.


A c++ lambda is a struct with a call operator. The destructors of captured objects are appropriately called when the lambda's lifetime ends.

Managing the lifetime of referenced objects is something you always have to worry about in c++ anyway, lambdas don't change that.
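
Roughly, what a by-value capture desugars to (an illustrative sketch, not what any particular compiler literally emits):

    #include <cstddef>
    #include <vector>

    // auto f = [v] { return v.size(); };  is approximately:
    struct AnonymousClosure {
        std::vector<int> v;  // captured by value; its destructor runs when the closure dies
        std::size_t operator()() const { return v.size(); }
    };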


If your lambda doesn't capture any variables, it can be converted to a function pointer.

You can also just capture by value, rather than by reference. If you really want, there's also reference counting with shared_ptr.
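
For example:

    int (*fp)(int) = [](int x) { return x + 1; };  // captureless lambda -> plain function pointer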


> You can also just capture by value, rather than by reference.

Those things are misnomers anyway. Having access to an enclosing environment via a free variable in a lambda expression is completely orthogonal to values vs. references. I suspect they should have named the whole thing differently.


> I have seen Lisp compilers take longer than that just to start the REPL.

Except with Lisp, you aren't starting the REPL potentially every couple minutes. I've worked in many Lisps and it's not uncommon to keep a REPL open for days (recent project had one open for _weeks_)

Back when I worked in C++, compile times drove me crazy. 3 seconds wasn't remotely normal even for tiny projects because there's the overhead of actually starting the compiling process, reading output, thinking about what you saw, repeat.

Compile times in "the minutes" was much more normal.


I've had a C++ job where a regular build took 30 minutes and a full rebuild over 1 hour. And it wasn't even a very big project. We moved the code to Java and the compile time there was under 10 minutes WITH tests (the C++ version had none). Without tests it was less than a minute. It's not a perfectly fair comparison (we changed some things and not everything was moved to Java), but still.


I have worked on C++ projects where the regular build was an overnight job.

Fortunately these days I work on saner projects which use distributed builds and a properly parallelized makefile, and a full build is just 2-3 minutes.

Incremental builds are still slower than ideal though.


3 seconds would be fine; but I've heard of people with 45 minute compile times.


I envy them. A clean compile on the codebase I'm working on takes over 2 hours.


Reminds me of Haskell. Grass is always greener I guess :)


If you wait 2 hours, you are Doing It Wrong. It will take you less than 2 hours to fix your build.


That's not true. If it were that easy the builds would be quicker. My last project was a 2 hour clean build on CI. I spent a week optimising it and got it down to 90 minutes, and within 2 months it was back up to 2 hours again. Entropy is real!


Eternal vigilance is the price of a fast build!


Ingo Molnar spent a year trying to improve build times for the Linux kernel [0]

And that was for C, which is relatively straightforward compared to c++.

I suspect it would take months to sort out the project I am on now, and even then full builds would still be slow.

To suggest it could be fixed in hours is ignorance.

0: https://lore.kernel.org/lkml/YdIfz+LMewetSaEB@gmail.com/T/#u


I suffered 30-minute builds until I installed ccache - now the build time depends on what has been changed, but it's a tiny fraction most of the time - unless I hit a template, but even then it's still a massive improvement.


Can't the build system already detect what changed? Is that not what ninja does?


Why do lambdas need garbage collection? What does GC offer that RAII can't solve?


This is known as the upward funarg problem:

https://en.wikipedia.org/wiki/Funarg_problem


I’m not sure about C++ in particular, but in general a lambda can capture a lexically scoped value and expects to still be able to access it even after the stack frame it got allocated in is popped. GC is the most straightforward way to keep that from leaking.

On a tangential note, the amount of Greenspunning C++ has done over the last two decades is truly impressive. There sure is a lot of syntactic noise though.


That's why C++ requires you to explicitly list the stuff you want to capture, and whether you want to capture it by value or by reference, and the least-keystrokes way to capture is to capture by value. Capturing a reference to something on the stack is an option which requires extra work.

C++ is predominantly a language that's oriented around value types instead of reference types. If you have a reference it's because you've taken extra steps to ensure the thing you have is a reference. As opposed to Java, Python, C#, Javascript, PHP etc where most everything is a pointer to somewhere.

Suggesting that C++ adopt GC to solve the problem of capturing references on the stack in lambdas is akin to suggesting that the Netherlands solve its biking/transportation problems by subsidizing cars and gasoline. It's not even wrong.


C++ lambdas are effectively callable objects, basically an anonymous class with an implicit operator(...) built in. They can capture other objects by value, which means those objects' lifetime is tied to the lambda object itself. When the lambda object goes out of scope, the captured objects' destructor will be called. There will be no leaks. Of course, if you capture by reference, this does not apply.


"Capture by value" is meaningless since an object could itself contain references. Combined with the fact that lambdas can have side effects (there is no way to avoid this in C++) it is actually possible for a lambda to wind up owning itself. Imagine a class that has a pointer to a function object as a member, whose type is compatible with a lambda that captured an object of that class. Now that lambda might call a setter for that class member, using one of its own arguments as the argument to the setter. Apply the lambda to itself, and now the lambda owns itself via its ownership of the captured object.

I have no idea what happens in this situation, but it is not at all impossible to wind up with something like this in a complicated and large codebase.


Keeping track of which lambda owns which captured variables can quickly become impractical, especially for complex capture structures that can create shared and even cyclic ownership in unclear and unexpected ways. For example, you might have a class that has a callback as a member with standard accessors (getter/setter); an object of that class could potentially have been captured by a lambda that is then set as the object's callback, creating a cyclic reference and potentially leaking memory. A lambda could even wind up owning itself if you are not careful. Unlike objects, which have a class definition (or at least a base class definition) that can be used to clarify or enforce ownership, lambdas can be created anywhere -- you may not even have access to the code that created a closure your code takes as a callback.
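
One concrete way the callback cycle can arise, sketched with shared_ptr ownership (names are hypothetical):

    #include <functional>
    #include <memory>

    struct Node {
        std::function<void()> callback;
    };

    auto node = std::make_shared<Node>();
    node->callback = [node] { /* uses node */ };
    // The lambda owns a shared_ptr to node, and node owns the lambda:
    // the refcount can never reach zero, so this leaks.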

So while C++ lambdas are useful as anonymous functions and when passed "downward" (if you are very careful about when and how ownership is transferred), the full power of lambda expressions is not really available to a C++ programmer. It is unfortunate that lambdas are so crippled in C++, because lambdas could become the basis for even more powerful language features (pattern matching, continuations, etc.). Of course, the C++ standards committee continues to see lambdas as a kind of shorthand for defining "function objects," rather than as a first-class type that can be used to define more advanced features.

RAII is not all it's cracked up to be. It works for memory management as long as you are careful, but it is a lot less robust for other resources. For example, fstream, a textbook example of RAII, awkwardly forces programmers to explicitly ask for exceptions to be thrown. This is necessary because close can potentially throw an exception, but RAII means that you are calling close in the destructor -- and exceptions cannot safely propagate from destructors because destructors are called as the stack is unwound during exception propagation. This is one example of what I meant when I said C++ features are not compatible with each other.

Ironically, adding a garbage collector could fix RAII by un-crippling lambdas. The idea here is to create something like the conditions/restarts system from Common Lisp (or call-with-current-continuation from Scheme), using lambdas to support CPS conversion and using continuations to implement "exceptions" (i.e. "conditions"). Garbage collection would allow more liberal use of lambdas, and thus the compiler could automatically generate lambdas to implement other language features (just like it generates an array of function pointers to support polymorphism for classes).

CPS conversion involves (among other things) taking the code appearing below a function call, wrapping it in a lambda that captures the stack frame, and passing that lambda as an (implicit) argument to the function (just like "this" is an implicit argument to member functions). "catch" blocks are similarly wrapped in lambdas ("continuations") and passed as arguments, and "throw" statements would call the appropriate exception handling argument ("return" statements call the non-exceptional continuation). Since everything is a tail call, you wind up heap-allocating the "stack frames" and relying on the garbage collector to deallocate everything (at this point it should be clear that explicitly managing ownership in this setting is totally unworkable).

Now instead of unwinding the stack you have control flow jumping directly to your exception handlers, which can potentially return control flow to a defined "restart" to avoid unwinding the stack, or else unwind the stack at the end of the catch block (unless another exception is thrown, in which case unwinding will be further delayed). Destructor exceptions are no longer a problem, because they can only be thrown when there are no active exceptions (or more precisely, when there would be no ambiguity about which exception is "active").

Sure, C++ wouldn't be C++ if it did what I described above -- unless you had the ability to add a declaration that you want it to happen for certain functions/class methods, for example by having a "collected namespace foo {..." syntax or whatever. It would be a headache for compiler writers to have to manage two separate function call paradigms, but it is not technically impossible to mix CPS code with C-style call stack semantics, and anyway CPS conversion is not the only way to implement what I described. Sadly the C++ standards committee does not see the poor compatibility between features as a problem that needs to be solved, so I doubt anything like this will ever happen.


If you want GC, shared_ptr is always available in C++. Sure, cyclic references are always possible, but it is not something that happens often by mistake. In 15 years of professional work in C++ I've never seen them.

On the other hand lambdas would be significantly less useful if any use implied an allocation and GC was forced.

The issue with fstream is that if you really care about the error you call close explicitly, otherwise you let the destructor swallow it.

In fact, if you really cared about errors, you would use a transactional interface, with an explicit commit and a no-fail implicit rollback in the destructor. RAII works perfectly for that.

Regarding CPS, that's exactly what the new coroutines do. They end up heap allocating the stack frame, which has proven extremely contentious to say the least. Still, RAII works just fine there.


> Compile times matter in extreme cases

In an IDE, your program is partially compiled with every keystroke. So compilation time is a paramount concern, not in extreme cases but always, everyday, all the time.

> 3 seconds is not really something I would worry about.

That's 3 seconds for a single translation unit, and a short one at that with a single non-templated function. Now compile 1,000 translation units, each of which is, say, 3 times as complex (so, a decent-size project) - and your compilation time has become 144 minutes, nearly 2.5 hours - instead of 192 seconds, a little over 3 minutes.

Now, it's true that you don't recompile your whole project every time, but still, the difference is huge.

> The fact that we are still dealing with weird problems with pointers

Actually, this has been turning into a non-problem in C++ with smart pointers and spans. More generally

> C++ has lambdas without garbage collection

I wrote this: https://stackoverflow.com/a/48046118/1593077

a few years ago. You're welcome to link to an explanation of why you believe GC is necessary when using lambdas.

> bunch of semi-compatible

To some extent, certainly. But you need to account for two points:

1. C++ is multi-paradigmatic. You should not expect to use all features together.

2. While this may seem weird, or ridiculous, C++ is a work-in-progress language. There are issues which have been known for decades and are only now being addressed, or not even now. I mean, we've needed (some of) the ranges functionality since the STL was introduced in the early 1990s, and it has just now made it into the language. This may not be a good thing but it is _a_ thing, so it's actually not the case that the language

> keep getting extended further from compatibility by the standards committee.


Does anyone using C++ for a real project actually enjoy 3-second compile times? It wasn't a great example to make their case (except in a narrow comparison to C), but it doesn't make for a very realistic counter-point either.

Compile times are definitely a headache for me.


Do you mean as high as 3 seconds, or as low as 3 seconds? C++ compile times on a template-heavy project I worked with recently were in the hours - you'd basically kick off a compile overnight or before you went to lunch.


Only if you are too masochistic to fix up your build.


I'm sure projects like chromium and llvm would love for you to just fix their build times.


And Unreal Engine too, please.


I used to work for Epic, and I did a good chunk of work on the game projects' build times, with reasonable success. Unfortunately I couldn't really change too much inside the engine because of backwards compatibility. Removing headers from other public interface headers has the possibility of breaking users' code, which is a no-no, so their hands are pretty tied. There are definitely some big wins to be had if they're willing to break back compat though!


Even without that restriction it's not that easy. It generally comes down to a choice between:

a) #include <vector> -- Lots of dependencies, but I know it works and I don't want to reinvent the wheel

b) #include <my_vector.h> -- Ok, I'm going to reinvent the wheel, but at least my wheel might have slightly fewer dependencies.

c) //#include <vector> // I'll just do without the wheel entirely


With UE4 that's not really the problem; it mostly falls under category b). The problem is that a bunch of stuff has unneeded module dependencies that hide the real dependency tree. Say a.h includes b.h but doesn't use anything from module B, but b.h includes SomeFundamentalHeader.h, which a.h does need. The choice here is to leave it alone, or fix the dependency but break any user code that relies on the same behaviour. I did just that in a few modules that were new and off by default, but good luck making a change like that in any commonly used modules.


I have a decent-size project, and when I change something here and there it takes about that long to recompile and run. A full rebuild takes longer, but it is parallelized and still very reasonable.


Ironically, compile times in the node ecosystem are much, much worse - and they are not even proper compilers, just bundlers/minifiers. Bundlers are written in JS with not-great optimization, and the single-threaded nature of the whole thing means that people often sit for minutes while waiting for a change to compile. The only saving grace is that node projects on the order of millions of lines tend to be rare.


> Compile times matter in extreme cases, but 3 seconds is not really something I would worry about.

3 seconds for 50 lines of code. How many seconds for 100k lines of code?


As always, the answer is "benchmark it". It might be 3 seconds for 50 lines, but also 3 seconds for 5k lines, since the heavy stuff has been compiled once.

Benchmark it, otherwise it's sadly not useful


I've managed to quite successfully stay away from C++ lately, thank you.


It's 3 seconds for something like 30 lines of code, which is completely crazy. Imagine if your project is 300'000 LoC.


>3 seconds is not really something I would worry about

3 seconds to compile less than 100 lines of developer-written code?


What's this about lambdas and GC? Are they reference counted?


C++ lambda captures can follow the copy constructor, so that when the lambda is copied, the captures copy too. Or you can capture by reference, which is easier to wander into unsafe situations with.

So a lot of times if you want the captures to stay alive but don't want a deep copy, you'll make a std::shared_ptr<> and capture that, which leads to reference counted captures.
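
A sketch of that pattern:

    #include <memory>
    #include <vector>

    auto data = std::make_shared<std::vector<int>>(1'000'000);
    auto task = [data] { return data->size(); };  // copies the shared_ptr, not the vector
    // *data stays alive until the last copy of `task` (or `data`) is destroyed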


Lambda expressions are a place where it's particularly likely that your mental model of what's going on isn't correct, or fails to account for something important in some edge case and so you lose track of who "owns" objects and thus is responsible for cleaning them up and when they need to do so.

With GC, this might cause a small unexpected leak of some kind. But in a language like C++ it can be Undefined Behaviour and all bets are off.


That's part of why the capture list is explicit in C++. Automatic by-reference captures should only be used for lambdas whose scope is strictly lexical (ie, passed down the call stack, aka "downward funargs"). Otherwise stick with by-value captures.


Not at all. But if you are saving lambdas for callbacks or other async things you'll generally want to add your own layer of reference counting anyways, unless your software architecture is very clear from the beginning.


your comment makes no sense


I genuinely think that the 'zero-cost abstraction' feature of C++ is ultimately a poisoned apple that has caused far-reaching damage to the entire programming experience. The problem is that zero-cost in C++ means that - if all the cards line up - feature X will not cause performance overhead in a Release build.

In C, I think it's reasonable to assume that there is a linear relationship between the number of expressions in ones code, and the amount of assembly instructions it generates.

No such relationship exists in C++ code.

The problem is that due to this zero-cost mentality, brain-dead simple libraries, such as iterators, often have dozens of layers of abstractions, which all show up in a Debug build.

This makes Debug builds harder to, well, debug, as you have to understand all these abstractions (and the reason why the designers thought they were a good idea, but that falls under the umbrella of psychiatry), as well as making the build unusably slow, of†en forcing C++ devs to debug Release builds, which means that they are staring at 3 lines of x86 assembly, with hundreds of lines of 'helpful' compiler generated source code around it.


> of†en

How did that dagger end up there?


I have genuinely no idea. I'm typing this on macOS in Chrome, with an English keyboard


This is 95% criticism about C++20 ranges (specifically the ranges v3 implementation), but spun as being about some more general trend. I think some of the criticism of ranges is fair - compile times do matter and non-optimized performance can also matter.

Stuff like `iota` being obscure is less convincing. It wasn't invented here and it is anyway something you learn once and then it's part of your vocabulary.


They mention the joke of adding boost being a fireable offense. Boost is a massive and very common dependency in less opinionated C++ codebases (i.e., a lot of corporate C++ codebases). There are plenty of other massive "all header, all template, all the time" C++ codebases. I think the trend for C++ libraries to be like this and to turn into major compilation time problems is already well established. If there has been any reversal of that trend, it's because of the observations and work of people like OP.

As a side note, a major folk selling point for modules was that it might help with compile times, and a ton of work has gone into it in order to try to deliver. I haven't checked in on it in about a year, but my understanding is that it has largely failed on this front.

Saying that there is a general trend/problem in C++ projects with very long compile times for even simple examples is, I think, an extremely well-grounded assertion (although ranges-v3 seems to be the new champion there).


Yeah, I agree that there are libs that have this problem, but it's not a new thing. Boost is a huge collection of libraries, with different properties and tradeoffs. It has been the case for two decades that a lot of the Boost libs are on the heavier side with regards to compile times. That's partly because Boost is an incubation chamber that pushes the limits. Perhaps an even bigger reason is that Boost libraries mostly err on the side of being generic, allowing users to wrap the lib around their data rather than the other way around.

But really, there's nothing new here. When a developer brings in a library to their project there are always a bunch of dimensions to evaluate. Compile time is simply one that C++ developers have to evaluate, just like developers who rely on NPM have to ask themselves: what happens if $author goes nuts?

FWIW, I think it's too early to declare failure on modules. The standard library hasn't been "modularized" yet, but the three major compilers all offer ways to import, rather than #include, the standard lib. I just did an experiment the other day and switched my current hobby project to use import for my standard library dependencies, and it nearly cut the build time in half. Very promising.
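
For reference, the change is mechanical (exact flags and support levels are compiler-specific):

    // before: textual inclusion, reparsed in every translation unit
    #include <vector>
    #include <string>

    // after: C++20 header units, built once and then imported
    import <vector>;
    import <string>;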


I don't understand how it's not a general trend but it's also not a new thing. Saying it's not a general trend is the particular point I disagreed with. It's a general problem in the C++ world, especially with the "modern" stuff, and it's either as bad or worse than before.

Some people say boost isn't "modern", because it has accumulated a lot of historical cruft to support buggy compilers (old versions and varied vendors). I don't think that's the most accurate interpretation. I think "Modern C++" became such a big thing after C++11 because the effort required to make and use boost-style libraries dropped significantly -- no longer do you need your project to either depend on boost, or implement its own version of some NONCOPYABLE macro to disable copy/assignment(/move) operators, or what have you. I think saying "modern" means removing the hassle along the way to implementing boost is to miss that the point is to be able to implement things like boost -- implementing/using things like boost is what's "modern". The fact that an absolute ton of the new standard library features have an origin story in boost only further confirms this.

One thing I forgot to mention when I distracted myself with modules, is that I totally agree with the author that the problem is also about debug build runtime performance, not just compile time.

Interesting to hear about modules. I'm out of the C++ game but I'd be happy for it to be working well.


I will also note that, specifically in large "enterprisey" codebases, the benefit of all-encompassing libraries like Boost is that they only need to get approved once. If you ever had to jump through the hoops trying to get some third party code approved in an environment that is actively hostile to that, it's a godsend.


I feel this. On the subject of Boost Geometry I cut multiple minutes out of our build times by removing all instances of "#include <boost/geometry/geometry.hpp>". I hate that the example code seems to encourage this - https://www.boost.org/doc/libs/1_64_0/libs/geometry/doc/html... as it adds a few seconds to the compilation time for each TU that does this (which can be most of them if you have it in another header).


> other viable systems programming languages simply did not exist (now you at least have Rust as a possible contender).

(I have no horse in this race) But if a big part of their complaint is compile times, Rust may not be the best example of a contender.


Rust incremental (and debug) compilation times aren't that bad, though. It almost feels like a mindlessly perpetuated rumor at this point.

Sure, it's kinda slow when compared to something like Go, but the amount of extra work rustc does compared to pretty much every other compiler is very big. Taking this into account, I can forgive it.

But even with all that said, Rust has a compiler performance team that is constantly trying to improve things and keep track of regressions. I dunno if C++ implementations (e.g. gcc) have something similar?

But yeah, at the end of the day, I think compilation times are important, but not as important as some other aspects of the language... Like, in C++ you can't even catch some errors until you start compiling and it fails in template expansion, while I usually don't have to compile Rust until I'm checking business logic. This allows me to focus on the problem at hand, not the language.


As someone who loves Rust and uses it daily, compile times are abysmal, even compared to C++. It's the sole reason I bought a 32-core workstation. But you're right, when iterating, incremental compilation times are reasonable enough. Compile time is a very small price to pay for all the developer time that is saved over the life of the project.


Have you tried the mold [1] linker? It's a drop-in replacement with a single cargo config change and gave me a massive speed-up (> 10x) for incremental debug builds of a large project with around 450 dependencies.

[1] https://github.com/rui314/mold


C++ makes me miss `cargo check`.

Sure, Rust compiles slower, but in C++ I have to wait on a slow compile just to find something dumb like a typo. And it doesn't even have the borrow checker. I have to make an exe that doesn't work to see if I made one typo. And then the errors are so bad.


You might find clang-check useful.

https://clang.llvm.org/docs/ClangCheck.html

You will need a Compilation Database (CDB) in a compile_commands.json file. I know CMake can generate one, but the fact that other build systems don't shows how fragmented and user-unfriendly the C++ ecosystem is.
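
Roughly like this (the paths are placeholders):

    # have CMake emit compile_commands.json into the build directory
    cmake -B build -DCMAKE_EXPORT_COMPILE_COMMANDS=ON
    # run the checks against that database, without producing any objects
    clang-check -p build src/foo.cpp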


... your C++ IDE does not show the errors as you type? If you're using sub-par tools by choice you cannot complain...


The best thing that I found that works is CLion. Everything else in the ecosystem lacked polish, or ergonomics, or consistency. And CLion is lacking in these too, just not as much.

Maybe I'm setting the bar too high for an old technology? I don't know.

What IDE (or just DE) would you recommend that works well?


Qt Creator is fine, Kate with the LSP plugin is also fine. I use the latter because I prefer, well, editor-style editors. The main reason is that I prefer to do build- and run-related things on the console; it's quicker and more flexible.


I use QtCreator. Here's my experience while editing, what do you think is missing ? https://streamable.com/xm1xw9


This looks a lot more responsive/snappier than CLion. I sometimes have to wait half a minute for syntax highlighting to "catch up" (not always, so I don't understand what ails it), or for a new method/class name to appear in auto-completion results.

I'm not on my work machine now, and I don't write C++ as much at my company anymore since I was (regrettably) pushed into a different role, so I can't recall an example off the top of my head. But there were definitely instances where you can pass something in and it doesn't error until you hit the "build" button (or run a build via the CLI). And even then, the editor doesn't report where the error is in the "editor area"; you have to read the file/line in the console.


I never found a C++ IDE capable of understanding our root CMake project well enough to give us autocompletion and inline errors.

We have a root project that's fully CMake, which orchestrates our sub-libraries (also CMake, declared with ExternalProject_Add) so that dependencies are built before their dependents (and also to handle final packaging). The subprojects are added as git submodules.

I tried VSCode and QtCreator but neither seems to support CMake multi-projects defined this way. I have to create a build directory per subproject, which is painful when we have so many subprojects.


> We have a root project that's fully CMake, that orchestrates our sub libraries (also using CMake, declared with ExternalProject_Add)

well, those are not sub-libraries but external projects with their own build system. I have sub-libraries in submodules, add them with add_subdirectory, and everything shows up and is auto-completed correctly.
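
Something like this (library names made up):

    # root CMakeLists.txt: one CMake tree, so the IDE sees every target
    cmake_minimum_required(VERSION 3.16)
    project(root LANGUAGES CXX)

    add_subdirectory(libs/foo)   # git submodules with their own CMakeLists.txt
    add_subdirectory(libs/bar)

    add_executable(app main.cpp)
    target_link_libraries(app PRIVATE foo bar)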


Thanks. We've been using external projects since some of them are old-style CMake, rife with global variables, but apparently we could've used add_subdirectory for some of the other libs. I'll keep that in mind for future refactorings and libraries.


Even if you can't make the IDE work, having a build target that compiles with -fsyntax-only and skips linking should still greatly speed up your compile-edit loop.
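
In its simplest form (assuming gcc or clang; adapt the glob to your layout):

    # front-end only: parsing and semantic checks, no codegen, no link
    g++ $CXXFLAGS -fsyntax-only src/*.cpp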


Try ccls


No, but thank you for making me aware of it. Linking with LTO takes forever on the projects I work on. I'm excited to see what Mold can do here.


Curious what kind of code you have. Sure, we don't have anywhere close to our C++ codebase in Rust, but we did integrate with a library that's on the bigger side, and compile times of the Rust part have been negligible compared to the C++ parts.


> So this lazy evaluation LINQ style [in the C# example] creates additional 0.03 seconds work for the compiler to do. In comparison, the C++ case was creating an additional 3 seconds of work, or 100x more! This is what you get when “features” are part of the language, as opposed to “it comes as hundred thousand lines of code for the compiler to plow through”.

imo this is the big takeaway here


I have 2 personal projects, one in C++ of about 90k lines, the other in JavaScript (ReactJS). A full rebuild of the JS project takes longer than the C++ one.


Are you using webpack, and if so is it possible to switch to esbuild?


Yes, I'm using webpack. Moving to esbuild would be a non-trivial amount of work.


Yes, the MSVC STL implementation is dog-slow in Debug. I solved the problem by creating a new build configuration called "RelNoOpt" that builds with the "Release" runtime libraries and STL, but turns off all optimizations. I get the debugging experience of a "Debug" build with none of its performance penalties. (Though the extra checks -- esp. iterator invalidation -- in the Debug STL have saved me tons of debugging time a couple of times on another project.)
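
For anyone wanting to try it, a rough sketch of the flags involved ("RelNoOpt" is my name for it, and you'd still have to register the custom configuration with your build system):

    # CMake sketch: release runtime + debug info, all optimizations off
    set(CMAKE_CXX_FLAGS_RELNOOPT "/MD /Zi /Od /Ob0")
    set(CMAKE_EXE_LINKER_FLAGS_RELNOOPT "/DEBUG")
    # /MD picks the release CRT/STL (so no checked iterators),
    # /Od and /Ob0 disable optimization and inlining, /Zi keeps debug info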


It's interesting to note that the algorithm can be rewritten with 2 nested loops instead of 3: for each z and x, check whether there exists an integer y. At 10,000 triples found, this is the difference between (on my system) 700ms and 42000ms, a factor of 60.
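
Something along these lines (my reconstruction of the idea, not the parent's exact code):

    // For each (z, x), y is fully determined: y = sqrt(z^2 - x^2).
    // Emit a triple only when that square root is an exact integer.
    #include <cmath>
    #include <cstdio>

    int main() {
        int found = 0;
        for (int z = 1; found < 10000; ++z) {
            for (int x = 1; x < z; ++x) {
                int y2 = z * z - x * x;
                int y = static_cast<int>(std::sqrt(static_cast<double>(y2)));
                if (y >= x && y * y == y2) {   // keep x <= y to avoid duplicates
                    std::printf("%d %d %d\n", x, y, z);
                    if (++found == 10000) break;
                }
            }
        }
    }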


Wow, and colleagues dared complain that Rust has a terse, hard-to-read syntax...


There are examples of far worse languages:

https://en.wikipedia.org/wiki/APL_(programming_language)#Exa...

At least the Hello World example is readable...


Incidentally, that obscure "iota" name that OP was complaining about came about by way of APL (which of course used an actual Greek letter to write it).


Actually I'd prefer the Greek letter. At first I read iota as itoa, which is also a thing.


I always love watching that incredible old APL demo [1], though. Seems incredibly futuristic for the time!

edit: this is actually a different demo, but there's a similarly impressive one focused on showing APL, from the 70s I think? Will update if I find it!

[1] - https://youtu.be/yJDv-zdhzMY


IIRC, APL even has its own keyboard (to type the code)


In my opinion, simpler is often better than complex. C++ gives control over memory layout and memory (de)allocations. With the complexity of newer C++ template/STL features, it becomes harder to stay in control of memory (and performance), and code becomes harder to read. Hence C-style C++ is popular in games (and often in embedded).


Discussed at the time:

“Modern” C++ Lamentations - https://news.ycombinator.com/item?id=18777735 - Dec 2018 (249 comments)


I played around with ranges in dlang. The idea of C++ ranges kind of originates from there. In D it's very easy to use and the standard library lets you write beautiful code with it.

C++ made it so ugly. Especially if you wanna implement a custom range.

I can recommend this article by Andrei Alexandrescu, if you are interested in the idea of ranges.

https://www.informit.com/articles/printerfriendly/1407357


I think that the original boost.range (from 2003) predates ranges in D. There is a continuous evolution from the original boost range, through boost range v2, to std ranges.


I also have to agree that the ranges functionality in C++20 is somewhat warty. I wonder, though, how much time the Pythagorean triples example takes to compile with C++20. The ranges library simulated functionality that would later go into the language, so the comparison is not entirely fair.


Senders and Receivers is the next big thing Eric is delivering to C++, stay tuned.


if i could have function overloading, operator overloading (particularly (), ++, --, *) and could auto-convert my existing codebase to C, i'd move right now to C. i stay with C++03 and that's enough.


I'm starting to grudgingly think the same. Although I like some aspects of C++, the feature set and its evolution over the years bother me. C++20 seems almost like a different language from C++11, like C++11 was to C++03, but with none of the joy for me; a lot of the features seem to complicate things in ways detrimental to the improvements they bring.


If you’re willing to stick to clang you can get function overloading in C. I don’t think you can get operator overloading though.

https://clang.llvm.org/docs/AttributeReference.html#overload...
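
It looks roughly like this (my sketch; the function is invented):

    /* clang's "overloadable" extension: ad-hoc overloading in plain C */
    #include <stdio.h>

    __attribute__((overloadable)) int my_abs(int x) { return x < 0 ? -x : x; }
    __attribute__((overloadable)) double my_abs(double x) { return x < 0 ? -x : x; }

    int main(void) {
        printf("%d %f\n", my_abs(-3), my_abs(-2.5));
        return 0;
    }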


hah i like operator overloading, making function objects etc.


No unique ptr, auto, or for each loop? No thanks


i write/maintain this software solo, have for 15 years, and just dont want to start doing autos in for loops. then i'd have to go and change all the loops, otherwise it would bother me. i dont want to go back and start messing with code that has been pretty solid all these years.


The answer is simple: just don't.

Write new code the new way, and leave old code that works alone. There will never be a case where you wonder why old code looks like old code. Continuing to write new C++03 is just masochism.

You might find reasons to switch some old stuff to use move semantics, eliminating reference counts, which is a small job. Leaving alone stuff that doesn't need to change needs only discipline, not time. Discipline builds with use. I recommend it.


> Continuing to write new C++03 is just masochism

disagree. this keeps things simple, readable and maintainable. beyond STL i havent found need for any new data structures or algorithms. i roll my own for domain specific (audio/graphics).

> Discipline builds with use. I recommend it

sure daddy. perhaps if the c++ committee had discipline they wouldnt have added these crazy things to the language too and made it a mess that it is today. however we masochists are pleased with the concession offered by stroustrup's one-liner: 'if u dont use it, u dont pay for it'


> disagree. this keeps things simple, readable and maintainable

Unique pointers pre-C++11 are fundamentally broken (think std::auto_ptr), and smart pointers are the poster child for modern C++. Other features like nullptr, enum class, range-based loops, and constexpr provide huge gains in readability too.

> i havent found need for any new data structures or algorithms

That's great - however the unordered containers that came in C++11 are for many people a drop-in replacement with a performance boost.
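
On the "fundamentally broken" point, a minimal sketch of what changed (the sink function is invented):

    #include <memory>
    #include <utility>

    // hypothetical sink that takes ownership of the pointer
    static void take(std::unique_ptr<int> p) { /* uses and frees p */ }

    int main() {
        std::unique_ptr<int> p(new int(42));
        // take(p);          // won't compile: unique_ptr's copy is deleted
        take(std::move(p));  // the ownership transfer must be spelled out
    }

C++03's auto_ptr would have silently transferred ownership on the commented-out copy, which is exactly the kind of surprise that made it broken.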


How hard is it to read a "for" loop? Even when programming in Python, half the time I find myself switching to a simple loop over an index because I realize I need the index. Often it's for debugging, because that index has a real meaning I can interpret when I see it.

Along those lines, I think the real disconnect in this argument is about what kind of problem people are programming for. For many applications, advanced C++ solves problems they don't have. The above poster mentioned audio. The data there is in a single (or perhaps a few) big buffers with a very simple and regular structure determined by international standards. They only need a few simple pointers, or maybe just std::vector. Similarly, just a few types, also with real-world meanings. Debugging may require them to check the processing in ways that abstraction, type-checking, etc. will only get in the way of. And new features may not work within whatever abstractions had been put in place for previous features anyway, because the features relate to the underlying data.


> How hard is it to read a "for" loop?

    for (std::unordered_map<std::string, int>::const_iterator it = some_map.cbegin(); it != some_map.end(); ++it)
is pretty damn unreadable (and for bonus points it doesn't compile) compared to

    for (const auto& keypair : some_map) 
Which doesn't have the possibility of the issue in the previous snippet (cbegin != end rather than cbegin != cend)

> For many applications, advanced C++ is solving problems they don't have.

No-one here is talking about advanced c++, we're talking modern c++.

> The above poster mentioned audio. ... They only need a few simple pointers...

And we all know that audio is immune from massive security vulnerabilities, right? [0]

"New" features like nullptr (instead of NULL), unique_ptr, move semantics as examples can just flat out avoid classes of bugs that come up in low level programming, and features like constexpr and static assert can make runtime checks compile time checks instead. All these features can be escape hatched if you really really need to just cast to a void pointer to fill a buffer at the end, but the surface area for nasty bugs is significantly reduced if you do so.

[0] https://msrc.microsoft.com/update-guide/vulnerability/CVE-20...
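
A tiny example of the runtime-to-compile-time point (names and values invented):

    // the invariant is verified while compiling; violating it fails the build
    constexpr int block_size = 256;
    static_assert(block_size % 64 == 0, "block_size must be a multiple of 64");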


Your examples might fit the discussion better if you made unreadable C++03 code that worked. But nonetheless, I don't agree. I can directly read what your first attempt at a loop is trying to do. Your second one hides almost everything from me. If the second one fails to compile and gives me a paragraph-long cryptic complaint, I need to somehow half-comment the thing out and create a bunch of test code so I can go into the debugger and figure out what you are actually doing.

Are you saying the creation of security vulnerabilities has decreased in the last ten years?

edit: and as for: "No-one here is talking about advanced c++, we're talking modern c++."

I am. Reconsider your examples in terms of not just whether someone needs the improvement, but also whether they actually need the preceding alternative you are complaining about.


> Your examples might fit the discussion better if you made unreadable C++03 code that worked.

I deliberately made it not work - calling cend instead of end fixes the issue (one which isn't possible with the range-based loops).

> Your second one hides almost everything from me. If the second one fails to compile and gives me a paragraph-long encrypted complaint, I need to somehow half-comment the thing out and create a bunch of test code so I can go in the debugger and figure out what you are actually doing.

And I disagree here. In practice the only issue I've ever seen with range-based for loops at compile time is missing iterator support, and the error on clang and MSVC is incredibly clear there. Chances are the compiler error message about comparing a const iterator to a non-const iterator is going to be more obtuse.

> also whether someone actually needs the preceding alternative you are complaining about.

I cannot think of a _single_ situation where the NULL macro is required and nullptr wouldn't be an improvement. I'm not saying don't use the fundamental constructs when they're needed; I'm saying use the modern alternatives when they're suitable. unique_ptr and move semantics alone eliminated practically every use-after-free bug I've seen in the last decade, for example. Enum class is another great example - legacy enums are absolutely chock full of footguns and I have found countless bugs where someone just passes garbage through. Those bugs just don't exist with the modern replacement.
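
For instance (made-up enums, but this is the footgun in miniature):

    enum Color { Red, Green };            // legacy: effectively named ints
    enum class Fruit { Apple, Orange };   // scoped: a genuinely distinct type

    int main() {
        int a = Red;                          // compiles: silent conversion
        Color c = static_cast<Color>(42);     // "garbage" is one cast away
        // int b = Fruit::Apple;              // error: no implicit conversion
        // Fruit f = 0;                       // error: must name an enumerator
        Fruit f = Fruit::Apple;
        (void)a; (void)c; (void)f;
    }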


I could be wrong, but all the fancy stuff you are doing to impose constness, including your bug, looks like C++11 to me.

Without those, the complexity in your example disappears, and every word has a direct intuitive meaning. Except for the ugly type expression on the left (which is the kind of thing one must love if you endeavored to be a C++ programmer in the first place, though personally I agree it's increasingly hard to read beyond the most basic cases). Nonetheless, using auto is the cause of the problem I was referring to. It eliminates the remaining half of the ugliness, but now if the code inside the loop chokes on your inputs, I have no idea why. Maybe if I'm lucky the compiler will give me a paragraph I can try to parse describing the mess the object was compiled into, or maybe it will just say "Can't find a suitable template. Have a nice day".


Your loop body code is probably choking because you have imagined that the loop induction variable is still an iterator. It's not. It is a direct reference to a container element. So, keypair.second, not it->second.


> Even when programming in python, half the time I find myself switching to a simple loop over an index because I realize I need the index.

If you're iterating over an iterable xs and discover you need the index as well, in Python you shouldn't switch to a loop over the index; you should switch from:

  for x in xs:
    ...
to:

  for idx, x in enumerate(xs):
    ...


C++03 is Turing-complete, implying nothing in any newer Standard is "necessary". Yet many things were added, for reasons. That you have failed to learn to understand those reasons is not a fault in the Standard. Blaming your own failure to learn on the committee does not fool anybody.


> C++03 is Turing-complete

thank god for that. and also thank god that it is not a proprietary language/platform (e.g. Apple's), otherwise we would be mandated to write in the latest C++.

> That you have failed to learn to understand those reasons is not a fault in the Standard. Blaming your own failure to learn on the committee does not fool anybody

what you call my failure to learn is not remotely unique to me. many others more eminent and eloquent than me (and you too) have said far unkinder things about C++, the least of them being the phrase 'cognitive load'.

the 'trouble' with software is that its impossible to leave it alone. an 'update or perish' sword hangs above it. and C++ has swallowed the bait totally. it has competed with the joneses all the time: oh Java has garbage collection, why cant C++ have that too?.. oh well we cant cos... our users write imperative, sequence-critical software, we cant have unpredictable, behind-the-back GC... so hah, smartpointer, nullpointer kludges. oh LISP has lovely lambdas, why does C++ have no lambdas?... well we give u lambdas but please close ur eyes to that horror-syntax. hey, we can give static const variables values in the struct/class declaration, why not other variables?... sure, that isnt a big problem! and the horror of all: C++ is so proud of being a type-safe language, yet modern programming is all about type-free, so hah, lets give u auto! what if i used auto as a var name in my code? tough luck. gotta change it dumbo. u are a bad programmer to make a variable name with just 4 letters.

C++ is so thorny, every symbol impregnated with meaning. walking on egg-shells, no wonder they call it 'cognitive load'.


Arguments from ignorance always fall on their own sword.

Every argument you can cobble together against learning the modern language applies as much to C++03, or '98, or, indeed, C itself. And to programming at all. Ultimately, though, it all amounts to laziness.

So the final position is to stop writing any code, and leave the work to those willing to do the work.


the work is not about the number of programming language features used. its about imagination, addressing a user's need, etc.

i'm done with learning further in C++. C++03 is enough for my purposes. for other features of C++, id use a scripting language integrated into my C++ app, say Lisp, Tcl, Python or Lua; a lot cleaner altho a bit slower.


Yes, it would be better for you to write in those other languages than to continue producing crappy C++ code that others must then maintain at extra expense.


wow, bro i dont remember peeing into your cornflakes this morning :) u know nothing about my work and i dont have to prove it to you. go sell your c++xx (even this naming convention is ugly) to linus. u will get a warm reception :)


It is no crime to think nothing new is worth learning. Still, bragging about it on the internet invites well-deserved scorn. Don't like scorn, don't advertise.


<scorn>piss off.</scorn>


More like, Wirth's Pascal from 1970 has broken, downward-funarg only local functions? So does GNU C. Why can't C++ have some similarly lame local functions? And then make them anonymous like lambda? Only we can't introduce a lambda keyword. I called static's agent, but evidently static is tied up playing five roles already. I know, let's use the array brackets: they have not had a good diddling.


> what if i used auto as a var name in my code

Then you had a syntax error. auto is a historic keyword that played a useful role in Dennis Ritchie's B and NB languages, the predecessors to C. It was retained as a reserved keyword in C, and then in C++.

The use of auto in relation to type inference in modern C++ is a repurposing of the existing useless keyword that has always been there.


cool thanks, my bad.

but i like to see the types im working with. auto, like another poster said, kinda hides the type even as it makes the code more readable, which idiomatically/stylistically i'm not comfortable with.


The key is "(2018)". Things are better now.

Use ccache, ninja, the mold linker. Lean into async. Split off libraries. A little attention, once, saves time all day every day.
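
A starting point if the project is CMake-based (flags assume a toolchain where -fuse-ld=mold works, i.e. a recent clang or gcc):

    # configure once: ninja generator, ccache wrapper, mold for linking
    cmake -B build -G Ninja \
          -DCMAKE_CXX_COMPILER_LAUNCHER=ccache \
          -DCMAKE_EXE_LINKER_FLAGS="-fuse-ld=mold"
    cmake --build build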



