> C++ is often described as complex, hard to learn, and unsafe. That reputation is undeserved. The language itself is not unsafe. On the contrary: it is precise, honest, and consistent. What is unsafe is how it is used if it is misunderstood or if one remains in old patterns.
I think this take needs to stop. It’s a longer way to say “skill issue”. Meanwhile, decades of industry experience have shown that the governing principles of even modern C++ make it incredibly hard (expensive) to deliver high quality software. Not impossible - there’s lots of examples - but unreasonably hard.
C++ is fundamentally unsafe, because that’s how the language works, and if you think otherwise, you don’t know C++. There are patterns and paradigms that people use to limit the risk (and the size of the impact crater), and that’s helpful, but usually very difficult to get right if you also want any of the benefits of using C++ in the first place.
Certain people will disagree, but I surmise that they haven’t actually tried any alternative. Instead they are high on the feeling of having finally grokked C++, which is no small feat, and I know because I’ve been there. But we have to stop making excuses. The range of problems where C++ is unequivocally the superior solution is getting smaller.
The author seems to be writing about a dream language that isn’t actually C++. For example:
> In my streams I showed how RAII can thereby also be applied to database operations: a connection exists as long as it is in scope.
Only if that connection object doesn’t support move — we’re 12 years of C++ standards past the arrival of move, and it still leaves its source in an indeterminate state.
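To make the objection concrete, here is a minimal sketch (with a made-up `Connection` wrapper, nothing from the article) of how a perfectly ordinary move breaks the "in scope means connected" reading:

```cpp
#include <memory>
#include <utility>
#include <vector>

// Hypothetical connection type: RAII via a unique_ptr-owned handle.
struct Connection {
    std::unique_ptr<int> handle = std::make_unique<int>(42);  // stand-in for a real DB handle
    bool is_open() const { return handle != nullptr; }
};

int main() {
    Connection c;                   // "a connection exists as long as it is in scope"
    std::vector<Connection> pool;
    pool.push_back(std::move(c));   // compiles fine; c is still in scope...
    // ...but c.is_open() is now false: the fact that c is in scope no longer
    // tells you anything about whether a live connection exists.
}
```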
> With std::variant, C++ gained a tool that allows dynamic states without giving up type safety.
> With C++20, std::ranges decisively extends this principle: it integrates the notions of iterator, container, and algorithm into a unified model that combines type safety and readability.
Ranges may be type-safe, but they’re not safe. Like string_view, a range is a reference, and the language does not help ensure that the referent remains valid.
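A minimal sketch of what I mean, with an invented helper that returns a view over a local container; the language accepts both of these without complaint:

```cpp
#include <ranges>
#include <string>
#include <string_view>
#include <vector>

// A view is a reference: returning it hands the caller a range that still
// points into the local vector v, which is destroyed on return.
auto evens_of_local() {
    std::vector<int> v{1, 2, 3, 4};
    return v | std::views::filter([](int i) { return i % 2 == 0; });
}

int main() {
    // Same non-owning story as string_view: sv points into a std::string
    // temporary that is destroyed at the end of this statement.
    std::string_view sv = std::string("temporary");

    auto evens = evens_of_local();
    // for (int i : evens) ...   // UB: iterates over a destroyed vector
    (void)sv; (void)evens;
}
```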
> Only if that connection object doesn’t support move — we’re 12 years of C++ standards past the arrival of move, and it still leaves its source in an indeterminate state.
I haven’t watched the streams he referred to, but… I am fairly certain the language itself says no such thing. You may be thinking of the standard library, which states that certain classes of moved-from objects have unspecified state. If you’re writing your own DB connection class, you can define moves to leave the object in whatever state you prefer, or disallow moves.
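For example, a minimal sketch (a hypothetical class, not anything from those streams) where the moves are simply deleted, so scope really does equal lifetime:

```cpp
#include <string>

// One way to make "in scope => connected" literally true: delete the moves.
class ScopedConnection {
public:
    explicit ScopedConnection(const std::string& dsn) { /* open the connection */ }
    ~ScopedConnection() { /* close it */ }

    ScopedConnection(const ScopedConnection&) = delete;
    ScopedConnection& operator=(const ScopedConnection&) = delete;
    ScopedConnection(ScopedConnection&&) = delete;             // no move: the object can never be hollowed out
    ScopedConnection& operator=(ScopedConnection&&) = delete;
};

// ScopedConnection conn("db://...");   // connected for exactly as long as conn is in scope
```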
Admittedly it’s still a weird example IMO, because external factors can sever the connection while the object is in scope.
There is literally no correct way to handle move in an RAII context that doesn’t either (a) behave unexpectedly if you try to use the moved-from object or (b) permit a null value of the object. This isn’t a library problem — it’s a language problem.
What if the moved-from DB object lazily opens a new connection, if someone uses it again? Maybe that’s a null object, but at least the nullness isn’t really observable to the API user. Even the extra latency or possibility of failing to connect must be expected at any time from query() etc. so it changes little.
Also, I would say nothing is “unexpected” behavior if you document it and implement accordingly. And at least for this DB case, handling it is not onerous or stretching the idea of class invariants beyond usability.
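Something like this sketch, with invented names standing in for the real driver calls:

```cpp
#include <memory>
#include <string>
#include <utility>

// Hypothetical sketch of the "lazily reconnect" idea: query() re-establishes the
// connection if the handle was stolen by a move, so the nullness of a moved-from
// object is not observable through the API.
class DbConnection {
public:
    explicit DbConnection(std::string dsn) : dsn_(std::move(dsn)) { connect(); }

    DbConnection(DbConnection&& other)
        : dsn_(other.dsn_),                      // copy the DSN so the source can still reconnect
          handle_(std::move(other.handle_)) {}   // only the live handle moves

    std::string query(const std::string& sql) {
        if (!handle_) connect();  // same failure modes a dropped connection already forces you to handle
        return run(sql);
    }

private:
    void connect() { handle_ = std::make_unique<int>(1); }      // stand-in for the real handshake
    std::string run(const std::string&) { return "rows..."; }   // stand-in for the real query

    std::string dsn_;
    std::unique_ptr<int> handle_;                               // stand-in for the real connection handle
};
```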
I’m probably like what GP said, “high on the feeling of having finally grokked C++, which is no small feat.” But I either want to understand better why move is broken, or we can agree that things like move require too much skill to get right and there are better alternative languages.
> What if the moved-from DB object lazily opens a new connection, if someone uses it again?
Great, so now the stateful settings on my database connection change depending on whether I move from it.
Database connections are kind of a bad example — having a connection drop is not really unexpected behavior, and a program that uses a database should be prepared for a connection to drop, so there’s kind of an invalid state on a connection anyway. But things like file handles or mutex guards aren’t like this — it’s reasonable to expect that, on a functioning system, a file handle won’t go away. And if I’m using a type-safe language that supports RAII, I would like the compiler to ensure that I can’t use an object that isn’t in a valid state.
Rust can do this, as can lots of languages that support “affine” types (that name is absurd). GC languages can kind of do this too, as long as cloning the reference is considered valid. Languages with “linear” types can even statically guarantee that I don’t forget to close my object.
C++ can ensure that an object is valid if it’s in scope, but only if that object is not movable, so “consume” operations are not possible in a type-safe way.
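A minimal illustration with std::unique_ptr, which already has about the best-behaved move semantics the standard library offers:

```cpp
#include <memory>
#include <utility>

void consume(std::unique_ptr<int> p) { /* takes ownership */ }

int main() {
    auto p = std::make_unique<int>(7);
    consume(std::move(p));   // a "consuming" operation
    // p is still in scope and the compiler happily accepts uses of it;
    // the moved-from unique_ptr is guaranteed null, so the line below would
    // compile and then be UB at runtime rather than a compile-time error.
    // *p = 42;
}
```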
> decades of industry experience have shown that the governing principles of even modern C++ make it incredibly hard (expensive) to deliver high quality software
How can "decades of experience" show the deficiencies of Modern C++, which was invented in 2011?
If you've worked on a project built entirely on the technologies and principles of modern C++, and found that it caused your software to be low-quality, then by all means share those experiences. C++ has no shortage of problems, there is no doubt. But hand-waving about "decades" of nondescript problems other people have had with older C++ versions is somewhat of a lazy dismissal of the article's central thesis.

My criticism is not of any particular feature in C++ (though there is much to both hate and love about it), but of a particular approach to software engineering that the language design leans into, and which even the newest iterations of the language do nothing to alleviate, and in many many cases even reinforce.
The easiest path in C++ is almost always the dangerous path. (The classic example is operator[] versus at().) With every tiny feature of the language, things we take fully for granted in every other language, there is a host of caveats, gotchas, footguns, and trap doors.
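A tiny example of that asymmetry:

```cpp
#include <cstdio>
#include <stdexcept>
#include <vector>

int main() {
    std::vector<int> v{1, 2, 3};

    // The easy spelling: no bounds check, out-of-range access is UB.
    // int x = v[10];

    // The safe spelling is longer, throws on error, and is the one you have
    // to remember to opt into.
    try {
        int y = v.at(10);   // throws std::out_of_range
        std::printf("%d\n", y);
    } catch (const std::out_of_range&) {
        std::puts("caught out_of_range");
    }
}
```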
This is a serious risk as a project and its team grow, because the accumulated sum of accidental complexity grows exponentially.
It’s possible to manage this risk, but it is also expensive, and every modern-ish competitor to C++ allows fewer people to deliver better software quicker.
It’s not a good choice for almost anything that doesn’t specifically require C++ for its own sake.
Read my comment above: hardening is in, an effort to classify and remove UB is on the way, and implicit contracts (yes, that basically means bounds-checking even native arrays automatically) are on the way.
If you use warnings-as-errors, compilers catch even subsets of dangling nowadays. Other ways of dangling have been eliminated (forbidding some conversions that bind to temporaries, and the range-for lifetime extension fix).
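For example, a snippet like this (hypothetical, but it is the kind of thing I mean) is rejected outright under -Werror on recent GCC and Clang:

```cpp
// GCC flags the return with -Wreturn-local-addr, Clang with -Wreturn-stack-address,
// so with warnings-as-errors the translation unit does not build at all.
int& broken() {
    int local = 42;
    return local;   // reference to a local that is about to be destroyed
}

int main() { return 0; }   // broken() is never even called; the diagnosis is static
```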
I agree with you that the defaults are still not the best, but it is steadily getting better.
But I think in the next few years things are going to be tightened further in the standard, with better defaults. In fact, it is already happening.
All of that is great, and I strongly support these initiatives, but they are all teeny tiny bandaids.
Lifetimes are a crucial aspect of writing code in C++, yet they do not appear anywhere in the syntax. The same goes for synchronization. These problems are fundamentally unfixable without major, incompatible language changes.
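A small sketch of what I mean by lifetimes being invisible in the syntax (the function is made up for illustration):

```cpp
#include <string>
#include <string_view>

// Nothing in this signature says the result borrows from the argument.
std::string_view first_word(std::string_view text) {
    return text.substr(0, text.find(' '));
}

int main() {
    // Fine: greeting outlives the view.
    std::string greeting = "hello world";
    std::string_view w1 = first_word(greeting);

    // Dangles: the temporary std::string is destroyed at the end of this
    // statement, and the signature gave the caller no hint that this matters.
    std::string_view w2 = first_word(std::string("hello world"));
    (void)w1; (void)w2;
}
```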
Isn't it a "skill issue" whenever you complain about a language? I can say rust is too slow and too convoluted, and you will tell me "well stop fsckin cloning everything and use Arc everywhere. Oh and yeah lifetimes are hard but once you get used to it you will write perfect programs like my claude max subscription does!"
It is true that it will never be Rust-level safe, but there is ongoing work: standard library hardening is in for C++26, some temporary-dangling cases have been removed (range-for lifetime extension, and forbidding conversion and binding to a temporary), a form of implicit contracts is on the way (meaning that even regular code can be compiled with implicit bounds checking, including arrays), and there is an ongoing effort to remove all forms of UB.
So I agree it has its quirks, but as the defaults keep changing and improving, it keeps evolving into something safer by default than it was before.
No, I do not mean that you must be super-skillful anymore: I mean that with all of that in, things are much safer by default.
Things keep improving a bit slower than we would like (this is design by committee) but steadily.
> Meanwhile, decades of industry experience have shown that the governing principles of even modern C++ make it incredibly hard (expensive) to deliver high quality software.
Unreal Engine is C++ based and plenty of games have used it.
Fundamentally, when it comes to safety, it's either everything or nothing. Rust is by definition unsafe, because it has the "unsafe" keyword. If the programmer has enough discipline not to use unsafe everywhere, he/she has enough discipline to write normal C++ code.
But as far as C++ goes, the main problem is that the syntax still allows C-style pointers and dereferencing for compatibility with C code. Generally, if you stick to using std library constructs and smart pointers for everything, the code becomes very clean. std::unique_ptr is basically the thing that inspired Rust's ownership semantics, after all.
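A minimal sketch of what that "clean" style looks like, with an invented Widget type:

```cpp
#include <memory>
#include <string>
#include <utility>

struct Widget {
    std::string name;
};

// Ownership is explicit in the types: the factory hands ownership to the caller,
// there are no raw owning pointers, and the object is freed exactly once when
// the owner goes out of scope.
std::unique_ptr<Widget> make_widget(std::string name) {
    return std::make_unique<Widget>(Widget{std::move(name)});
}

int main() {
    auto w = make_widget("gadget");
    // no delete, no manual cleanup; ~unique_ptr releases the Widget here
}
```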
The c++ features that get bolted on to replicate those in other languages tend to never reach parity because of all the legacy baggage they need to design around. Modules are not nearly as useful as one would hope. std::variant and std::optional are not nearly as ergonomic or safe to use as rust equivalents. coroutines are not exactly what anyone really wanted. If you're simply looking for checkboxes on features then I suppose you have a point.
To be clear, I like and continue to use modern c++ daily, but I also use rust daily and you cannot really make a straight-faced argument that c++ is catching up. I do think both languages offer a lot that higher-level languages like Go and Python don't offer, which is why I never venture into those languages, regardless of performance needs.
> std::variant and std::optional are not nearly as ergonomic or safe to use as rust equivalents.
> but I also use rust daily and you cannot really make a straight faced argument that c++ is catching up.
I mostly use std::ranges, lambdas, and concepts, and I see them catching up, as an evolutionary process rather than a fixed implementation in the current standard. Nowadays I can do familiar folds and parallel traversals that I couldn't do in the past without assuming third-party libraries. My optionals are empty vectors: it suits my algorithms and interfaces a lot, and I never liked `Maybe a` anyways (`Either errorOrDefault a` is so much better). I also use Haskell a lot, and I'm used to the idea that outside of my functional garden the industry's support for unions is hardly different to the support of 2-tuples of (<label>, <T>), so I don't mind the current state of std::variant either.
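For illustration, this is the sort of fold-plus-pipeline I mean, assuming a C++23 standard library that ships std::ranges::fold_left:

```cpp
#include <algorithm>    // std::ranges::fold_left (C++23)
#include <cstdio>
#include <functional>
#include <ranges>
#include <vector>

int main() {
    std::vector<int> xs{1, 2, 3, 4, 5};

    // Sum of squares of the even elements, as a pipeline plus a fold.
    auto evens_squared = xs
        | std::views::filter([](int x) { return x % 2 == 0; })
        | std::views::transform([](int x) { return x * x; });

    int sum = std::ranges::fold_left(evens_squared, 0, std::plus<>{});
    std::printf("%d\n", sum);   // 4 + 16 = 20
}
```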
Everyone is welcome to their own opinions and there is definitely movement in the right direction. However, it's a far cry from catching up. std::variant doesn't force you to check the tag and doesn't have any easy way to exhaustively match on all the types it stores. I'm not sure I understand what you're comparing it to in terms of tuples. Forcing everything to be "nullable" or have a default state can be painful to deal with and introduces invariants I often wish were impossible. Ranges are definitely nice compared to what we had before. I'm probably just not used to them, but they aren't always intuitive for me and spelling them out is very verbose. It's not nearly as simple as calling map and filter on a collection or iterator.
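To show the ceremony I mean, here is a minimal sketch of variant visitation; the overloaded helper is the usual hand-rolled boilerplate, not something the standard library provides:

```cpp
#include <cstdio>
#include <string>
#include <variant>

// Boilerplate you have to write (or copy) yourself to get match-like visitation.
template <class... Ts> struct overloaded : Ts... { using Ts::operator()...; };
template <class... Ts> overloaded(Ts...) -> overloaded<Ts...>;   // guide not needed since C++20, kept for clarity

using Value = std::variant<int, std::string>;

void print(const Value& v) {
    // std::visit does reject a visitor that misses an alternative at compile
    // time, but you get there via this overload-set trick rather than a
    // built-in match, and std::get<int>(v) on the wrong alternative is only a
    // runtime throw.
    std::visit(overloaded{
        [](int i)                { std::printf("int: %d\n", i); },
        [](const std::string& s) { std::printf("string: %s\n", s.c_str()); },
    }, v);
}

int main() {
    print(Value{42});
    print(Value{std::string{"hi"}});
}
```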
Honestly, I find myself reassessing my stake in the c++ ecosystem on occasion when I discover a new way I broke it. Often my post mortem reveals, yes, skill issue, and yes I agree, many of my paradigms and patterns orient around limiting the impact crater.
Honestly tho, I keep the tool in my belt because I believe it is still the best for what I use it for: low latency financial applications and game engines.
If I find some time to migrate from c++ to a different language I may for certain games, but thats a future bridge to cross.