This suggests you aren't aware of the many huge benefits that have come from introducing modern features that other languages have. C++ since 2011 is quite a different language to what it was before, and this is hardly a bad thing. So many of the challenges of writing C++ were significantly simplified with the introduction of lambdas, smart pointers, and threading primitives.
The issue isn't that it gets new features; it's that they're half-assed, because you can't impose new restrictions on old code. The power of these features in modern languages comes from their ability to protect you from yourself. That has more to do with what you can't do than what you can. Bolting more legs onto C++ doesn't protect you from anything, it just increases the surface area of the language.
They haven't introduced the features other languages have; they've introduced poor rip-offs that fail to work as one might expect having worked in any of the other languages they're derived from.
The problem is it's not different than it used to be: the cracked foundations are exactly the same as they've always been, but there's a bigger and more complicated house on them. C++ is starting to feel like the Winchester Mystery House.
The C++ philosophy is not about protecting you from yourself. It's about allowing you to express your idea in as low- or high-level terms as you require. This lets you write a highly optimized tight loop and then abstractly describe the less performance-sensitive parts.
In the past it was thought that people would use safer high level languages and then drop down to C for performance. That vision just doesn’t seem to work out in practice - except at the cross process level.
The trade-off is language complexity. If you want a simple language, C++ isn't for you - and that's OK.
C++ for me was, and with every newer standard increasingly is, really a meta-language. Many big projects use it to build their own "language". Such a language is almost completely ad hoc and does not enjoy compiler checks, because it lives in project guidelines. It may get some checks thanks to the whole template machinery, but the error messages barf about templates, not the real intention behind them.
Every big project is almost entirely different. They use different sets of features - some overlap more, some less. Many developers like to use C libraries, because they are easy to wrap in their version of C++. When you shop for libraries you often have to think about whether their version of C++ will work with yours. There is some consensus around the STL and Boost, so at least that much is relatively straightforward.
There seems to be a more modern trend of wanting all code to look the same. I've worked in large projects that have existed over many eras (including as far back as K&R C). The "refactor when it becomes a problem" approach seems to work amazingly well. Global refactors and project-wide styles seem to always fail miserably - inevitably a new era comes before the previous standardization effort completes.
E.g. in the C->C++ transition most mallocs were left alone. If you wanted to add a ctor/dtor then you would go refactor them as necessary. It also encouraged you to encapsulate your work more so than you would have otherwise.
"Global refactors" work well with type system support, and not at all otherwise - and more modern languages do tend to come with better such support. Even for C++ itself, LLVM has a few bespoke tools to help you refactor old code as suggested by the semi-official C++ Core Guidelines - of course not all possible "refactors" are equally supported!
A project where different modules are written in different C++ dialects is a real pain when you have to refactor code across modules or even just write or review code in different modules. And the finer the granularity at which you allow dialects to be mixed (within a module, within a file, within a function), the more horrible it gets. Every time you make a change you need to think "what dialect am I supposed to be using here?" The mental surface area required to understand code grows and grows.
But it is also true that forcing all the code in a project to use a single dialect is expensive. Developers need to decide what that dialect is --- not just once, but constantly as C++ evolves. ("Is it time to use Ranges yet? Concepts?" etc.) You need to do expensive refactors every time you update the dialect. You need to enforce the dialect at code review time (harder as it changes). Every time you import third-party code into the project you need to update its dialect.
A constantly evolving language that regularly deprecates commonly-used features in favour of "better" alternatives, like C++ does, is problematic. The faster pace of C++ evolution is making this problem worse.
Aside from exceptions (can't use them safely if RAII is not universal) and shared pointers, most new language features are pretty localized in effect. E.g. using a lambda in a function originally written in C++98 does not make the existing code less safe. Only your sense of aesthetics would force you to update the rest of it.
Python is likely the best example of this working; however, even in Python the boundaries between high- and low-performance sections are very formal. With C++ it's a lot simpler to go in and optimize a piece of code the profiler has pointed out.
I really don't understand the animosity non-C++ developers have towards the language. You can still use Python, Rust, etc if you want. Nobody wants to force you to use C++.
I am a C++ developer by career. Me and many other C++ developers I know do not really like the language. It is what it is. It has useful parts, but complexity is a weight.
How do you end up being a C++ developer that hates the language? I'm a C++ developer as well, and if I ever decided that I didn't like what I was doing every day, I'd learn a different language and try to get hired for that.
I’ve been an iOS engineer for 7 years and I’m not exactly a fan of Objective-C. I’m good at it though, I know where all the sore spots are and part of my value is I can stop people from hurting themselves with the language. One day, inshallah, our team can eventually move on to Swift (thank you ABI compatibility).
Until then I work in it because my goal (and my job) is to deliver amazing products and experiences to my customers and if I had to do that in COBOL, I’d do that too. My job is not about the language for me, it’s about what I’m doing with it.
Non-C++ developers are usually awed by it. The animosity comes from people with experience in C++ that decided not to use it.
Pre-11 C++ was a bad language. The gains of leaving it for something like Java (which isn't even good by today's standards) were huge. The new standards are making it possible to create better code in the language, but it is still far from a modern language, and the added features cannot remove complexity, they can only add to it.
> the added features can not remove complexity, they can just add to it
There are countless examples of the so-called modern C++ features reducing complexity in code.
For example, it is now considered a code smell to ever write "new" or "delete" in your code. Imagine that, never using these words to manually allocate and delete memory in C++! But it's true; unique_ptr in particular makes code so much simpler and safer.
Type inference with auto doesn't just save typing, it improves performance in many cases and reduces a lot of complexity, while also avoiding uninitialized variables.
These are just a few of at least a dozen examples that come to mind about reducing code complexity with more modern C++ features.
New features can reduce the complexity of code, but they cannot reduce the complexity of the language.
C++ programmers still need to know what "new" and "delete" do, so they can work with older code that uses them. They also need to learn what "auto" does, so they can work with newer code that uses it. (The behavior of "auto" isn't trivial; how many C++ programmers understand the difference between "auto v = <expr>;" and "decltype(<expr>) v = <expr>;"?)
...instead you'd use "decltype(auto) v = <expr>", which is equivalent. And people certainly use decltype(auto), or else it wouldn't have been added to the language, 3 years after regular auto.
Not sure why this was downvoted; decltype was introduced in C++11 alongside auto for declarations, so the example given doesn't reflect reality. A better one might be a function returning auto vs returning decltype(some expression).
Which specific example is chosen doesn't seem all that relevant to the point: there are these two similar but not identical things that people have to understand (or might not understand) the difference between.
I hate unique_ptr and shared_ptr, using them is so glaringly inconsistent.

Half the time you can’t construct a unique_ptr when it’s easy, because you don’t have the information until just enough later (like after a calculation or two in the body of your constructor) that you can’t use the easy way.

So how do you transfer a unique_ptr? Well, the obvious ways don’t compile, and you eventually learn you have to move it, which involves using std::move(), but it always seems to go in the place I don’t expect it.

And then how do you use a unique_ptr outside of your class? You can’t pass by value, for obvious reasons. Pass by reference, maybe? I think that’s frowned upon; I think you’re supposed to pass a naked pointer, and Modern C++ code is supposed to understand that naked pointers mean “I’m loaning you this pointer.” But I thought the point of Modern C++ was to get rid of pointers?

Anyway, shared_ptr works completely the opposite way. You are supposed to pass the pointer by value. Now you can argue that of course it’s supposed to be that way, and the arguments are all cogent, and when you spend half a day figuring it all out it makes sense. Until tomorrow when you forget it all and have to actually use the things.

Plus I hate underscores with a passion, it hurts to type them. Modern C++ also seems to like making a line of code longer and longer, because you can’t just make a unique_ptr or shared_ptr with a constructor, no, you need make_unique<typename>(...), and the arguments are magically the same as the type’s constructor, even though it’s obviously a different function. Yuck!

At least new and delete are pretty obvious how to use, and only have one special case to worry about ([]). Granted, the *_ptr are better, but I hate using them and wish I could use something that was shorter and easier to remember.
> I hate unique_ptr and shared_ptr, using them is so glaringly inconsistent.
They're glaringly inconsistent because they behave differently with regards to ownership, and the fact that you can't copy them or pass them around in certain cases is because they're designed to stop you from doing this as it would undermine the reason you're using the class.
Yes, but that fact is I can't ever remember how to use them properly (unique_ptr in particular). It's not often that you need to pass a unique_ptr to a function outside your class so that it can do something with it (without transferring ownership), so I can never remember how I'm supposed to do it. But it seems like if I want to hand a pointer to a short-lived function, doing it ought to be pretty consistent, whether I'm passing an old-skool naked pointer, a unique_ptr, or a shared_ptr, and it's not.
Like I said, if you look at all the logic, there's a good reason why everything is the way it is. The problem is I can't use the things without looking them up. Usability of my language is a big deal for me, which is why I hate unique_ptr.
Careful with that if it's not the only thing happening in the statement. It might be a while between the 'new' and the unique_ptr ctor. If an exception happens in between the object will leak.
e.g. foo(unique_ptr<X>(new X), unique_ptr<Y>(new Y)) is a leak waiting to happen.
Very good point! I was only thinking about the specific case of a statement that just makes an object and shoves it into a unique_ptr (e.g. does what make_unique does in the linked Herb Sutter article), in response to the comment about not being able to use a normal constructor. You are totally right that this isn't exception safe in cases like you mentioned.
The vast majority of the time you just want to pass a reference to the pointed-to object (for both unique_ptr and shared_ptr). Once you've decided what you're actually going to do with the object, that usually determines how you're going to want to pass it.
It lets the compiler do automatic generic specialization in some cases, since you don't have to do template-fu, use a nonspecific type, or manually write multiple specializations.
Plus it usually makes the code cleaner by focusing on structure, not types. Like any construct, it can of course be abused.
It's a bit like IE6: a big upgrade over C, but the next update took so long that it was hated by the end of its life. Which, coincidentally, is exactly why C++ now does timed releases.
I worked full-time in C++ from 2012 to 2017, and I don't like the language. I also did feel like I had to use it: I was a developer on an Apache/Nginx webserver plugin, and since those are written in C our choices were (1) C, (2) C++, or (3) a C shim that communicates with some other language. None of these options were ideal, but of them (2) was our best one.
> they've introduced poor rip-offs that fail to work
This sounds like hyperbole to me. I've worked in other functional languages professionally for many years, but when I write C++ with consistent use of std::function and lambdas, I get many of the same benefits and a very enjoyable workflow. Is it the same as Haskell or Clojure? No, because they are totally different languages. But within the C++ world, they offer a great productivity benefit that I don't think satisfies the definition of "rip off".
Not all of them, to be sure, lambdas and in particular their capture list syntax is solid. By no means is that universal though, for instance, move semantics.
std::move doesn't move anything, it casts an object as a movable reference (equivalent to static_cast<T&&>). The compiler doesn't preclude you from accessing the old object (or say anything at all) and there's no guarantee it actually did get moved, it's just a hint. That's not real move semantics, it's a shoddy knock-off.
You also have to manage a new constructor, new reference type and the opt-in because the default remains not using move semantics. Worse yet, there's varying agreement on what you can or should do with the source of a moved value, the STL containers will all continue to let you use the old value, it's just empty now. That's not standard behavior because there is none, and it's de facto now because it's in the STL. What a nightmare. [1]
std::variant is supposed to be associated values on enums, but of course, it doesn't do that either. You don't match, you create a new struct with a bunch of operator() methods on it [or overload a lambda?!] and throw it at std::visit. You can only have one variant of each type. Then it throws exceptions if you start mucking about in ways you shouldn't. There's no context. It's dreadful. [2]
I think the way you are thinking about move is all wrong.
In any kind of correct code, the difference between a move and a copy is only performance. If a copy were to happen where a move was requested then the code is just as correct, so I find it strange to get so hung up on it not being “real”.
Also, if move is the only available option and a move can't happen, you get a compiler error. If performance is correctness for a type that is expensive to copy, make copy not an option.
Also, there is a move constructor by default so it’s not opt-in, you opt out only if you start screwing around with copy / assignment / destructors which you usually shouldn’t need in modern code anyway.
Sure, the state of moving on the moved from object is unspecified, but really, I can’t think of a time when I’ve written code that would care. It’s kind of a non-problem.
If you really want to reuse an object after a move I question your motives, but you can just reinitialise it by assigning a freshly constructed value, and the result of that is of course standard.
> In any kind of correct code, the difference between a move and a copy is only performance. If a copy were to happen where a move was requested then the code is just as correct, so I find it strange to get so hung up on it not being “real”.
That's just not true when you take smart pointers into account. unique_ptr is pretty obvious, since it can't be copied. But shared_ptr is more devious, as there is a clear semantic difference between giving someone a copy of your shared_ptr vs moving your copy to them. And, given that destructors are often used for more than simple resource cleanup (e.g. they are sometimes used for releasing locks), the difference between a move and a copy can have a huge impact on program behavior.
Lambda captures make it pretty easy to create nasty memory safety issues via dangling references. Sure, you could create such issues with the equivalent manually written closure-structs, but lambda captures pave the dangerous path.
I would consider your counter examples rather compelling, thanks for the details. I never particularly enjoyed move semantics, so you have a point there.
There is still quite a bit of friction at the seams to the "core" language though. If you write a plain templated function, you have to wrap it in a lambda (or some other functor type, or std::function I guess) before you can do anything with it except calling it. Pattern matching is also quite verbose currently.
Whether this matters really depends on the code you're trying to write of course.
The fact that you can't make old code use the new features in no way makes the new features "half-assed". It may make the old code that, but not the features.
Or did you want them to deprecate large swathes of the previous standard?
As for the rest of your rant... it's mostly just a rant, with very little substance.
We also got Class Template Argument Deduction, structured bindings, various syntactic sugar (if (auto var = init; cond), if constexpr, fold expressions...), std::variant, std::optional, and a whole POSIX-style filesystem library, to name a few. And that's just C++17 alone.
It's nothing earth-shattering (that stuff is coming in C++20) but these are all features that improve the language and allow you to write better code. Consider e.g. if constexpr which can eliminate so much SFINAE cruft. Or CTAD to reduce the need for makeXYZ() functions.
14, 17 and 20 are full of small and big improvements: coroutines, contracts, modules, concepts, huge improvements for compile-time computation with constexpr, etc.
As an example, yes. Other versions offered similar benefits (C++14 in particular was a "bug fix" version for C++11, which I think most would be happy came quickly).
This. They keep adding half-finished features, apparently for the sake of releasing every 3 years. Move types that may not move (std::move is just a cast / suggestion that, if taken, leaves the source in an indeterminate but usable state), pattern matching that isn't (std::variant). Frankly, I wish they'd stop.