C++ the Good Parts (2014) (infoq.com)
156 points by zerofrancisco on Oct 1, 2020 | 246 comments



The good parts are also the newer parts. They were added because they make programming better. C++20 is more fun to program in than 17, which was better than 14, which was better than 11.

That said, the one thing that makes C++ better than other languages is the destructor. You can see everything else added since as making destructors more useful.

(I am old enough to tell you to get off my lawn. And have one.)


Yes, agree. I think RAII is one of the greatest inventions in programming ever and basically only C++ has it. Note that it is not just about managing memory but any kind of resource like files and so on. I once made an HtmlElement class that would write the opening tag in its constructor and the closing tag in its destructor.
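That pattern fits in a dozen lines. A minimal sketch (this HtmlElement is illustrative, not from any library):

```cpp
#include <iostream>
#include <sstream>
#include <string>

// Writes "<tag>" on construction and "</tag>" on destruction,
// so nesting follows C++ scope automatically.
class HtmlElement {
    std::ostream& out_;
    std::string tag_;
public:
    HtmlElement(std::ostream& out, std::string tag)
        : out_(out), tag_(std::move(tag)) { out_ << '<' << tag_ << '>'; }
    ~HtmlElement() { out_ << "</" << tag_ << '>'; }
};
```

Because locals are destroyed in reverse order of construction, an inner element's closing tag is always written before the outer one's, with no bookkeeping on the programmer's part.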


> basically only C++ has it

RAII is associated most prominently with C++ where it originated, but also D, Ada, Vala, and Rust.

https://en.wikipedia.org/wiki/Resource_acquisition_is_initia...

Topical sidenote D's got destructors as well:

https://dlang.org/spec/class.html#destructors


People are noticing below that some other languages may have RAII as well. I am quite willing to concede that some do; after all, it is rather hard to know all the languages out there. But people also seem a bit too eager to add languages to the list. If you have to use 'using' or something like it, it really does not count. My question would be whether you can do the HtmlElement example that I mentioned in my post. If you cannot, for instance because the closing tag would be written whenever the garbage collector feels like it, it really does not count.


Yes, really only D and Rust, nothing else.

If you think other language X supports RAII, you don't really understand it. It's not really compatible with obligate-GC. Finalizers, in particular, are not it.

The thing about destructors is that reclaiming memory is only their most trivial use. In C++, as in D, Rust, and Haskell, catching errors is the type system's most trivial use. It does the heavy lifting.

If your program isn't putting the type system to work, you're toiling away with a screwdriver while people around you are using impact drivers and forklifts.


It's perfectly compatible with GC. GC manages memory. RAII manages object lifetime. C++/CLI, for example, has full-fledged C++ RAII implemented on top of GC memory management.


Fine, but having managed lifetime, it also manages memory, leaving nothing worthwhile for GC to do.


The acronym RAII might be associated with C++, but the concept of releasing resources automatically if a scope is left predates C++. It is present in e.g. Common Lisp in the form of 'with-' macros. See https://wiki.c2.com/?ResourceAcquisitionIsInitialization


The "block scope spirit" is there, but it's not quite the same. A Lisp "with-" macro requires you to remember to use it. That's not much different from manually writing code to release something, in that you must remember to do it explicitly.

With C++ RAII you can have a brain fart and forget about freeing a resource, but you're still OK.


RAII also follows value scope instead of lexical scope. So if you have a function that creates a value and returns it, the cleanup-via-destructor runs in the scope of the caller, as it should.

This doesn't work with lexical-scope cleanup mechanisms like Go's defer (and it bugs me when new languages add that instead of destructors). Instead you either have to document that the caller has to remember to defer-cleanup the result, or you have to do funky things like take a callback that receives the value, invoke the callback, and defer-cleanup the value.


If you have a C++ function that returns by value, the destructor can run twice: first inside the function, to destruct the original, and then inside the caller, to destruct the returned copy. Copy elision avoids this (plain RVO on a prvalue return is even guaranteed since C++17), but NRVO isn't guaranteed, and in any case elision is not always applicable. So destructors are still lexically scoped.

What makes C++ flexible in practice is move semantics, which lets you move-return values, such that only the destructor call on the caller side is the one that matters.
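A minimal sketch of move-returning an RAII object, assuming a hypothetical acquire() factory: the lock is taken inside the function, but it is released wherever the caller lets the returned object die.

```cpp
#include <mutex>

std::mutex m;

// Hypothetical factory: the mutex is locked here, but the unlock
// happens wherever the caller destroys the returned unique_lock,
// so the cleanup follows the value, not the lexical scope.
std::unique_lock<std::mutex> acquire() {
    return std::unique_lock<std::mutex>(m);
}
```

The caller simply does `auto lk = acquire();` and the mutex stays held until `lk` goes out of scope, with exactly one meaningful destructor call on the caller's side.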


You don't need move semantics to have RAII work. If you have a copyable type with a destructor where the cleanup should only happen once, then you already need some kind of refcount to only run the cleanup when the last copy is dropped. Either the refcount is in your own type, or you're wrapping another copyable refcounting type like shared_ptr.

Notice how I said "cleanup-via-destructor" rather than just "destructor", for this reason.
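The refcounted variant can be sketched with shared_ptr and a custom deleter (this Handle type is hypothetical; the counter stands in for a real resource):

```cpp
#include <memory>

int live_handles = 0;

// Hypothetical copyable handle: shared_ptr's refcount ensures the
// cleanup (the custom deleter) runs exactly once, when the last
// copy of the handle is destroyed.
struct Handle {
    std::shared_ptr<void> guard;
    Handle() : guard(&live_handles, [](void*) { --live_handles; }) {
        ++live_handles;
    }
};
```

Copies of Handle share one control block, so however many copies are passed around, the cleanup fires once, when the last one is dropped.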


You’re right, but one problem with C++ is that types are copyable by default, and copies happen automatically and silently in many situations. So something like this is easy to get wrong.


Wouldn't be surprised if Eiffel has it or something close to it. Bertrand Meyer seems to have thought of a lot of things when designing the language.

Disclaimer: Not a language lawyer, just a fan.


It's pretty arguable to claim that C++ is the only language with RAII.

C# has "using", Java has try-with-resources, Python has "with", etc., all of which support RAII patterns.


Those are extremely limited in comparison. For example, in C++ you can return a locked mutex-locker from a function:

    std::unique_lock<std::mutex> foo();
The unlocking of the mutex is then tied to what you do with the returned locker: if you ignore it, the mutex is unlocked at the end of the current statement; if you assign it to a local variable (and don't move from it), it is destroyed when leaving the enclosing function; or you can return it from that function to extend the lifetime of the lock still further.

You also can't compose those mechanisms so easily. For example I could make a class with a std::unique_lock<std::mutex> and a pointer to the data it protects and pass that around.
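A minimal sketch of that composition, with a hypothetical Locked<T> wrapper bundling the lock and the protected data:

```cpp
#include <mutex>

// Hypothetical guard coupling a held lock with a pointer to the data
// it protects; whoever ends up owning the guard releases the mutex.
template <typename T>
struct Locked {
    std::unique_lock<std::mutex> lock;
    T* data;
    T& operator*() { return *data; }
};

std::mutex counter_mutex;
int counter = 0;

// The caller receives data and lock together; the mutex is released
// when the returned guard is destroyed, wherever that happens.
Locked<int> lock_counter() {
    return {std::unique_lock<std::mutex>(counter_mutex), &counter};
}
```

Because the guard is movable, it can be returned, stored, or passed along, and the unlock always tracks the final owner.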


At least for Python, "with" is not more limited but more complete: it lets you handle exceptions that are raised in the block, and it also lets you raise exceptions in the "destructor".

That is something that you actually need rather frequently. For example a db transaction wrapper might want to rollback instead of commit if we leave the block via an exception. You also want the "destructor" of that block to raise if the transaction can't be committed.

The only thing that the C++ method does really well is memory management (e.g. unique_ptr), because free() can't fail. But that use case is of course not a thing in GC languages.

Regarding the composition: everything that's not memory management usually has side effects and often wants to do something with regard to exceptions. So it's a good thing for the caller to know that the "RAII" is happening. Allowing you to completely hide that in your class is not a benefit.

C++ paints a distorted picture here, because its method is great for memory handling issues, but only OKish for everything else. But most of your RAII issues in C++ are memory related, so you don't notice the problems that much.


> At least for Python, "with" is not more limited but more complete: it lets you handle exceptions that are raised in the block and it also lets you raise exceptions in the "destructor".

The difference is that with Python, the programmer can forget to use "with". In C++, as soon as you instantiate your variable on the stack somewhere, you know that ~T() will happen unless you get a segfault...

> For example a db transaction wrapper might want to rollback instead of commit if we leave the block via an exception.

I'd say that a transaction wrapper would have an explicit commit method which applies the transaction and "empties" the transaction object. The destructor would always roll back whatever was not committed, and if commit was called there is nothing left to roll back, so ~T() does nothing.
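That design can be sketched like this, with a stand-in FakeDb in place of a real connection (all names here are hypothetical):

```cpp
#include <string>
#include <vector>

// Stand-in for a real database connection; the log records calls.
struct FakeDb {
    std::vector<std::string> log;
    void begin()    { log.push_back("begin"); }
    void commit()   { log.push_back("commit"); }
    void rollback() { log.push_back("rollback"); }
};

// Hypothetical guard: commit() is explicit and "empties" the guard;
// the destructor rolls back anything left uncommitted, including
// when the scope is left via an exception.
class Transaction {
    FakeDb& db_;
    bool open_ = true;
public:
    explicit Transaction(FakeDb& db) : db_(db) { db_.begin(); }
    void commit() { db_.commit(); open_ = false; }
    ~Transaction() { if (open_) db_.rollback(); }
};
```

Since unwinding runs the destructor too, an exception thrown anywhere in the transaction's scope results in a rollback without any explicit handling.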


The way you'd do this in Python is something like

    @contextlib.contextmanager
    def get_a_locked_thing():
        with lock:
            yield thing
and then

    with get_a_locked_thing() as thing:
        do_stuff_with(thing)
Exiting this "with" block causes get_a_locked_thing to resume at its yield statement, which exits that "with" block, which releases the lock. Throwing an exception also exits the context managers.

If you were to try omitting the "with" statement and running, say, "thing = get_a_locked_thing()", the function hasn't executed yet (because you haven't entered the context manager), meaning that not only is 'thing' the wrong object, you also haven't even gotten the lock yet. So you won't deadlock/leak the lock, and you will notice in the most basic of tests that your code isn't doing the right thing.

I do agree that this is nowhere near as nice as having an object in a local variable, because "with" adds an extra layer of indentation, and that's an advantage of C++/D/Rust-style RAII. But it's definitely doable in Python and pretty idiomatic.


Ah good point, that is perfectly doable. But even accepting the explicit with statements, it's still not as composable in more general situations. For example, I don't see how to reliably pass an active context manager to a function for it to unlock part way through its execution, or return one of two active context managers (if you tried using your technique then both would still be active if you couldn't exit the undesired one before the yield). But maybe you're right that these are a bit less common, except for memory management, than I'm imagining.


The one example I gave in the parent comment already addresses everything you talked about.

Yes you can use a Python context manager (that's what you're referring to) to lock a mutex, but you can't pass that context manager as a return result of a function and be sure its __exit__ method will be called appropriately in the three situations I mentioned (immediately if result thrown away, at the end of the function if stored in a local variable, later still if returned from that outer function).

In fact I don't think you can return a context manager from within a "with" block at all, even accepting that the parent function will have to manually reuse it in another with block, because its __exit__ method will already be called in the inner function. Technically this is true in C++ too of course because the object in the inner function will have its destructor called, but its move constructor will be called first giving you an opportunity to clear out its internal state (this is why my example used a unique_lock rather than a lock_guard). Things are even better in Rust: the bytes of the object are directly copied to the new object's footprint and the original destructor isn't called, so you don't even need to set up a dummy "empty" state.

> The only thing that the c++ method does really well is memory management (e.g. unique_ptr), because free() can't fail.

Mutex unlocking usually can't fail (and if they do, there's usually not much you can do about it). Same with closing network connections. Closing files can fail, and for some programs it's very important to know when that happens, but for many others it's not important at all. Python's chained exceptions are very nice, but not so important that their absence from C++'s exceptions (or Rust's panics) makes those languages useless.


RAII is great for transactions. In a proper robust design, commits should always be explicit and destructors only rollback uncommitted transactions. If your rollback can fail, well, then you have bigger issues.

There is now enough introspection in the language to actually implement implicit commit on the non-exceptional path, but I think it would be a mistake.


> For example I could make a class with a std::unique_lock<std::mutex> and a pointer to the data it protects and pass that around.

This is pretty much what we do in our code base, except that the class has lock(), tryLock(), etc functions that take a lambda. The functions acquire/try to acquire the mutex and then pass a reference with appropriate const-ness to the lambda.


So let me make it boom in C++,

    auto my_leaky_mutex = new std::unique_lock<std::mutex> {foo()};
Yes, I am making it fail on purpose, but copy-paste in enterprise codebases does wonders for code quality.


You're reaching. I'm not sure it adds much to parent's point.


Using "new" is code smell. This is easily caught in code review.


Like those that Google does for Android?


Raw pointers for ownership are a bad idea; one of Google's 30,000 programmers doing it doesn't change that. You are the only one pointing to Google as an authority.


I am the only one, because the others don't care to point anything.


Other people are pointing to the actual fundamental reasons these things exist and are valuable, you are pointing to a bizarre false authority as a desperate rationalization for not liking C++.


You are really harping on this one example.

I don't know what processes Google uses for the Android codebase, but in the main code base make_shared has only been available for a little while. Today, "new" is smell that should be caught in code review at Google. When was this committed?


Yes, because Google is a major contributor to ISO C++, LLVM, clang, has billions of C++ code, and yet fails to follow best practices.

So what is to be expected of those companies that hardly have such deep knowledge of C++ among their troops?


Google is a huge company and the majority of its employees are not contributors to ISO C++.


All the constructs you're mentioning are nice and useful, but they are solely lexical. This means that, unlike RAII, they can't protect every use of the resource: if you need to return or pass along said resource, they will not work, for instance.

RAII is strictly more powerful and more regular than context managers.


They are still different: 'using' and 'with' are tied to scope; RAII is tied to lifetimes, which in turn can be tied to scope but do not have to be. It also applies to every object and subobject transitively and implicitly, without having to implement some special protocol [1].

[1] If you are implementing something like a container and dealing with raw memory, you might need to call the destructors of subobjects explicitly, but because that is needed to correctly reclaim memory, it is almost always done, and all standard containers do it already. On the other hand, in Python, list doesn't implement the context manager protocol.


This is true. But C++'s destructor behaviour means you get this pattern without any thought, consistently, reliably, and deterministically, whereas alternatives such as C#'s "using" statement require explicit usage, which isn't nearly as convenient.


Only as long as the objects aren't heap-allocated via new/delete and you don't happen to forget to call delete on them.


`new` and `delete` have been banned outside of deep library code in every industrial C++ codebase I've ever worked on.


I don’t have first hand experience in the world of game programming, but I’ve heard that smart pointers aren’t used and old fashioned new and delete are standard practice. (Please correct me if I’m wrong, I’m curious if what I’ve heard about game development is really true.)



Did you check the header of that cpp? They are storing it in a shared_ptr: shared_ptr = reference-counted pointer = RAII.

they should have used make_shared, but it is still safe


Which isn't exception safe, as any good C++ developer would know.


"Google Does Bad Thing" is not quite equivalent to "C++ Is Bad Language"


Indeed, it is even one of my favourite languages, but it contradicts the OP's point that every industrial C++ codebase has banned new and delete.


The Google C++ style guide alone is sufficient evidence to conclude that Google doesn't know how to do C++ properly.


Yet they are one of the major contributors to ISO C++ and LLVM, what gives?


It just means that they still derive value from C++. But I, for one, am very glad that they're not a dominant force in the ISO standard committee.


In a word, not.


Well yes, C++'s destructor behaviour doesn't work if you don't use C++...


That's not really fair.

  Thing* thing = new Thing();
  thing->DoStuff();
  delete thing;
is perfectly valid C++ and has been since the invention of the language. It's not a good idea, but it is valid.

If, OTOH, you do

  Thing* thing=(Thing*)malloc(sizeof(Thing));
  free(thing);
then you fully deserve the inevitable world of pain that is coming your way


> is perfectly valid C++

we're in 2020.

    $ clang-tidy -checks='cppcoreguidelines-*' foo.cpp 
    /tmp/foo.cpp:10:3: warning: initializing non-owner 'Thing *' with a newly created 'gsl::owner<>' [cppcoreguidelines-owning-memory]
      Thing* thing = new Thing();
      ^
    /tmp/foo.cpp:12:3: warning: deleting a pointer through a type that is not marked 'gsl::owner<>'; consider using a smart pointer instead [cppcoreguidelines-owning-memory]
      delete thing;
      ^
    /tmp/foo.cpp:10:3: note: variable declared here
      Thing* thing = new Thing();
      ^


Does it compile? Does it run? Yes, therefore it's valid C++.

I'm not arguing it's good C++, but saying "C++'s destructor behaviour doesn't work if you don't use C++" in relation to using new and delete is clearly nonsense.


The malloc/free version is perfectly valid C++ too, isn’t it?


Eh... kind of?

I guess it depends on your definition of "valid".

Let's say we have:

  class Thing
  {
  public:
    Thing() { cout << "created" << endl; }
    ~Thing() { cout << "destroyed" << endl; }
  };
Assuming the happy path, the new/delete version will be functionally identical to a more correct/idiomatic

  {
    auto thing = make_unique<Thing>();
  }
The malloc/free version won't call the destructor, and therefore doesn't do what a "normal" object would do.

It's splitting hairs, I guess, but that's where I'd draw the line.


Which is what plenty of people do across the industry with C++ compilers, they code like C.


Well that's not C++'s fault. Those people are either stuck on a particular compiler version, are in a maintenance mode, or are just unwilling to improve themselves. Maybe there's job security in being able to write code in a deprecated style.


> or are just unwilling to improve themselves

Or they may not see that stuff as an improvement in the first place (e.g. see Orthodox C++[0]).

[0] https://gist.github.com/bkaradzic/2e39896bc7d8c34e042b


If I'm reading that right, they start off talking about a minimalist/reduced subset of C++, and then end up concluding: Use C++14 (now two standards ago) and we'll probably recommend C++17 in 2022 (5 year delay).

That seems like a rather weak conclusion given all the words that preceded it. You might as well just start and end with this:

New standards have issues, people don't know how to use them correctly, and compilers are slow to catch up. Wait 5 years to use a standard so all these things are worked out, and you don't have to worry about the details that don't stick.


The C++14 part seems to have been added later and indeed sounds contradictory to what was written previously. As a comment below notes, though, they use the original Orthodox C++ idea without the C++14 part (and TBH I think the whole "committee" part is a tongue-in-cheek joke).

In any case this is just an example; there are a couple of similar "subsets" with the same idea of keeping C++ use to its bare minimum. It is just that I remember this one because of the name :-P


Wow, I thought I was alone.


It is always the right time to learn modern C++.

Discomfort is not a valid reason to avoid learning new things.


Historically there was a big performance hit for using the fancy added abstractions of C++. Don't know if it's still that bad. But in the past it cost a factor of two or so, whenever I tried to get fancy.


> Historically there was a big performance hit for using the fancy added abstractions of C++. Don't know if it's still that bad. But in the past it cost a factor of two or so, whenever I tried to get fancy.

I don't know how far back in the past that was, but pretty much every time in the last ten years that I tried to beat a C++ abstraction, I almost never managed to substantially improve on it (and since gcc.godbolt.org exists, it's much easier to witness that most of the abstraction layers disappear entirely at -O2).


Historically, meaning "in the '90s". It has been quite a long time since the '90s. Many readers here weren't even born yet when C++ abstraction overhead was last a thing.


Like Android NDK?


No idea, I've no experience with it.


You should, then you will see how much C++ it actually uses.

It exposes everything as C APIs, even though it is a mix of Java and C++ on the implementation side, leaving C++ developers to write their own type-safe abstractions from scratch.

Most just can't be bothered and write C in C++.

Then there is the whole issue of how the NDK is actually implemented, following Google's C++ guidelines.


smart pointers exist


Just like value types and deterministic resource management in GC languages.


`using` composes badly. It won't work with, e.g., a list of Disposables, or with LINQ.


This is the strongest argument against claiming "using" or Python's context managers are RAII.

RAII naturally composes because a container has to clean up its memory, and it has a well defined lifetime.
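In C++ that composition needs no extra protocol at all; a minimal sketch, with a hypothetical Resource type whose counter stands in for a real acquisition:

```cpp
#include <memory>
#include <vector>

int open_count = 0;

// Hypothetical resource whose constructor/destructor track liveness.
struct Resource {
    Resource()  { ++open_count; }
    ~Resource() { --open_count; }
};

// Destroying the vector destroys every element, and each element's
// destructor releases its resource: composition for free.
int use_many() {
    std::vector<std::unique_ptr<Resource>> pool;
    for (int i = 0; i < 3; ++i)
        pool.push_back(std::make_unique<Resource>());
    return open_count;  // 3 while the vector is alive
}
```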

Python has a very straightforward model for its context managers, two magic methods:

    def __enter__(self):
        ...

    def __exit__(self, etype, eval, tb):
        ...
To handle many resources being released, the stdlib prefers a more direct approach using a special class:

    with ExitStack() as stack:
        files = [stack.enter_context(open(fname)) for fname in filenames]
And it makes sense. You probably don't want regular containers to have yet another magic method defined on them, and there is no clear single behavior that a container's __enter__ and __exit__ methods could have.

And it doesn't compose in other ways. In async code, for instance, there's a completely separate mechanism with `async with` that calls __aenter__ and __aexit__.

It really comes down to the fact that when objects have an ambiguous lifetime, which is a consequence of garbage collection, you can't have RAII.


Performing an operation that can fail in a constructor or destructor will get you into trouble. It's a trap. You can check for initialization failure ("remember to call isObjectOkay()" or something like that), but you're basically hosed if a destructor fails.

No, exceptions are not the answer. NONE of the C++ projects I've worked on in the last 30+ years have used them in production. Exceptions are a far, far worse trap.

It's funny that the teams across half a dozen companies I've been in have independently come up with just about the same set of guidelines about C++ features. It's almost like some of those features were bad ideas. :-)


Performing an operation that can fail in a constructor is not a problem, because then the constructor throws, and you don't get an object at all; there is nothing whose validity you would need to check.

On the other hand, this is exactly the problem that arises if you refuse to use exceptions, because then the constructor has no way to signal that it failed. Since constructors, by their very nature (acquiring resources, validating initializers etc), can fail, you have to resort to "is it valid?" flags, and your API clients have to check those flags or risk unspecified behavior.
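A minimal sketch of a throwing constructor (this Port type is illustrative):

```cpp
#include <stdexcept>
#include <string>

// If validation fails, the constructor throws and no object ever
// exists, so there is no "invalid" state for callers to check.
class Port {
    int value_;
public:
    explicit Port(int value) : value_(value) {
        if (value < 1 || value > 65535)
            throw std::out_of_range("bad port: " + std::to_string(value));
    }
    int value() const { return value_; }
};
```

Every Port that exists is valid by construction; the failure path is an exception, not a flag.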


Using C++ exceptions is just not acceptable in large, production systems. I've worked on several C++ systems with tens of millions of lines of code. Different teams independently reached the conclusion that exceptions were destabilizing, and hid bugs and failures.

Take a look at the Google C++ guidelines, as an example.

https://google.github.io/styleguide/cppguide.html#Exceptions

They're fine in other languages (python, etc.). Note that golang does not really have them, except in a very crude sense.


I've worked on several large production systems written in C++ that use exceptions throughout. Don't assume that your personal experience is the norm across the entire industry. The Google C++ style guide is (unfortunately) popular, but it still covers the minority of C++ projects overall, and there's plenty of criticism of it around, including from large teams working on production systems.

I will also add that, from my past experience working on codebases that didn't or couldn't use exceptions (e.g. across C ABI boundary), the lack of exceptions, and people forgetting to check error codes, or propagating them improperly, was a very common source of hidden bugs.


> Using C++ exceptions is just not acceptable in large, production systems.

This is a wholly false statement. Exceptions are used almost everywhere C++ is used. Google is a peculiar outlier, and suffers for it. You can read right in the cited document why they were obliged to give up using exceptions. It imposes huge costs which Google happens, by virtue of monopoly status, to be equipped to afford.


The conclusion on Google's page has nothing to do with bugs and failures. They mention how it would be better to use exceptions, but for practical reasons (read: a big legacy codebase) it isn't feasible.


Rust provides destructor functionality with the Drop trait:

https://doc.rust-lang.org/beta/std/ops/trait.Drop.html


Yes, but Rust doesn't have exceptions, so it's a bit more limited.


What? Rust has both early return and unwinding, which interact with destructors the same way.

Further, Rust has destructive move, making drop even more flexible.


D and Rust are the only languages that have since adopted the destructor.


I tend to see Rust as "only the good parts of modern C++, and then some".


And also: "Minus all the OOP, and then some."


Modern C++ generally doesn’t use OOP all that much anyways. However, <algorithm> and <type_traits> in Rust’s standard library would be quite nice to have.


What would be a use case for type_traits? It feels to me that most cases for it would be frowned upon in Rust, but that's just my gut feeling, not knowing what they are useful for in C++.


They provide the compiler with metadata about a type. For example, if you have a std::vector<int>, its nested value_type member type deduces to int, i.e. 'this vector holds ints'.


Hmmm, I see. Reading up on it, it seems to enable generic code to deduce or assert information about types. Rust uses proper traits for that instead, having lifted traits to a first-class language feature. But only a barebones set of traits is available in the standard library, mostly the ones needed for language or std features. Any traits needed only by generic libraries are left to crates to implement.


The difference here is that, C++ templates are closer to just "string replacement", as they're not validated (basic syntax aside) until you instantiate them. As such, in C++, your generic type T can be whatever the user decides to stick in there (you can't prevent them doing that), and so you typically have a chance of disabling/enabling certain bits of code by checking various type traits and using hacks like sfinae etc (e.g., "if T is an enum - please use this version of this method; otherwise - substitute this version of the same method", this would be quite typical and idiomatic in C++).
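A minimal sketch of that idiom, using std::enable_if with the std::is_enum trait (the as_number function and Color enum are illustrative):

```cpp
#include <type_traits>

enum class Color { Red = 1 };

// Two overloads selected at compile time by SFINAE on a type trait:
// one version for enums (goes through the underlying type), another
// for everything else. The ill-formed candidate silently drops out
// of overload resolution.
template <typename T,
          typename std::enable_if<std::is_enum<T>::value, int>::type = 0>
long as_number(T v) {
    return static_cast<long>(
        static_cast<typename std::underlying_type<T>::type>(v));
}

template <typename T,
          typename std::enable_if<!std::is_enum<T>::value, int>::type = 0>
long as_number(T v) {
    return static_cast<long>(v);
}
```

In C++17 the same dispatch is often written more readably with a single function and `if constexpr (std::is_enum_v<T>)`.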

In Rust, generic code is fully validated as it should be, with all trait bounds being matched where need be. So you would rarely end up needing to figure out (at compile time) whether a generic type T is an enum or not, because out of the box the language wouldn't allow you to do anything extra with it. In Rust, an idiomatic way of adding different behaviours based on the kind of generic type is still traits: e.g., you can have multiple impl blocks on your generic type, each one with different trait bounds on the generic parameter, like "if T: Copy, then implement these extra methods"; the standard library has tons of examples like this. The only exception that comes to mind is proc macros, when you deal with Rust code in AST form and generate new code at a pre-compile-time phase, but that's a completely different story and not really related.


Oh yeah, C++ templates work more on the token level than Rust generics. Didn't know that type_traits was meant for that.

Indeed Rust proc macros, or Rust macros in general, are a better analog to C++ templates than Rust generics. They aren't quite the same, as during evaluation time C++ can do a bit more, and apparently type_traits is an example of what C++ can do statically. Rust's model is different, it wants macro expansion to be done by the time it starts even resolving any non-macro paths, let alone type checking or anything.

That being said, you'll be able to emulate something like that once specialization lands (hopefully also included by min_specialization).


C++ does have something along the lines of Rust traits now: it is called concepts and was added in C++20.

Mind you, it probably has very little real-world usage right now.

Concepts aren't as powerful as Rust traits in that they are static dispatch only, while Rust supports both static and dynamic dispatch, but they should make for cleaner code than <type_traits>.


Another large difference (in my understanding) is that you're not actually forced to use concepts; they help if you do use them, but they're not required. Rust pretty much requires that you use a trait for these cases.


Other than the classes used to implement all those STL concepts.


Types are being used to create those things. But it's not OOP as many people understand it. (Inheritance, everything via method calls etc.)


People should spend more time learning what OOP actually is all about; hint: it isn't just MFC and Qt.


This is such a lazy form of "argument". If you have knowledge, share it, or at least share concrete references to concrete resources where one might learn what you think should be learned. If you put it in this nebulous way, people might look up the meaning of OOP in some source you don't approve of, which wouldn't help communicate what you think should be communicated.


Start here,

"Object-oriented programming: Some history, and challenges for the next fifty years"

https://www.sciencedirect.com/science/article/pii/S089054011...

Then read on the BETA design, for example,

https://beta.cs.au.dk/

Follow up with "The Art of the Metaobject Protocol",

https://mitpress.mit.edu/books/art-metaobject-protocol

and "Xerox LOOPS"

http://www.softwarepreservation.org/projects/LISP/interlisp_...

Then "Component Software: Beyond Object-Oriented Programming" (the 1st edition with Component Pascal)

https://www.amazon.com/Component-Software-Beyond-Object-Orie...

"Applying Traits to the Smalltalk Collection Classes"

https://rmod.inria.fr/archives/papers/Blac03a-OOSPLA03-Trait...

"Self – The Power of Simplicity"

http://media.cichon.com/talks/Introduction_Self_Language_OOP...

https://blog.rfox.eu/en/Programming/Series_about_Self/index....

How about this for starters?



This is pretty good for starters! It makes it clear that you're not interested in a real discussion. Requiring that people read The Art of the Metaobject Protocol before you'll engage with their arguments on whether C++ programmers write OOP code is clearly just evasion on your part.

This isn't knocking the book, I've read it and agree that it's edifying. This list is a good resource on OOP in general. It's just not helpful in the context of this thread.


> If you have knowledge, share it, or at least share concrete references to concrete resources where one might learn what you think should be learned.

Your own words. Documentation and knowledge were shared; apparently you are the one not interested.


You missed the "concrete" part. Which of these sources, concretely, should one read to evaluate the claim that "Modern C++ generally doesn’t use OOP all that much anyways.", which is what you seemed to criticise in this subthread?

Again: Does one really, concretely, have to read The Art of the Metaobject Protocol, a book about Lisp, to evaluate this claim about C++? Having read the book, I would say no. Which is why I say that you are evading a real discussion.

(And for that matter, it would be interesting if you fleshed out your counter-claim -- "Other than the classes used to implement all those STL concepts." -- in a bit more detail. Are you saying that all uses of C++ classes are OOP? Or that only certain uses are, but that the way classes are used to implement the STL fall into this category? Will I find the answer to these concrete questions in The Art of the Metaobject Protocol?)


> Are you saying that all uses of C++ classes are OOP?

Definitely, any use of class constructs with data members and member functions is OOP.

Any use of data members inside class constructs represents aggregation, and if any member function of a data member happens to be called, as a means to help a member function do its work, that is delegation.

Any template implemented as class that uses its type parameters to decide at compile time what gets called, makes use of dynamic dispatch decided at compile time.

If you prefer, we can pick any random STL type and dissect all its uses of OOP features; let's start with something like std::set?


> Definitely, any use of class constructs with data members and member functions is OOP.

Definitely not.

OOP requires, also, runtime binding and polymorphism. You can get the polymorphism by inheritance, delegation, or what-have-you. You can get runtime binding with a vtable or a name lookup.

The usual term for just-encapsulation is "object-based". The STL is definitively not object-oriented, according to its author, Stepanov, who has said that giving the STL classes member functions was a mistake.

OOP is a niche technique. It has uses, sometimes. More often, a function pointer suffices.
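
To make the distinction concrete, here is a minimal sketch (illustrative names, not from any real library): an object-based type with encapsulation only, next to an object-oriented hierarchy whose call site binds through a vtable at runtime.

```cpp
#include <cassert>

// Object-based: encapsulation only -- data plus member functions,
// but no runtime binding anywhere.
struct Celsius {
    double value;
    double to_fahrenheit() const { return value * 9.0 / 5.0 + 32.0; }
};

// Object-oriented: the call through the base reference is bound at
// runtime, via the vtable.
struct Shape {
    virtual double area() const = 0;
    virtual ~Shape() = default;
};

struct Square : Shape {
    double side;
    explicit Square(double s) : side(s) {}
    double area() const override { return side * side; }
};

// The caller does not know the concrete type; dispatch happens at runtime.
double report_area(const Shape& s) { return s.area(); }
```

Both types have data members and member functions, but only the second exhibits runtime binding.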


No it doesn't, hence why I mentioned that one should learn what OOP is all about.

Object based languages are part of the OOP universe from CS point of view, plenty of literature, that apparently is too much to ask to read about, some of which I have provided above.

So here we go about std::set and it being OOP.

1 - class set, a means for encapsulation, with most of its members de

2 - Uses delegation for memory allocation via the allocator_type

3 - Uses delegation for key lookups and value comparison

4 - Implements the concepts Container, AllocatorAwareContainer, AssociativeContainer, ReversibleContainer alongside their respective concept dependencies, which in OOP speak are protocols/traits/categories;

5 - The actual types used for comparison and allocation are instances of the respective protocols, with dispatch being decided at compilation time of std::set uses, given the respective implementations as type parameters, in a way similar to multi-method dispatch.

As a bonus, here is a simplified UML diagram. It isn't more detailed because my patience for drawing ASCII art is limited.

    -------------------------------------------------------------------------------------------  
    |std|                                                                                     | 
    -----                                                                                     |
    |                                                           __________________            |
    |                                                           | <<protocol>>    |           |  
    |                                                           |   Container     |           | 
    |                                                           |-----------------|           |
    |                                                           |-----------------|           |
    |                                                           |-----------------|           | 
    |                                                           | constructor     |           |
    |                                                           | copy-constructor|           |
    |                                                           |   destructor    |           |
    |                                                           |     begin()     |           |
    |                                                           |      end()      |           |
    |                                                           |     cbegin()    |           |
    |                                                           |     cend()      |           |
    |                                                           |     swap()      |           |
    |                                                           |     size()      |           |
    |                                                           |    max_size()   |           |
    |                                                           |      empty()    |           |
    |                                                           -------------------           |
    |                                                                   /\                    |
    |                                                                   --                    |
    |                                                                    | extends            |
    |                                                                    |                    |
    |          ______________        _______________________    _______________________       |
    |         | <<protocol>>|        |     <<protocol>>     |  |     <<protocol>>     |       |
    |         |  Allocator  |        | ReversibleContainer  |  | AssociativeContainer |       |
    |         |-------------|        |----------------------|  |----------------------|       |
    |         |-------------|        |----------------------|  |----------------------|       |
    |         | constructor |        |       rbegin()       |  |     key_comp()       |       | 
    |         |  destructor |        |       rend()         |  |     value_comp()     |       |
    |         |  allocate   |        |      crbegin()       |  |----------------------|       |
    |         |  deallocate |        |      crend()         |             /\                  |
    |         |  max_size   |        |-----------------------             --                  |
    |         |  construct  |                  /\                          |                  |
    |         |  destroy    |                  --              implements  |                  |
    |         ---------------                   |                          |                  |
    |               /\                          |                          |                  |
    |               --                          |  implements             /                   |
    |                |                          \                        /                    |
    |                | implements                \                     /                      |
    |                |                            \                   /                       |
    |                |                             \------------------                        |
    |                |                                        |                               |
    |                |                                        |                               |
    |                |                                     ------------------                 | 
    |         ----------------                            | <<protocol>>     |                |
    |         |  allocator<T> |          uses             |     set          |                |
    |         |---------------|<------------------------/\|------------------|                | 
    |         |---------------|                         \/|  constructor     |                | 
    |                                                     |  destructor      |                | 
    |                                                     |  begin()         |                |
    |                                                     |  end()           |                |
    |                                                     |  cbegin()        |                |
    |                                                     |  cend()          |                |
    |                                                     |  rbegin()        |                | 
    |                                                     |  rend()          |                |
    |                                                     |  crbegin()       |                |
    |                                                     |  crend()         |                |
    |                                                     |  operator=       |                | 
    |                                                     |  get_allocator() |                |
    |                                                     |  empty()         |                |
    |                                                     |  size()          |                |
    |                                                     |  max_size()      |                |
    |                                                     |  clear()         |                |
    |                                                     |  insert()        |                |
    |                                                     |  emplace()       |                | 
    |                                                     |  emplace_hint()  |                |
    |                                                     |  erase()         |                |
    |                                                     |  swap()          |                |
    |                                                     |  extract()       |                |
    |                                                     |  merge()         |                |
    |                                                     |  count()         |                |
    |                                                     |  find()          |                |
    |                                                     |  contains()      |                |
    |                                                     |  equal_range()   |                |
    |                                                     |  lower_bound()   |                |
    |                                                     |  upper_bound()   |                |
    |                                                     |  key_comp()      |                |
    |                                                     |  value_comp()    |                |
    |                                                      ------------------                 |
    |-----------------------------------------------------------------------------------------|


It has and does all of these things, but it does not have runtime binding, therefore by definition is not object-oriented.

Concretely, you cannot take a Container pointer and point it at a std::set. You can take a T, and bind T to std::set at compile time; but that is Generic Programming: analogous, in some ways, but not the same.
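
A sketch of that compile-time binding (illustrative function name):

```cpp
#include <cassert>
#include <cstddef>
#include <set>
#include <vector>

// Generic programming: the binding of Container to a concrete type
// happens at compile time, once per instantiation.
template <typename Container>
std::size_t count_elements(const Container& c) {
    return c.size();
}

// There is no runtime analogue in the STL: std::set has no polymorphic
// base class, so nothing like `Container* p = &my_set;` exists.
```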

In the '90s it was fightin' words to say "X is not Object Oriented", because Object-Oriented was taken as a high-status way to say Good, and "Not Object-Oriented" translated implicitly to "Not Good".

But in our brave new world, object-oriented is but one design discipline, and we have others, and enough language primitives to construct our own disciplines by mix-and-match as problems dictate.

And, I have myself simplified code that had used a virtual function to use, instead, a function pointer, and it was (still) Good. Better, even.


> It has and does all of these things, but it does not have runtime binding, therefore by definition is not object-oriented.

What renowned SIGPLAN or IEEE paper states that runtime binding is required for 100% of all OOP programming languages ever created and polymorphic dispatch at compile time doesn't count?


Not interested in classifying programming languages, or in SIGPLAN- or IEEE-sanctioned opinions.

Changing the definition would be moving the goalposts. A new definition deserves a new word.


if STL is OO then everything is OO.


> if STL is OO then everything is OO.

I mean... in the sense in which most people use "OO language" (which is, "a language with constructs which associates a data specification with procedures"), yes. Only hardcore pure-functional algorithms written in Haskell / ML without typeclasses, or things like Prolog, Esterel (though thinking about it), and similar research languages aren't.

Once upon a time programming looked like this. https://github.com/chrislgarry/Apollo-11/blob/master/Luminar...


Then remove the classes, static inheritance via template arguments, method dispatch via type parameters (very CLOS like), class member functions, iostreams, .....

Plenty of SIGPLAN papers to one educate themselves about what OOP is all about.

OOP is not MFC alone; apparently those that learn languages in the trenches without referring to them never get to appreciate the wealth of information we have available.

Not surprising, given how many misunderstand "whatever my compiler does" for what ISO actually says.


OO is a very nebulously defined term and if you ask two people you'll get three different definitions. But I would say that most people will agree that at the very least late binding is a fundamental component.


Hence why a proper CS education is a very important pillar for any good engineering degree.

Back to the STL: it makes heavy use of compile-time method dispatch via template metaprogramming and lambdas (which are actually implemented as classes by the compiler).


Stepanov has forgotten more about CS than I'll ever learn in a hundred lifetimes, and he is pretty adamant that the STL is not OO [1].

I know this is an argumentum ab auctoritate, but you didn't really provide an argument either :)

[1] http://www.stlport.org/resources/StepanovUSA.html


He also thinks that Artificial Intelligence is a hoax, so take your pick.


FWIW, he is referring to the AI Winter, the interview was written well before the current AI renaissance.


Did he say that about GOFAI or Deep Learning?


In the context of the interview, it would've been GOFAI. Deep learning hadn't taken off yet.


Well then it was a hoax?


For context, the relevant answer:

> Yes. STL is not object oriented. I think that object orientedness is almost as much of a hoax as Artificial Intelligence. I have yet to see an interesting piece of code that comes from these OO people. In a sense, I am unfair to AI: I learned a lot of stuff from the MIT AI Lab crowd, they have done some really fundamental work: Bill Gosper's Hakmem is one of the best things for a programmer to read. AI might not have had a serious foundation, but it produced Gosper and Stallman (Emacs), Moses (Macsyma) and Sussman (Scheme, together with Guy Steele). I find OOP technically unsound. It attempts to decompose the world in terms of interfaces that vary on a single type. To deal with the real problems you need multisorted algebras - families of interfaces that span multiple types. I find OOP philosophically unsound. It claims that everything is an object. Even if it is true it is not very interesting - saying that everything is an object is saying nothing at all. I find OOP methodologically wrong. It starts with classes. It is as if mathematicians would start with axioms. You do not start with axioms - you start with proofs. Only when you have found a bunch of related proofs, can you come up with axioms. You end with axioms. The same thing is true in programming: you have to start with interesting algorithms. Only when you understand them well, can you come up with an interface that will let them work.

I have seen this a few times and have felt that "hoax" is the wrong word each time. Later he talks about OOP being "philosophically unsound", which adds some clarity to the statement. It's not that OOP or AI are made up (true hoaxes), it's that they will not accomplish (in his view) what they set out to do because they are founded on unsound bases.


Aldanor has an important point here, in the "and the some".

Rust is deliberately a weaker language for expressing library semantics. You can write libraries in C++ that cannot be written in Rust, that encapsulate semantics that cannot be captured in Rust. C++ library writers do, routinely. The C++ Standard Library has many components that could not be coded in Rust, that make the language more powerful for all users.

This makes Rust easier to learn, but it limits the libraries you can use, having learned it, and limits the libraries you can write.

This is a difficult tradeoff for any language design. Haskell is more powerful, too, and also harder to learn and to use well. (At the same time, its lack of destructors also limits it in ways that Rust and C++ do not suffer.)

The OOP -- inheritance, and virtual functions -- is a niche feature that is sometimes useful, but can often be emulated well enough with function pointers.
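
A sketch of that emulation: a single customization point passed as a plain function pointer, where an OOP design would introduce an abstract base class with one virtual method (names here are illustrative).

```cpp
#include <cassert>

// The customization point is a plain function pointer rather than a
// virtual method on an abstract base class.
using LessFn = bool (*)(int, int);

// "Dispatch" happens through the pointer, chosen by the caller.
int pick(int a, int b, LessFn less) {
    return less(a, b) ? a : b;
}

bool ascending(int a, int b) { return a < b; }
bool descending(int a, int b) { return a > b; }
```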


What libraries do you have in mind?



Rust has virtual method dispatch. What else do you need to call it OO?

If it's implementation inheritance, then that was never a requirement for OO, seeing how prototype-based OO has been around for a very long time.


Traits are OOP, check out on Objective-C and CLOS/Flavors protocols, Smalltalk categories, alongside a couple of SIGPLAN papers on component programming.


Haskell typeclasses are quite close to Rust traits, does Haskell count as OOP then?


Yes, Haskell has all the necessary features to implement some forms of OOP.

There are plenty of OOP flavours, just like there are plenty of FP flavours.

If one considers 100% Haskell's features as must have for a language to be considered FP, then OCaml and Standard ML are going to have a hard time to be considered FP.


IIRC, Haskell doesn't have the notion of object identity. It wouldn't make sense in an immutable language, anyway, because identity only matters if object state can change - otherwise any two objects with identical state are substitutable. But anyway, I would argue that this is the crucial difference that sets apart "objects" and "values with syntactic sugar for function calls".


Swift has it. It’s called `dealloc()`. Since the language is fairly tight, and doesn’t have real stack-unwinding exceptions, there’s not a whole lot of utility in it.

Apple seems to think that anything that you put in dealloc should end up in the system (like unsubscribing from notifications).

C++ is a dangerous, powerhouse systems language. I think it really needs things like destructors, and I’m glad to see its continuing development.


deinit, and if they’re from NotificationCenter you haven’t had to do that for a while and for KVO if you’re using the block syntax (you should!) you basically have an RAII “observation” that you control the lifetime of (often you assign it as a property of the object being observed, so it automatically gets cleaned up when the object goes out of scope).


You're right (shows how often I use it, and I write Swift every day).

Apple is amping up KVO, in general, which I like.


Alongside Object Pascal back in the old MS-DOS and Apple days, Ada 95, Swift, Python, and a couple of others that I won't now bother to search for on my papers archive.


Here’s the crucial difference between Python destructors and C++ destructors: C++ destructors run as soon as the object goes out of scope, deterministically. Python destructors run down the road, during the next GC cycle.

As an example of why this matters, I’ve got a class called TimeCapture in a project. If I want to instrument how long a block takes (full function, inside of an if, etc) I can add `TimeCapture tc(“my-metric-name”);` to the block and know that the destructor on that object will be called precisely at the end of that block (capturing the end time of the block)
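
A minimal sketch of such a class (details are illustrative, not the original code; the optional out-parameter exists only to make the destructor's effect observable):

```cpp
#include <chrono>
#include <cstdio>
#include <string>

// The constructor records the start time; the destructor runs exactly
// where the enclosing block ends and reports the elapsed time.
class TimeCapture {
public:
    explicit TimeCapture(std::string name, long long* out_us = nullptr)
        : name_(std::move(name)), out_us_(out_us),
          start_(std::chrono::steady_clock::now()) {}

    ~TimeCapture() {
        auto elapsed = std::chrono::steady_clock::now() - start_;
        long long us =
            std::chrono::duration_cast<std::chrono::microseconds>(elapsed).count();
        if (out_us_) {
            *out_us_ = us;  // testable sink
        } else {
            std::printf("%s: %lld us\n", name_.c_str(), us);
        }
    }

private:
    std::string name_;
    long long* out_us_;
    std::chrono::steady_clock::time_point start_;
};
```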

I don’t know the rules for Swift or Ada or OP, but I’m guessing they’ll vary.


> Here’s the crucial difference between Python destructors and C++ destructors: C++ destructors run as soon as the object goes out of scope, deterministically. Python destructors run down the road, during the next GC cycle.

Not true. They run when the object is deallocated. In CPython if the object is not involved in or referenced from a cycle, that’s as soon as it goes out of scope: the ref count goes to 0 and the object gets deallocated.

This is trivial to check: create a structure with a noisy __del__ and create an instance from a function, call the function, del will run. The GC has no reason to run at this point, there’s no allocation requiring any sort of reclamation, but if you don’t trust that you can just `gc.disable()` and observe… the exact same behaviour.


> Not true. They run when the object is deallocated. In CPython if the object is not involved in or referenced from a cycle, that’s as soon as it goes out of scope: the ref count goes to 0 and the object gets deallocated.

That's a CPython implementation detail, not a guarantee that you can rely on.

For example, both PyPy[0] and Jython use tracing GCs where this is not true.

[0]: https://doc.pypy.org/en/latest/cpython_differences.html


> That's a CPython implementation detail

Which would be why I specifically wrote « in CPython ».

And it still makes the original assertion completely incorrect. Python destructors do not run « down the road, during the next GC cycle ». They may, in certain cases or implementations.


Once again,

> implementation detail

There is no guarantee that even CPython will keep the current behaviour. Ultimately, the promise you're given is that __del__ might be invoked by the system some point between "the object has no live references" and "never".

The original assertion gives a much closer intuition than your description of the current behaviour.


Not all the world is CPython though. There are several other implemntions and the python docs make it quite clear that the cpython developers do not consider the a destructor running when an object goes out of scope anything more than an implementation detail, and they reserve the right to change it without warning should they discover a reason to. (They haven't changed it in 20 or 30 years, but that doesn't mean they won't tomorrow)


> Not all the world is CPython though

The original commenter made a specific assertion. I demonstrated that their assertion is not true.


But you didn't demonstrate anything, because your assertion is false. In C++, destructors are guaranteed to run deterministically. In CPython they are not guaranteed; it happens to work that way, but it should be considered chance.

It seems to me that there must be some other language with the C++ guarantee, but I don't know of it. (It isn't hard to implement, and C++ is well known enough that other language designers can consider the pros and cons; surely someone else has decided it is worth it.)


> It seems to me that there must be some other language that would have the c++ guarantee, but I don't know of it.

Rust's Drop[0] has basically the same guarantees as C++: the destructor will run right before the memory is freed (although it is possible in both cases to also leak both). Combine with Pin[1] if you need the exact memory location to be stable as well.

> it isn't hard to implement

Sadly, it's basically incompatible with tracing garbage collectors, since they make it completely unpredictable when the destructor will run.

So if you ever want to allow yourself to switch to tracing GC then you'll either have to disallow it completely, or bury it in a hidden corner of the docs with a load of caveats (like Python or Java). And in the latter case you'll still have people like the GP stumble upon it, say "oh, this looks neat", and set up a really painful trap for their future selves.

[0]: https://doc.rust-lang.org/stable/std/ops/trait.Drop.html

[1]: https://doc.rust-lang.org/stable/std/pin/index.html#drop-gua...


To implement a scope guard you can use the with statement and implement the __enter__ and __exit__ methods. It is not real RAII, as the object is not destroyed on exit, but it is still quite useful.

http://effbot.org/zone/python-with-statement.htm


How do multiple destructors get sequenced? Last in, first out? Thanks


Objective C also has destructors.


Not the same, and overwhelmingly less useful.


dealloc is pretty much the same thing, and it can be used in the same way. It's not "useful" in the C++ RAII sense because:

- the standard library design -- every object is refcounted and any code that references the object can add it to an autorelease pool, so you can't safely predict when dealloc will be invoked

- Objective C doesn't support stack-based objects


You can do stuff:

  @autoreleasepool {
    RaiObj* obj = [RaiObj new];
    [obj autorelease];

    //... stuff
  }
will de-allocate at the end of the block if nothing else holds onto it, but it's obviously less neat.


Don't forget good old Ada.


Ada does not have destructors or anything analogous.


https://www.adaic.org/resources/add_content/docs/95style/htm...

"Finalize" on objects of type "Ada.Finalization.Controlled".


Nope. Key words there are "should call".


> Nope. Key words there are "should call".

I mean, yes, those words are in there, but I'm responding to your words:

> Ada does not have destructors or anything analogous.

Which are factually wrong. Finalize is called automatically (per the link). And they are exactly what you say Ada lacks, analogues of destructors.

Your complaint, now, is that Finalize and Initialize don't automatically call the same functions in the parent. That's a separate issue, and doesn't demonstrate that Ada doesn't have this thing (a variant of destructors) which it very clearly has.


It fails to implement the aspect of destructors that makes them an architecturally important feature.


Also PHP.


Sorry, no.


"Sorry No" as in "PHP is a stupid language, stay away!" Which is a point I won't argue.

Or in the "PHP has no destructors" sense? Because PHP's object model is an "interesting" mixture of Java and C++. Like C++, it has deterministic destructors, which enable RAII. The only complication is that all objects are reference counted (like shared_ptr in C++), so one can easily lose ownership from where one expects it to be. However, if you keep an eye on ownership, you can rely on the reference count being decreased on function exit (be it a regular exit or an exception) and the destructor being called, in LIFO order, as one expects. This isn't used much in PHP, since you don't have to manage resources as often as in other languages/domains (in the short-lived-request world, one doesn't care whether a database connection is destroyed at function end or only at request shutdown, memory is handled automatically, etc.)


Python also has destructors (def __del__(self):). They are called at whatever time and whichever order the GC gets around to it.


The Python people don't want you to rely on this--which I only learned many years later, and it frankly ruined Python for me (at least in the abstract, since "practically" I can ignore this for my projects), as I consider "deterministic finalization" to be a critical language feature--but CPython uses reference counting and thereby has predictable destruction.


CPython uses reference counting and gc. It's the worst of both worlds: the expense of refcounting, and faux determinism until some innocuous code change causes you to hang on to a reference.


Refcounting needs a mechanism for cycle breaking, or it's necessarily leaky. This isn't the worst of both worlds; it's simply a consequence of refcounting.

The same issue occurs in c++ if you're sloppy with shared_ptr. If you know what you're doing, you can break cycles with weak references in either language. Where Python has diapers, c++ has footguns aplenty.
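
A sketch of breaking such a cycle with a weak reference in C++ (illustrative types):

```cpp
#include <memory>

// Child points back at Parent weakly, so ownership forms a tree,
// not a cycle, and both destructors run deterministically.
struct Parent;

struct Child {
    std::weak_ptr<Parent> parent;  // weak back-reference: no ownership
};

struct Parent {
    std::shared_ptr<Child> child;  // strong: the parent owns the child
};

// Had Child held a shared_ptr<Parent> instead, the use counts would
// never reach zero and neither destructor would ever run.
```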


The problem with Python isn't that it has reference cycles. It's that it has a tracing GC that tries to automagically fix those cycles for you. So in C++, if you get a cycle, the destructors do not run at all - which is pretty noticeable. In Python, they run "sometime later", so programmers are less inclined to fix the cycles (indeed, many aren't even aware that it's a problem).

But if you provide a non-deterministic mechanism on top of refcounting to break the cycles, and destructors aren't consistently deterministic already, why even bother with refcounting at all?


In practice, cycles just don't happen in real code.

It is possible to contrive a graph data structure with cycles, but it is a bad (i.e. slow) representation not useful in production.


Those are finalizers though, and are a bit different.


D takes the spirit of RAII slightly further too in having scope guards as a first class concept.

Not as safe as proper RAII but very useful at the site of declaration to write scope(exit) close(file); or similar
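
For comparison, a rough C++ emulation of D's scope(exit) (a hand-rolled sketch; real codebases typically use a library version):

```cpp
#include <utility>

// Runs a callable when the enclosing scope ends, on every exit path,
// much like D's scope(exit).
template <typename F>
class ScopeExit {
public:
    explicit ScopeExit(F f) : f_(std::move(f)) {}
    ~ScopeExit() { f_(); }
    ScopeExit(const ScopeExit&) = delete;
    ScopeExit& operator=(const ScopeExit&) = delete;

private:
    F f_;
};
```

Usage is just `ScopeExit guard([&] { close(file); });` at the site of acquisition.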


Scope guard in Rust: https://docs.rs/scopeguard


>C++20 is more fun to program in than 17, which was better than 14, which was better than 11.

Not quite. The syntax has become even weirder and the language more complicated. I really do think the standards committee has gone overboard, adding in everything and the kitchen sink. The pace of change has been crazy for such an old and widely used language. The standards committee should just disband for a couple of years and take a rest instead of adding in everybody's "favourite features" :-)


I agree that the language in general is more complex, mainly due to additions to the STL, but what has changed in the syntax that you think makes it weirder?

> The pace of change has been crazy for such a old and widely used language.

Not necessarily a bad thing, at least you can request specific standards at compile time so you can stick around at whatever standard you want and it should work for a very long time (there's still C++03 and older out in the wild), and those who want the features in the latest standard can upgrade.


In terms of syntax and semantics, initialization in C++ is rather complex and subtle, perhaps the most complex part of C++ (yes, more so than templates, because templates-the-language-feature is not that complex, it's the way libraries like Boost and the STL use them that's complicated).

The good thing about the STL additions is that you can simply ignore them, as they are not part of the core language, just like people have ignored iostreams since the 90s.


Initialization is admittedly a tremendous mess. There are so many ways to screw it up it is ridiculous. But this is really a problem with some old old old decisions and backwards compatibility rather than an issue with new complexity.


To be fair, the interactions between brace initialization and initializer lists are a completely new mess.


Well, initialization lists have added quite a bit of complexity to that problem. So have move semantics, to a lesser degree.


It is the overloading of symbols and keywords in the language with new meanings that bugs me the most. Symbol soup for lambdas, "using" now also meaning a type alias, initializer lists, move syntax, "decltype" function returns, etc. Each by itself is not complicated, but when you start mixing them in template-heavy code, reading difficulty suddenly increases exponentially.

I think a lot of things were added in just to "get away" from C roots and make C++ its own language which IMO was a bad idea. For me, C++ is first and foremost a "better C with Classes" because i can keep that subset of the language in my head easily.

PS: Relevant other HN discussion: https://news.ycombinator.com/item?id=24649992


Does it really matter as long as it’s backwards-compatible? AFAIK one can write C++98, compile it as C++20, and most everything should still just work. So a developer can pick and choose which new features they want to use.


You are in a sense both right and in another sense not quite so. You are right in that the language designers went to great lengths to maintain backward compatibility. So all my C++98/C++03 skills are still relevant today. When I see kids say how "modern and better" C++11/xx is, I always reply: go read Coplien, Barton & Nackman, Koenig to understand where it all came from. On the other hand, the cognitive overhead imposed by the new features is substantial and there is a non-trivial learning curve even for experienced developers. Beginners have even more difficulty getting started. Result? Everybody lives within their "familiar subset" of the language. So codebases can be in C++ and yet two different developers can have difficulty understanding each other's codebase.


What about the 'auto' storage class?


I wonder if it has ever been used in the wild outside of some compiler conformance test.

Anyway, it is true that C++ is not fully backward compatible, some things have changed in very important ways, but the vast majority of the code will compile unchanged and breakages should be caught at compile time.


If you're dealing with a C++98 or C++03 codebase, you can literally remove every mention of "auto" in it without changing the meaning of the code.
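For illustration (my sketch): in C++98 `auto` was a redundant storage-class specifier, which is why the keyword could be repurposed for type deduction without breaking real code.

```cpp
#include <map>
#include <string>

// C++98: 'auto' was a storage-class specifier, e.g.
//     auto int x = 5;   // same meaning as 'int x = 5;' -- ill-formed since C++11
// C++11 reuses the keyword for type deduction instead:
int count_long_keys(const std::map<std::string, int>& m) {
    int n = 0;
    // 'auto' deduces the verbose std::map<std::string, int>::const_iterator
    for (auto it = m.begin(); it != m.end(); ++it)
        if (it->first.size() > 3) ++n;
    return n;
}
```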


C++ARM was already full of good parts versus C89, and C++03 had plenty of modern stuff in it.

However it doesn't matter how many good parts C++3000 happens to have, if some crowds insist in writing C with C++ compilers.


What is C++ARM?



Basically, C++ '91: the language as defined by the Annotated C++ Reference Manual (the "ARM") by Ellis and Stroustrup.


> They were added because they make programming better. C++20 is more fun to program in than 17, which was better than 14, which was better than 11.

Not specific to C++, but I'm definitely loving the addition of coroutines to the language. It makes multi-tasking a lot cleaner, especially for something like userspace networking.


the problem with the newer parts is that they make people think less when they write code. it's easier to be sloppy and not understand the performance implications.


without these newer parts people write Java, C# or Go instead :)


that's great, let them stay where they are.


Am I the only one thinking that C++ is becoming a real mess of features, and that it makes it hard to read a foreign codebase that uses features you are not accustomed to, unless you have 5 years of experience as a full-time C++ dev?

I consider myself quite proficient at C. I wrote quite a bit of C++ from 2011 to 2014, while I was ramping up at programming. C++11 wasn't really a thing back then in my experience. The new features are really nice, but the syntax truly feels alien to me.

I am now looking forward to learning Rust, which seems more cohesive in general, and less of a bag of weirdly shaped features. Meanwhile, I'll probably keep on using C++ as "C with classes" when I have to. And I feel like codebases have to pick a subset of C++ they are comfortable with to be approachable to outsiders.


C++ probably needs an "unsafe" keyword like Rust to isolate the non-"good parts". A few hundred compiler flags could be used to distinguish the good and non-good parts. I don't think that 5 years is enough to know all things C++; it's like a black hole in the sense that it adds features faster than you can learn how all the different parts could interact - even if you dedicate your whole life to it. Maintaining old C++ codebases will employ a lot of old developers for many many years. Your employer should not expect you to understand an old code base if you are a new developer. This would even be counter-productive since it wastes your brain power and their money.

What modern C++ really needs is a rebranding. Something like "C22"/"CX"/"Z" to distinguish itself from old C++. Together with a guide on what the safe features and "unsafe"/deprecated features are the developers could get new certificates to prove that they are up to date. Companies/management could easily require that the new C++ is used and developers are actually capable. (This organizational issue is IMHO the most important part.)


It already kind of has it, it just needs a bit more love and fewer false positives.

https://docs.microsoft.com/en-us/cpp/code-quality/using-the-...


it is called "new" and "delete"

But yeah, there is more to it than that, and I doubt C++'s unsafety can ever be fully isolated unless you mark references and pointers (they can be invalidated) and probably many other things as unsafe. C++ is unsafe by design.
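A small sketch (my example) of why plain references would count as "unsafe": a reallocation silently invalidates them, and nothing in the type system complains.

```cpp
#include <vector>

// A reference into a vector dangles as soon as the vector reallocates.
bool reference_was_invalidated() {
    std::vector<int> v{1, 2, 3};
    int& first = v.front();        // fine for now
    const int* old_storage = v.data();
    v.reserve(v.capacity() + 1);   // forces reallocation
    (void)first;                   // 'first' now dangles; reading it is UB
    return v.data() != old_storage;
}
```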


Ha. People have been making exactly this complaint for as long as C++ has been around. Meanwhile the language just keeps rolling on, picking up new features like some demented Katamari Damacy character.

It’s a puzzle. One can only conclude that secretly C++ programmers like all the new feature whilst complaining bitterly about them in public.


We C++ programmers have yet to find a paradigm we didn't like.


I think it's because C++ serves a wide set of users across a wide set of paradigms. I doubt every C++ dev is enthusiastically using every feature.


There's certainly quite a lot to the language. There are really two audiences for C++, those writing generic libraries, and those using them.

If you're in the former group you'll be delving into all the template nitty gritty. If you're in the latter group you'll rarely need them (I think our 200k+ LOC C++ codebase uses std::enable_if three times).
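For readers who haven't met it: a sketch (assumptions mine) of the kind of std::enable_if usage that lives in library code. Each overload only participates in overload resolution when its type predicate holds (SFINAE).

```cpp
#include <string>
#include <type_traits>

// Selected only for integral types
template <typename T,
          typename std::enable_if<std::is_integral<T>::value, int>::type = 0>
std::string describe(T) { return "integral"; }

// Selected only for floating-point types
template <typename T,
          typename std::enable_if<std::is_floating_point<T>::value, int>::type = 0>
std::string describe(T) { return "floating-point"; }
```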

I still like the language though. There's no obligation on any C++ dev to use all the new features.

Personally, I find Rust's syntax to be quite ugly compared to C++'s; horses for courses I suppose.


Let's see what happens to Rust in another 10-15 years. I suspect there will be people talking about it having too many features which make it hard to read and maintain and that they look forward to migrating to the new and simple "Mend" programming language.


If you can use Rust instead of C++, look at it as a privilege: there are very important code bases that need to be maintained in C++, and modernizing the code base is still easier (and easier to automate) than moving to a new language.


You are very right. I have programmed for a long time in C++98 and "Modern C++" does feel somewhat alien to me. I don't see the actual need for some features (e.g. range-for is just syntactic sugar) while threading etc. are very welcome. The combinatorial explosion arising from mixing and matching all the available features really makes code using advanced techniques nearly opaque to "ordinary" programmers. So unless needed otherwise we have to confine ourselves to a "familiar subset" of the language; very few people can have mastery of the full language.
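The sugar is real, to be fair: range-for expands to roughly the iterator loop you'd otherwise write by hand. A simplified sketch of the two forms side by side:

```cpp
#include <vector>

// The convenient C++11 form
int sum_range_for(const std::vector<int>& v) {
    int total = 0;
    for (int x : v) total += x;
    return total;
}

// Roughly what the compiler rewrites it into
int sum_desugared(const std::vector<int>& v) {
    int total = 0;
    for (auto it = v.begin(), end = v.end(); it != end; ++it) {
        int x = *it;
        total += x;
    }
    return total;
}
```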


The truth is that most folks writing c++ are spending a ton of time debugging their language knowledge or sticking to what they know.

The language is almost totally unprincipled in my opinion so you just have to know the minutiae to reason about and work with novel constructions.

In the hands of an expert, it's an awesome tool. To me, it's a miserable chore.


> C++ is becoming a real mess of features

> I am now looking forward to learning Rust

Prepare yourself. I personally think one of the major downsides to learning Rust is how quickly the language is moving.

I first started looking into Rust in ~2015, but it seems most hobby projects from that time won't even compile today.

It feels like there are non-breaking changes to the language almost weekly. I still cannot figure out if I should be using nightly rust, stable rust, or switching back and forth depending on the project.

Overall I think Rust is doing a lot of great things, but I'm not sure it is handling the "mess of features" aspect of programming languages particularly well.


Thanks for pointing this out, as well as a few other sibling posters. That makes sense, though I hope that, as Rust doesn't seem too afraid of deprecating features, it will keep the language a bit leaner in the long run, when it will have stabilized even more. Of course, that comes at a cost, if the language isn't 100% forward-compatible.


It's almost like... wait for it... C and C++ are not the same programming language?

Nah, couldn't be! It's just C++'s fault, surely!


This applies to any language that has been around long enough, though.

Do you think anyone dropped into a foreign code base of Java 15, C# 9, PHP 8, Python 3.7, C2X will be able to understand all of it?


I'm looking forward to learning Zig as it approaches 1.0.


Me too, but I see Zig as a C alternative and Rust as a C++ alternative. I am not competent enough to judge either way on this. I am currently back to C++ for a project that is written in C++. If it were C, I would try to write small pieces in Zig. I guess Rust could be used, but I am nervous about the scope of work I would be taking on - learning Rust, getting familiar with a C++ codebase, etc.


This is a common refrain, but I'm not sure how accurate it is. I work with a bunch of C folks who never liked C++, and now write Rust. It just depends.


Rust is always on my list. I am a happy user that is grateful that Zig and Rust both have great stewards!


Should have a "(2014)" tag.

2 major language revisions have been released since this was recorded.


The best part by far:

for(const auto& item : dummy_list) { ... }

It was fucking ugly and verbose to use iterators for something so basic and simple.
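For anyone who never suffered the old form, a quick before/after sketch (my example):

```cpp
#include <string>
#include <vector>

// Pre-C++11: spell out the iterator type by hand
std::string join_old(const std::vector<std::string>& words) {
    std::string out;
    for (std::vector<std::string>::const_iterator it = words.begin();
         it != words.end(); ++it)
        out += *it;
    return out;
}

// C++11 range-for: same loop, no iterator ceremony
std::string join_new(const std::vector<std::string>& words) {
    std::string out;
    for (const auto& w : words) out += w;
    return out;
}
```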


What I always found weird and verbose were all the algorithms taking begin-end iterator pairs, like:

    std::sort(my_container.begin(), my_container.end());
I mean, yes, in 0.1% of all cases I would want to sort something that is not an entire container, and for that it's great that this exists. But for the 99.9%, why isn't there an overload that would allow me to do:

    std::sort(my_container);


IIUC you have this using `ranges` (which itself needs concepts as implemented). With it you can even do:

    for (int i : ints | std::views::filter(even) | std::views::transform(square)) {
        std::cout << i << ' ';
    }
notice the `for (int i : ints)` and the pipe filters.

EDIT: more specifically, you can use sort on a range now (source : https://cppreference.com):

    ranges::sort(s);


The same functionality has been in boost [0] for quite a while. So, if you can't see the ugly iterators anymore, but have an outdated compiler, this is an option.

[0] https://www.boost.org/doc/libs/1_72_0/libs/range/doc/html/in...


Wow, that pipe syntax is quite nice!

Documentation here: https://en.cppreference.com/w/cpp/ranges.

They are called "range adaptors" (weird naming IMHO).


They take a range and adapt it into a view, hence the name.


I wish they had just added the overloads to the std:: namespace though - breaking a few obscure cases would have been worth that.


> breaking a few obscure cases would have been worth that

The committee is very reluctant to break old code. That's one of the reasons the language as documented is so large.

That being said it's not clear to me what would have broken by putting sort(x) into std:: as you recommend.


It’s literally a 3-line utility if you want that:

  template<typename T> void sort(T& t) {
    std::sort(std::begin(t), std::end(t));
  }


Sure. But this is the kind of thing that you would use if it were in the standard headers, but maybe not create (or modify, if it already exists) a work_around_cxx_deficiencies.h in the project and make sure it's included everywhere. Or maybe it's already included in some misc_utils.h in the project but is not easily discoverable, since such "misc utils" headers are always a grab-bag of random unrelated stuff. Whatever the reason, in the C++ projects I worked on we always ended up using the iterator version. We certainly would have used the non-iterator version if it had been available from the standard headers.


In the ideal world ranges would have been in the original standard, but there were two issues.

First, at the time the language did not have enough forwarding capabilities, so you would have needed a lot of overloads to handle const and non const containers.

Second getting anything approved in committee is a huge effort so often proposals are stripped to the bare minimum; in particular the STL was already huge and Stepanov tried to include only what he thought were the fundamental components.

The new ranges have been in development for almost 10 years (arguably they suffered a bit of mission creep), so it is not just a matter of adding range overloads for functions.


I wasn't talking about ranges, I was talking about the grandparent's three-line wrapper function.


Well, first of all that wrapper uses std::begin/std::end, which wasn't available in C++98; second, std::sort is "easy" as it is mutating and doesn't return anything. For other algorithms you want to return ranges (for composability) and things get more complicated.


Regarding the first point: True, but the wrapper could also have been written in terms of foo.begin()/foo.end(), no? Second point: Yes, I would want the easy case solved. Obviously, if an algorithm cannot be wrapped like this, then I'm not asking for the impossible.


No, at least not without a separate overload for C-style arrays. If you don’t care about C arrays then yes, .begin/.end could work.
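To make the distinction concrete (my sketch): std::begin/std::end have dedicated overloads for raw arrays, so the generic wrapper covers both containers and C arrays with one template, while member .begin()/.end() would not.

```cpp
#include <algorithm>
#include <iterator>

// Works for std::vector, std::array, AND C-style arrays, because
// std::begin/std::end are overloaded for raw arrays.
template <typename T>
void sort_any(T& t) {
    std::sort(std::begin(t), std::end(t));
}
```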


All generic container manipulating functions use iterations - each of which you’d have to write a utility function for, just to use a couple of times.

What would otherwise be a trivial one liner with something like transform suddenly turns into a 3 line call


This has a giant glaring flaw that is only just being remedied: traversing more containers at the same time or, tangentially, enumeration (i.e. availability of the index variable in the loop body).


The expectation was always that you would use a library-level adaptor (something like enumerate() or zip()).


Not a C++ expert, but I first encountered it 20 years ago with the Windows API and it looked uglier than some dialects today. Agreed, it's not optimal when the compiler spits 100k lines of error messages for a single template typo, yet it's a pretty nice language to code with.

I remember talking with one of the world best HPC expert and he said that it's the most advanced language today when it comes to optimisation. Don't know if that still applies.

A question while we are at it:

// Am I the only one left on this planet using this style of bracketing:

    void foo()
    {
        cout << "brackets like that are more readable!" << endl;
    }


> he said that it's the most advanced language today when it comes to optimisation. Don't know if that still applies.

I wrote quite a lot of HPC-like code lately. Can confirm.

The reasons include first-party support for hardware intrinsics on all platforms (SIMD is the most important, but also scalar stuff like popcnt, pdep/pext, etc.), full control over RAM layout which allows to optimize data structures for cache friendliness, and the ecosystem of high-quality third party libraries.
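A tiny sketch (my example) of the "control over RAM layout" point: a struct-of-arrays layout keeps each field contiguous, so a loop touching only one field streams through memory without wasting cache-line bytes on the others.

```cpp
#include <cstddef>
#include <vector>

// Array-of-structs: fields of each particle are interleaved in memory.
struct Particle { float x, y, z, mass; };

// Struct-of-arrays: all x's are contiguous -- a loop over x alone
// reads only the bytes it needs, one full cache line at a time.
struct Particles {
    std::vector<float> x, y, z, mass;
};

float sum_x(const Particles& p) {
    float total = 0.0f;
    for (std::size_t i = 0; i < p.x.size(); ++i) total += p.x[i];
    return total;
}
```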


Exactly! I did try to optimise Fortran and it was quite painful for little gain.

BTW it was him: http://www.theochem.ruhr-uni-bochum.de/~legacy.akohlmey/inde...


That's my preferred way of doing brackets, especially in languages where brackets are optional. I'm also a weirdo that likes default tabstops, subscribing to the philosophy that if your code is getting indented too far you might need to step back and reconsider what you're doing.


I use this sort of brackets too! I find them much more readable than any other style. That's just my personal view, but I find code which uses the brace in the function declaration to be disorienting when I'm reading it. My eyes are traveling around trying to see where the first line of the function is.

I also write a bit of Rust and I find Rust's nagging me to use the "default" brace style annoying.

IMO this is very much a personal taste issue, it depends on how someone grew up learning to read, what programming languages they grew up with, etc.


I would really appreciate a good and current summary of C++ best practice, including the most-favoured idioms for memory management, concurrency, structured design, and all the usual basics.

This is from 2014 so although it's useful it doesn't quite qualify. The other sources I can find are either even older, or they're tutorials about very specific features with a very narrow scope.


"Effective Modern C++" (2014) is probably still the best crash course if you're already broadly familiar with C++. There's more stuff in C++17 and 20, but the really jarring additions like move semantics were standardized earlier and therefore covered in the book.

Bjarne Stroustrup's "A Tour of C++" (2018) is also frequently recommended.


Just seconding "A Tour of C++". I picked it up last year to learn modern C++ since my professional experience has solely been with older C++ standards and code bases. The book is clearly written, though sometimes very brief. It is what it calls itself, a tour. If you already know older versions of C++, it's great. If you don't know any C++ but do know how to program (more than just the basics), it's a good place to start. If you're just learning to program, it'd probably be both overwhelming (too deep too fast) and underwhelming (too shallow in critical areas for novices). It assumes you know the basics, so start with a different book and come to this one later.


I'm curious which parts are considered deprecated at this point. What patterns should I avoid? Certainly there must be parts that have fallen in disfavor as better solutions are introduced.


The bad parts are the esoteric builds. X needs Y built with a certain configuration on N platforms, and cross-compilable from x86 to RISC and ARM. CMake stops helping pretty quick. Conan/Nix sort of do the trick... Wish it was more like npm or cargo at this point. Here's my deps, their versions, one command to build. I suppose vendoring everything works, ala the mega Google repo?


I totally expected this to be a joke where the site was blank.

I haven't worked with C++ for years, and it's nice to see there really has been some progress improving some of the C++ parts.


I wish I had seen this 20 years ago. I did some horrible things in c++ that I would be embarrassed to see today. It was a fun language to explore though, and I learned much from it.


20 years ago, most of those "good parts" weren't a part of C++. So nothing to be embarrassed about, one had to do with what was available at the time.


Those slides are in the same style as Andrei Alexandrescu's slides.


Is there a transcript of this talk?


c++ 11 & 14 goodies: concepts, lambdas


Concepts are part of the language from C++20.


A good source of information for what is new in C++ versions: https://www.modernescpp.com/index.php/thebigfour

That article covers the big four new features:

- concepts

- modules

- coroutines

- ranges library


Wow. I haven’t touched C++ in maybe 18 years or so. This actually looks somewhat high level.

It mentions that modules will improve compilation times. How are compilation times these days? That was one of my least favorite aspects of C++ back in the day.



C++ compilers keep improving and compile times have gotten noticeably quicker for me, especially the linker step which used to take forever.

It is possible to have a fast C++ build, but it still requires diligence, ie avoiding including massive headers everywhere & using lots of forward declarations etc.

Many code bases don't do this so they end up with ~forever builds.


They still allow for lots of sword fighting!


Only 20 years late, to the point where I think the design is almost outdated already (concepts can almost be done as a library if you provide the right tools a la dlang std.traits)


404 not found

(scnr)



