> (It appears, by your remarks, that you do not understand features that support OOP, so cannot reason reliably about their potential value for particular cases.)
I would be interested to learn what you think those features are; I have years of experience fooling around in languages such as Python, Javascript, Java, C++11+, Haskell, and also other less common ones. Maybe I have in fact not understood how to best make use of certain features. I've always been a little bit obsessed with performance & control, and I ended up in frustration about non-orthogonality / non-composability of modern language features and approaches. So I found myself on the path of exploring what's possible when limiting myself to basic flow control and basic read and write operations and function calls.
It's certainly been a journey of ups and downs, but I'm in good company, and I've gotten to know some people who are amazingly productive this way. Basically my standpoint as of today is that at least for systems programming, languages should get out of the way. There is little value in using features that make it easier to forget about certain aspects of programming, when those tasks are actually only a small part of the total work, and for good performance it's important to get the details right - which can be done mostly in a centralized location. Of course I'm thinking about resource and memory management. (Most critics of bare-bones C programming still think it is about matching malloc() and free() calls. That's a terrible way to program, as they rightly claim, and there are far better approaches to do it that do not involve language features or libraries).
What matters IME is mostly systemic understanding, and sinking time into getting the details right. Papering over them is not a solution. YMMV, maybe I'll soon have another series of attempts at "modern" approaches, and whatever the way to programming heaven ultimately is, I've already learned a couple of insights that transformed my thinking significantly, and that I wouldn't have learnt otherwise.
The essence of OO, from the standpoint you describe, is a struct containing a pointer to an array of function pointers. Such an object is what, in kernel memory, a file descriptor in Unix locates; read() on a file descriptor calls through one of those pointers. Syntax this, "private" that, "this" the other are window dressing. You obviously can cobble that together by hand, as has been done in Unixy kernels forever. But, as a language feature, it is always done exactly right, with no distracting apparatus. You never need to check if it is really meant for what it seems to be, or if it really does exactly what it seems meant to do.
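For concreteness, the hand-rolled form of that - a struct carrying a pointer to a table of function pointers, dispatched through by hand - might look roughly like this in C (identifiers are illustrative, not taken from any actual kernel):

    #include <stddef.h>

    struct file;  /* forward declaration so the ops table can refer to it */

    /* A hand-rolled "vtable": one table of function pointers shared by
       every object of a given concrete type. */
    struct file_ops {
        long (*read)(struct file *f, void *buf, size_t n);
        long (*write)(struct file *f, const void *buf, size_t n);
        void (*close)(struct file *f);
    };

    /* The "object": its first job is to say which ops table it uses. */
    struct file {
        const struct file_ops *ops;   /* which implementation this is */
        void *state;                  /* implementation-specific data */
    };

    /* Dispatch by hand - the thing a virtual call does for you. */
    static long file_read(struct file *f, void *buf, size_t n)
    {
        return f->ops->read(f, buf, n);
    }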
It is the same for other powerful features. You might, in principle, write code by hand to do the same thing, but in practice no one has enough spare time and attention, so you do something more crude, instead. A destructor is not something sneaky going on behind your back: you scheduled it explicitly when you called the constructor, certain it will run at exactly the right time, every time, with no possibility of a mistake. You never need to check up on it, or wonder if it was missed this one time.
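The "more crude" hand-rolled analog in C is typically the goto-cleanup idiom: cleanup is scheduled explicitly, and every exit path has to be kept honest by hand. A sketch, with made-up resources:

    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical task; the cleanup pattern is what matters. */
    int do_work(const char *path)
    {
        int rc = -1;
        FILE *f = NULL;
        char *buf = NULL;

        f = fopen(path, "rb");
        if (!f)
            goto out;

        buf = malloc(4096);
        if (!buf)
            goto out;

        /* ... actual work ... */
        rc = 0;

    out:
        /* Runs on every exit path - but only because every path was
           written to jump here; nothing checks that for you. */
        free(buf);
        if (f)
            fclose(f);
        return rc;
    }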
The destructor is still the single most powerful run-time feature of C++. It was the only instance of deterministic automation in any language, ever, until D and Rust adapted it. Most of the other powerful features in C++, as in other powerful languages, direct action at compile time. The closest analogs usable in C are cpp, yacc, and lex. You can't cobble up your own without abusing macros or making other programs like yacc. The world has not been kind to attempts at those. But build systems see no problem with powerful language built-ins.
Static types are far more powerful than what C does with them, just checking that function calls match declarations. (Function declarations, BTW, were copied into C from C++.) A powerful language puts types to work to do what you could never afford to do by hand.
Sure, I understand all of that, and I'm really surprised you thought I didn't.
You describe the essence of OO as being implemented by vtables, and how C++ does the right thing while many Unixy kernels do this by hand.
(I understand the part about manually set up vtables as well, and in fact one of the things I do for money is maintenance of a commercial Linux kernel module that has lots of these. I don't feel like Linux has a problem encoding these tables by hand, even though it is one of the projects with the most need for vtables, because it fits the description of a system that needs to be very flexible. Linux has many more relevant problems than hand-crafted vtables.)
Maybe it will surprise you, but for the greenfield stuff that I do in my spare time and that I'm allowed to do as my job, I can write dozens of KLOC without ever needing to set up a vtable. This probably isn't even that special for how I approach things - if we go to any random C++ class, chances are that the vast majority of methods there aren't virtual, or, if they are, are prematurely virtual.
Personally I think of vtable situations as less than ideal from a technical perspective. I get that these situations can arise naturally in highly flexible systems where there are lots of alternative implementations for the same concept. On the other hand, I think vtables are problematic as a default, in particular for abstractions that aren't yet proven by time to work. They tend to create this kind of callback hell where stuff never gets done in the most direct way, and where you end up with much more repeated boilerplate than you'd like.
There are many other ways to go about "interfaces" that are not vtables, and which is best depends on the situation. Vtables are callback interfaces, and I try to avoid callbacks where possible. As described, callbacks make for incoherent code, since the contexts of the caller and the callee are completely disjoint. Another problem is that they imply tight temporal coupling (a callback is a synchronous call)!
The primary approach I use where vtables are often advertised is to decouple using queues (concurrency, not necessarily implying parallelism). This achieves not only decoupling of implementation but also temporal decoupling. Asynchronicity shouldn't replace standard function calls across the board, of course. But it is a great way to do communication between subsystems - not only for requests that could take a while to process (parallelism), but also from a modularity and maintainability perspective.
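A minimal sketch of that kind of queue-based decoupling - a single-producer/single-consumer command queue carrying plain-old-data requests. The names and sizes are made up, and this single-threaded version would need atomics or locking for cross-thread use:

    #include <stddef.h>
    #include <stdint.h>

    /* POD message: the subsystems share a data format, not a callback
       interface. */
    struct request {
        uint32_t kind;       /* what to do */
        uint32_t object_id;  /* what to do it to */
    };

    #define QUEUE_CAP 256

    struct queue {
        struct request items[QUEUE_CAP];
        size_t head;  /* advanced by the consumer */
        size_t tail;  /* advanced by the producer */
    };

    /* Producer side: enqueue and move on; the caller never blocks on
       whatever implementation sits behind the queue. */
    static int queue_push(struct queue *q, struct request r)
    {
        if (q->tail - q->head == QUEUE_CAP)
            return 0;  /* full */
        q->items[q->tail % QUEUE_CAP] = r;
        q->tail++;
        return 1;
    }

    /* Consumer side: the other subsystem drains requests when it runs. */
    static int queue_pop(struct queue *q, struct request *out)
    {
        if (q->head == q->tail)
            return 0;  /* empty */
        *out = q->items[q->head % QUEUE_CAP];
        q->head++;
        return 1;
    }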
read() and write() are the best examples; they are now starting to be replaced by io_uring in the Linux kernel. I think read() and write() maybe only ever made sense in the context of single-CPU machines. Doing these things asynchronously (io_uring) makes so much more sense from an architectural perspective, and also from a performance / scheduling perspective when requests aren't required to be served immediately (or can't be).
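Roughly, the asynchronous shape of a read with liburing looks like this (a trimmed sketch with error handling omitted; the point is that submitting the request and collecting the completion are separate steps):

    #include <liburing.h>
    #include <fcntl.h>
    #include <stdio.h>

    int main(void)
    {
        struct io_uring ring;
        io_uring_queue_init(8, &ring, 0);

        int fd = open("data.bin", O_RDONLY);
        char buf[4096];

        /* Submit: describe the read, don't perform it. */
        struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
        io_uring_prep_read(sqe, fd, buf, sizeof buf, 0);
        io_uring_submit(&ring);

        /* ... other work can happen here ... */

        /* Complete: collect the result whenever it's convenient. */
        struct io_uring_cqe *cqe;
        io_uring_wait_cqe(&ring, &cqe);
        printf("read returned %d\n", cqe->res);
        io_uring_cqe_seen(&ring, cqe);

        io_uring_queue_exit(&ring);
        return 0;
    }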
Leaving that aside, there are other ways to do vtables than the one C++ has baked in. A few months ago I watched a video about this by Casey Muratori that I liked, but I can't find it right now. He talked about how he dislikes exactly what you stated, because it promotes one way to "implement" OO (which might not be the best, for example because of the double indirection) over others.
Regarding (RAII) destructors, it's a similar situation. If I need a language feature that helps me call my destructors in the right order, that could mean I have too many of them and have lost control. I can see value in automated destruction for script-like, scientific, and enterprise code, and admittedly also for some usage code in kernel / systems programming a la "MutexLocker locker(mutex);". But as code gets less scripty and more system-y, the cases where resources get initialized and destroyed in the same code location (scope) become fewer. I have decided to see what happens if I leave destructors out completely and try to find alternative structures, which I feel has been fruitful so far.
As I said before, OO is not a valid organizing principle for programs. It is just a pattern very occasionally useful. The feature is there for such occasions. It does not merit multiple paragraphs.
Destructors, by contrast, are a key tool. The same is true of other actually-powerful language features. Avoiding them makes your code less reliable and slower. You may assert some moral advantage in chopping down trees with an ax, but professionals use a chainsaw for reasons.
One of the projects I'm working on right now is a vector GUI with one OpenGL-based and one software-based render pipeline. It's also networked (one server only). Right now I can't tell you a single thing that needs to be destroyed in this program. When you start it, it acquires and sets up resources from the OS, and it uses those. Well, there is a worker thread that reads files from the filesystem; that one needs to fopen() and fclose() files. There is also a job_pool_lock() and job_pool_unlock() to read/write work from a queue...
When the program is closed, the OS will clean up everything; no need for me to do that (that would also be slower). And note that what the OS does is not like RAII destruction of individual objects. It's maybe also not like a garbage collector. It is like destruction of a (linear) arena allocator, which is how I try to structure my programs when destruction is required at all. This approach reduces the need for destruction calls so much that I seriously don't worry about not being able to use RAII.
(The frame rendering also works like that: when starting a new frame, I reset the memory cursor for temporary objects that have frame lifetime back to the beginning, reusing all the pages from the last frame. I never release these pages.)
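A sketch of such a frame-lifetime arena - a linear allocator whose cursor is reset once per frame. The names are mine, not from the author's codebase:

    #include <stddef.h>
    #include <stdint.h>

    struct arena {
        uint8_t *base;  /* one large block, e.g. reserved at startup */
        size_t   cap;
        size_t   used;  /* the cursor */
    };

    /* Bump allocation: no per-object free, no destructor to schedule. */
    static void *arena_alloc(struct arena *a, size_t size)
    {
        size = (size + 15) & ~(size_t)15;  /* keep allocations aligned */
        if (a->used + size > a->cap)
            return NULL;                   /* out of frame memory */
        void *p = a->base + a->used;
        a->used += size;
        return p;
    }

    /* "Destruction" of everything with frame lifetime is one assignment;
       the pages stay mapped and are reused for the next frame. */
    static void arena_reset(struct arena *a)
    {
        a->used = 0;
    }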
The widget hierarchy is constructed by nesting things in an HTML-like fashion. I have a simple macro that exploits for-loop syntax to "open" and automatically "close" these widgets. The macro works mostly fine in practice but can break if I accidentally return from inside the loop (a break, on the other hand, is caught by the macro). An RAII-based solution would be better here, but I also feel that maybe there must be a better overall approach than nesting generic stuff in this way. The "closing" of elements in this case is needed not because they need to be destroyed (they are stored in the frame allocator, after all) but because I don't want to name individual nodes in the hierarchy - I want to build the nesting implicitly from the way the nodes are opened and closed.
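One way to write such a macro (not necessarily the author's exact version) uses two nested for statements, so that a break inside the body still runs the close, while a return escapes past it - matching the behaviour described above. The ui_open/ui_close stubs are hypothetical; only the macro shape matters. Requires C99 for the for-loop declaration:

    #include <stdio.h>

    /* Hypothetical UI calls, stubbed out for illustration. */
    static void ui_open(const char *name) { printf("<%s>\n", name); }
    static void ui_close(void)            { printf("</>\n"); }

    /* Runs the body exactly once; ui_close() still runs if the body
       uses break, but a return escapes past it. */
    #define UI_WIDGET(name)                                          \
        for (int _ui_once = (ui_open(name), 1); _ui_once;            \
             ui_close(), _ui_once = 0)                               \
            for (; _ui_once; _ui_once = 0)

    int main(void)
    {
        UI_WIDGET("panel") {
            UI_WIDGET("row") {
                UI_WIDGET("button") {
                    /* widget contents */
                }
            }
        }
        return 0;
    }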
There is no way that you could convince me that "destruction" is an important concern in this program. It's probably not enough of a concern to make me switch to some idiomatic automation technology that has potentially large ramifications on how I need to structure my program.
That is literally what I described, and named. It appears a lot of things appear to you that aren't so.
> Anyplace you do, there they are.
My point was that maybe, after all, there aren't that many things that have to be scheduled. And there are other, sometimes better (depending on the situation) ways to schedule those things, too.
After all, there is a LOT of value in using plain-old-data without any "scheduled" stuff attached to it. POD is a huge tool for modularization, because it lets you abstract from arbitrary objects to "untyped bytes".
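As a tiny illustration of that point: any POD struct can be pushed through a generic byte interface (a queue, a file, a socket) without the interface knowing its type. A sketch with made-up names; real code crossing machine boundaries would also care about endianness and layout versioning:

    #include <string.h>
    #include <stddef.h>
    #include <stdint.h>

    /* A generic byte buffer: knows nothing about the types pushed into it. */
    struct byte_buf { uint8_t data[1024]; size_t used; };

    static void buf_push(struct byte_buf *b, const void *bytes, size_t n)
    {
        memcpy(b->data + b->used, bytes, n);  /* bounds check omitted */
        b->used += n;
    }

    struct job {             /* plain old data: no hidden state attached */
        int   kind;
        float params[4];
    };

    /* Any POD goes through the same untyped interface. */
    static void submit_job(struct byte_buf *b, const struct job *j)
    {
        buf_push(b, j, sizeof *j);
    }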
But hey, I guess I might go back at some point and use RAII for some things once I've learned enough good ways to avoid it - that way I won't paint myself into a corner just because RAII is always easily available.