I'm surprised he doesn't refer to it by name (Pimpl). In C++ it has the advantage that you can change your implementation and retain binary compatibility, because the size of your class doesn't change as you add/remove member variables (and maybe more surprisingly, add/remove virtual functions). If you're going to use C++, it seems like it would be better to avoid the void* with something like:
class C {
protected:
    struct Impl;
    std::unique_ptr<Impl> mImpl;
};
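One detail that snippet glosses over: std::unique_ptr over an incomplete type needs the class's destructor defined somewhere Impl is complete, so in practice the split looks roughly like this (a sketch, file names made up):

// C.h (hypothetical)
#include <memory>

class C {
public:
    C();
    ~C();                        // declared here, defined where Impl is complete
protected:
    struct Impl;                 // forward declaration only
    std::unique_ptr<Impl> mImpl;
};

// C.cpp (hypothetical)
struct C::Impl {
    int someState = 0;           // members can change freely without touching C.h
};

C::C() : mImpl(std::make_unique<Impl>()) {}
C::~C() = default;               // Impl is complete here, so unique_ptr can delete it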
Came here to say exactly that. You can also do this; though it looks a little worse, it allows for even more flexibility, such as complete decoupling of Impl from A across different files:
// Forward declaration of Impl. What does Impl do?
// You're not allowed to know.
class Impl;

class A {
protected:
    Impl* mImpl;
};
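A rough sketch of how that decoupling might look across files (names invented); note that with a raw pointer A has to manage Impl's lifetime itself:

// A.h
class Impl;           // opaque to everyone who only includes A.h

class A {
public:
    A();
    ~A();             // defined where Impl is complete
protected:
    Impl* mImpl;
};

// Impl.h -- can evolve completely independently of A.h
class Impl {
public:
    int state = 0;
};

// A.cpp
#include "A.h"
#include "Impl.h"

A::A() : mImpl(new Impl) {}
A::~A() { delete mImpl; }   // copy/move left out for brevity (rule of three applies)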
I've been using this trick but for a different reason - to reduce the number of #include statements in header files that are included a lot themselves.
Between this trick and generous use of forward declarations, I was able to remove almost all "headers included from headers" from a past project of mine, speeding up compilation by probably 3X (never measured it but it was observably faster). Maybe today's compilers optimize all these includes away and it doesn't make a difference anymore but it used to.
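Concretely, the trick is that a header only needs a type's full definition when it stores or passes it by value; pointers and references get by with a forward declaration, so the heavy #include can move into the .cpp (illustrative names):

// Widget.h -- no longer includes the large Renderer.h header tree
class Renderer;                     // forward declaration is enough

class Widget {
public:
    void draw(Renderer& renderer);  // reference parameter: no full definition needed
private:
    Renderer* mRenderer = nullptr;  // pointer member: same
};

// Widget.cpp
#include "Widget.h"
#include "Renderer.h"               // the heavy include is paid here, once

void Widget::draw(Renderer& renderer) { /* ... */ }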
These are the kinds of refactorings you often need to justify with hours of arguments and approvals and religious fights and code reviews at work, but can do in an afternoon in a private project just because it makes the code nicer.
Might not make much of a difference with precompiled headers, but if you can't or don't use those, this should still provide quite a bit of speed-up. I went through the same exercise back when I did C++, but couldn't use PCH because some dependency did weird things and it didn't exactly work anymore.
These days I wish there was something similarly easy to speed up Webpack builds ...
Yes, but doesn't std::unique_ptr introduce compiler / toolchain constraints? I.e. if the class declaration was in a header file of a library, as a user of the library you'd be bound to the same toolchain as the library, no?
Considering this is C++ rather than C, and C++ doesn't officially have a portable ABI to begin with, you'd have to use the same toolchain regardless.
In practice you get a strong degree of compatibility between Clang and GCC at the compiler level (since Clang basically copies GCC's ABI). But you also get the same fudging at the stdlib level with std::unique_ptr, because it's header-only and is a zero-overhead abstraction over a raw pointer (i.e. at the ABI level a normal unique_ptr is the same as a pointer, except you can't pass it around in a register).
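Put differently, with the default (stateless) deleter a unique_ptr is just a pointer-sized object on the common implementations; a quick sanity check, noting this is an implementation property rather than something the standard mandates:

#include <memory>

struct Impl;  // incomplete is fine for this check

// Holds on libstdc++, libc++ and the MSVC STL with the default deleter;
// the standard does not strictly guarantee it.
static_assert(sizeof(std::unique_ptr<Impl>) == sizeof(Impl*),
              "unique_ptr with a stateless deleter is pointer-sized here");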
> and maybe more surprisingly, add/removing virtual functions
I don't have the spec handy, but I'm fairly certain that the v-table implementation is compiler-specific, and while it may work it isn't guaranteed. However, if you declare the function symbols as dynamic, then you can leverage the linker to dynamically resolve the right symbols with the matching opaque data and achieve binary compatibility (assuming you use the same compiler or a compatible compiler ABI, and all the other caveats around C++ binary compatibility).
vtable layout is defined by the ABI, which is (mostly) consistent across major compilers everywhere except MSVC; however, if MSVC ever broke vtable layout, then everything that relies on COM would break on Windows. Which is basically all of Windows user space.
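For what it's worth, the reason the "add/remove virtual functions" claim can hold with pimpl is that the virtuals live on the hidden Impl, so the public class's layout (and lack of a vtable) never changes; a sketch with invented names:

// C.h -- shipped to users; layout is always exactly one pointer, no vtable
#include <memory>

class C {
public:
    C();
    ~C();
    void doThing();              // non-virtual forwarder: stable symbol, stable layout
private:
    struct Impl;
    std::unique_ptr<Impl> mImpl;
};

// C.cpp -- free to grow or shrink between releases
struct C::Impl {
    virtual ~Impl() = default;
    virtual void doThingImpl() { /* ... */ }  // adding/removing virtuals here only
                                              // touches Impl's vtable, which callers
                                              // never see directly
};

C::C() : mImpl(std::make_unique<Impl>()) {}
C::~C() = default;
void C::doThing() { mImpl->doThingImpl(); }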
> Maybe? I can still think of ways to have ABI problems in the implementation of class C.
Yes, which is why GP is speaking in terms of allowance. Using this pattern you can retain ABI compatibility; that doesn't mean you necessarily do, and it's otherwise a free-for-all.
It's kind of ironic that you declared it protected given that the poor subclasses won't actually have a definition to work with... which is incidentally one of the reasons not to do this.
> It's kind of ironic that you declared it protected given that the poor subclasses won't actually have a definition to work with... which is incidentally one of the reasons not to do this.
I see you haven't heard of the occult powers of peeking into the source file, copying the Impl struct definition into your own source, and going for a big bad reinterpret_cast
The usual solution is to put Impl in a separate implementation header, so that implementation headers of derived classes can include it, but the rest of the world doesn't need to see it.
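Roughly (hypothetical file names): the Impl definition moves into its own header that only the implementation side includes:

// C.h -- public header, Impl stays opaque here
#include <memory>

class C {
protected:
    struct Impl;
    std::unique_ptr<Impl> mImpl;
};

// C_impl.h -- "implementation header": included only by C.cpp and by the
// .cpp files of classes derived from C
#include "C.h"

struct C::Impl {
    int sharedState = 0;
};

// Derived.cpp
#include "Derived.h"   // class Derived : public C { ... };
#include "C_impl.h"    // derived implementations get the full Impl definition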
I mean, yes, but that's just the start of it. Suddenly you need to add contortions (like, say, 2-phase initialization) if you need to e.g. add virtual methods to Impl. You can keep adding workaround after workaround until everything works; my general point is just that supporting subclasses now becomes more painful, and you end up having to (in some sense) fight the language. It's nowhere near as free of a lunch as people make it out to be.
I don't see why you need 2-phase for virtual functions.
But yes, I don't think anybody claims it is a free lunch. It is annoying, verbose, and repetitive, but often necessary to keep code base complexity in check.
> I don't see why you need 2-phase for virtual functions.
I might be misremembering the "for virtual functions" part, but the inability of C to call Impl's constructor directly in the presence of a derived class can sometimes force you to delay some of the initialization until after Impl is constructed. ("Necessary" might be too strong here, in that you could find some other workaround too.)
> often necessary to keep code base complexity in check.
Hmm... I'm not sure I agree. A pimpl-like idiom can be necessary for solving a few very specific problems, which are explained in [1] better than I can here: (a) ABI stability, (b) slow compilation, and (c) exception safety. There are other niche cases I can think of, e.g. (d) "I need fast/atomic swapping like a reference type, but copying like a value type", and even those might have better solutions, but none of them really has anything to do with code complexity... they're either domain requirements that you have or you don't, as in (a)/(c)/(d), or they're workarounds for slow toolchains, as in (b). But unless you have requirements or constraints like these, I have a hard time recalling any common situation where pimpl would be the best solution, especially for taming complexity.
That breaks make_unique. Maybe unique_ptr inside an std::any? Error-prone casting, but managed lifetime. Though make_unique is maybe obsolete now that parameter evaluation order is better defined.
This also significantly hurts readability, and I would hate the person who writes code like this unless they really need binary compatibility (and if they do, it is a sign that they should improve their release schedule).
Mostly, you just want to decouple the interface of the class from the implementation details.
In that case do just that - define an interface and implement it elsewhere.
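i.e. the plain abstract-base-class route, something like this (invented names):

// Logger.h -- all users see is the interface and a factory
#include <memory>

class Logger {
public:
    virtual ~Logger() = default;
    virtual void log(const char* message) = 0;
};

std::unique_ptr<Logger> makeFileLogger(const char* path);

// FileLogger.cpp -- the implementation lives entirely elsewhere
#include "Logger.h"
#include <cstdio>

namespace {
class FileLogger final : public Logger {
public:
    explicit FileLogger(const char* path) : mFile(std::fopen(path, "a")) {}
    ~FileLogger() override { if (mFile) std::fclose(mFile); }
    void log(const char* message) override {
        if (mFile) std::fprintf(mFile, "%s\n", message);
    }
private:
    std::FILE* mFile;
};
} // namespace

std::unique_ptr<Logger> makeFileLogger(const char* path) {
    return std::make_unique<FileLogger>(path);
}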
Caring about binary size is probably the first one. In my experience, increases in binary size cause an exponential increase in link time. You may have licensing restrictions (LGPL, for example), or be using a binary that was provided to you. Your source code might be set up in such a way that a monolithic build doesn't make sense.