Only an extraordinarily ambitious personal project would experience a significant amount of build latency on modern compilers and CPUs, no matter what approach is used. I don't think you can reason from "it works for personal projects" to "it's fast."
One thing I can tell you immediately about this unity build idea that makes it suboptimal for many codebases: modern compilers do not have much intra-translation-unit parallelism. If there's more in your project than one CPU core can compile quickly (i.e. most shipping products), that's going to be a serious bottleneck.
> One thing I can tell you immediately about this unity build idea that makes it suboptimal for many codebases
To give you a data point, in my case, https://github.com/OSSIA/score (a fairly mundane C++ project totaling ~360kloc in roughly 1300 .cpp files / 1800 .hpp files, split across ~15 libraries, using boost, Qt and a few other common libs), unity builds (one .cpp per library) divide the build time by 5 compared to PCH.
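In case it helps to picture the setup, a per-library unity translation unit is just a .cpp that includes the library's other .cpp files. A minimal sketch with made-up file names (not the actual score sources):

    // unity_mylib.cpp -- hypothetical per-library unity translation unit.
    // Heavy headers (Qt, boost, ...) pulled in by these sources get parsed
    // once per library instead of once per .cpp file.
    #include "widget.cpp"
    #include "model.cpp"
    #include "serializer.cpp"
    // ...and so on for every other .cpp in the library.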
360kloc certainly meets my definition of extraordinarily ambitious for a personal project. :) Well done you!
"Unity" builds certainly can speed up project compilation when used judiciously. This is easily shown by imagining an absurd C++ codebase where each method or function is in its own module. This would increase the overhead of parsing .h files. Any two functions that could be combined into one while keeping the h files included the same would almost certainly reduce this overhead. But I doubt your project would compile faster if all of the cc files were combined into one.
> Only an extraordinarily ambitious personal project would experience a significant amount of build latency on modern compilers and CPUs, no matter what approach is used.
If only that were true. If you write modern C++ and use the STL and popular libraries, even very small projects can take minutes to compile. IIRC, the last time I tried this approach, a ~200 LoC throwaway project using the STL, Eigen and spdlog (which includes fmt) took almost a minute to compile an optimized build. I realize we may be using different definitions of "significant" here, but to me that's just unacceptable.
And, as others have already indicated, if your project really is large enough to need parallelism, just use as many translation units as you have cores. That doesn't undermine the unity build idea at all.
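A minimal sketch of that middle ground, with made-up file names: hand-roll roughly one unity TU per core, so make/ninja can still run one compiler job per core while each batch only parses its headers once. (CMake 3.16+ can generate such batches automatically via the UNITY_BUILD target property and UNITY_BUILD_BATCH_SIZE.)

    // unity_batch_0.cpp -- one of N hand-rolled unity TUs (hypothetical names).
    // The build system compiles unity_batch_0.cpp ... unity_batch_N-1.cpp in
    // parallel, one per core, while headers are parsed once per batch.
    #include "parser.cpp"
    #include "lexer.cpp"
    #include "ast.cpp"

    // unity_batch_1.cpp would include the next batch of sources, and so on.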
I am not talking about hobby projects here, but codebases with more than 100K lines of code.
I agree with your objection; scaling up to 2-8 compilation units to use several cores is something I also plan to test and benchmark.
And to be honest, I don't completely understand why the classic way of compiling (makefiles + dozens of separate files) is so slow, but there is clearly something fishy, as it is very often unbearable.
Slow compilation can be very detrimental to productivity, as it gives the mind room to wander and lose focus.