> questionable tools (with large shadows) such as Bazel and gRPC,
Bazel and gRPC, whilst both saddled with problems stemming from Google's inwardly-focused engineering culture (and also just some sub-optimal historical decisions), are not "questionable tools" in the sense that they solve the wrong problems or that something else solves the same problems much better.
Blaze/Bazel (and its various clones like Buck) are basically the only general-purpose build systems out there that even attempt the basic task of a build system, namely actually figuring out what has changed and needs to be rebuilt. So of course they are gonna use it; nothing else comes even close (and the main downsides don't apply to them).
Similarly, gRPC has a lot of warts in how its encoding, type system, API and transport work. But anything in that space that doesn't completely suck (such as Cap'n Proto) is basically a clone of the core design. Again, what else even attempts to solve the core problem of having backwards/forwards-compatible, reasonably efficient RPC, messaging and data storage?
I don't have anything to say about Bazel, but the bit about gRPC assumes that the alternatives are just as bad.
The closest alternative is Thrift. It's not as popular, alas, but its design is a lot better: you get to combine various encodings (where protobuf mostly has its binary encoding, JSON, and the underspecified text format for configuration), and you get to pick a transport instead of being forced to use HTTP/2 with questionable features. You could run Thrift RPC over TCP with basic framing, or over 0mq, or HTTP, or anything else. I'd also argue that Thrift's data representation is better than protobuf's (nesting), but I'll write about that later.
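To make the transport/protocol split concrete, here is a minimal sketch of a Thrift client in Python, assuming a hypothetical `Calculator` service with an `add` method generated from a .thrift file (the `calc` module name is made up). Swapping the wire format or the transport is a one-line change, independent of the service definition:

```python
from thrift.transport import TSocket, TTransport
from thrift.protocol import TBinaryProtocol  # could swap for TCompactProtocol or TJSONProtocol

# Hypothetical module generated by `thrift --gen py calculator.thrift`
from calc import Calculator

# Transport and protocol are picked independently of the service definition:
# raw TCP with buffered framing here, but it could be a file, HTTP, 0mq, etc.
transport = TTransport.TBufferedTransport(TSocket.TSocket("localhost", 9090))
protocol = TBinaryProtocol.TBinaryProtocol(transport)
client = Calculator.Client(protocol)

transport.open()
print(client.add(1, 2))  # plain RPC call over TCP, no HTTP/2 in sight
transport.close()
```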
So protobuf is yet another "worse is better" from Google: a ton of warts, choices that might make sense for Google itself but not for a lot of other situations (HTTP/2!), and still winning over superior alternatives :/
It is indeed possible that there is a philosophical gulf. However, it is also possible that engineers who dislike Bazel (and love things like Make and CMake) haven't worked in large, multi-language organizations.
I'm not aware of the author's history, so I don't want to imply they haven't; I'm speaking generally.
Bazel is a tool that exists to solve specific problems. And it (and clones) does that miles better than any tool out there.
1. Caching, while maintaining hermeticity and correct dependency tracking.
2. Remote execution - yes, contrary to the author's general focus on small, specialized, C (or similar) tools, products out there have to deal with thousands of dependencies that take forever to build and are nice to offload to a beefy server. They also need to support several different devices and architectures.
3. Actually being able to run tests, use the same caching mechanism on tests, surface those results to CI.
4. Being able to plug arbitrary languages into the build system, with a sane DSL (a subset of Python) to write rules in, because not every project is written in C/C++, or in a single language (see the BUILD sketch after this list).
5. Having very clear separation of build phases so you can't shoot yourself in the foot.
6. A lot of hooks to provide better integration with CI, as well as to collect profiling info from your end users, so you can actually make life better for your engineering org as a member of the developer tools/infrastructure team.
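To illustrate, here is a minimal, made-up BUILD file sketch in Bazel's Starlark DSL (target and dependency labels are hypothetical). The explicit, fine-grained targets are what make points 1, 3 and 4 above possible: every target's inputs are declared and hashable, tests are just another cached target, and rules for other languages plug into the same dependency graph:

```python
# BUILD file, written in Starlark (a restricted dialect of Python).
cc_library(
    name = "geometry",
    srcs = ["geometry.cc"],
    hdrs = ["geometry.h"],
    deps = ["//third_party/eigen"],  # explicit deps are what make exact caching possible
)

cc_test(
    name = "geometry_test",
    srcs = ["geometry_test.cc"],
    deps = [":geometry"],  # `bazel test` reuses the same cache and surfaces results to CI
)

py_binary(
    name = "calibration_tool",
    srcs = ["calibration_tool.py"],  # a different language, same build graph
)
```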
In general, my gripe with Bazel complainers is that there is a loud community of them on the Internet whose primary software engineering experience is in languages that build fast, in projects that are small and have few dependencies, or in projects with only a small number of people.
There is a world of software out there beyond the Web and C UNIX utilities. Try doing any high-end computer vision, robotics or HPC work and you have to deal with a bunch of dependencies trying to solve very complicated problems. One can discuss whether some of those dependencies are well designed or not, but at the end of the day they are very good at their job and don't have much competition.
Bazel solves a lot of problems for a lot of these people, so calling it "questionable" is quite ignorant.
I have worked with Bazel, in a large, multi-language organization, but most of these points are solving the wrong problem. Often I argue for questioning the problems rather than taking them at face value and building an unnecessarily complex solution. Bazel is designed for managing complexity - and adding tons more in the process - while the right solution is to reduce complexity.
> I disagree, and there is a huge philosophical gulf of understanding between my view and those who feel differently.
Let me ask you two simple questions:
1. I work on a project in a git repo and do a git pull to get the latest changes to the branch. I do the equivalent of `make` with my build system, and encounter a problem (either a build error, or some unexpected bad behavior from the built artifact). Should I be allowed to conclude that someone has messed up, or do I first have to engage in some gyrations to make sure I have a "clean" build and the correct dependencies (git clean -dxf, issuing commands to manually check and update dependencies, etc.)?
2. I push some code after running the equivalent of `make test`, which passes successfully. A collaborator informs me that after pulling their build is broken or there is some unexpected failure, which does not appear to be a flake. Should I just expect this to happen every now and then and live with it?
If your answer to 1. is "No" and your answer to 2. is "Yes", then there may indeed be a gulf of understanding, but probably not a "philosophical" one. If your answers are "Yes" and "No" respectively, then I'd certainly love to hear which philosophically more aligned general-purpose build tools have this property, because I'd be interested in investigating them. The only other tools in this space that I'm currently aware of that even make an attempt in this direction either work at a different granularity (nix) or are unreleased/more of an academic POC (redo/shake).
> basically the only general purpose build systems out there that even attempt to do the basic tasks of a build system, namely actually figuring out what has changed and needs to be rebuilt.
Extraordinary claim. You will need to explain how Bazel does that and, say, CMake + Ninja don't.
If you think this claim is extraordinary and CMake is a counterexample, I don't know what to say. Have you really never had to manually futz around with stale builds (e.g. by issuing "clean" commands, removing build/ directories, etc.) when using CMake, or had to figure out why something worked on your machine but not on a colleague's?
> You will need to explain how Bazel does that and, say, CMake + Ninja don't.
CMake makes no serious effort at all at "figuring out what has changed". Even trivial stuff doesn't work. If you have a wildcard pattern in a CMake file and you add a file that matches the wildcard, you will generally get a stale result (which is why it's "not recommended"). So not even explicitly specified inputs work correctly, and CMake makes no real attempt to detect unspecified implicit dependencies. Does your CMake build setup correctly detect when you updated gcc or some system library? That some build artifact implicitly depends on another?
By contrast, Bazel goes to a fair amount of effort not only to detect changes to specified dependencies correctly, but also to sandbox build steps to check that they don't depend on stuff that has not been explicitly specified. As the Blaze docs say:
> When given the same input source code and product configuration, a hermetic build system always returns the same output by isolating the build from changes to the host system.
You can get pretty close with bazel; good luck with CMake.
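As a concrete (hypothetical) example of what "explicitly specified" buys you, here is a sketch of a Bazel genrule. On platforms where sandboxing is enabled, the command only sees the declared srcs, so reaching for an undeclared file fails loudly instead of silently producing a stale artifact the way a Make/CMake rule with an unlisted dependency can:

```python
# Hypothetical BUILD snippet. The sandbox contains only the declared inputs;
# something like `cat /path/to/unlisted_file` inside cmd would simply not find it.
genrule(
    name = "gen_config",
    srcs = ["config.tmpl"],   # declared input: hashed and tracked, so edits trigger a rebuild
    outs = ["config.h"],      # declared output: the action fails if it isn't produced
    cmd = "cat $(location config.tmpl) > $@",
)
```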
I might have to `make clean` with CMake after a distro compiler upgrade where the new compiler ends up behind the same symlink, or something; dunno, I don't really remember doing that. Otherwise, no `make clean`, because I understand how CMake works. Really, I do. Including with generated code and all that. Wildcards in CMakeLists.txt are misuse.
System libraries are intentionally disregarded as possibly changed dependencies, that is a design decision that most build systems make. System libraries are not supposed to change in binary incompatible ways without a major version upgrade.
Your last point is about reproducible builds, which is a different topic.
Even [task](https://github.com/go-task/task) (a tiny task-runner tool) does that... I don't know any build system that doesn't.
Perhaps the OP means full incremental compilation, which requires "cooperation" from the compilers, really (or the build tool actually parsing the language's AST like Gradle does, I believe). Or in the case of Bazel, the build author explicitly explaining to the build what the fine-grained dependencies are (I don't use Bazel so I may be wrong, happy to be corrected).
Make, CMake, Cargo and npm are all examples of popular build tools that require you to manually "clean" to correct build problems. Make and CMake pretty much all the time, IME; Cargo less so, but it happens. Bazel/Blaze and derivatives basically don't.