A big problem with Bazel not mentioned here is the complexity. It's just really hard for many people to grasp, and adopting Bazel at the two places I worked was a ~10 person-year effort for the rollout, with ongoing maintenance after that. That's a lot of effort!
IMO Bazel has a lot of good ideas in it: hierarchical graph-based builds, pure hermetic build steps, and so on. Especially at the time, these were novel ideas. But in Bazel they are buried behind a sea of other concepts that may not be so critical: `query` vs `aquery` vs `cquery`, action-graph vs configured-action-graph vs target-graph, providers vs outputs, macro vs rule-impl, etc. Some of these are necessary for ultra-large-scale builds, some are compromises due to legacy, but for the vast majority of non-Google-scale companies there may be a better way.
I'm hoping the next generation of build tools can simplify things enough that you don't need a person-decade of engineering work to adopt it. My own OSS project Mill (https://mill-build.org/) is one attempt in that direction, by re-using ideas from functional and object-oriented programming that people are already familiar with to make build graphs easier to describe and work with. Maybe a new tool won't be able to support million-file monorepos with 10,000 active contributors, but I think for the vast majority of developers and teams that's not a problem at all.
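For a flavor of what that looks like, here's a minimal Mill build sketch (module names and Scala version are made up for illustration): modules are ordinary Scala objects, and the build graph falls out of plain object references.

```scala
// build.sc -- hypothetical minimal Mill build
import mill._, scalalib._

object foo extends ScalaModule {
  def scalaVersion = "2.13.14"
}

object bar extends ScalaModule {
  def scalaVersion = "2.13.14"
  // The dependency edge is just an ordinary reference to another object
  def moduleDeps = Seq(foo)
}
```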
> other concepts that may not be so critical: `query` vs `aquery` vs `cquery`, action-graph vs configured-action-graph vs target-graph, providers vs outputs, macro vs rule-impl, etc
Almost all of the distinctions you mentioned are related to the way that Bazel has the concept of a "target", which lets the build graph work at a higher level than individual files.
This lets us define, at a high level, that ":foo" and ":bar" are C/C++ libraries, and that bar depends on foo. This is the build graph of targets, and it's independent of any particular files that these rules may produce (.o, .a, .so, etc).
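In BUILD-file terms, that two-library example might look like this (a hypothetical sketch; the file names are invented):

```starlark
# BUILD -- hypothetical example
cc_library(
    name = "foo",
    srcs = ["foo.cc"],
    hdrs = ["foo.h"],
)

cc_library(
    name = "bar",
    srcs = ["bar.cc"],
    hdrs = ["bar.h"],
    deps = [":foo"],  # a target-level edge; no mention of .o/.a/.so files
)
```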
It's nice to be able to query the build graph at this high level. It lets you see the relationship between components in the abstract, rather than a file-by-file level. That is what "bazel query" does.
But sometimes you might want to dig deeper into the specific commands (actions) that will be executed when you build a target. That is what "bazel aquery" is for.
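Concretely (using a made-up label), the two levels look like:

```shell
# Target graph: which targets does //lib:bar depend on?
bazel query "deps(//lib:bar)"

# Action graph: which concrete compile/link commands would actually run?
bazel aquery "//lib:bar"

# Like query, but after build flags/configuration have been applied:
bazel cquery "deps(//lib:bar)"
```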
Macros vs. rules is basically a question of whether the build logic runs before or after the target graph is built. A macro lets you declare a bit of logic where something that looks like a target will actually expand into multiple targets (or have the attributes munged a bit). It is expanded before the target graph is built, so you won't see it in the output of "bazel query."
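A tiny Starlark sketch of that (with hypothetical names): a macro that looks like one target at the call site but declares two. Since it's expanded before the target graph exists, "bazel query" only ever shows the expanded cc_library and cc_test, never "my_cc_library".

```starlark
# defs.bzl -- hypothetical macro
def my_cc_library(name, srcs, **kwargs):
    native.cc_library(
        name = name,
        srcs = srcs,
        **kwargs
    )
    native.cc_test(
        name = name + "_test",
        srcs = [name + "_test.cc"],
        deps = [":" + name],
    )
```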
If you took away the target graph, I think you'd take away a lot of what makes Bazel powerful. A key idea behind Bazel is to encapsulate build logic, so that you can use a rule like cc_library() without having to know how it's implemented or exactly what actions will run.
I don't say this to minimize any of the pain people experience when adopting Bazel. I'm actually curious to learn more about what the biggest pain points are that make it difficult to adopt.
This great comment is another example of what's bad about Bazel: it has the least enlightening documentation. Bazel's docs are thorough and useless; every page you read assumes you already understand the concepts described on that page.
This comment explains query, actions, and macros pretty decently, and I doubt you could find an explanation of these things in the Bazel docs that a new user could understand.
I care a ton about fast and accurate build systems, but one issue I think we haven't solved is that people do not want to use a different build tool for their language. "Why isn't it Cargo? Why not use NPM/Yarn? Why not use Pip? Why not CMake?" These questions are often rhetorical, because they do not care about build systems. They don't care about the design. They don't care if your CI could be 5x faster. You will never make them care. It's good enough. You must have absolutely zero externalized cost (and therefore put in a lot of effort) to get over this hurdle. There's seemingly no way around it.
The reason a lot of people like Bazel is, I think, tools like Gazelle -- which reduce that whole problem back to "Run gazelle" and all the crap is taken care of for you. Dependencies, BUILD files, etc. People constantly talk about the "complexity" aspect, but very few people appreciate how complex Cargo, NPM, Yarn, Cabal, Dune, internally are. Because they just run "build", and it works. Bazel, Buck2, Mill, etc will all have this problem unless huge effort is put in.
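For reference, typical Gazelle wiring is just a target in the root BUILD file (the prefix below is a placeholder), after which `bazel run //:gazelle` regenerates BUILD files across the repo:

```starlark
# BUILD.bazel at the repo root -- standard Gazelle setup, placeholder prefix
load("@bazel_gazelle//:def.bzl", "gazelle")

# gazelle:prefix github.com/example/myrepo
gazelle(name = "gazelle")
```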
TBH, this is one of the reasons why I think Nix has wildly succeeded in the past few years while more fine-grained and scalable systems have had adoption problems -- despite its numerous, numerous flaws. You get to Bring-Your-Own-Build-System, and Nix along with the blood of 10,000 upstream contributors keeps the juice flowing, and it's cached and hermetic so you see real savings. That greatly eases people into it. So they adopt it at all points on the curve (small, medium, huge projects), because it works with what they have at all those points. That makes them willing to get deeper and use the tool more.
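That Bring-Your-Own-Build-System shape shows up even in a minimal Nix derivation (a sketch, assuming a CMake-based project; pname and version are invented): Nix wraps the project's existing build instead of replacing it.

```nix
# default.nix -- minimal sketch for a hypothetical CMake-based project
{ pkgs ? import <nixpkgs> {} }:

pkgs.stdenv.mkDerivation {
  pname = "myapp";
  version = "0.1.0";
  src = ./.;
  nativeBuildInputs = [ pkgs.cmake ];
  # stdenv's generic builder drives the project's own build system
  # through its standard configure/build/install phases.
}
```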
Bazel runs into the problem that it expects to have the complete well-defined understanding of the inputs and outputs for your project. This might have made sense when Blaze was first designed and projects were done in languages with compilers that had rigid inputs and outputs. But now we're in a world where more and more systems are becoming layers of compilers, where each compiler layer wants to just have a bunch of dependencies thrown at it. In a frontend project, it wouldn't be weird for Tailwind CSS to be compiled and embedded in SCSS, where it's pulled into a JSX module via some import that magically provides type checking with the CSS under the hood. And so you either need to handwave over it and lose some of the benefits of incremental builds, or spend time getting it to work and making it continue to work as you add new layers.
So in my mind, Bazel is no longer worth it unless the savings are so great that you can afford to staff a build team to figure these things out. Most teams would benefit more from simple command runners than from fully-fledged build systems.
I'm glad to see someone else describe their experience this way too.
Bazel has arrived at $WORK and it has been a non-trivial amount of work even for the passionate advocates of Bazel. I know it was written by the Very Smart People at Google. They are clearly smarter than me, so I must be the dummy. Especially since I never passed their interview tests. :-)
Of course, given all things Google, by the time I'm fully on board the train, the cool kids will be building a new train, and then I'll have to hop onto that one to enjoy the rewards of the promised land that never quite seem to arrive.
> know it was written by the Very Smart People at google
For Google. That's the key. I have the privilege of having experienced both sides, having been at Google for nine years. I never had a problem with Blaze, but using Bazel at a smaller company has been extremely painful. I think there are just very few places that have the exact same problems as Google, where something like Bazel would be a great fit.
That's the rub. It provides scalability for very large organizations, of which there are few. It's similar to running OpenStack. Meta also has some projects like this, such as Buck2, which lacks the really good virtual-FS acceleration stuff (Eden). Megacorp FOSS tends to skew toward offering whizbang features that are incomplete, complicated, poorly documented, and require a lot of extra work.
Actually, if you could make something like GitHub where all software is part of a single megarepo and built constantly, that would be incredibly useful, and Bazel would be excellent for that (or at least it's the closest thing we have to reasonable).
The problem with Bazel and almost every other build system (all except the "scan the source files and build a dependency graph" ones) is that you'll be writing build instructions for all of your dependencies that don't use it. If that were done for you, they'd be incredible.
Compiling things and wanting a robust build cache so developers spend less time waiting isn't a problem remotely unique to Google. You might not have Google scale to hire a team of developers to optimize it to the nth degree like they can, but holy shit, we are not gonna use Makefiles for advanced build systems anymore.
> Compiling things and wanting a robust build cache so developers spend less time waiting isn't a problem remotely unique to Google.
That wasn't my argument at all. Plenty of modern tools address this exact need; it isn't unique to Bazel. If you read the article, the author made many interesting remarks on how Bazel reflects the unique design choices of Blaze, which were often picked due to Google's needs.
My point is that when people hit these barriers, they need to understand that it's not because they are unintelligent or incapable of understanding a complex system. That's what the OP I responded to was saying, and I was just providing some advice.
Bazel is an investment to learn for sure, but your effort estimates are way overblown. We have a Python, Go, and TypeScript monorepo that I set up for our team, and I rarely have to touch anything. Engineers rarely think about Bazel, as we use Gazelle to generate all our build files and have patterns for almost everything we need to do.
Compared with build efforts using other tools and non-monorepo setups at other companies, the effort here has felt much reduced.