Your example does not contradict what I wrote. You manually specified the tool to be run ($CC) and all of the arguments to that tool.
It's true that there is a level of indirection through the $CC variable, but you're still operating at the level of specifying a tool's command-line.
> There's no reason this shouldn't be possible with make; it just hasn't been implemented so.
Make is 44 years old. If it were an easy extension to the existing paradigm, someone would have implemented it in all that time.
> Do bazel/buck/please actually do this? As far as I know tup is the only tool that actually verifies inputs/outputs of rules, and it needs FUSE to do so.
Bazel certainly has some amount of sandboxing, though I don't think it's quite as complete as what is available internally at Google with Blaze. I haven't used Buck or Please, so I can't speak for them.
> True, it's a bit of a footgun, but by no means difficult.
Well, footguns aren't great. :) As just one example, any header that is conditionally included (behind an #ifdef) could cause this cache to be invalidated when CFLAGS change, but Make has no idea of this.
Notably, linklibrary can be defined according to the platform, or according to dynamically set variables, giving you the same level of flexibility as make.
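For instance, such a macro might be sketched in Starlark like this (the names, labels, and flags here are assumptions, not from the original example), with platform differences resolved by select() rather than by textual variable substitution:

```python
def linklibrary(name, srcs):
    # Hypothetical macro: platform-specific link flags are chosen
    # by bazel's select() at analysis time.
    native.cc_library(
        name = name,
        srcs = srcs,
        linkopts = select({
            "@platforms//os:linux": ["-ldl"],
            "//conditions:default": [],
        }),
    )
```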
>> There's no reason [sandboxing] shouldn't be possible with make; it just hasn't been implemented so.
> Make is 44 years old. If it were an easy extension to the existing paradigm, someone would have implemented it in all that time.
‘Nobody's done it yet, ergo it's not possible or easy’ is not a valid argument.
> just one example, any header that is conditionally included (behind an #ifdef) could cause this cache to be invalidated when CFLAGS change, but Make has no idea of this.
Then you specify that the source files depend on the makefile.
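That fix looks roughly like this in a Makefile (the pattern rule is illustrative):

```make
# Rebuild every object whenever the Makefile itself changes, so that
# edits to CFLAGS inside the Makefile are picked up.
%.o: %.c Makefile
	$(CC) $(CFLAGS) -c $< -o $@
```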
> Then you specify that the source files depend on the makefile.
Then you gratuitously rebuild everything whenever the Makefile changes, even if you only changed a comment.
Also, this scheme is incorrect if the Makefile was written to allow command-line overrides of variables like CFLAGS, as many Makefiles do.
But these are just details. The larger point is this. The language of Bazel is defined such that builds are automatically correct and reproducible.
While it's true that Make has some facilities like "call" that support some amount of abstraction, it is up to you to ensure the correctness. If you get it wrong, your builds are back to being non-reproducible.
It's like the difference between programming in a memory-safe vs. a memory-unsafe language. Sure, everything you can do in the memory-safe language can be done in the unsafe language. But the unsafe language has far more footguns and requires more diligence from the programmer to get reasonable results.
> ‘Nobody's done it yet, ergo it's not possible or easy’ is not a valid argument.
Ultimately, make and co are text-oriented, while bazel and co are object-oriented. Hacking object-oriented capabilities into a text-oriented language isn't particularly fruitful or ergonomic.
Bazel and make are both text-based languages for describing symbolic, abstract structures. Just like pretty much every other programming language.
Bazel and make both use the same abstract structure, just like pretty much every other build system: a directed graph of build directives and dependencies.
Fundamentally, bazel and make treat "targets" differently. A make target is an invokable thing; that's about the extent of it. You have a dag of invokables, and invoking one will cause you to invoke all of its dependencies (usually; others have already discussed the limitations of make's caching).
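In make terms, that dag is nothing more than recipes and file prerequisites (file names made up):

```make
# Invoking "app" invokes its dependencies first; each target is just
# an invokable recipe with file prerequisites.
app: main.o util.o
	$(CC) -o app main.o util.o
main.o: main.c
	$(CC) -c main.c
util.o: util.c
	$(CC) -c util.c
```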
But let's look at how a rule is implemented in bazel[0]. Here's a rule "implementation" for a simple executable rule[1]:
    def _impl(ctx):
        # The list of arguments we pass to the script.
        args = [ctx.outputs.out.path] + [f.path for f in ctx.files.chunks]
        # Action to call the script.
        # actions.run will call "executable" with "arguments",
        # saving the result to "outputs"; access to files not
        # listed in "inputs" will cause errors.
        ctx.actions.run(
            inputs = ctx.files.chunks,
            outputs = [ctx.outputs.out],
            arguments = args,
            progress_message = "Merging into %s" % ctx.outputs.out.short_path,
            executable = ctx.executable.merge_tool,
        )

    concat = rule(
        implementation = _impl,
        attrs = {
            "chunks": attr.label_list(allow_files = True),
            "out": attr.output(mandatory = True),
            "merge_tool": attr.label(
                executable = True,
                cfg = "exec",
                allow_files = True,
                default = Label("//actions_run:merge"),
            ),
        },
    )
This is, admittedly, not easy to follow at first glance. concat defines a "rule" (just like cc_binary) that takes three attributes: "chunks", "out", and "merge_tool" (and "name", because every target needs a name).
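A hypothetical BUILD file using this rule could look like the following (the chunk files are made up; merge_tool falls back to the default label from the rule definition):

```python
concat(
    name = "merged",
    chunks = ["a.txt", "b.txt"],
    out = "merged.txt",
)
```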
Targets of this form have metadata: input and output files that are known and can be queried as part of the dag. Other types of rules can be tagged as test or executable, so that `blaze test` and `blaze run` can autodiscover test and executable targets. This metadata can also be used by other rules[2], so that a lot of static analysis can be done as part of dag creation, without even building the binary. To give an example, a test rule that checks properties of the dependency graph can be built and implemented natively within bazel, so such a test could actually run and fail before any code is compiled (in practice there are lots of more useful, although less straightforward to explain, uses for this kind of feature).
Potentially, one could create shadow rules that do all of these things, but you'd need to do very, very silly things like, off the top of my head, creating a shadow filesystem that keeps a file-per-make-target that can be used to query for dependency information (make suggests something similar for per-file dependencies[3], but bazel allows for much more complex querying). That's what I mean by "object-oriented". Targets in bazel and similar are more than just an executable statement with file dependencies. They're complex, user-defined structs.
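For reference, the per-file dependency scheme the make manual suggests looks roughly like this (assuming an OBJS variable listing the objects): the compiler emits a .d makefile fragment per object naming the headers that file actually included, and make re-reads those fragments on the next run.

```make
# -MMD writes a .d fragment alongside each object; -MP adds phony
# targets so deleted headers don't break the build.
%.o: %.c
	$(CC) $(CFLAGS) -MMD -MP -c $< -o $@

# Pull in the generated fragments; -include ignores ones that
# don't exist yet.
-include $(OBJS:.o=.d)
```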
This object-oriented nature is also what allows querying (blaze query/cquery/aquery), which is often quite useful for various sorts of things like dead or unused code detection or refactoring (you can reverse-dependency query a library that defines an API, see all direct users, and then be sure that they have all migrated to a new version). My personal favorite from some work I did over the past year or so was `query --output=build record_rule_instantiation_callstack`, which provides a stacktrace of any intermediate Starlark macros. Very useful when tracking down macros that conditionally set flags when you don't know why, and a level of introspection, transparency, and debuggability that just isn't feasible in make.
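The reverse-dependency query described above looks something like this (the target labels are made up, and these commands assume an existing bazel workspace):

```shell
# Everything in the workspace that depends, directly or transitively,
# on the library defining the API:
bazel query 'rdeps(//..., //lib:api)'
# Everything a binary depends on:
bazel query 'deps(//app:main)'
```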
That's what I mean by object-oriented vs. text-oriented. Bazel has structs with metadata and abstractions and functions that can be composed and that provide shared, well-known interfaces. Make has text substitution and files. While a sufficiently motivated individual could probably come up with something in make that approximates many of the features bazel natively provides, I'm confident they couldn't provide all of them, and I'm confident it wouldn't be pretty or ergonomic.
FUSE, strace, and namespacing were all the mechanisms I found. Bazel uses a separate wrapper program which you could reuse in other build systems, so there is no fundamental problem with adding a "hermetic builds" feature to other build systems like Meson or CMake.
Please takes sandboxing a bit further, using kernel namespacing to isolate builds and tests. It's an opt-in feature, but you can bind to port 8080 in your tests and run them in parallel if you do ;)