GCC has the ability to generate dependencies in the Makefile format using `gcc -MM`, which I guess is also for "Make Make" :). It does not create the whole Makefile, but it's a nice way to have a script (or a Make recipe) that maintains up-to-date dependencies for each target in a file that can be included from the Makefile (which then declares generic compilation recipes).
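A minimal sketch of that kind of setup, with a made-up source list and a deps.mk file holding the generated rules:

SRCS := main.c util.c

# Regenerate the dependency rules whenever a source file changes
deps.mk: $(SRCS)
	$(CC) -MM $(CPPFLAGS) $(SRCS) > $@

# '-' so the include doesn't fail before deps.mk exists the first time
-include deps.mk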
There's generally a much nicer way to handle this, so that the dependencies don't have to be generated before the first compile. Instead they're generated as a side effect of the first compile, which is sufficient: the first build is always complete anyway, and you only need granular dependency information for correct incremental rebuilds.
# Ask the compiler to write a .d dependency file next to each object as a side effect of compiling
CFLAGS += -MD
# Pull in whatever dependency files already exist; '-' keeps the first (full) build from failing
-include $(wildcard *.d)
I personally like to also pass `-MP`, which adds a phony target for each header so that make doesn't complain about a missing prerequisite when a header is intentionally removed.
You can pass `-MMD` instead of `-MD` to keep system includes out of the generated depfiles, just like the example you wrote does.
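For a hypothetical foo.c that includes foo.h and bar.h, the foo.d generated with `-MMD -MP` would look roughly like this:

foo.o: foo.c foo.h bar.h

foo.h:

bar.h:

The empty rules for the headers are what stop make from erroring out about a missing prerequisite once a header has been deleted.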
I also personally prefer to be explicit and avoid wildcards in makefiles wherever possible so I would do something like this:
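(A sketch only; the object and target names below are placeholders.)

OBJS := main.o parser.o util.o
DEPS := $(OBJS:.o=.d)

CFLAGS += -MMD -MP

app: $(OBJS)
	$(CC) $(LDFLAGS) -o $@ $(OBJS)

-include $(DEPS)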
That being said, this doesn't solve the "everything should depend on the Makefile" problem, nor the "what if I generate source code" problem, nor finally the "what if I have to compile C to generate C to compile my program" problem.
I think you can get away with 'include' here -- the (GNU only?) make behaviour when trying to include a non-existent file is to look for a target which creates that file, run it if found, and then try the include again. This has other applications: if your Common.mk is generated by autoconf (say), then
default: all

Common.mk:
	$(error Not configured)

include Common.mk
gives a nicer error message if you 'make' having not './configure'd
It's also not like this is new - I know I've been using gcc to output dependencies for Makefiles for at least 2 decades, presumably it was around long before then.
My interest in make and makemake has recently been rekindled.
I've mostly used CMake for my build systems over the past ~20 years.
But recently I got into CUDA programming, and I was put off by the large amount of opaque magic provided by CMake's built-in CUDA support.
It made it somewhat harder to have clarity about the variety of CUDA build commands, which kinds of file each could ingest and produce, and which options they accepted.
As someone new to the entire CUDA toolchain, that magic was more of an impediment than a help.
So for starters I'm just writing a Bash script for building my code. But the next step for automation will be a hand-written Makefile, not a CMakeLists.txt file.
That's the way to go in my opinion. The FreeBSD folks build their entire damn project with make, although I'll admit my eyes glaze over a bit anytime I take a look at it.
CMake is just a little too high level for my taste. Like, I don't really understand how it's figuring stuff out under the hood. Now that is of course because I haven't bothered studying up on it, but the point is you don't really have to with Make, since the basic concept is dead simple.
CoastalCoder is right - the name "makemake" has been in use for 20+ years by at least one other project, and a quick google reveals that many projects have used that name.
It is a bad idea to use "makemake" as a name. I have seen/heard about at least 5 different programs/scripts/... called makemake during the last 25 years, each of which was quite widespread (I wouldn't have heard of them otherwise ;). And nowadays a search turns up quite a number of GitHub repos called "makemake".
And there is always makedepend (and similar named programs) too, of course.
I don't want to sound too negative, but what does this version do that all the other existing 13843215 versions don't?
> In July 2008, in accordance with IAU rules for classical Kuiper belt objects, 2005 FY9 was given the name of a creator deity. The name of Makemake, the creator of humanity and god of fertility in the myths of the Rapa Nui, the native people of Easter Island, was chosen in part to preserve the object's connection with Easter.
It builds an Android APK with one command line on a fresh Linux install, to begin with (thanks to Felix). That's pretty cool.
I'd say the main use case is to move dependencies from a project file to the actual C files that have the dependencies. This way, if the dependencies change, all projects that use the file have their build process automatically updated. It also removes the need to list the files included in a project, since this information already exists in the .c files.
Any time you have to define the same thing in two places, you have the risk of them not being in sync. This way there is only one canonical ground truth. Source and build process can never be out of sync.
No, because MakeMake can't know what defines are made by the compiler on the system. MakeMake may not even run on the same system the resulting makefile will be used on.
This means that all source code files have to be compilable on all platforms, and that's good practice anyway, so that's not a problem for me.
The makemake pragma lets you define platform-specific build options.
> MakeMake was designed to solve my problem (I'm a C developer that develops
> portable C programs using Visual Studio, and I do not want to maintain
> up-to-date makefiles for all projects and platforms).
I just don't understand why you wouldn't use CMake for this? Especially since VS has built-in CMake support.
I use CMake, but can completely understand wanting to avoid it. The Makefiles that CMake produces are not at all what you want for correct reproducible builds.
* Before CMake 3.20, released in 2021, CMake tracked dependencies at configuration time. As a result, any time you added/removed a header file, you needed to re-run CMake in order to have correct builds. CMake 3.20 finally delegated this to the underlying compiler.
* The Makefile produced by CMake does not have a target for the object file. Even though it prints out the path `CMakeFiles/cmake_target.dir/src/my_file.cc.o`, and generates that file, you cannot run `make CMakeFiles/cmake_target.dir/src/my_file.cc.o` from inside the build directory. Instead, you need to run `make src/my_file.cc.o`, a PHONY target which actually builds the object file.
* Wildcard (glob) source lists in CMake are expanded to static rules in the generated Makefile rather than being kept as wildcard rules. As a result, any time you add/remove a file, such as when checking out a new branch, you need to re-run CMake in order to have a correct build.
* Until recently, the makefiles produced by CMake did not give useful information with `make --dry-run`. Instead of printing the command required to compile a file, they printed an internal delegation back to cmake.
* Non-transferable build. The `CMakeCache.txt` stores a record of what options have been enabled/disabled, but also has local file information. As a result, the contents of `CMakeCache.txt` are essential to know what I actually built, but I cannot send it to somebody else as a way to reproduce my build. Makefiles generally avoid this by having explicit include files, which then can be transferred.
So, in order to have correct and reproducible builds, I need to re-run CMake every time I'm building a project, the usual tools for debugging a build error are broken, and including my build environment in a bug report is useless for reproduction. I can see why somebody would want to avoid it.
Because CMake requires a configuration file for each project. With makemake I can build a new project with a single command line, without any configuration needed. It's one less thing to maintain.
If you're never using any external dependencies, it can work. It's just the no-dep C projects are few and far between. As soon as you have things like dependencies or compiler detection, CMake or other build systems really start picking up a lot of slack.
What constitutes “modern” here? ~15 years ago I was bitten by the CMake bug and enjoyed it for years until I got frustrated by it getting 90% of the job done and having to fight it for the last 10%. These days I have “reverted” back to BSD Make, and feel pretty good about it.
So… when did CMake start its “modern” phase? Does my experience resonate with yours at all?
The switch over to doing everything via target properties is the big difference between "modern" and "old school" CMake. This fixed a lot of CMake's problems other than the scripting language being awful and has mostly happened over the last decade.
CMake release notes pretty consistently have at least one thing that makes me go "how is that a thing that just got implemented now and not 20 years ago?" That's not exactly a positive, but there are a very large number of things where 15 years ago you'd be shocked and frustrated to discover that there's no good way to do it, while today there's an obvious and simple way.
I stopped considering CMake when I reported a bug (that worked in all major alternatives) and they actively refused to patch it (because it was based on a traditionally-UNIX API, and they refused to use it ... even though it was available on Windows too!).
Hmm. I see issues about finding executable-but-not-readable files, but nothing about `access(X_OK)` specifically (though GitLab's search may be failing here). Anyways, there are a number of…unfortunate behaviors that are baked in too deeply and take far longer to uproot than anyone likes. For example, the `EXISTS` predicate checked for readability…probably what was usually wanted, but sadly falls over when a broken symlink is left laying around. That one was fixed recently, but they crop up often enough.
Also note that while Windows might have POSIX APIs, sometimes the semantics are just not quite right. File permissions is one such place where things "exist" but act like they're in an alternate dimension (e.g., what would `umask` do on Windows?).
I don't understand: with gcc -MM (to generate .d files), -include to include them, and VPATH for source file discovery, a typical small project makefile is about half a page and requires very little maintenance.
More complicated project? Might shoot out to 2 pages. It'll still work in 20 years time too, the tools won't change under you (I'm looking at you, pretty much all other build systems)
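For reference, here's a rough sketch of the kind of half-page makefile being described; the layout, file names, and flags are all made up:

VPATH    := src
SRCS     := main.c parser.c util.c
OBJS     := $(SRCS:.c=.o)
DEPS     := $(SRCS:.c=.d)
CPPFLAGS += -Iinclude
CFLAGS   += -Wall -O2

prog: $(OBJS)
	$(CC) $(LDFLAGS) -o $@ $(OBJS)

# Each .d rule lists both the object and the depfile as targets,
# so the depfile itself is refreshed when a header changes
%.d: %.c
	$(CC) -MM $(CPPFLAGS) -MT '$(@:.d=.o) $@' -MF $@ $<

-include $(DEPS)

clean:
	$(RM) prog $(OBJS) $(DEPS)

.PHONY: clean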
Here is the fundamental problem it solves for me: my code is 99% pure C without dependencies. Then I have a few wrappers for platform-specific stuff. I have many, many projects that use the same wrappers. Makefiles store both what files are used and what dependencies there are, but that creates a lot of duplication. It's better if the wrappers themselves can say what dependencies they have. Then that's stored in one place, and a change affects all projects. Then we have the issue of what files are needed. That information already exists in the .c files, so again, why duplicate it? It's better to parse it out.
We had purchased a commercial MS-DOS utility with the exact same name back in the early 90's. It was reasonably priced ... I think it was $50 at the time. It successfully untangled the dependency chain in our code allowing for more optimal build times.
Nice quick and dirty hack, but the latency isn't great and it will hammer the file system in large projects calculating timestamps. It's especially bad on Windows, although polling on Mac has also historically been rather expensive (Linux is better in this regard, though polling still isn't free).
I'd much rather install entr if it's available and do something like
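Roughly something like this, wrapped in a make target (the directories and goals are placeholders):

watch:
	find src tests -name '*.[ch]' | entr -c $(MAKE) -j all tests

entr re-runs the command on every write and -c clears the screen first; adding -d makes it exit when new files appear, so a small outer loop can pick those up too.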
THIS is the lightest way to have continuous make, without wasting cycles and hammering the file system. inotifywait is a pretty standard tool that does exactly what's needed.
# Rebuild, then block until a file under src/ or tests/ is written to
watch:
	while true; do \
		clear; $(MAKE) -j all tests; \
		inotifywait -qre close_write src tests; \
	done
is a blog article I wrote a long time back about how to automate makefiles. There is a lot wrong with it, but some of the concepts might be useful for people trying to do better.
edit: in the blog I said "indirection" when I meant "redirection". Doh.
Yes. I had an early implementation that did look for any function declaration, but using extern made it more robust, and it enforces the use of extern, something I think helps with clarity.
[edited] Since extern is the default linkage for functions, and so can be (and often is) omitted in declarations and definitions, won't the program "miss" functions which are implicitly extern? (I've not inspected the code in detail, so I might well be wrong. To indicate this, an earlier version of this comment was prefixed with an "Er", which was taken by other commenters as a swipe; that was not my intention.)
> make make does this by looking for any function declaration using extern,
Taking that at face value (I'd not inspected the code in detail), then it would miss functions with implicit extern, no? My apologies if you found my posting offensive; that was not my intention.
E.g., `int foo(void);` declares foo with external linkage even though the extern keyword is omitted, so presumably a scanner keyed on the extern keyword would miss it, while `extern int foo(void);` would be picked up.