It still boggles my mind that CMake was the thing that won for the next generation of build systems. Its syntax is probably even worse than bash (except that it didn't have decades of compatibility to blame that on), making it the single worst language I've ever written code in. I struggle to understand how one could sit down to create a language and come up with something that bad.
There was apparently at one point an effort to replace its frontend with Lua, but that sadly never took off. (I, like everyone else, still hold my nose and use it, but man ... like ... bwa?)
The problem I have with CMake & autotools is that every project writes their build process from scratch, and everyone does things slightly differently, making each project a snowflake.
Certain platforms, compilers, and dependencies need special handling and workarounds, and this knowledge isn't shared and has to be repeatedly rediscovered by every project.
Rust's Cargo, perhaps accidentally, has found a fantastic solution to this. The build process itself is built out of packages. I don't have to know how to configure the MSVC compiler, I just use an existing crate for it. I can use an existing package to find and configure libpng — the work to handle its own quirky pkg-config replacement has been done once, and now every downstream project can benefit from it.
A package manager in the build system may sound strange, but it's actually fantastic. It works so well that I switched my C projects to Cargo to be able to build them on Windows.
Your critique is fair for autotools but not CMake, which encodes a ton of platform- and compiler-specific knowledge [1]. CMake modules can also be used to find libraries, e.g. FindPNG for libpng.
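For example (a minimal sketch with a hypothetical target and source name, assuming a reasonably recent CMake where FindPNG provides the PNG::PNG imported target):

    find_package(PNG REQUIRED)                      # runs the bundled FindPNG module
    add_executable(viewer main.c)                   # hypothetical target and source
    target_link_libraries(viewer PRIVATE PNG::PNG)  # imported target carries include dirs and libs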
The bundled Find* modules are great, but once you go beyond what's included you often run into problems.
In converting a recent project to CMake (so as to use CLion), I encountered all of the following:
- find_package modules that didn't support REQUIRED or VERSION options
- Lack of standardization on whether find_package variables include recursive dependencies
- Some modules supply imported targets, some don't, which requires consuming them differently
The fact is, I can't simply rely on find_package doing what I expect - I have to read each and every Find*.cmake I use.
On top of that, there's a bunch of conflicting information out there - should your source list include header files? Some older blog posts suggest it's necessary for dependency calculation, newer ones don't seem to mention it; I honestly have no idea what the right answer is.
IMO yes. Part of CMake's usefulness is IDE integration, so listing your headers means they show up in your IDE.
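Something like this (hypothetical names); the headers don't get compiled, so listing them costs nothing at build time:

    add_library(mylib
        src/foo.c
        src/bar.c
        include/mylib/foo.h   # headers aren't compiled, but listing them
        include/mylib/bar.h)  # makes them appear in generated IDE projects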
As for your points about find modules not working, that’s not really CMake’s fault. I’ve spent days fixing bad autotools/premake/scons/custom scripts (not to mention third-party code in general).
I agree that it's not CMake's fault, but it does fit the grandparent's statement that "everyone does things slightly differently".
It's also very fixable - a central site for collecting find_package modules and enforcing certain standards should be doable, and would improve CMake's UX significantly.
Yeah, agreed, a site of find_package modules would go a long way towards helping, but only if they’re vetted in some way, otherwise nothing really changes.
> I keep considering that we might want to port Cargo to support other languages... It's better than nearly everything out there that I've ever used.
I don't think that this would solve anything. A big problem, for instance, is maintainers who insist on keeping compatibility with totally obsolete stuff - see for instance zlib, which clings to DOS compatibility, or NetHack, which only started to use ANSI C features in the latest release; before that they were on K&R C. As a result, build systems are terrible, since they have to handle so many edge cases for obsolete compiler vendors, Watcom C & friends for instance. Rust code doesn't have this kind of legacy to handle.
For C it’s not as obvious how this could be accomplished, but it would be possible. There are lots of Rust projects that embed C today and pass C flags through to the underlying toolchain.
The language I was really thinking of was Java, where Cargo is far superior to Maven. I’ve never had time, but what I’ve thought would be neat is to take a Cargo.toml file as input and then generate the Maven files from it, the way Cargo generates Cargo.lock files in Rust.
The language isn't great, but I have never spent more than a few hours trying to figure out how to do something in CMake. In other build systems, I've had to give up on some features after dozens of hours of trying. CMake is ugly, but it won because it works.
> I have never spent more than a few hours trying to figure out how to do something in CMake.
A well designed tool would not need a user to spend "a few hours" to learn how to do a specific thing.
People are far too in love with their annoying build systems.
Soon, if not already, tools to build CMakeLists.txt from something simpler will exist. Eventually that will expand to the point it itself requires a tool to generate config files so that it can generate config files for cmake so that cmake can finally fail while generating whatever your compiler wants, and then you have to debug that shit.
I remain to be convinced that build tools are even fundamentally useful.
Build tools not useful? So what, you just re-write your entire gcc compilation command every time? Do you build the .o files separately, or rebuild them every time even when they don't need it? This is where Make really shines. It would be far too expensive to build projects in C if you had to recompile every file every time.
I'm still not convinced there's a good way in cmake to pull in and compile external libraries as dependencies. I know it can be done via a superproject method but that's a lot of boilerplate for something that should be straightforward. Everything else is simple and easy until I butt heads with that issue.
It's been a while since I've done this kind of CMake in anger, but I recall my main gripe with ExternalProject_Add is that it can't leverage that the external project is a CMake project and I still end up hard coding paths.
    # Helper: build a vendored dependency from deps/<name> and expose its headers.
    function(AddDependency name)
        add_subdirectory(${PROJECT_SOURCE_DIR}/deps/${name})
        include_directories(${PROJECT_SOURCE_DIR}/deps/${name}/include)
    endfunction()
And then use git submodule to clone all the dependencies I need in the deps folder. Everything then works automagically (assuming that the dependency is built through CMake).
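Usage then looks something like this (hypothetical dependency and target names; the link targets come from whatever each dependency's own CMakeLists defines):

    AddDependency(zlib)
    AddDependency(libpng)
    target_link_libraries(myapp PRIVATE zlib png)  # targets defined by the subprojects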
I wrote up a little thing about all the alternatives: https://geokon-gh.github.io/hunterintro.html. Tldr: Hunter is by far the best option and makes CMake painless. It also solves some multiple-dependency and toolchain issues that are nearly impossible to get right with vanilla CMake.
What boggles my mind is that it was conceived on a project (medical imaging, IIRC) where they were using Tcl as an embedded language; they decided to create a new build system that needed an embedded language and... decided to write one from scratch?
I would have loved, loved, LOVED to see Tcl in that space.
100% agree, a Tcl based build system would be a dream come true. I wonder why they thought it would not enjoy widespread adoption. Plenty of Tcl-based systems did, after all, and it's only because of fashion now that people aren't still using it.
There is smake[0], which I use a lot, with lots of room to grow (i.e., dig in and figure out the idioms), but Tcl/cmake could have been very interesting.
The language itself isn't great, but I don't think that's the main problem with CMake. The whole structure of the thing is like having a dump truck pour a giant pile of tools in front of you. It's all in there somewhere, but good luck figuring out which ones you need.
Out of interest, how is this any different from other build systems like SCons or Waf? Even working out stuff like string substitutions in Make can be confusing at times, in terms of the syntax required, IMO.
I absolutely agree. This is the number one reason I don't do any serious development in C or C++. Here are some grievances that come to mind:
* Pretty much everything is a string
* One giant namespace with no notion of imports
* No sane dependency management; just uses whatever is laying around the system
* IIRC source files have to be listed in the CMake file individually (or at least that's what I recall to be the best practice)
* Everything is a one-off function. [Seriously, CMake comes with functions for Qt projects][0] and other popular libraries.
* It's supposedly cross-platform, but you need to take care if you want it to generate sane, usable VS solution files. IIRC, it would create a VS project for every subdirectory or source file by default.
* I forget whether CMake itself is slow as hell or if it just generates super slow Makefiles, but in either case, it is painful
Listing source files individually is what you're supposed to do, because not all build tools support globbing for files. To support this properly, cmake would need to re-run on every build to regenerate the files list... well, it's probably doable, with some work, but I expect there's just not the demand, because most people are like me and would prefer to list their source files out by hand so they know where they stand.
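(That said, I gather newer CMake, 3.12 and up, can approximate the re-run-on-every-build behaviour; a sketch with hypothetical file names, not something I'd rely on:)

    # Explicit listing - the style I prefer:
    add_executable(app src/main.c src/util.c)

    # Glob alternative: CONFIGURE_DEPENDS re-checks the glob at build time
    # and re-runs CMake when the directory contents change.
    file(GLOB APP_SOURCES CONFIGURE_DEPENDS src/*.c)
    add_executable(app2 ${APP_SOURCES})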
The Visual Studio projects it creates, at least for my projects, have a VS project per target. That's just how Visual Studio works. You do tend to end up with a lot of projects in one solution this way, which is something most people don't do when setting projects up by hand - but with CMake picking up the strain there's no reason not to. (It's also very convenient for running tests in the debugger. One target per test, select it as the startup project, and off you go.)
As for makefiles, I haven't noticed CMake's being noticeably slower than any others I've seen, though I'm sure it's possible. They are rather larger than most Makefiles. (If you're able to switch to Ninja, that comes recommended.)
I've never had a reason to suspect CMake's Makefiles are slower than others. I've converted projects from Autotools to CMake, and CMake was always miles faster.
I recently switched to using CMake's Ninja generator. I've been shocked at how much faster it is than Makefiles, even for tiny projects.
I just converted a large project's build system from Makefiles to meson+ninja. The ninja part is significantly faster than make was because it keeps all the processor cores busy all the time, which make doesn't do.
No, I was comparing ninja to make -j8. If you have large recursive makefiles they end up waiting for each sub-target to be made before the next directory is entered and made. This has a significant effect when compiling our ~100k line code base. Maybe 30% slower on average.
It’s nice to drop back to serial compilation when debugging IMHO. Sometimes one header bug causes two errors with interspersed output in parallel builds.
(I imagine it buffers the output internally. With a moment's thought you can imagine situations specifically engineered to cause more problems than this solves, but I've been using Ninja for a couple of years now and I've yet to see that happen.)
Awesome ;) - thanks for this note. I've been using the same no-dependency no-cygwin Windows build of 3.80 for about ten years, and OS X comes with 3.81, so my GNU Make skills never got any further than that. But this has prompted me to track down a similar Windows build of GNU Make 4.2.1, so all I need to do now is figure out how to get one set up on OS X...
This won't get me to switch away from Ninja when using CMake, but I write my own -j-friendly makefiles sometimes, so this will come in handy.
That's a good point, but I'm debugging much less often than running make, so when I want to disable parallel builds I just `unset` the MAKEFLAGS variable. For me this is a nice default.
> Listing source files individually is what you're supposed to do, because not all build tools support globbing for files. To support this properly, cmake would need to re-run on every build to regenerate the files list... well, it's probably doable, with some work, but I expect there's just not the demand, because most people are like me and would prefer to list their source files out by hand so they know where they stand.
Sincerely, have you spent much time with other languages? Very few require so much from the user.
It just now occurred to me that CMake isn’t so much a build tool as it is a homegrown framework for making homegrown build tools. I don’t mean that to be insulting; I think it explains a lot of the differences of opinion about CMake: if you’re expecting a build tool like Cargo, you’ll be sorely disappointed, but if you’d like to make a one-off build tool for your project, then maybe it’s not so bad?
That said, I probably spent 10-15 percent of my time managing CMake, and I wasn’t doing anything particularly novel; just trying to wire in some new library (like Qt, back when everyone had a recipe that worked for their project but no one else’s) or testing framework (I remember GTest being particularly painful), or to make builds reproducible, or to fetch libraries from a repository. Stuff that people in other languages get for free these days. I guess I just want nice things for the C/C++ communities as well.
I just like being explicit about which files end up in the build. It's a form of sandboxing, however limited and imperfect. (Build tools could go further to provide a more comprehensive sandbox, but meanwhile I have stuff to do and so I have to work with what's available.)
As well as automatically finding files to compile, if people working in other languages also want things downloaded during the build, dependencies automatically discovered, prebuilt dependencies retrieved, and so on, then good for them. But this is not stuff I want, and indeed I try to actively avoid it, to the extent feasible in the time available. (I typically try to ensure that all dependencies are built from source as part of the build, and that the source for each is included in the repo. This is usually fairly straightforward, and goes a long way towards eliminating annoying discrepancies between builds made on different computers.)
Anyway, this is not a moral question, and you must do what you want. These opinions are based on my experience, and if yours was different, I'm sure you had more fun. My point was just that if everybody feels like I do, then this might explain why CMake works the way it does, rather than some other way.
> This is usually fairly straightforward, and goes a long way towards eliminating annoying discrepancies between builds made on different computers.
With sincere respect, what you're describing is 'deterministic builds', and the features you say you dislike in other languages help to support deterministic builds in those languages, but it sounds like you're shunning those features because you think they hurt determinism? What you've described--vendoring dependencies--is a legitimate solution, but it has its own pain points, and many other languages have settled on a different model.
Even if you like the features you cite (and there's certainly validity to your opinion), CMake must be one of the worst ways to implement them. But I won't belabor the point; lots of people are very happy with CMake, but I won't touch it if I can help it.
I think a lot of C/C++ devs have spent time in other languages, particularly Python. And those generally have their own pathological problems, like pip.
Cargo is probably one of very few build tools that does almost everything correctly. So if you use almost any other build tools you're trading one set of problems for another. Granted, the issues aren't as hard to understand as C++ ABI incompatibilities but they are more or less just as hard for the tool to auto fix.
I think the overall point is that the tool should support the language, and so on. I'm sure everyone has their own CMake or autotools debugging horror story, but I think we are all very interested in a better way to solve the problem.
If your point is, well, you should just use a different language, then you probably don't know the requirements well enough.
Yeah, I own the build system for our code which is all in Python. Pip is the package manager, not the build system, but your greater point stands: Python’s build tooling isn’t great (although pipenv is a significant improvement).
While few tools match up to Cargo, CMake is still uniquely unpleasant in my view. Most others aren’t fully imperative programming languages and at least have some ability or convention for locating sources. Many serve as a package manager or they dovetail nicely with a package manager. Go’s package manager story is a work in progress but I still prefer it to CMake (no contest).
My point isn’t that you should use a different language, my point is that CMake doesn’t compare to the tooling in other languages and there isn’t a compelling reason—its success is a historical accident, and I hope the community arrives at something better. I’m of the impression that there are other tools that are interested in solving this problem, and I wish them luck.
> I hope the community arrives at something better.
Yes. That's the opinion of most everyone, including KitWare, it appears.
All of this is frankly a language defect. If working on build and packaging problems seems like a waste of time, then C and C++ are not good choices for you. Otherwise, dealing with all this is a job requirement, at least for senior engineers.
C++ is working on modules. Solving these problems seems to be downstream of that. I wouldn't count on de facto standard packaging tools for another decade. And there's a good chance that there will be forum threads about how annoying those tools are.
While CMake is verbose, I don't think it's terrible. The complexity scales very well from a simple flat-source project to a multi-stage build. One thing that amazed me was how simple it is to script up the creation of generated libraries.
For school I had to create a build step that would allow me to embed Duktape and link to a library called Glad. Both needed to be generated with a Python script. Doing this was dead simple [1]. I didn't have to do anything crazy to locate the path to Python, the source folders I want to output into, etc. CMake has magic for that.
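Roughly like this (a hedged reconstruction from memory, with a hypothetical script name, not my actual assignment code):

    find_package(PythonInterp REQUIRED)   # provides ${PYTHON_EXECUTABLE}

    # Re-run the generator whenever the script changes; the generated
    # file then participates in the build like any other source.
    add_custom_command(
        OUTPUT  ${CMAKE_CURRENT_BINARY_DIR}/glad.c
        COMMAND ${PYTHON_EXECUTABLE} ${CMAKE_CURRENT_SOURCE_DIR}/scripts/generate.py
                --out ${CMAKE_CURRENT_BINARY_DIR}
        DEPENDS scripts/generate.py)

    add_library(glad ${CMAKE_CURRENT_BINARY_DIR}/glad.c)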
Granted, I spent just as much time reading the really horrible docs to find out how to use these magic features as it took me to complete the assignment. I think, like many amazing technologies, it's blocked by a steep learning curve and a long, long history of worse ways of doing things. Every time you google how to do something you'll get an answer from a different part of the project's lifespan.
This is a great introduction to CMake for a single project/repo. You could add a section on installing the project. Although that probably isn't required for the student projects.
What I'm still searching for is a nice way to build multiple repos, which have dependencies on each other, in a cross platform way, that's distribution friendly. What I mean by distribution friendly is: you shouldn't be bundling your dependencies.
But it's nice to compile your own dependencies and link to those for debugging. So I want something that easily allows upstream developers to download, compile and debug their dependencies, but doesn't do that when distributions (Debian, Fedora) go to compile and release your code. And it'd be nice if this was part of the CMake, so new devs don't need to run some bash script to get set up, or follow a long list of instructions in a document to download, compile and install said dependencies. I'd also like to keep my existing CMake, as horrible as it is at times; I'm not sure switching to Meson is really worth the effort, especially since I'm still stuck with C/C++ anyway.
Daniel Pfeifer’s Effective CMake talk covers how to do that. Set up each project standalone and get the dependencies with find_package. Then create a superproject that adds each dependency with add_subdirectory and overrides find_package to be a no-op for dependencies added with add_subdirectory, since the “imported” target is already part of the build.
Now, overriding a builtin function is not the best idea, so in the Boost CMake modules we have a bcm_ignore_package function which will set up find_package to ignore the package.
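The override itself is only a few lines; roughly this shape (a sketch from memory of the talk, with hypothetical directory and target names; it relies on CMake's underscore fallback for redefined commands, which is a known but officially unsupported trick):

    # In the superproject: skip find_package for anything already
    # added as an in-tree target.
    macro(find_package name)
        if(NOT TARGET ${name})
            _find_package(${ARGV})   # fall back to the real find_package
        endif()
    endmacro()

    add_subdirectory(external/mylib)   # defines target 'mylib' in-tree
    add_subdirectory(app)              # its find_package(mylib) is now a no-op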
Just watched that video - long but awesome. I'm not sure exactly how his strategy is supposed to go, but it sounds like the superproject is another repo. I don't really want that. What I'm thinking right now is to use the same overall idea, but instead of a superproject, use the same repo, and use:
https://cmake.org/cmake/help/latest/module/ExternalProject.h...
But put that behind a flag that defaults to off, so it doesn't interfere with the distributions. And it supports patching, so I can patch whatever dependency's CMake, in the event that they don't (yet?) support the same strategy, so it can work transitively without having to boil the ocean. Granted, you would need to write patch files for basically all your deps initially, because nobody else is doing it that way.
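Something along these lines (hypothetical names and URL, untested):

    option(BUILD_DEPS "Download and build dependencies from source" OFF)

    if(BUILD_DEPS)
        include(ExternalProject)
        ExternalProject_Add(somelib
            GIT_REPOSITORY https://example.com/somelib.git   # hypothetical
            GIT_TAG        v1.2.3                            # hypothetical
            PATCH_COMMAND  patch -p1 -i ${CMAKE_CURRENT_SOURCE_DIR}/patches/somelib.patch
            CMAKE_ARGS     -DCMAKE_INSTALL_PREFIX=${CMAKE_BINARY_DIR}/deps)
    endif()

With the flag off, distributions just get the plain find_package path.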
For reference, I'm trying to help convert an existing open source project from autotools to CMake. At work I'm the resident CMake expert, and we have multiple products, which effectively are built as different 'distributions' (not always Linux based), so I've been searching for a good way to do this for a while, even though I just started contributing to OSS.
I was working on this topic a few years ago and one of the things I noticed was that a lot of software patterns that work on the source code level can also be applied at the library or project level. What you're describing is basically Dependency Injection and the super-project is a Container.
It seemed like such a good idea, but I quit working on large software projects about the time I thought of it, so I never got a chance to actually try it outside of toy projects. I'm glad to hear that somebody else independently discovered and pursued the idea. If anyone has written about how adopting this approach has worked out for them, I'd love to read it.
It looks like you're one of the people very involved with hunter. This also looks very interesting, since it's implemented as a set of cmake scripts, which I can just add to my module path... What's the advantage of using hunter vs ExternalProject?
I'll try to dig through the documentation later this weekend.
Since my work project is spread across so many repos, and is so complicated and my OSS time is so limited, even hacking together a prototype can be a decent time investment, so I'd appreciate any high level feature differences you can think of.
Yes! I started using it ~8 months ago and fell in love with its simplicity. Hunter aims to be a full package manager, so it strives to build a project and all of its dependencies recursively. This is fundamentally different from just adding one external project and ignoring its dependencies (what ExternalProject_Add does). The downside is that this forces Hunter to maintain many forks of the original repos just to keep track of the dependency information. Meanwhile, Hunter basically guarantees rebuilds only when necessary and caches all library builds locally in the ~/.hunter directory. It really, really simplifies the whole dependency management of a CMake project.
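For a sense of what it looks like in practice (a sketch with placeholder URL/SHA1 - check the Hunter docs for current values):

    cmake_minimum_required(VERSION 3.2)
    include("cmake/HunterGate.cmake")
    HunterGate(
        URL  "https://github.com/ruslo/hunter/archive/vX.Y.Z.tar.gz"  # placeholder release
        SHA1 "<sha1 of that archive>")                                # placeholder checksum
    project(myapp)

    hunter_add_package(ZLIB)             # fetched/built once, then cached in ~/.hunter
    find_package(ZLIB CONFIG REQUIRED)   # consumed like any other package
    add_executable(myapp main.c)
    target_link_libraries(myapp PRIVATE ZLIB::zlib)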
How does meson's find_library compare to CMake when it comes to finding libraries on Windows? That, and the very nice GUI tool that comes with it are my two killer features of CMake.
If you're primarily a *NIX user, there are a bunch of reasonable build system alternatives out there, but if "supporting Windows" is even a little bit of a priority, I have found CMake to be far and away the least painful way to build multi-platform software on Windows, especially if you're a novice.
Until Meson even approaches being as easy to use on Windows, I feel like it's going to continue to be second fiddle.
It may be second fiddle for people who care about Windows, but thankfully they seem to be few and far between these days. There's no reason to support Windows when they won't even pretend to care about standards compliance. Although even then, having to draw the ignorant masses of Windows devs out of their GUIs would be quite a task.
I don't know how common I am, but literally everybody that's ever paid me to do any development work has wanted support for Windows. (I'm a bit worried that this might date me, but my first time coding for coins was 20 years ago.) Tools that don't support Windows are just not useful to me.
I'm not even sure which standards you're worried about, either. VC++ supports C++14 pretty thoroughly, and you've got various builds of gcc or clang if you want whatever extra support they give you. We're long past the dark days of yore when support was hit or miss and MS didn't give a shit.
If you want POSIX, you are out of luck, but that's fine, because Windows isn't a Unix, so POSIX doesn't apply. My view is that this is actually a pretty good thing, but reasonable people may differ - e.g., by thinking that this is a fantastic, amazing, excellent and extremely good thing ;)
Overall: the really weird thing is that Windows support is no harder to arrange than support for any other platform. But because it's so unpopular with a certain brand of nerd, it's somehow OK to just code for POSIX, complain when your code doesn't build on Windows, and blame the whole affair on Windows. But, dear people that do this, I'm afraid you appear not to have noticed that portability is your job, not platform vendors'.
>But, dear people that do this, I'm afraid you appear not to have noticed that portability is your job, not platform vendors'.
That's rich. CPU manufacturers ship C compilers and virtually every operating system other than Windows provides a POSIX interface (and in fact, so did Windows at one point!).
Consider the math: if the platform supports portable interfaces, then the vendor did a small amount of work to support a large number of programs. If the programs support the platform, then a large number of programs have to do the work to support a single platform.
Windows is quickly losing what little relevance it has left. Today's most relevant platform is the web, and end users are accessing it on their phones - and we both know how Windows for phones ended. Windows Server is a bad joke, and the servers which power the platforms of today run on Unices and are programmed by engineers on Unices. Windows is dying quickly and is simply not important anymore. What few end users they have left are being driven away to Chromebooks and Macbooks by unwanted updates, advertising, and a non-stop barrage of annoying bullshit.
> There's no reason to support Windows when they won't even pretend to care about standards compliance.
There's a lot of standards that windows support and linux does not though, for instance a lot of protocols used in multimedia works, art installations... A simple thing such as sharing a video buffer in the GPU between two processes is trivial in Windows or macOS and a damn pain on current desktop linux (dunno in wayland). And I say that as a hardcore linux user.
While I appreciate the need for DSLs, in many cases I feel like I'd rather use an API. At least, an API imposes no constraints on the features of the language itself. (So, imagine a DOM for VS solutions and projects; writing your own script would be a breeze...)
Make is horrible because it does not handle spaces in paths.
Putting spaces in pathnames is idiotic to begin with. I have written Perl scripts that are 3x more complicated just to deal with spaces in pathnames. You might as well blame Make for not handling UTF-32 characters in pathnames -- I mean, someone might want to put a GREEK CAPITAL REVERSED DOTTED LUNATE SIGMA SYMBOL (U+03FF) in their path -- Make would suck if it didn't allow this.
You missed the most egregious problem with Make -- the fact that commands must begin with a friggin' TAB! No other whitespace will do. Think of every manual writer who has to somehow convey that the whitespace you (don't) see is a TAB.
> Putting spaces in pathnames is idiotic to begin with.
The thing that’s idiotic is not being able to handle valid paths. Yes, all of them. Yes, including ones containing GREEK CAPITAL REVERSED DOTTED LUNATE SIGMA SYMBOL (but also apostrophes, backslashes, quotes, dollar signs). It boggles the mind that a tool that does everything through a shell can’t be made to pass arbitrary information to that shell in a reliable manner.
GNU Make handles UTF-8 just fine. I was able to replace spaces (character 0x20) with non-breaking spaces (character 0xA0) and it just worked (https://news.ycombinator.com/item?id=16527084).
Sorry, didn’t mean to imply it couldn’t; just pointing out that it’s important to handle despite the way the parent suggests it’s a ridiculous expectation.
> Putting spaces in pathnames is idiotic to begin with.
it’s only idiotic in the sense of tools not working with them properly. spaces are allowed in paths and filenames in most filesystems, and so tools written to handle paths and filenames should handle spaces. spaces carry no special path information, so it’s a little weird they are ignored by many tools. it would be crazy to think of tools not handling a “b” or something.
and i agree with the tabs thing. although, it was pointed out to me that in a newish version of make, you can override this behavior with a flag. i doubt it works well though because of makefile compatibility issues.
> it’s only idiotic in the sense of tools not working with them properly.
It's idiotic because it's annoying for things like tab completion (technically another tooling issue I suppose... but why bother going against the grain for little/no benefit).
The baffling thing about this opinion, at least to me, is that it's actually difficult not to handle spaces. Once you've got your path-with-a-space past the well-known issues involved with handling these in your average shell and/or make - which are stupid issues, but it's probably too late to do anything worthwhile about them by now - what's the problem? What are you going to do with paths that makes handling spaces so hard?
This applies equally to Unicode characters. (Which I'd actually expect Make to handle OK. I doubt it uses UTF-32... presumably it's just char stars internally, like most Unix tools, so I'd have thought it would handle UTF-8 just fine. UTF-8 is a bit inefficient, but extremely well-designed in this respect.)
CMake is absolutely terrible if not destructive for non-trivial projects that require frequent build system updates and restructuring (especially those tied to cross-compilation).
GNU Make might have its share of issues but compared to CMake, it's dead simple and I've never once in 20 years of programming encountered a build issue I could not debug.
CMake has had me throw up my hands and give up in despair far too many times. It boggles the mind that people continue to use such a rotten tool. Probably because it looks attractive, superficially, but if one examines it in more detail any possible justifications for using it should completely fall apart.
It's indeed worth noting that cmake only supports one toolchain at a time, so it's currently not possible to build for the host while cross-compiling. This is a complete showstopper for certain types of eminently sensible build process, so if you've got one of those then you should look elsewhere.
In other respects, my experience has been exactly the opposite of yours. Right down to the supposed superficial attractiveness of CMake ("LOL", that's all I can say - shit's a fucking disaster zone at first glance!) that turns into horror on closer examination. (Since in my view, it actually mostly makes the right decisions internally - you just need to get past the spitefully bad scripting language.)
I'd suggest everyone check out bmake. It has a very nice standard library of makefiles, with which your makefile can often be a couple of lines long, one of which just includes the relevant library.