Build Tools – Make, no more (hadihariri.com)
344 points by _superposition_ on April 21, 2014 | 140 comments


I find these posts somewhat amusing. We've got people who (rightfully) question the tools they use and look for alternatives. They then discover Make and have some kind of zen Unix moment that they want to share with the world.

If what you are doing in your flavor-of-the-month build tool translates to a roughly equivalent number of lines in Make, then yes, you should probably look at using Make. But the thing is, Make is stupid, it doesn't know a lot. Sometimes that is a good thing, sometimes it is not.

I've written about this before on HN: I mostly program in C++ and when I build my stuff I want a build tool that understands things like header dependencies, optimization levels, shared libraries etc. It's a bonus if my build files are portable.

My point is that these alternative tools often strive to raise the abstraction level and the reason people use them isn't necessarily because they haven't discovered Make.


I find them irritating for the same reason.

It reminds me of the jQuery cycle: use jQuery for everything -> decide that depending on frameworks is lame -> use "vanilla JS" for everything -> realize this requires polyfills and various other inconvenient, inelegant things -> either go back to using jQuery, or gain a much deeper understanding as to why everyone uses it.


I doubt the analogy is apt.

Make is not an amazing (albeit slightly bloated) meta tool that solves all your problems on all platforms (albeit slowly).

Make is vanilla javascript, along with all the bumps and hassles of not working correctly on multiple platforms, having obscure syntactic quirks, and only sort of supporting various operations in newer versions (which may or may not be available on various platforms).

The newer build tools are trying to do exactly what jQuery does: abstract away those rough edges to give consistent build behavior with better syntax.

Going back to make is the 'use "vanilla JS" for everything' step in your list above, not the final step.


That was exactly the analogy the gp was making...

Make -> VanillaJS

Grunt/Rake/whatever -> jQuery


Taking the analogy a bit further, a vanilla proponent might put Make in the second category as well. Funny that modern web development requires a build process to change a line of CSS.


Amusing indeed. (Functional) Reactive Programming [1] [2], anyone? That's the same thing we've learned during application development to be profitable and of real use. And it seems that build systems are also converging towards the same lesson, but slowly.

––

[1] http://en.wikipedia.org/wiki/Reactive_programming

[2] http://en.wikipedia.org/wiki/Functional_reactive_programming


Might go a bit off topic, but I have to bring this up since 9 out of 10 make tutorials on the internet make the same horrific mistake you just did, and 11 out of 10 code bases out in the wild do as well.

In your makefile example the .o files depend only on the .cpp files, not on the header files they include, the header files those headers include, and so on. This means nothing will be recompiled/relinked if, for example, a constant in a header file changes! Changed function signatures will give you cryptic linker errors with the standard solution "just try make clean first".

To solve this you can either manually update the makefile every time any file changes what it includes, which almost defeats the purpose of having an automatic build system, or you can use automatic dependency generation by invoking your compiler with a special flag (-MMD for GCC), and suddenly make isn't as simple as you laid it out to be. In conclusion, your build tool must be aware of all the inclusion rules your compiler (preprocessor) follows, or be given that information somehow. Maybe it's better to just use something designed for your particular toolchain that comes bundled with this knowledge?
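
To make that concrete, here is a minimal sketch of the auto-dependency approach, assuming GNU Make and GCC/Clang and reusing the object names from the article's example:

    OBJS = main.o factorial.o hello.o
    CXXFLAGS += -MMD -MP        # emit a .d file of header dependencies next to each .o

    hello: $(OBJS)
            $(CXX) -o $@ $(OBJS)

    # pull in the generated dependency files, if they exist yet
    -include $(OBJS:.o=.d)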


Right. Make is mostly a kludge around the nonexistent module system in C and C++.

It's so bad (specifically due to the way file preprocessing works) that you need large parts of a compiler to accurately determine what the dependencies of a source file are.

This is why a decent module system should be the top priority for C++17, though it doesn't look likely so far.


Have you seen what Clang is doing[1]?

[1] http://clang.llvm.org/docs/Modules.html


This is eminently possible with make-plus-GCC - add the line:

    .depend :
            gcc -M *.c > .depend
then at the bottom:

    source .depend


Thank you for that useful option.

Just a nitpick: did you mean `include .depend`?


Great nit to pick. I was typing, not testing. I believe you are 100% correct.

You may need to do a little finagling to bootstrap .depend when adding it to an existing makefile: `echo "garp" > .depend; make .depend` may suffice.


Is that solved with autotools? I'm not an expert, but I wonder why people talk about pure Make when a lot of bigger projects actually use autotools.


Yes, it is. automake will generate makefiles that track dependencies as a side-effect of compilation ( http://www.gnu.org/software/automake/manual/automake.html#De... ). What this means is that whenever a source file is compiled it will update the dependencies for the .o file. It has to be done then because different platforms could have different header dependencies (if you're having fun with #define).


1. Since make has builtin suffix rules, the Makefile could be simplified to:

    CXX=g++

    hello: main.o factorial.o hello.o

    clean:
        rm -rf *o hello
2. Shameless plug: he didn't mention redo [1], which is simpler than make and more reliable. The comparable redo scripts to the Makefile would be:

    cat <<'EOF' > @all.do
    redo hello
    EOF

    cat <<'EOF' > hello.do
    o='main.o factorial.o hello.o'
    redo-ifchange $o
    g++ $o -o $3
    EOF

    cat <<'EOF' > default.o.do
    redo-ifchange $2.cpp
    g++ -c $2.cpp -o $3
    EOF

    cat <<'EOF' > @clean.do
    rm -rf *o hello
    EOF
[Edit: Note that these are heredoc examples showing how to create the do scripts.]

These are just shell scripts and can be extended as much as necessary. For instance, one can create a dependency on the compiler flags with these changes:

    cat <<EOF | install -m 0755 /dev/stdin cc
    #!/bin/sh
    g++ -c "\$@"
    EOF

    # sed -i 's/^\(redo-ifchange.\+\)/\1 cc/' *.do
    # sed -i 's}g++ -c}./cc}' *.do
sed calls could be combined; they are separated here for readability.

[1] https://github.com/gyepisam/redux


CXX=g++ isn't necessary either; make already knows about $(CXX) and how to link C++ programs. Also, I think you wanted .o, not o.

And compared to that Makefile, the redo scripts you list don't seem simpler at all. I've seen reasonably compelling arguments for redo, but that wasn't one.


> CXX=g++ isn't necessary either; make already knows about $(CXX) and how to link C++ programs.

You're right, of course.

> Also, I think you wanted .o, not o.

I would, yes, but I copied the Makefile ;)

Should have been clearer; I meant that redo is simpler (and more reliable) than make.

For simple projects, redo scripts are a bit longer. However, as the projects grow, the redo scripts reach an asymptote whereas Makefiles don't. The only way to reduce the growth in make is to add functions and implicit rules which get ugly real fast.


redo is pretty cool, but I ran into trouble with apenwarr's implementation (https://github.com/apenwarr/redo, see https://groups.google.com/d/msg/redo-list/GL5z8eEqT90/tk_vLZ...) with OS X Mavericks. I have no experience with the alternative implementation at https://github.com/gyepisam/redux, since it came out after I reimplemented the build system in question with CMake.

In general, I found CMake quite useable for my needs, and quite clean. It also required less build system code than redo. CMake fits quite nicely into a (C or C++) project which consists of many binaries and libraries which can depend on each other.


redo might be simpler and more reliable, but shell isn't. And redo is encouraging even more work to be done in shell. Additionally, the redo version is more verbose and harder to read. While fancier tasks will make make's version look horrible relatively quickly, they won't make redo's version look any better.


> redo might be simpler and more reliable, but shell isn't.

Not quite sure what you mean here. The scripts don't do anything complicated and redo catches errors that could occur.

As for readability, etc, I suppose it's relative. Simple makefiles do read very nicely. Unfortunately, they aren't always simple and hairy makefiles are just horrible to write, read and maintain. I've had no such problems with do scripts.


To this day I still don't understand redo (I'm just staring at it, and don't get anything) - haven't really read the internals.

With make it was easier for me to grasp the idea (or maybe I was simply 20 years younger then).


It's actually quite simple. You write a short shell script to produce the output you need and redo handles the dependencies.

For example, the shell script named "banana.x.do" is expected to produce the content for the file named "banana.x".

When you say

    # redo banana.x
redo invokes banana.x.do with the command:

    sh -x banana.x.do banana.x banana XXX > ZZZ
so banana.x.do is invoked with three arguments and its output is redirected to a file.

   $1 denotes the target file
   $2 denotes the target file without its extension
   $3 is a temp file: XXX, in this case.
banana.x.do is expected to either produce output in $3 or write to stdout, but not both. If there are no failures redo will choose the correct one, rename the output to banana.x and update the dependency database.

If banana.x depends on grape.y, you add the line

    redo-ifchange grape.y
to the banana.x.do, creating a dependency. redo will rebuild grape.y (recursively) when necessary.

The only other commands I haven't mentioned are init and redo-ifcreate, which are obvious and rarely used, respectively.

That's it.


I think the big difference between redo and make is that make requires knowledge of dependencies up front, and this is sometimes tricky to get right.

"as you can see in default.o.do, you can declare a dependency after building the program. In C, you get your best dependency information by trying to actually build, since that's how you find out which headers you need. redo is based on the following simple insight: you don't actually care what the dependencies are before you build the target; if the target doesn't exist, you obviously need to build it. Then, the build script itself can provide the dependency information however it wants; unlike in make, you don't need a special dependency syntax at all. You can even declare some of your dependencies after building, which makes C-style autodependencies much simpler."

https://github.com/apenwarr/redo
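
As a rough illustration (mirroring the example in that README, and assuming gcc), a default.o.do can declare its header dependencies only after the compile has revealed them:

    # default.o.do: $1 = target, $2 = target without extension, $3 = temp output file
    redo-ifchange $2.c
    # compile, asking gcc for the real list of included headers as a side effect
    gcc -MD -MF $2.d -o $3 -c $2.c
    read DEPS <$2.d
    # register those headers so the .o is rebuilt when any of them change
    redo-ifchange ${DEPS#*:}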


Sorry, but that doesn't appear simpler to me...


A build DSL solves the problem of making your build rules and systems first-class citizens. It's not just learning a new syntax—in fact, since you're embedding into a known language, it isn't even that new—it's about getting more control. You can pass rules around, modify them and do whatever arbitrarily complex tasks you need in a natural, straightforward way using your favorite programming language. You don't have to contort yourself and bend over backwards to fit the logic you want into Make's limited and peculiar language.

Your build system is an integral part of your whole program and you want to treat it just like any other code. This means refactoring, this means modularity, this means libraries, this means no copying and pasting... All this is far easier with a system embedded in your main language than in Make. You can use your existing tooling, debuggers and frameworks to support your build system. If you're using a typed language, you can use the types to both constrain and guide your build files, making everything safer.

Using an embedded DSL integrates far better with the rest of your ecosystem than relying on Make.

Apart from making the logic of your build system easier to describe and maintain, an embedded DSL also makes arbitrary meta-tasks easier. You might want to monitor random parts of your build process, report to different services, connect to different front-ends (an IRC bot, a CI system...) and maybe even intelligently plug into the features of your main language. Wouldn't it be great to have a make system that's deeply aware of how your server is configured, how your type system works, what your compile-time metaprogramming is doing and so on?

You could just glue together a bunch of disparate scripts with a Make file. Or you could use a DSL and call these services through well-defined, maybe even typed interfaces! No need for serializing and deserializing: you can keep everything inside your system.

Sure, if you're just going to use your DSL as a different syntax for Make, you're not gaining much. But it allows you to do far more in a far better way, while fitting in more naturally with the rest of your code. I'm definitely all for it!


Perhaps I'm just jaded but what you describe sounds a lot more complicated for most people than just writing a Makefile. Perhaps even the vast majority of people. I think you underestimate how far you can go with just a vanilla make build system.


Can you show an example of what you are describing? It doesn't sound interesting for the tasks I have in mind, so it must be the case that you are dealing with very complex tasks.


See ASDF, for example (http://common-lisp.net/project/asdf/).


Your build system has types? Is that so you can catch errors at compile time instead of run time? Who builds the build tool?


I know you're just being glib, but here are some answers:

* build doesn't mean compile

* you can compile your build process and then use that to actually run your build

* the benefits of typed languages go beyond catching errors at compile time


There are tools like ocamlbuild and cabal, which are implemented in Ocaml and Haskell, and used to build Ocaml and Haskell.

But ocamlbuild doesn't have a build file. It just knows how to build Ocaml. And building Ocaml is about all it can do. Cabal is powered by a .cabal file, which is a data structure, and a Setup.hs file, which is run like a shell script via runhaskell.

I think what tikhonj envisions would be a lot of work to achieve. Maybe he already knows that. I suspect that the closest most of us will ever get is putting something like this at the top of a Makefile :)

     .ONESHELL:
     SHELL = /usr/bin/runhaskell


If you want to use Make, read "Recursive Make Considered Harmful" first (http://aegis.sourceforge.net/auug97.pdf).


I think everyone goes through a phase where they try to find the perfect build tool, and then at least entertain the idea of writing one themselves.

Eventually, you grow out of it. There are a lot of build tools; each is better at some things than others. It's not that much grunt work to convert things from one to another (even for very large projects). If your build tool is working for you, leave it alone. If it's getting in your way or slowing things down, try another one. Move on.


I think one reason is that Make is built on shell, which is always one step (and one letter) away from hell.

For example:

    clean:
         rm -rf *o hello
Did you really mean to erase all files and directories that end in "o"? Let's say it's just a typo and fix it: "*.o".

Now, are you sure it'll handle files with spaces in the name? What about dashes, brackets and asterisks? Accents? What if a directory ends in .o? Hidden files?

This specific case may support all of the above. But if it doesn't, what will happen? How long until you notice, and how long still to diagnose the problem?

Just like I prefer static strong typing when developing non-trivial projects, the build system should be more structured. I agree endless reinventing is tiring, but it may have some credit in this case.


I'd expect all developers using make to know about this and never have this problem thanks to one simple thing: sticking with sensible names (no spaces, no brackets, no stars and other special characters in the name - hello underscores!).

It's an easy rule.

    Just like I prefer static strong typing...
You probably don't use any special chars or spaces for identifiers in whatever the language you're programming in. This is just applying a similar rule to the files of your project.


For source code files I agree completely; but the build system will encounter other types of files that aren't so strict.

Maybe you downloaded something and it came with a bracket because that was in the page title. Or you copied a duplicated file and your system helpfully appended " (2)" at the end. Or there was an excel file updated by someone not so technical and this person didn't know they have to strip accents from the words in their native language (possibly losing meaning). Or someone saved their "Untitled Document.txt". Or you needed to include a date in the directory name. Or you are just human and didn't mean to break the build by pressing the biggest button on your keyboard when saving a file.

And remember "break the build" here is not "a red light flashes and you get an email". It means you get unknown behavior throughout the process, including security features and file removal.

Strict rules for source code file names are good because names usually bleed into the language itself. Python file names become identifiers when you import them. Identifiers in turn are strict because parsing is strict, and there are many good reasons for strict parsing in general purpose languages.

Lacking accent support in file names, as some very popular software does, is terrible. Lacking support for spaces is just atrocious.

I love shell, I use it daily for one-off tasks, but I don't think it's a good fit to manage the build system of a project.


For me, the only justification for using a language-specific build tool (e.g. grunt, rake, paver, ...) is when you actually want to exchange data with a library / program written in that language. On the other hand, you could probably accomplish the same effect using environment variables, with the upside of having a cleaner interface.
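
As a sketch of that environment-variable approach (the variable name and the helper script here are made up for illustration):

    report:
            BUILD_VERSION=$(VERSION) python tools/report.py   # hypothetical helper reads BUILD_VERSION from its environment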

For those that are curious which build tools exist for Python, here's an (incomplete) list:

* pyinvoke (https://github.com/pyinvoke) - claims to be the successor of fabric, pretty solid and well-maintained

* fabric (http://www.fabfile.org/) - not actually a build tool but often used as one

* paver (http://paver.github.io/paver/) - no longer actively maintained

* doit (http://pydoit.org/) - one of the few tools that actually support monitoring file states (like the original make)

* distutils (https://docs.python.org/2/distutils/) - not actually a "universal" build tool but intended to distribute Python packages


You forgot buildout[1], which is probably more than a build system, perhaps putting a toe into the configuration management world.

1. http://www.buildout.org/

Documentation can be challenging to find, and it isn't the most actively developed project in the world, but what it does, it does pretty well (including supporting more than python dependencies).


Thanks for posting the link! I hesitated including configuration management tools since the use case is not the same. There's a lot of interesting stuff going on there though: With Saltstack and Ansible we have two serious "chef" contenders for Python now.


the last 10 years in build tools has felt like 1 step forward, two steps back. i like being able to write tasks in any language other than Makefile. however, it seems like many of the new popular options (cake, grunt, etc.) don't do what, to me, is Make's real purpose: resolve dependencies and only rebuild what's necessary. new task runners have either eliminated or pigeonholed the (typically one-to-one in makeland) correspondence between tasks and files, meaning the build system can't be intelligent about what tasks to run and which to not.

computers are fast enough that this doesn't often bother me anymore, but i've run across some huge Rakefiles that could benefit from a rewrite in Make.


> however, it seems like many of the new popular options (cake, grunt, etc.) don't do what, to me, is Make's real purpose: resolve dependencies and only rebuild what's necessary.

You might like tup[1]. Its killer feature is that it automatically determines file-based dependencies by tracking reads and writes (using a FUSE filesystem). It has an extreme emphasis on correct, repeatable builds, and is very fast. Other stuff:

- does work in parallel, and will let you know if your build isn't parallel safe. (note it is NOT relying on your specification of dependencies: even if you manually specify dependencies, it will tell you if something's wrong based on what it actually observes your dependencies to be)

- tracks changes to the build script and reruns if the commands change.

- cleans up old output files automatically if build rules are removed.

- lets you maintain multiple build variants (say for different architectures, configurations, etc)

- autogenerates .gitignore files for your build output

- very easy to get started, and "Just Works".

- for advanced usage, it is scriptable in Lua.

I've tried every build system out there. For Unix-y file-based build tasks, tup is, by far, the best. I don't know why it isn't more well known.

[1]: http://gittup.org/tup/index.html


I was already sold on tup after reading the first paragraph comparing it to make[1]:

"This page compares make to tup. This page is a little biased because tup is so fast. How fast? This one time a beam of light was flying through the vacuum of space at the speed of light and then tup went by and was like "Yo beam of light, you need a lift?" cuz tup was going so fast it thought the beam of light had a flat tire and was stuck. True story. Anyway, feel free to run your own comparisons if you don't believe me and my (true) story."

[1]: http://gittup.org/tup/make_vs_tup.html


My favorite is the "Tup vs Mordor" benchmark.


+1 for the link. I hadn't seen tup before, and I really like how it feels like make in its simplicity, but is more explicit about the graph inputs and outputs, cares more about the output (e.g. deleting old files), and watches the file system for changes.


I think MSBuild does something like this - there is a filetracker that tracks what files were read/written while running a tool, and writes that information in a file. I think you can even install your own file-notification-changes .dll to track changes your way (maybe file system that is not supported, or something else).

Similar to what lsof, procmon (windows) do.


Seconded on tup's greatness. Go through the example to see what all it's doing:

http://gittup.org/tup/ex_a_first_tupfile.html


I agree completely, and I think the blame rests with Ant and Java. Java's dependency management was painful enough to deal with in 'make' that Ant was built to support building Java projects. But in doing so the authors threw away the explicit file dependencies that made 'make' so powerful in the first place. Instead it got people to think in terms of a graph of 'tasks', each of which could either figure out its own dependency management, or more commonly ignore them completely. Most tools that followed seem to have gone down the 'graph of tasks' avenue, with 'graph of file dependencies' as an additional mechanism if you're lucky.

The huge Rakefiles you've seen could possibly have simply benefited from a rewrite in Rake. Rake has 'file' tasks which implement the file dependencies of 'make' but for some reason most users of Rake seem to ignore them completely.


It can be worse than that, one build system I looked at built everything every time. Why? Because "Computers are fast enough that trying to figure out exactly what needs to be rebuilt is an anachronism, this can rebuild everything in the time it took that crufty old system to figure out what it actually had to build."

I've given up trying to educate folks, I just make a note to check in with them, 6 months to a year later, to see if they are still building everything.


Yes. And then combine that dependency tree resolution with make -j. Simple and powerful.
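
For example (nproc is from GNU coreutils):

    $ make -j"$(nproc)"    # parallel build, still bounded by the dependency graph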


Another vote for higher-level meta-build-systems like cmake, premake or scons (I'm using cmake because it has very simple cross-compilation support). My personal road to build-system nirvana looked like this; I'm sure it's fairly common:

- Started using hand-written Makefiles and autoconf. Then someone wants to build on Windows, in Visual Studio no less. Add manually created VStudio project files to the project. Then someone wants to use Xcode, so add manually created Xcode project files. Now you add files, or even just need to change a compiler option: fix the options in the Makefile, open the VisualStudio project, fix the options there, open the project in Xcode, fix the options there. Depending on the project complexity, this can take hours. The next guy needs to build the project in an older VisualStudio version, but the project files are not backward compatible...

- Next step was to create my own "meta-build-system" in TCL (this was around 1999), which took a simple description of the project (what files to compile into what targets, and the dependencies between targets) and created Makefiles, VStudio files and Xcode files. This worked fine until the target project file formats changed (which happens with every new VisualStudio version).

- Someone then pointed me to cmake which does exactly that but much better (creates Makefiles, VStudio-, Xcode-projects, etc... from a generic description of the sources, targets and their dependencies), and I'm a fairly happy cmake user since then.

- Recently I started to wrap the different cmake configurations (combinations of target platform, build tool/IDE to use, and compile config (Release, Debug, etc...)) under a single python frontend script, since there can be dozens of those cmake configs for one project (target platforms: iOS, Android, OSX, Linux, Windows, emscripten, Google Native Client; build tools: make, ninja, Xcode, VStudio, Eclipse; compile configs: Debug, Release). But the frontend python script only calls cmake with the right options, nothing complicated.

Of course now I'm sorta locked-in to cmake, and setting up a cmake-based build-system can be complex and challenging as well, but the result is an easy to maintain cross-platform build system which also supports IDEs.

In general I'm having a lot fewer problems compiling cmake-based projects on my OSX and Windows machines than autoconf+Makefile-based projects.

[edit: formatting]


I agree with this. As a cross platform (ie, Windows + OSX + Linux) person who enjoys Visual Studio (XCode less so) I need more than make.

My own experience is with gyp and ninja, which are used by the Chromium team (http://martine.github.io/ninja/) to build for Windows, OSX, Linux, Android (and maybe iOS?).

Of course for personal projects I'll probably never notice the speed difference but for bigger ones Ninja is FAST.


CMake outputs to Ninja as well. It's the only way I know how to use Ninja in fact.


I've recently been playing with ninja, which does a good job of not being 'just another make' http://martine.github.io/ninja/. To quote their website, "Where other build systems are high-level languages Ninja aims to be an assembler.". It's used as a backend for GYP (Google Chromium) and is supported by CMake as well. I've had good success generating the files manually using something like ninja_syntax.py: https://github.com/martine/ninja/blob/master/misc/ninja_synt....

I also note that Google is working on a successor to GYP, GN, which targets Ninja: http://code.google.com/p/chromium/wiki/gn.


Thanks for the plug! In line with the original post, I'll add that the Ninja manual has a section where we try to convince you to not use Ninja and instead use a more common build system: http://martine.github.io/ninja/manual.html#_using_ninja_for_...


I'm 52 years old. I've had this discussion with dmr, srk, maybe with wnj.

All I know is for years, decades, I carried around the source to some simplistic make. I hate GNU make, I hate some of the unix makes. I loved the simple make.

The beauty of make is it just spelled out what you needed to do. Every darn time make tried to get clever it just made it worse. It seemed like it would be better and then it was not.

Make is the ultimate less is more. Just use it and be happy.


Dunno if the owner of the site will read this, but here's a tip. Don't show a full screen overlay telling me how my visit would be better with cookies enabled.

1) I have cookies enabled. 2) The European law is daft, but since you feel you must comply, do it in a more user-friendly way.


Hmm, maybe the overlay changed since you posted (11 minutes ago), but all I see is a sticky-slim footer with the cookie mention.

I've always liked how rockpapershotgun.com does it...it also uses a sticky-slim footer, but the text reads: "Rock Paper Shotgun uses cookies. For some reason we are now obliged to notify you of this fact. Not that you care"


I had the same full-screen thing on mobile; on the desktop it's indeed as you described.


I know. I don't agree with it much either, and I found this the least intrusive option (I wasn't aware of the issue on mobile). If I can find a better solution, I will change it.

Thanks.


In what way would my reading that blog post have been a better experience for me with your tracking cookies?


Sorry if that came across as negative. I like to help point out UI problems from time to time but I think it comes across as criticism sometimes.


Is there a nice JS lib to detect and put up an unobtrusive hover-over popup in the bottom corner or something? Would that satisfy Euro law?


> Would that satisfy Euro law?

Note that this doesn't exist. Only local law could force sites to implement this, and afaik only the UK had a defunct specification of something like that, which they didn't even follow for government pages. So, just the usual law craze.

> Is there a nice JS lib to detect and put up an unobtrusive hover-over popup in the bottom corner or something?

Well, I don't know if this is nice enough as I didn't use it, but it looks ok: http://sitebeam.net/cookieconsent/


Nothing against make, but I've found that it feels really nice when the majority of your toolset uses the same language. This is what I liked about Rails. Rails is ruby. Bundler is ruby. Rake is ruby. It's all ruby, which allows for a certain synergy, streamlined feel, and less cognitive overhead. I don't blame the js folks for attempting something similar.


Agreed. Mixing languages is fine when necessary, but a single language is usually preferable to me. My dev team has been using Grunt in projects thus far and have been pleased with Gulp in small experiments.


Misses the fundamental point that Make is broken for so many things. To begin with you have to have a single target for each file produced. Generating all the targets to get around this is a nightmare that results in unreadable debug messages and horribly unpredictable call paths.

Nix tried to solve much of this, but I agree it can't compete with the bazillion other options.


It does not miss it, just ignores it. The author states that there are lots of things we can improve but the point is that we have too many variations on the theme without converging to a solution that has few (or no) dependencies and comes with built-in build knowledge and the ability to discover what you want rather than make you declare it.

Such a tool should be:

- Zero (or few) dependencies. Likely written in plain C (or C++, D, Rust) and compiled to distribute in binary form.

- Cross-platform.

- Support any mix of project languages and build tasks.

- Recognizes standard folder hierarchies for popular projects.

- Easy enough to learn. Not overly verbose (looking at you, XML). Similar to Make if possible.

Examples of the auto-discovery: it can find "src", "inc", and "lib" directories, look inside, see .h files, and make some educated guesses to build the dependency tree of header and source files (even with a mix of C and C++). Or it could see a Rails app and figure out to invoke the right Rake commands, perhaps checking for the presence of an asset pipeline etc. Or a Node.js project. It could check for Git or SVN and make sure any sub-modules have been checked out.


The dependencies thing is a killer. I remember a Windows developer co-worker insisting that everyone had the .NET runtime installed, and after shipping it turned out that most of our customers didn't have it installed, to which he finally said, "well, I always have it installed." (To be fair, I should have pressed him harder, and I did ask the question twice, but because I'd never built against the runtime I was unprepared for any challenge.)

Almost every new project I download starts with a sad, manual, and demoralizing installation of a bunch of third-party stuff that you have to google to find out what's missing. And it's not educational at all, because in a few years all these tools will now be obsolete.

(The best project I ever encountered was the Stripe CTF, which almost always used just one command to install a complete working copy of everything you needed and didn't have. I'm still impressed with that.)


Some of these requirements should be built into any build tool. However, most can be added easily enough:

For instance, redux [https://github.com/gyepisam/redux] is written in Go (not compiled for binary distribution, but I could add that), is cross platform, supports any mix of languages and tasks, is very easy to learn.

It uses shell scripts to create targets so everything is scriptable. Stuff like recognizing standard folder hierarchies and auto-discovery can be added with small scripts or tools. It can be as simple as you want or as complex as you need.


> To begin with you have to have a single target for each file produced.

Try this next time (only the pertinent lines are included):

  SOURCES=$(wildcard $(SRCDIR)/*.erl)
  OBJECTS=$(addprefix $(OBJDIR)/, $(notdir $(SOURCES:.erl=.beam)))
  DEPS = $(addprefix $(DEPDIR)/, $(notdir $(SOURCES:.erl=.Pbeam))) $(addprefix $(DEPDIR)/, $(notdir $(TEMPLATES:.dtl=.Pbeam)))

  -include $(DEPS)

  # define a suffix rule for .erl -> .beam
  $(OBJDIR)/%.beam: $(SRCDIR)/%.erl | $(OBJDIR)
	$(ERLC) $(ERLCFLAGS) -o $(OBJDIR) $<

  #see this: http://www.gnu.org/software/make/manual/html_node/Pattern-Match.html
  $(DEPDIR)/%.Pbeam: $(SRCDIR)/%.erl | $(DEPDIR)
  	$(ERLC) -MF $@ -MT $(OBJDIR)/$*.beam $(ERLCFLAGS) $<

  #the | pipe operator, defining an order only prerequisite. Meaning
  #that the $(OBJDIR) target should be existent (instead of more recent)
  #in order to build the current target
  $(OBJECTS): | $(OBJDIR)

  $(OBJDIR):
  	test -d $(OBJDIR) || mkdir $(OBJDIR)

  $(DEPDIR):
  	test -d $(DEPDIR) || mkdir $(DEPDIR)

I've been using a makefile about 40 lines long and I've never needed to update the makefile as i've added source files. Same makefile (with minor tweaks) works across Erlang, C++, ErlyDTL and other compile-time templates and what have you. Also does automagic dependencies very nicely.

> Generating all the targets to get around this is a nightmare that results in unreadable debug messages and horribly unpredictable call paths.

If you think of Makefiles as a series of call paths, you're going to have a bad time. It's a dependency graph. You define rules for going from one node to the next and let Make figure out how to walk the graph.


Could you post an example of what you mean by the single target/file limitation? As stated I can't tell how implicit rules or a rule to build an entire directory wouldn't be a solution, but maybe I'm not understanding the problem.


Sure, consider a compiler that produces an object file (foo.o) and an annotation file (foo.a). Now if a target requires both foo.o and foo.a, you have to create two targets for them (even though it's really one command).

You can use implicit rules, which requires a very verbose makefile; that is what automake and other make-generation tools do. God help you figure out what went wrong.

If you make people go to a directory approach, you've now imposed a new structure on their code. One reason for the multitude of packages is that each one matches its target community better.


Huh? Doesn't this work:

    all: copied o a
    source:
    	echo "message" > source
    foo.o foo.a: source
    	(echo -n ".o: "; cat source) > foo.o
    	(echo -n ".a: "; cat source) > foo.a
    o: foo.o
    	cat foo.o > o
    a: foo.a
    	cat foo.a > a
    copied: foo.o foo.a
    	cat foo.o foo.a > copied
The third rule simulates a compiler producing two outputs. Now if foo.o changes, both "copied" and "o" will be updated, and if foo.a changes, both "copied" and "a" will be updated. (And if either foo.o or foo.a are deleted, the compiler will be rerun, as will everything depending on foo.a or foo.o.)

This is gnu make 4.0.


I don't understand.

If both the .o and the .a are created from another file, wouldn't it be safe to just rely on either one of them? (Obviously, you will need to be consistent in choice.)

That is, if every time a .o is created, so is the .a, then where is the difficulty? Just rely on one (the .o). I could conceive of a scenario where the .a updated but the .o didn't, but I don't know of any tools that really work that way right now. I thought the norm was to at least touch all output files.

Further, if that is happening, seems you are safest having two rules, anyway.


Say you have a long build process and do a quick semi-clean by hand to speed up the next build (not the best idea, but not inconceivable), deleting the .a files but forgetting to delete the matching .o files. Then your next build will produce some novel (to you) error messages that may take a long time to track down. Worse, the command building on the .o and the .a might just say "OK, I'm given a .o without a .a; fine, then I'll do a slightly different thing".

Also, having two rules means duplicating a command:

    foo: foo.a
        baz $(BAZ_OPTIONS) foo

    foo: foo.o
        baz $(BAZ_OPTIONS) foo
That's bad from a maintenance perspective.


Invoking the command twice can also screw things up if you run parallel builds, which you should always do! Not only does it speed things up, it's also a good way to verify that your makefile actually is correct. If your makefile doesn't work in a parallel build it is broken, in the same way as C code that breaks at -O2 and above due to reliance on undefined behavior.

The solution to the multiple-target problem is the built-in magic .INTERMEDIATE rule, whose workings aren't entirely obvious.


Ok, that makes sense. I'm tempted to rattle off the knee-jerk "don't go deleting random crap," but I realize that is a hollow response.

I'm curious how .INTERMEDIATE helps in this case. I did find this link[1], which was a rather fun read down how one might go about solving this, along with all of the inherent problems.

[1] http://www.gnu.org/software/automake/manual/html_node/Multip...


I was able to do that with GNU Make. Granted, the syntax is a bit ... odd, but it was doable.

    ASM = A09/a09

    %.o %.l : %.a
            $(ASM) -o $(*F).o -l $(*F).l $(*F).a

    clean:
            /bin/rm -rf *.o *.l

    foo : disasm.l
    bar : disasm.o
    baz : disasm.l disasm.o
The target baz has both a .l and a .o, both of which are produced in one command. The line that begins with "%.o" starts an implicit rule, which loosely states, in English: "to produce a .o file, or a .l file, run the following ...". $(*F) is a GNUism that maps to the filename of the source (directory part, if any, is stripped). This works. I tested all three targets (foo, bar, baz) with a "make clean" between each one.

(and for the really curious, a09 is a 6809 assembler; disasm.a is a 6809 diassembler, written in 6809; binary is a 2K relocatable library)


Or if you don't like taeric's suggestion you can just touch a .ao file after the line that creates the .a and .o files and have your further rule(s) depend on that .ao file. Have .ao depend on your source. If you still want to be able to type stuff like 'make foo.a' instead of 'make foo.ao' and have it work, then you can make a rule where .a depends on .ao and all the rule does is touch the .a file. Create the same rule for the .o too.
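
A rough sketch of that stamp-file idea (the source file name and the "compile" command are hypothetical):

    # one real recipe produces both outputs, then touches the stamp
    foo.ao: foo.src
            compile foo.src        # hypothetical tool that emits both foo.o and foo.a
            touch foo.ao

    # let 'make foo.o' and 'make foo.a' still work by deferring to the stamp
    foo.o foo.a: foo.ao
            touch $@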


I think the main problem with these articles is that the examples given are exceedingly simplistic, and hence in no way represent real-world build systems. It's very easy to make a build system look nice and clean for trivial examples; where it breaks down is when the software it builds gets more complicated and hacks and extra code accumulate, turning the build system into a big mess.


I've been thinking a lot about build systems lately. I enjoy the discussion that this post has provoked. The post itself is weaker than it could have been, in that it does not stick to a single example when comparing build tools, and does not pin down any criteria for distinguishing between build tools.

If you are interested in a comparison of a few interesting build tools, please check out Neil Mitchell's "build system shootout" : https://github.com/ndmitchell/build-shootout . Neil is the author of the `Shake` build system. The shootout compares `Make`, `Ninja`, `Shake`, `tup` and `fabricate`.

Another possibly interesting build tool is `buck`, although it is primarily aimed at java / android development. See http://facebook.github.io/buck/ . There's a little discussion about `gerrit`'s move to `buck` here: http://www.infoq.com/news/2013/10/gerrit-buck .

Here's some questions I'd ask of a build system:

- is it mature?

- which platforms does it support?

- which language ecosystems does it support? (language-agnostic? C/C++? ruby? python? java?)

- does it support parallel builds?

- does it support incremental builds?

- are incremental builds accurate?

- is it primarily file-based?

- how does it decide when build targets are up-to-date, if at all? (e.g. timestamps, md5 hash of content, notification from the operating system)

- does it allow build scripts for different components to be defined across multiple files and handled during the same build?

- does it enforce a particular structure upon your build scripts that makes them more maintainable?

- how does it automatically discover dependencies, if at all? (e.g. parsing source files, asking the compiler, builds instrumented via FUSE/strace)

- how easy is it to debug?

- is it possible to extend in a full-featured programming language?

- does it let you augment the build dependency graph mid-way through execution of a build?

- how simply can it be used with other tools such as your chosen continuous integration server, test framework(s), build artifact caches, etc?

Many of these criteria are completely overkill for trivial build tasks, where you don't really need anything fancy.


One big advantage of vanilla Make is the community. There are some very nice tools that work well with make (such as https://github.com/mbostock/smash).


I love documentation that has humor, as long as it doesn't get in the way.

What's special about the Make community as opposed to the Grunt or Gulp communities?


For that matter, what's special about the Grunt or Gulp communities?


At least we are moving in the direction of Grunt/Gulp rather than in a maven sort of direction. Many lives have been lost to maven, somewhat of a Vietnam of build tools. You might think you are a Java developer with it, but truly you are a maven servant.


This post rather misses that while Make is simple, making Make do all the things we're used to (e.g. Java dependency management) is not as simple.

I'd like to think people have decided that it's easier to replicate the task part of Makefiles in their environment than to make dependency management and various other language-specific tasks available to Make.


Never underestimate a young developer's need to reinvent reinventing the wheel.


Make is the "assembly" language for build systems. Qt's qmake/cmake target it, and the output produced is horrible, but then using "make -j8" or qt's own "jom.exe -j8" as replacement for the underperforming nmake.exe and you are all set.


Make is still the best build tool for me.

https://algorithms.rdio.com/post/make/


I've never felt hindered by Gradle.

Hindered by the fact I can't add an arbitrary github repo through Gradle? Yes. That seems like it should be solvable though...


Add an arbitrary GitHub repo to what? In what capacity? You mean as a source dependency, an artefact repository, what?


Heh. The one and only time that I ever wrote a parser in my professional career was for a build tool. In my defence, at the time I didn't know much about command line tools, and had only really programmed in IDEs. So when the new project was to be compiled on the command line, I quickly discovered that maintaining dependencies, changing targets and doing all the other things that a build system generally does by hand quickly gets old. Not knowing that autotools, cmake, ant, and about a bajillion other tools already existed to do just this, I wrote my own language, with a parser in ruby, no less :D

I have since repented. I find autotools (with occasionally a script of [ruby|python|perl] to handle something that would otherwise be tricky to do in make or m4, which is then called by the makefile) works a treat. Just don't try to do anything tricky in the autotools files - as I said, boot anything exotic out to a separate tool.

Also, any discussion of build tools without also discussing package management is but half a discussion.


I'm unreasonably fond of Ant - there's plenty of scope for pointless clever-dickery, and there are days where that's all that keeps me going!

Nice to see it mentioned in a context other than "oh god what a mess"... even though, in fairness, many aspects of it are a complete dog's dinner.


Ant is a Turing-complete language in XML.

That is horrifying.

It is bloated, difficult to read, tends towards duplication. It also doesn't do dependency management all that well, doesn't cache build results (so it does a complete rebuild every time), and is difficult to extend.

Not an Ant fan. I have used both Rake and Gradle successfully, and have been much happier with each. Their scripts tend to be (much) more compact, easier to read, and less prone to duplication.


I agree with a lot of your points, but can you explain the part about it doing a complete rebuild every time? It doesn't do that for me (unless I specifically tell it to).


My apologies: I just wrote a basic HelloWorld.java and a build.xml to go with it, and it looks like it doesn't recompile the class unless there is a change to the source .java file. So I was mistaken about that.

Wonder what Gradle's caching, then?


> Wonder what Gradle's caching, then?

Let's hope it's catching another scripting language or two in its upcoming version 2 because having Groovy as the only option does it no favors.


In fact I agree. I would be really glad if somebody persuasively explained to me which build tool is the best one ever, so I could use it always and for everything, even when a shell script would be enough. Yeah, it would be nice. But then, don't we have the same thing with about every class of software? Tens of text editors and no perfect one. Many OSes and nothing sane. So many programming languages with overlapping functionality! And we won't even talk about such things as Linux distributions (and their packaging tools), pepsi&coca-cola… oh, it's not even software.

So, yeah, there are too many build-tools. Whatever.


I hate how spread apart the JS community is, but I'm excited for the day it all starts to come together and we don't have to worry about JavaScript stacks becoming outdated within 6 months.


I appreciate Grunt and Gulp, but still fall back to Make for many of my web projects, even those that require a CSS or Javascript build system.

Here's a Makefile example that utilizes fswatch: http://blogs.mpr.org/developer/2014/02/makefile/
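
A minimal watch target in that spirit might look like this (a sketch assuming fswatch is installed, sources live in src/, and the Makefile has a build target):

    watch:
            fswatch -o src | while read events; do make build; done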


I'm currently working on a build tool that doesn't work using the traditional "makefile" approach. Instead it's designed as a Python library, and you have the full power of Python at your disposal.

Sadly it's not ready for prime time yet (early design stage), so I won't link my highly unfinished project.


I suspect your downvotes are coming because you haven't mentioned http://www.scons.org/. Letting you know in case you're not aware of it. (I would have emailed you out-of-band, but there's no contact info in your profile. Sorry for the noise, everyone else)


Scons uses Python more for configuration than for control flow. I'm sure most people think that's ok, but I find it limiting and a bit too derpy.


In fairness that's the core of the argument: a significant proportion of people that have had to maintain build tools over long periods believe they should only be configuration and not contain any control flow.

The problem with adding flow control is that establishing the dependencies without actually executing the whole process becomes next to impossible.


I suspect that the downvotes are because Python is far too powerful as a build scripting language.


You mean waf?


Fabricate might be interesting to you if you haven't seen it already.

https://code.google.com/p/fabricate/


Rich Felker, the lead developer of musl libc, uses make to generate his ewontfix.com blog. See http://ewontfix.com/Makefile


Pelican [1] uses make and python to generate a blog. Simple makefiles are great.

1: http://blog.getpelican.com/


I get redirected with: "Your experience on this site will be enhanced by allowing cookies".

I need cookies to read a blog post? Don't think so. Probably not worth the read


No. You don't. But EU law requires that if you use cookies for things such as Google Analytics or even just stats, you must inform the user. And you can switch it off.


What about having your text editor do the build for you? (Use `.dir-locals.el` in emacs to compile your LESS on save, for example.)


Why would you need Grunt just to compile LESS?


It is a non-ad-hoc specification of your build, and it allows you to integrate things like LiveReload more easily.


I started using https://github.com/rags/pynt, and what a change. It is the simplest thing you can imagine. Combine it with requests and http://plumbum.readthedocs.org/en/latest/ (or similar) and voilà, you are done.


What about contribution packages of Grunt/Gulp?


So he's saying use 'make' instead of gulp/grunt and then he submits an example where it's as easy as piping through GCC.

He's making the wrong assumption that you don't need to set up a build environment when building with make, but to have gcc you will still also need to install g++ and build tools. Also, he refers to building on Windows using yet another specialist tool, but have you recently tried building anything C++ on Windows when you don't have Visual Studio, or even worse, Cygwin installed?

When you make such a statement, then please show me a makefile that's not 10,000 lines long that will do the same as this, but without NPM and 'downloading half of the internet'.

    gulp.task('scripts', function() {
      // Minify and copy all JavaScript (except vendor scripts)
      return gulp.src(paths.scripts)
        .pipe(coffee())
        .pipe(uglify())
        .pipe(concat('all.min.js'))
        .pipe(gulp.dest('build/js'));
    });
I submit that's impossible, simply because this stuff took 2 years to evolve (for the Javascript toolchain, that is) and a lot of people went through hours of frustration trying out alternative methods.


In GNU make, this might look something like:

    %.js : %.coffee
            coffee -cp $< > $@

    %.ugly : %.js
            uglifyjs $< > $@

    build/js/all.min.js : $(UGLY)
            cat $^ > $@
I'm not that familiar with grunt. Can it do the equivalent of make -j4?

PS. I'll happily agree that collecting the list of script files to operate on -- and the list of script files after the transformation (eg $(UGLY)) -- is slightly annoying in make.
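
For completeness, collecting those lists is usually a couple of lines (a sketch assuming the sources live in src/):

    # gather all CoffeeScript sources, then derive the corresponding .ugly names
    SCRIPTS := $(wildcard src/*.coffee)
    UGLY    := $(SCRIPTS:.coffee=.ugly)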


I'm not familiar with the -j4 argument. (something with debug info?)

Also, your example looks like voodoo to me. Could you explain what the parameters do?

On top of that, tools like uglifyjs still run on NPM/Node afaik.


-j4 says use up to 4 processes in parallel. Make has a jobserver that does parallelism safely with respect to the dependency graph. So for example, even if we have workers to spare, it won't try to build all.min.js until all of the .ugly files have finished building.

The %.foo : %.bar things are pattern rules. [1]

The $<, $@, and $^ things are called automatic variables. [2] They correspond to the first input file for a rule, the target file for a rule, and the complete list of input files for a rule, respectively. Cryptic at first but really handy.

There's a standalone script for UglifyJS. [3]

[1] https://www.gnu.org/software/make/manual/html_node/Pattern-M...

[2] https://www.gnu.org/software/make/manual/html_node/Automatic...

[3] https://github.com/mishoo/UglifyJS2/blob/master/bin/uglifyjs


This is Make 001: $@ is your output file, $^ are your inputs (prerequisites, technically), and $< is your first input file. They're unfortunately ungoogleable, but just remember that they're called "Automatic variables" in the GNU make documentation.

"-j4" means to run up to 4 jobs in parallel. It's orthogonal to the issues at hand.


http://www.gnu.org/software/make/manual/make.html#Automatic-...

The manual is actually pretty good. I've only started to sink my teeth into make.


> this stuff took 2 years to evolve and a lot of people went through hours of frustration

GMake was first released in 1977: http://en.wikipedia.org/wiki/Make_(software)

They've worked on this thing for decades

There are 42,000 issues filed for make. if you could resolve each of these issues in 10 minutes, you'd spend 291 days of frustration. http://savannah.gnu.org/bugs/?group=make

kids these days.


_make_ was first released in 1977, but that was the PWB/UNIX version. The GNU variant of make didn't come along until sometime in the 1980's. It's hard to pin down the exact date of the first release because the developers didn't keep good records prior to the switch from RCS to CVS. However, the earliest ChangeLog entry now is dated July 15, 1988, and 1985 is the earliest date mentioned in any copyright statement in any of the GNU make source files.


My point was referring to the JS toolchain here mostly. I get that for C, make is the tool for the job.


make is a general-purpose tool for describing dependencies for regenerating files. It would be worth your while to learn make, and try it on your example. Understand that it's a declarative language ("A depends on B"; when "B" changes, here's how to update "A"), and not a scripting tool. This is a good thing.

In my experience, make is coupled to Unix. Make is not coupled to C.
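
A small non-C example of the same declarative idea (assuming Graphviz's dot is installed):

    # regenerate the image only when the .dot source changes
    diagram.png: diagram.dot
            dot -Tpng -o $@ $<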


On one hand, make typically comes with built-in rules for .c targets.

On the other hand, make can't cleanly handle #include dependency detection. I doubt that there is any major C project where "make extraclean" (or its equivalent) isn't occasionally necessary.

So yeah it's really not very well suited for C.


Well, #include resolution would require that make be able to parse C, which would add a heck of a lot of complexity, and be unscalable. For instance, you'd need to modify make to parse CSS to teach it about @import, or to parse javascript to teach it about require() (but only if you're using RequireJS)

Or, you could use the C preprocessor option "-M" and its variants[0] to get it to generate make rules with C #include resolution for you.

See also Recursive Make Considered Harmful[1] for a good description on how to set up this in combination with GNU make's "include" facility to autogenerate your per-source #include resolution fragments.

[0] http://gcc.gnu.org/onlinedocs/gcc/Preprocessor-Options.html

[1] http://aegis.sourceforge.net/auug97.pdf


What is wrong with just using the following?

Granted, it would be nice if make had a builtin macro to do this, but it is not too bad to type out.

    depend: .depend

    .depend: ${SRC}
            ${CC} -MM -I. ${SRC} > .depend.X && mv .depend.X .depend

    include .depend

    Makefile: .depend


Because now make can't build a clean source tree.

I believe make parses the entire Makefile before running it.


GNU make restarts when a Makefile: dependency changes. So it works perfectly fine. Try it ...


What happens if you list depend as a dependency of the first target?


> In my experience, make is coupled to Unix. Make is not coupled to C.

However, implementations typically include magic that is heavily biased toward building C and C++ code. As someone who works on a lot of projects, some using those languages and some not, I tend to think it's rather too magical at times. Personally, I'd prefer to have that kind of magic explicitly stated in some standard file that comes with the tool, so that file can be included with a one-liner for those projects that want it but there is nothing implicit going on by default.


I think "magic" is much too loaded a way to characterize it.

The rule applicable to C pertains to building foo.o from foo.c. It amounts to two lines in a Makefile:

  %.o: %.c
          $(COMPILE.c) $(OUTPUT_OPTION) $<
where COMPILE.c and the other variables are predefined to trivial values.

There are other rules useful for C++, and for yacc, etc., but they are just as simple.

You can cancel all the implicit rules with "-r", or cancel selected implicit rules by naming them in the Makefile like this:

  %.o : %.c ;
All this is in the make manual, which is quite good: https://www.gnu.org/software/make/manual/html_node/Catalogue...


> please show me a makefile [...] that will do the same as this

You mean like this?

    scripts: ${SCRIPTS}
            cat $^ | coffee -sc | uglifyjs -cm > build/js/all.min.js
Have you ever wished there was a one-character symbol for the word "pipe", like maybe '|'? Such a symbol would even make all those .'s and ()'s redundant. While we're at it, if our build script is going to be in its own specially-named file, wouldn't it be nice if instead of namespacing under 'gulp', within the special build script file there was a DSL where you could specify the task name and its dependencies with a single character, like ':'? And instead of 'function() { return ...; }', your instructions were delimited with just indentation, like a Tab character?

Your example proves the opposite of the point you're trying to make. Starting from your example and trying to compress it with a DSL, you literally couldn't do better than Make syntax: the "gulp.task('" part is implicit, the "', function() { return gulp.src(" part is a single character (':'), every ").pipe(" is a single character ('|'), and ").pipe(gulp.dest('...'))" is a single character ('>').

> I submit that's impossible, simply because this stuff took 2 years to evolve...

Before and during the entirety of those 2 years Make has been a better tool, for those of us JS coders who didn't dismiss it offhand as being always thousands of lines and only for dinosaur C coders.

Make has many problems, but taking thousands of lines to simply pipe together commands has never been one of them. Having to write .pipe() where in shell you could just do '|' has never been one of them, either.


Quite the contrary: I've had to maintain or incorporate some non-standard build processes into several build systems (mostly scons, waf, gyp), and in each one it was difficult or impossible to express what I wanted because the build system wasn't as general as Make.

The single most valuable thing Make has going for it is that the primitive is a Unix pipeline. Anything that can be built with tools you can invoke from your shell can be built with Make, and the language of action is about as universal as it gets for Unix-like systems.

Yes, the dependency syntax is somewhat confusing, and it's not obvious how to understand and debug Makefiles at first glance, but the GNU make documentation is decent, and time spent learning the language and the tools (e.g., "make -d") is much better than trying to reinvent the system, especially without understanding it. Every reinvention I've had the displeasure of working with missed some important (if not widespread) use case.


I think he's saying that re-implementing make in ruby is a bad idea.

Not that I agree with him. Being able to do some printf debugging (or even use a real debugger!) to troubleshoot issues is a big plus that he doesn't mention.


Indeed. Responding to the fetishization of web technologies by fetishizing Unix utilities is missing the point. They're both tremendously important and tremendously useful, but trying to ignore their shortcomings doesn't help anything.


On Windows I build using TDM-GCC (MinGW), and SCons. No MSVC needed.


    $ cat Makefile
    production:
      brunch build --production

    test:
      brunch build
      karma start



