Stop the autoconf insanity – Why we need a new build system (2003) (freecode.com)
55 points by ahomescu1 on Feb 11, 2014 | 83 comments



The comment "Put the blame where it belongs: Automake" below the original article sums it up pretty succinctly. Quoting:

All the complaints in the article are PRECISELY the kinds of complaints that autoconf programmers worked very hard to avoid. For example, it's IMPOSSIBLE to have version skew with autoconf because the configure script is completely pre-generated before being shipped. As far as the user is concerned it's a simple sh script.

ALL of these problems stem from the "automake" monstrosity that creates many more problems than it ever solved. Unfortunately people now tar autoconf with the same brush as automake and libtool whenever they run into problems with these. Sigh.


For my projects I built a make system that uses gmake metaprogramming to easily define rules for most cases: you include a template make file that expands the target definitions into make rules.

http://mosermichael.github.io/cstuff/all/projects/2011/06/17...
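
A minimal sketch of the general pattern (the variable and template names here are made up, not taken from the linked project): each target is declared with a couple of variables, and a shared template expands them into real rules via $(eval $(call ...)).

    # shared template: $(1) = program name, $(2) = its sources
    define PROGRAM_template
    $(1): $$(patsubst %.c,%.o,$(2))
    	$$(CC) $$(LDFLAGS) -o $$@ $$^
    endef

    # per-target "definition"...
    hello_SOURCES := hello.c util.c
    # ...expanded into an actual make rule:
    $(eval $(call PROGRAM_template,hello,$(hello_SOURCES)))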

The catch, of course, is "most cases"; other stuff is still possible to do, but things become hacky.

Another downside is cross compilation: my system does not handle it effectively (though it works for Linux i686, x86_64 and Cygwin - many surprises between these setups!). Most build systems I know don't support cross compilation - except for automake, the terrible.

If you do Debian packages then you are strongly encouraged to use automake, because of cross compilation.


I'm seeing some projects shipping without a configure script, which is annoying.


We don't need a new build system, we need a new mentality.

The problem with build systems is that they don't cover the entire scope required to actually build software reliably. Ideally, you want to take just your code as input and produce a target, but the reality is that you're taking your code, plus an environment with a potentially infinite number of configuration options, and you're asking the build system to produce a target. This is effectively saying "Here's some code and some shit, please do your wizardry and make me something that looks like this." Is it any wonder that every proposed solution to this difficult (impossible?) problem gets it wrong?

The correct solution is to declare the target you want, then build up the dependencies you need as input to meet that target and declare them explicitly, then perform the build in that isolated environment to ensure that no undeclared configuration options can alter the result of the build. This is what Nix (http://nixos.org/nix/) does, and by derivation, Guix (https://www.gnu.org/software/guix/), and Debian's ReproducibleBuilds is also attempting. The build process becomes effectively a pure function, free of unwanted side-effects.
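
As a usage-level illustration (the "hello" attribute is just the stock nixpkgs example), every declared input feeds into the hash that names the output, so the same inputs always yield the same store path:

    $ nix-build '<nixpkgs>' -A hello
    /nix/store/<hash>-hello-<version>   # output path is a function of every declared input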

Nix doesn't actually perform builds by itself, except for the trivial ./configure && make. It piggy-backs on existing build systems via bash scripts if needed, but it completely manages the environment in which such scripts run, so its local side effects are controlled, similar to how you might use the ST monad in place of IO in Haskell, perhaps.

I find Guix a bit more interesting because it can take on the whole problem - including the build system. While it can piggy-back on existing build systems in the same way Nix can, you can also write your own in Guile (or some other language and invoke from Guile). It doesn't really make sense to depend on the old cruft under this new mentality, because things like pkg-config are obsoleted by known, explicitly declared dependencies. The build process can be simplified (and probably sped-up).


How portable is nix? I haven't looked into it much, but from a (possibly mistaken) first impression it looks more like a replacement for yum or apt than for autotools. Despite being crufty, what I find valuable about autotools is that, given one source directory, I can successfully compile and link the package on many things: various distributions of Linux, the BSDs, Solaris, HP-UX, AIX, OS X, nearly whatever else you care to dream up. Does Nix actually do that (or target it, if not yet)?


Nix is indeed a replacement for package managers, but the point still stands. Package management and software builds are not completely disjoint - the latter depends quite heavily on the former.

Nix itself should work on BSD (I think it's tested on FreeBSD), but Guix is Linux-only so far, as it is still in early development. I'm pretty sure Guix intends to be fully portable, even to Hurd in the long term. The ideas behind Nix are platform-independent, though - there shouldn't be any reason it can't be ported to other platforms.

As for building individual software on other platforms - they require different targets. (Or in fact, just an alternative configuration of the same piece of software on the same platform requires a different target). This is intentional though - targets have an identity (a SHA1 of their entire inputs), so any new platform will require a new package.

The package configuration format is quite useful in that regard though, as you can derive new package definitions from existing ones, so it should be possible to make partial "generic" configurations, then derive the platform specific targets from them. There's quite a bit of configuration for each platform though, as the dependencies will all have different identities, all the way down to the compiler and the kernel.

Another potential option would be to create a template system to automatically generate the Nix configuration for multiple platforms, which would be somewhat similar to what autotools et al are doing today. The difference being, once one person has performed a build and tested it for a platform, anyone else should be able to reproduce it exactly, or to explicitly specify where he wants the target to differ.

In effect, Nix removes the "hidden knowledge" that goes into building software by requiring that you specify even the commands you used to perform the build.

Of course, Nix isn't a panacea - it introduces a new set of problems, including some social ones. It might take more effort to write portable targets for each platform - but for that trade off you get reliability and simplicity for the users of the software.


He again does the typical thing and runs "./configure --prefix=/opt". The configure script runs for a while, then exits with an error which basically translates to "You have an autoconf version which is three weeks old; please upgrade", but this is displayed in the most cryptic manner possible.

This is simply incorrect. If you already have a configure script, then you don't need autoconf installed at all, let alone a particular version of it. autoconf is used for generating the configure script, not running it.
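
To make the split concrete (the package name and prefix are just placeholders):

    # maintainer's machine - autoconf/automake are needed only here:
    autoreconf --install      # generates configure from configure.ac
    make dist                 # the tarball ships the pre-generated configure

    # user's machine - only sh, make and a compiler are needed:
    tar xf foo-1.0.tar.gz && cd foo-1.0
    ./configure --prefix=/opt
    make && make install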


I'm positive that I've seen configure scripts that try to check whether they, themselves, are out of date, and regenerate themselves if so. That's the only time I've seen an error about autoconf version skew.

To be fair, I've only actually seen this on scripts I was writing - never seen it happen to something that passed make dist.


I don't remember exactly (my autoconf/automake-fu has gotten weak these past years), but there's a macro, AM_MAINTAINER_MODE, that adds --enable/disable-maintainer-mode as an option to your configure script. That option determines whether rules get added to the final Makefile that check whether Makefile.am is newer than Makefile.in, Makefile.in is newer than Makefile, or configure.ac is newer than configure (etc.), and then attempt to regenerate them.

I don't quite remember what the default is if you don't include AM_MAINTAINER_MODE in your configure.ac. So yes, you could have indeed seen that happen, but there are ways to make it not happen.
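
For reference, a minimal configure.ac using it might look like this (the optional [disable] argument, supported by newer automake, turns the rebuild rules off unless --enable-maintainer-mode is passed):

    AC_INIT([foo], [1.0])
    AM_INIT_AUTOMAKE([foreign])
    AM_MAINTAINER_MODE([disable])
    AC_PROG_CC
    AC_CONFIG_FILES([Makefile])
    AC_OUTPUT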


Automake will regenerate and rerun the configure script if configure.ac is newer than configure, which is normally only the case if you're either modifying configure.ac or if the files are packaged incorrectly (and automake makes packaging a release correctly easy).


Interesting that of the alternative tools mentioned in the article (SCons, Cons, and A-A-P), only SCons has been updated since 2003 (according to Freshmeat), and its last update in 2010 appears to have been mainly to redo the version numbers and drop Python 2.4 support.


https://code.google.com/p/waf/ is the spiritual successor to SCons. It started off as a fork of it (due to performance problems in SCons) and then became a full project on its own.

It can do everything autotools+make can and you only have to code your build in Python rather than Makefile + M4 + shell scripts. I've been using it for many years and it is technically superior to any build system I've seen. Shame the author of it doesn't market it more -- with some hype it would easily have been the de facto standard build system by now.


Replacing make with python is a huge step back. The makefile language is simple yet extremely powerful for describing a dependency tree such as that for building an executable from sources. Perhaps its strongest point is that it is completely agnostic to what the different steps actually do, rather than consisting of a predefined set of canned rules/tasks with, if you're lucky, a convoluted way of adding your own using e.g. python (or worse, java). This means that as long as a shell command can turn a set of inputs into an output, describing it takes (usually) just two lines of makefile: one naming the target and prerequisites, one providing the command to perform the transformation. If a particular rule is too complicated for a few shell commands, invoking a separate script (or even a just-built executable) is trivial. Every other system I've looked at needed a comparatively enormous amount of code for adding even the slightest tweak to the built-in rules.

GNU make is actually two different languages coexisting in the same file (the makefile), a declarative language for describing the targets and their dependencies as well as a functional language for the variables/macros. Once you realise this, a whole new world of possibilities opens up. If more people took the time to actually understand proper use of makefiles, perhaps we'd see fewer poorly reinvented wheels.
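
A tiny example of the two layers side by side (file and target names made up):

    # functional layer: compute values with make's macro functions
    srcs := $(wildcard src/*.c)
    objs := $(patsubst %.c,%.o,$(srcs))

    # declarative layer: describe targets and their prerequisites
    app: $(objs)
    	$(CC) -o $@ $^

    %.o: %.c
    	$(CC) $(CFLAGS) -c -o $@ $<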


I've written a few complex Make based portable build systems over the years and in my experience Make script is its major limitation.

Make is a great system for declaring dependencies - the basic syntax is wonderfully concise (not that that's everything) and clear, and the system is a great platform on which you can build your build system (unlike Scons which tries to abstract away all sorts of things and just ends up hiding them).

The problem is that when you are trying to do anything more complicated than simple rules, you are lacking basic language features. You don't even get function calls - the templates work under many circumstances, but not all (and are a pain to debug). Also, don't get me started on invisible syntax errors (tabs vs spaces), and the only error message Make has: 'Missing separator. Stop'.

Writing a build system in Python loses you that concise declaration syntax (but again, characters typed isn't everything; as long as you maintain clarity it doesn't matter), but gets you much more flexibility to produce rules under complex circumstances, which is what you need for portability.


Your comment betrays your ignorance.

Firstly, you call it "make script," suggesting that you view makefiles as a typical (procedural) script that is executed from top to bottom. I base this on my experiences working closely with others who also used the term "make script," all of whom suffered from this misconception.

Secondly, GNU make does have function calls. Look up the $(call ...) construct. When the built-in functionality is insufficient (and of course such cases come up), it is trivial to call an external script, written in your language of choice, using either the $(shell ...) syntax or as part of a target recipe.
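
For example (the macro here is made up, just to show the shape of $(call ...)):

    # a user-defined "function" taking two arguments
    objects_in = $(patsubst %.$(2),%.o,$(wildcard $(1)/*.$(2)))

    LIB_OBJS := $(call objects_in,lib,c)   # all lib/*.c, named as .o files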

As for "Missing separator. Stop.", it is not fair to dismiss an entire tool because of one slightly obscure error message.

I also find it ironic that you complain about tabs vs spaces, then go on to suggest using python, which also has invisible whitespace as part of its syntax.


It's obvious you have exactly 0 experience with waf, yet feel qualified to dismiss it based on objections you have against completely different make alternatives.

Make's rule system completely breaks down when you need to create a zip file of all *.c files in a directory tree. But it's a side-point, for complex software, build variants, configuration, installation and distribution are much harder problems to solve than dependency-based compilation. Waf is a replacement for the whole autotools suite, not for GNU make.


You're right, I have no experience with waf, and I'm not dismissing it. I'm saying much, if not most, criticism of make is unfounded and misplaced.

Want a zip file? Use a command like "git archive -o foo.zip HEAD".

Why do you insist on having a single tool do all your tasks? A good craftsman has an arsenal of tools and picks the right one for each job.


Too bad I'm not using git. I still want a zip file of a directory. Really, don't say "use a command like"; actually show me exactly how you would accomplish that task, using (GNU) make, in an efficient and cross-platform way without causing unnecessary target rebuilds. You can't? Now you see why make is insufficient.


Let's assume for sake of argument that you are right (although as ori_b shows, you are not). Why do you expect this functionality from _make_ specifically? Nobody ever claimed it can do everything. In fact, it's a good thing it doesn't even try to do tasks for which other, purpose-built tools exist. Creating compressed archives is a very different task from managing dependency graphs. Make was built for, and excels at, the latter. For the former there are better choices (and they can be called from make if need be).


> Creating compressed archives is a very different task from managing dependency graphs.

What? Creating a compressed archive is exactly the sort of thing that Make was designed for. A binary file or library, if you squint the right way, is just an archive of object files, which are just transformations of .c files.

Make's biggest problems for me (and the reason that I'm slowly gravitating towards writing a replacement for my own purposes) come from the fact that it can't handle inferring dependency graphs correctly. The usual 'gcc -M' hacks that work for C source don't work for code where dependency tracking is more than just an optimization. For C, remember, as long as you have the appropriate headers and .c files, you can build a .o file completely order-independent.
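
For reference, the 'gcc -M' idiom being referred to typically looks like this (GNU make and gcc assumed, $(OBJS) being the list of object files):

    # emit a .d fragment next to each .o, listing the headers it pulled in
    %.o: %.c
    	$(CC) $(CFLAGS) -MMD -MP -c -o $@ $<

    # include the generated fragments; the leading "-" ignores them on a clean build
    -include $(OBJS:.o=.d)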

If you have a dependency digraph that looks like this:

    a -> b, c, d
    b -> c, d
    c -> d
    d
Then you need to build in this order:

    d, c, b, a
Without some way of explicitly generating these dependencies ahead of time (which, I admit, is doable), you can't get make to do this properly. The best hack I can think of would look something like this:

    depfile: $(SOURCES)
            update-deps -o $@ $?

    include depfile
But at that point, you're pretty much generating a makefile with another tool, scanning all changed sources, at which point you might as well move on to just building the DAG and running the build on them while you're at it.


Actually creating the archive is not make's job, which is what bjourne seems to be complaining about. Collecting a list of files and passing it to a command is what make does. It doesn't care if that command is a compiler or an archiver, nor should it.

Regarding your chained dependencies, how do you expect _any_ tool to figure this out without either being told or examining the inputs (which implies knowledge of their formats)? Relegating this to an external tool seems perfectly appropriate to me.


    zipfile.zip: $(wildcard dir/*.c)
            zip -ur $@ $?
Note, $? will only include the prerequisites that are newer than the zipfile, and therefore, will only update the files that have changed. You will not be re-zipping things repeatedly.


Thanks for trying, but 1. it doesn't work unless you have zip installed, 2. it doesn't recurse the whole directory tree, 3. it doesn't rebuild the target if a file is renamed or deleted, 4. it uses volatile timestamps instead of content hashes to detect targets needing an update. And it is GNU make specific. Then what if you want bz2 compression instead of zip?


1. Are you seriously complaining over make itself not creating the zip file, duplicating the functionality of the 'zip' tool?

2-3. This command does what you want: zip -R -FS foo.zip '*.c'

4. In practice timestamps work just fine unless you deliberately sabotage them. To use content hashes, the hashes of the input files when creating a target would need to be stored somewhere for future reference. This storage place could just as easily be corrupted. Make is a build tool, not a VCS. Treat it as such.

Yes, some things are specific to GNU make, but it is the de facto standard, something waf can only dream of. If you're going to complain about something being non-standard, your suggested alternative should be more of a standard, not less.

If you want bz2, why did you ask for zip?
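
For what it's worth, the two suggestions combine into a single rule (a sketch; GNU make plus Info-ZIP's zip assumed, and it still won't re-run on a pure deletion, since make only compares timestamps of the listed prerequisites):

    SRCS := $(shell find . -name '*.c')

    foo.zip: $(SRCS)
    	zip -R -FS $@ '*.c'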


Do you have some link to a good tutorial/guide showing those possibilities?


The official manual is mostly easy to understand. There are various 3rd-party books available, but I haven't read any of them.


Yeah, scons looked well-designed when I encountered it in 2012 or so, but it also looked dated and unmaintained. Good to know someone's picked up the torch.


The releases on http://freecode.com/projects/scons are out of date. The last release was Scons 2.3.0 in March 2013, not 2.0.1 in 2010. See http://www.scons.org/ and http://www.scons.org/CHANGES.txt. It looks like Scons has one release every year or so.


Over 10 years later, and this is still relevant. It's scary how entrenched 'good enough' solutions become.


It's also kind of amazing how terrible most of the replacements people developed are. Some are significantly worse than autotools, which is no mean feat.

I blame make's awful syntax.


s/Some/All/

I hate using autotools, but every time I touch a project that uses something else (and that is too complex for a ten-line Makefile), I inevitably end up wishing that it used autotools. I suppose it shows that sometimes a few decades of polishing a turd does sort of work.


What about cmake?


CMake is interesting because the CMakefiles are just a different type of editable/configurable build script (i.e. slightly different Makefiles). It's still an interpreted step between the build instructions and the output you actually use to build, which isn't meant to be edited and therefore obfuscates part of the process... just like autoconf. This is both a strength and a weakness: there are so many external modules for CMake (e.g. the various FindX) that they work great most of the time, but when they break they are more difficult to fix than an atomic change to the actual compiler flags in a Makefile.

I don't think there will ever be a standard build tool, simply shunts to get the needed functionality from others. Eventually all build tools will exist in each other and the only interface will be recursion.


So far CMake has served us well. I've been using it for 8 or so years now, for small and larger systems.

I especially like that one file allows me to develop on XCode, somebody else on Visual Studio and finally deploy on Linux/gcc. One config file creates native projects on all 3 platforms, no manual hacking.


CMake has the nice property of being able to generate non-Make build systems, but otherwise is more of just "autotools done differently" rather than "autotools done better". In practice I haven't found it to be any more pleasant to use than autotools.


Have you tried using cmake on e.g. IRIX?


Yes! A former coworker dug up my old O2 running a 6.5 variant, installed cmake and built BRL-CAD ( http://brlcad.org/ ) with it. Supposedly, it worked quite well (but took a very long time, I believe it was a low end cpu, I want to say 183mhz). Given that I rewrote the build system in 2003 for BRL-CAD to use automake instead of cake, I'm very impressed by cmake.


Yes! But I'm not so sure we can entirely blame the make syntax. Something made the CMake people believe that inventing a new macro language / m4-redux was a good idea. I'm thinking that either m4 is a mind-virus or they're serving wine in lead glasses at the annual build-system conferences.


Or: a build system is harder than it seems.

Makefiles work fine in the trivial case, the problem is that things quickly become complex. Automating that complexity seems like it should be easy but, as it turns out, it isn't.


Is there something about the (admittedly very difficult and thankless) task of automating builds that justifies re-inventing the wagon-wheel of languages? Because that was my specific complaint, and there's a reason why it was specific.


Because it seems easy. And easy things done "wrong" require re-invention.

There should be a way to $FOO. Well, there's a way to do it in a Makefile, but figuring that out is harder than it should be, and it doesn't make sense when you finally do figure it out.

"Well that way is stupid. Here's how I'd do it:"


CMake started from the need for a suitable build system for a medical visualization application[0][1] that Kitware was working on, iiuc. How they could be using Tcl (an established, purpose-built embeddable scripting/control language) in the Insight project, come to the conclusion that they needed a custom build system, and then, when it came to a control language, decide to roll their own is a mystery to me, and a dropped ball, as far as I'm concerned.

I like cmake, and used to use it exclusively, but after running into so many issues with their homebrew scripting/configuration language, I've mostly moved on. I wouldn't be surprised if I end up coming back, and I'd encourage everybody to give it a shot (the ease of typical simple Makefile generation, but with extra muscle), but the omission of Tcl as a control/config tool has always baffled me.

  [0] http://en.wikipedia.org/wiki/CMake#History
  [1] http://en.wikipedia.org/wiki/Insight_Segmentation_and_Registration_Toolkit
edit: formatting


Most of the so-called modern build tools avoid make entirely, so its syntax cannot be to blame for their miserable quality.


I was thinking more along the lines of: if writing for sh and make were less painful, then fewer developers would accept using such awful build systems instead. Autotools included.


I find neither sh nor make particularly painful. Python, however, I cannot stand the sight of.


I don't know about this. Back in 2003 I got all spun up on scons, and later I worked at Google with their build system, which was pretty amazing in what it could do. At the same time I discovered that you can't run Windows software, PC hardware, and a Windows OS that are skewed by more than a few years (seriously true for games, less true for productivity apps). And yet I can download thttpd and type 'make' in the directory and it just builds.

I've concluded that you can either embrace highly complex evolving systems and build tools which follow that evolution in such a way as to provide functions. Or you can make "point in time" sorts of things that are ephemeral in their ability to do what they do. Not a good choice but the only two that seem to be durable.

I completely agree that it is a challenging place to be.


I don't have many problems with the auto* toolkit.

Yet, there is one problem which has cost me lots of money for buying beer and getting myself filled up: why can't auto* check for all libraries and then output an aggregated "libx,liby and libz missing, libfoo outdated" instead of the configure - apt-get - configure - apt-get cycle?


Because it's just a bunch of macros. To create an aggregated list, they would all have to, instead of printing out what they want, add their info to some global variable, whose name they all agree on in advance and probably with some kind of standard format. Which isn't to say it couldn't be done, but it would be much more complicated.


It wouldn't take much at all to have the AC_ERROR macro append the message to a global variable, set a global error flag, and continue. At the end of the script, if the error flag is set, the aggregated error messages would be printed. The variable names would only need to be coordinated between AC_ERROR and one other macro called at the end.
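
A hedged sketch of that idea in configure.ac terms (the MY_DEFER_ERROR macro and the my_* variables are invented for illustration; only AC_DEFUN, AS_IF and AC_MSG_ERROR are standard autoconf):

    dnl collect failures instead of aborting at the first one
    AC_DEFUN([MY_DEFER_ERROR],
      [my_missing="$my_missing
      * $1"
       my_failed=yes])

    dnl ...call MY_DEFER_ERROR([libfoo >= 1.2 not found]) wherever
    dnl    AC_MSG_ERROR would normally stop the script...

    dnl at the very end, report everything at once
    AS_IF([test "x$my_failed" = xyes],
          [AC_MSG_ERROR([the following requirements were not met:$my_missing])])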


it's interesting how some software problems have garnered many really good solutions while others are still mired in stuff like this.

e.g. no matter what your preferred source control tool is I think we can all agree that there are some pretty good options out there these days. compare that to the insanity that prevails in packaging in every corner.


Text editors are another classic example of this conundrum. Some problems seem to attract problem-solvers, and some do not.


I don't follow. We've had TextWrangler, SubEthaEdit, TextMate, Sublime Text, and Smultron alongside rapidly improving IDEs like Coda, XCode, VS, LightTable, and a host of web-based environments like JSfiddle, Cloud9IDE, and Codepen. Text editors don't just attract attention, they attract a lot of attention, and I'm glad for it.


I have used Emacs and vi since the 1980s, and view this differently. A lot of the work you refer to was largely duplicative.

Of course, I'm not talking about IDEs or web-based editors. You're right though, I do enjoy the innovation in web editors.


First of all, drawing a line between IDEs and text editors is a surefire way to ignore the kind of innovation that is happening with regard to text-editing. If you insist on doing this then in your search for innovation you've drawn a box so narrow that you should not be surprised to find nothing in it. There are only so many pure text-editing actions with positive ROI and E&V mastered the bulk of them long ago. Even so, E&V haven't been standing still. It's true they refuse to innovate in certain directions:

1. Out-of-the-box integration with modern toolsets (0-configuration documentation, smart autocomplete, build, debug, jump-to-definition, etc)

2. Present an interface that is easily discoverable by GUI-minded folk

3. Take advantage of the opportunities afforded by non-textual displays

but each of these is done "poorly" in E&V for a good reason, as per your text editor / IDE distinction. They can't change these things without stepping on the toes of their power users (i.e. all of them) and leaving behind part of what made them great, which is why this kind of innovation is happening away from E&V. Most of the apps I listed address between 1 and 3 of the above issues. Just because the young upstart editors don't support keyboard input and customization with the fluidity of E&V doesn't mean they are purely derivative. They do something that a lot of people want done that E&V don't do, and they do it well. That's what innovation is all about.

So I ask again what it is that you would consider innovation and why the projects I listed don't count.


You know what, I don't really disagree with what you are writing.

I'll repeat that we're partly talking past each other, but you are correct that the kind of (valuable! useful!) innovations you list tend to happen by an upstart and then are copied into other programs. So it is healthy to have a field of contenders rather than two dinosaurs.

You're also pitching this as some kind of argument, which I think is your mistake.

Never did I say that people shouldn't make new text editors. I remarked that some problems seem to attract developers, and some don't. I think that structural issue is more interesting than editor feature evolution.


Yeah, I think you're right, we were talking past each other.


Text editors are a solved problem since Emacs was written.


If that was true then people wouldn't be trying so hard to build alternatives.


Haha... Oh. You're not joking are you?


Emacs has an infinite number of features; but the end user may be required to write a few thousand lines of configuration file to enable some of them :)


The power of Emacs is that the configuration file is more than just that. It is code that becomes part of the editor and can replace pretty much any part thereof if the user so desires.


You got the joke!


I'm surprised there has not been any mention of Gitian, the build system used by Bitcoin to perform exactly deterministic builds. The purpose of this is to enable multiple developers to prove that they are all signing the same binary.

http://gitian.org/


As build systems go, this seems like a promising experiment: http://sourceforge.net/p/meson/wiki/Design%20rationale/

tl;dr:

- DSL for builds with well-defined semantics; leverages the existing toolchain

- design constraints for the system that make sense, i.e. speed, portability, usability and common sense

- does not try to reinvent the wheel, but rather simplifies the usage of existing tools

- the build config it generates on my Ubuntu box seems to really deliver on the speed promise; I have no benchmarks to quote, but a small C++ codebase using all sorts of dependencies including Boost compiled nearly instantly


Running "./configure && make && make install" usually results in a working installation of whatever package you are attempting to compile.

followed by

the auto tools are constantly a thorn in the side of users and developers alike.

Which is it? Do the autotools "usually [result] in a working installation" or are they "constantly a thorn in [our sides]?"


"Usually" is not enough in that case. The developer is potentially dealing with a big number of users. Even a small percentage of people having problems with the build makes for a very suboptimal situation.


https://xkcd.com/927/

But on a serious note, I really want to see Gyp become more popular, for the simple reason that integrating multiple Gyp projects is essentially zero-effort.

It's a beautiful way of working, even though Gyp certainly has room for improvement.


AFAIK, Gyp's biggest user is Google (Chromium uses it, and maybe others). Are there any non-Google projects that use it?


In addition to the big google projects using it (Chromium, V8, WebRTC/jingle, etc), there's also Joyent's Node.js (and forks, + numerous node packages containing native code), as well as numerous private projects that I've been involved with. Unfortunately, it hasn't taken off too much in the open source community apart from the above mentioned big name projects. Better documentation and more visibility could probably make that happen, though.


That's an old school build approach (it was 2003).

The modern approach is: apt-get install (or your OS equivalent).

It's 2014. I can't remember the last time I needed to install a package from source. Worst case scenario I need to add a repo.


> Joe GNU/Linux User has just downloaded and untared this package.

Regular users are not supposed to install software like this. The only sane way to administer a distribution is to make sure everything passes through the official package manager. This leaves the developers and package maintainers to deal with autotools, and I don't hear them complaining (very loudly[1]).

Autotools is simply good enough for the job, and instead of being some sort of hated legacy it's sometimes even preferred for new projects (cough [2] cough).

[1]: https://blog.flameeyes.eu/tag/autotoolsmythbuster/

[2]: https://github.com/stefantalpalaru/vala-skeleton-autotools


Software freedom is the ability to do whatever you want with the code, for example compiling it. I'm not saying there's any sort of license violation here, but saying users are "not supposed to install software [from source]" is not in the spirit of software freedom.


Granted, but the freedom to ruin one's system is not a freedom I would promote.

Installing software from source is a good thing, you just need to go through the intermediate process of creating a package for your distribution first.


Not every user has root/sudo access so they can install software system-wide (it's even arguable whether they should, instead of keeping those programs user-local).


Wait, what? I've been downloading, untarring, configuring, making, etc since as long as I've known anything about any unix. Since when is this the wrong way to do it?


Since we're not running Slackware any more and we have understood the need for package managers with proper dependency resolution capable of doing system-wide upgrades with one command.


In a perfect world, every single piece of software ever written is in every single distribution's package manager.

This is not always the case in reality, however.


In a perfect world, the following also holds:

- The versions in the repositories are always up-to-date with the latest numbered version.

- Users never want/need any features or bugfixes that haven't made it into the latest numbered version.

- Nobody ever needs to install an older version of anything, because new versions never introduce compatibility-breaking changes. Which of course are never necessary anyway, because software designers always have perfect foresight, so the initial design of every package is always something that will remain perfectly suited to it forever.


All those problems are fixed by providing a way for the users to maintain a local package repository (Like Gentoo does with its local overlay[1]).

The only downside to that is that it's more work to create new packages and you end up with part user / part maintainer hybrids that no longer fit the "regular Joe User" model.

[1]: http://wiki.gentoo.org/wiki/Overlay/Local_overlay


Of course not. The solution is for the user to create new packages from arbitrary tarballs instead of relying on "make install" and "make uninstall". But that's not a regular user anymore, is it?


I administer my system with apt-get... But when I develop for an old release of Red Hat, I need to compile some packages to get a couple of recent libraries.


Last time I built anything that required one of these autotools monstrosities, which was LAME I might add, it gave me some obscure error about not having a suitable type for a signed 64-bit integer. Which was unbelievably stupid since it had just "found" uint64_t for its unsigned counterpart. So I opened its nearly megabyte-sized, thirty-two-thousand-line configure file to track down the problem. I seem to recall that in the end I fixed it by correcting the placement of some braces around its many nested ifs.

Now I avoid anything that needs anything like these tools or anything that has a step before running configure from a source code checkout.


There is nothing wrong with autoconf or libtool.

Take a look at how sane developers (nginx, CPython, etc.) are using it.



