I believe the core problem here (the one that led to containerization, application images, and the like) is that correct packaging for most distros is hard.

There are tools like fpm or even checkinstall that can build simple, good-enough-but-not-really packages, but I think maintaining "proper" Debian packaging requires some pretty arcane knowledge that's spread across various pieces of documentation, and (maybe I'm just stupid) a lot of trial and error.




Completely agree with you regarding Debian packaging. I googled around and found at least 5 different official guides (on the Debian wiki), all using slightly different approaches. I tried 3 of them before giving up, as none of them seemed to work.


Oh? The basics are not that hard. dh_make will do most of the work for you.
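
For the simple case, the whole flow looks roughly like this (package name and version are made up, and you'll still need to edit the generated debian/ files):

    # assumes a source tree in mypackage-1.0/ with a working Makefile
    cd mypackage-1.0
    dh_make --createorig -s    # -s = single binary package
    # edit debian/control, debian/changelog, debian/rules...
    dpkg-buildpackage -us -uc  # unsigned .deb ends up in the parent dir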


The basics may not be that hard, but the multitude of overlapping tools and frameworks that sit on top of them and try to make things easier is seriously confusing and off-putting. For instance, there are build tools called dpkg-buildpackage, debuild, git-buildpackage, pbuilder, cowbuilder, and doubtless some others I've forgotten about. Different guides will recommend different ones.

Once you've made a package, you either try to get it into Debian, or put it in your own repository, both of which come with additional challenges.

I've done some Debian packaging before - both to go into Debian and to put in PPAs. I've given it up: the effort was too much and the rewards too little.


The sad thing is that this is 100% a problem with the tools for building Debian packages (same goes for RPM). The tools could be replaced without sacrificing binary compatibility at all.

If any aspiring hackers are around, I suggest taking a look at Arch and its makepkg/PKGBUILD tools. Pretty much a simple shell script to define a package, and a uniform tool to build it.
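
To give a sense of it, a complete PKGBUILD for a hypothetical program can be this short (name, URL, and checksum are made up); makepkg -si then builds and installs it:

    pkgname=hello
    pkgver=1.0
    pkgrel=1
    pkgdesc="Example program"
    arch=('x86_64')
    url="https://example.com/hello"
    license=('MIT')
    source=("https://example.com/hello-$pkgver.tar.gz")
    sha256sums=('SKIP')    # placeholder; use a real checksum

    build() {
      cd "$pkgname-$pkgver"
      make
    }

    package() {
      cd "$pkgname-$pkgver"
      make DESTDIR="$pkgdir" install
    }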


Debian packaging basics aren't hard once you know them, but starting from scratch, it's hard to collate the information. There's lots of stale information out there (including on the Debian wiki) and disagreement about how to do it - debuild? dh_make? dpkg directly? tar/ar manually?

Similarly, if you want to package up something without a Makefile, you may as well go home. My first packaging attempt was basically to install a tarball at a location (in-house use only), and it wasn't clear that I had to write a Makefile to do this first. And then you get introduced to the bizarre world of make, with all of its idiosyncratic rules and behaviours.
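
For anyone else stuck at that point: the Makefile can be almost nothing, just enough of an install target for the packaging tools to invoke (paths here are made up):

    # minimal Makefile whose only job is to copy files into place;
    # DESTDIR is supplied at package build time. NB: recipe lines
    # must be indented with a tab, not spaces.
    PREFIX ?= /opt/mytool

    install:
    	install -d $(DESTDIR)$(PREFIX)
    	cp -r files/* $(DESTDIR)$(PREFIX)/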

Then you get to play with all the dpkg control files, and if you're using debuild, you have to contend with lintian, the lint checker - because you can override lint rules... but the default lint profile then disables overrides for some useful ones (the "install to /opt" rule gives me lots of spam). So now you need to learn enough about lintian to write your own profile (or just disable it).
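
For reference, once you finally track it down, an override is just a one-line file per tag - dir-or-file-in-opt is the tag behind the /opt spam (package name hypothetical):

    # debian/mypkg.lintian-overrides
    mypkg: dir-or-file-in-opt

    # or skip the lint run entirely:
    debuild --no-lintian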

Then, as takluyver says above, you get to add it to a repo so systems can access it...

Once you know all this stuff, it seems pretty simple, but getting over that hump is difficult.


One part of it is that every damn package manager out there (outside of perhaps Nix/Guix and Gobolinux) is hung up on having one canonical version of every lib package.

Meaning that the package manager balks at having lib 1.0 and lib 1.1 installed at the same time, even though the dynamic linker can handle it just fine - unless you play sleight of hand with the package names (so you end up with something like lib0 and lib1 for 1.0 and 1.1 respectively).

This in turn leads to a bunch of busy work regarding backporting fixes, maintaining manager specific patches, and recompiling whole dependency trees on minor updates.
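
To be concrete about the "ld can handle it" point: the dynamic linker resolves libraries by SONAME, so incompatible versions coexist happily on disk, and each binary records which one it was linked against (library and binary names below are illustrative):

    $ ls /usr/lib | grep libfoo
    libfoo.so.1 -> libfoo.so.1.0.0
    libfoo.so.1.0.0
    libfoo.so.2 -> libfoo.so.2.1.0
    libfoo.so.2.1.0
    $ readelf -d /usr/bin/oldapp | grep NEEDED
     0x0000000000000001 (NEEDED)  Shared library: [libfoo.so.1]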


There's nothing fundamental that means that has to be the case; in fact, Debian does (in some cases) ship multiple library versions in a stable release.

The primary reason this is usually not done is the work required: it means you need to support two versions in stable, two (or more?) versions in unstable, &c. This quickly becomes a great deal of work :)


Err, their solution is the one I lay out in the second paragraph, where foo 1.0 becomes foo0 1.0 and foo 1.1 becomes foo1 1.1, simply to get around package manager limitations.

That is where the extra workload is coming from: the need to juggle what are effectively two package trees, and the patches for each.

If instead the manager was properly able to handle multiple versions, they could all be named foo, and the manager would be able to traverse the tree and see if foo 1.0 could be discarded or not based on the dependencies listed in other packages.

You get something of this nature in Nix (though they take it one step further) and Gobolinux, by using the directory tree as the final arbiter.

On Gobolinux you have /Programs/name/version, thus installing foo 1.0 would end up in /Programs/foo/1.0, while foo 1.1 ends up in /Programs/foo/1.1.

Then as long as you have one or more programs installed that need 1.0, you can leave it in place, and remove it when no longer needed.

For the sake of compatibility, Gobolinux also has a /lib (along with the rest of the FHS) that contains symlinks back to the individual lib files in /Programs, using SONAME to its full potential.
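
Roughly like this, as far as I understand the layout (package name hypothetical):

    /Programs/foo/1.0/lib/libfoo.so.1.0.0
    /Programs/foo/1.1/lib/libfoo.so.1.1.0

    # FHS-compatible symlinks, one per SONAME:
    /lib/libfoo.so.1 -> /Programs/foo/1.0/lib/libfoo.so.1.0.0
    /lib/libfoo.so.2 -> /Programs/foo/1.1/lib/libfoo.so.1.1.0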


I think lwf was talking about the human cost of having two versions. That doesn't go away when they are both named foo instead of foo0 and foo1.


Ranking package managers by difficulty of creating packages (easiest to hardest):

1. tarballs (slackware)

2. Ports (FreeBSD)

3. Portage (Gentoo, Calculate Linux)

4. pacman (Arch, Archbang, KaOS, Manjaro, Antergos)

5. building from source (Linux From Scratch; most old installs eventually turn into this)

6. rpm (Redhat, Mandriva, Scientific Linux, UnitedLinux)

7. deb (Debian, Ubuntu, Mint/LMDE, Hannah Montana Linux)

Why Debian and Redhat derivatives are the most popular, I'll never understand. I have _way_ fewer headaches maintaining custom Calculate Linux chromebooks than I ever did with fleets of Redhat or Ubuntu laptops.


The web of dependencies for both build and execution can be maddening. This is why apt-get and yum have won: they take care of finding and downloading those 3246536245 libraries which are absolutely essential for each stupid two-liner application out there. And when it comes to large software (which you probably don't manage on a chromebook), good luck downloading and compiling several GBs of KDE or GNOME source code with the right options for your hardware.


I'm running/building a full, current Plasma 5.5 / KDE 5.x setup - a full install including most office apps (LibreOffice, etc.), a bunch of emulators/games (free and proprietary, Steam included), and other apps.

Intel Haswell/Sandy Bridge chromebooks with custom hard drives. I think you're greatly overestimating how difficult it is to maintain all of this on Portage. I invite you to try a distribution that isn't Debian- or Redhat-based. You might never go back.


A comment like this is even better if you mention what distro you were using and maybe drop a link to a guide showing how easy Portage is to use. Then, people might experience what you describe.


I already mentioned in the great-grandparent that I was doing this with Gentoo and Calculate Linux (which are interchangeable).

I try not to specifically invite people to "install gentoo", thanks to /g/. They'll find it on their own. There's a lot of learning before quickpkg makes an install take 3-5 minutes.


Gotcha. But thanks to /g/? I didn't know that existed, so how would I have found it on my own? And "a lot of learning" despite you having way fewer headaches? I'm a little confused, as some of your answers inspire more questions.


/g/, the technology troll board of 4chan, has a meme of replying "install gentoo" whenever someone asks "which is the best distribution to install for a new user?" Those users usually come back about a week later hating everyone for saddling them with such a difficult OS. Calculate Linux has effectively removed most of this barrier, though, making Gentoo really easy to maintain for newbies.

The Gentoo Handbook is an amazing tool for learning Linux. By the end, you have an expert's understanding of how Linux works, and how to install it without a GUI (or even a package manager). I learned it during the "Stage 1 era", when the install started with bootstrapping, building your compiler before installing the rest of your system. Nowadays, you partition disks, format, untar the stage 3 tarball, chroot in, set timezone and encoding, emerge --sync, build your kernel (which is the hardest part), install a bootloader, install wpa_supplicant, and you're done. It's about 30% more difficult than an Arch install, because you'll probably screw up your first kernel configuration and end up with a nonbooting kernel and a black screen. But unlike other OSes and distros, when it breaks in Gentoo, it's probably your fault.
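
Condensed into commands, that flow is something like the below (device names and the tarball name are placeholders, and this skips details like mounting /proc and /dev, fstab, and make.conf):

    mkfs.ext4 /dev/sda2
    mount /dev/sda2 /mnt/gentoo
    tar xpf stage3-*.tar.bz2 -C /mnt/gentoo
    chroot /mnt/gentoo /bin/bash
    emerge --sync
    emerge sys-kernel/gentoo-sources   # then configure and build it - the hard part
    emerge sys-boot/grub net-wireless/wpa_supplicant
    grub-install /dev/sda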

I have way fewer headaches because chromebooks are braindead-simple to deploy this way: flash a BIOS, then install the binaries once booted off a LiveUSB. Quickpkg allows you to make binaries of your existing system (built for Sandy Bridge until I decommission the C710's, then I'll build for Haswell) with all your custom flags already set, so VLC and other apps are far more robust than their Ubuntu/Redhat versions. Gentoo is also the only distro other than GalliumOS that actually has working Elan touchpads for many chromebooks (thanks to patches by Hugh Greenburg of GalliumOS that I'm maintaining for newer kernels), as the patches have not been included in Arch's or other distributions' kernels.
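
quickpkg is the piece that makes this work: it snapshots an already-installed package into a binary you can push to the other machines (the package atom below is just an example):

    # build a binary package from the installed copy, config files included
    quickpkg --include-config=y media-video/vlc
    # the result lands in $PKGDIR; install it on another box with:
    emerge --usepkgonly media-video/vlc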

I run a Poettering-free install (JACK instead of PA, wpa_gui instead of NetworkManager, OpenRC instead of systemd), so having one master install I can re-deploy in the amount of time it takes to brew coffee is pretty handy, especially considering de-poetterizing a new Debian/RHEL/Fedora/Arch install is painful, if not nigh-impossible, and at the very least time-consuming.


re /g/

Haha. That is pretty evil, given my experiences starting with command-line Linux back in the day. As far as learning goes, I've heard that before about Gentoo. Thought about doing it at some point. Right now, I'm kind of torn between learning BSD or Linux, as the security enhancements I create might need to temporarily stay proprietary to support their development. BSD vs GPL, you know. I mean, I'd give back bug fixes or feature developments where possible, but the differentiator has to be secret-ish.

"But unlike Other OS's and distros, when it breaks in Gentoo, it's probably your fault."

I'm avoiding that right now but fine with the concept. Reminds me of Mentor's Hacker Manifesto we used to quote to justify such responsibility.

"so having one master install I can re-deploy in the amount of time it takes to brew coffee is pretty handy"

That is handy. I've always especially liked how the source-based distros were customized to one's own hardware. That drastically increases the initial deployment time but significantly reduces many issues along the way.


There are two solutions for reducing deployment/build time. The first is to use distcc on all machines (so that, when idle, they contribute to building any packages). The second is to build one master image, quickpkg it, and then deploy the binaries. I use a combination of both; all binaries are compiled for the Sandy Bridge architecture, so I get most of the benefits (though Haswell/Broadwell would get faster VLC/ffmpeg if I recompiled), and they build at night when nobody's around.
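
The distcc side is just a couple of lines of configuration on each box (hostnames are made up):

    # /etc/portage/make.conf
    FEATURES="distcc"
    MAKEOPTS="-j12"    # total jobs across all the helper machines

    # /etc/distcc/hosts
    buildbox1 buildbox2 localhost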


Why hasn't Gentoo made the same changes Calculate Linux has?


Because it would pigeonhole the distro.

Calculate basically takes Gentoo and precompiles it with certain defaults.

This means that the Calculate maintainers have made certain choices for the user, while Gentoo proper would have left them up to the user.


Why would "building from source" be a difficult way to create a package? ISTM that if you have any software at all, you're already building it from source? I understand that building from source is considered difficult for users, but here you seem to be talking about maintainers. Or do you mean setting up $#%&ing autotools? In that case I agree.


The dependency hell you get building from source is slightly less of a purgatory than the one that comes with anything ranked above it. That includes the headaches of autotools.


I dunno, Gobolinux is pretty much LFS with an additional layer of tools. And more often than not, the problem is that of developers doing a crap job of actually documenting their dependencies (or hardcoding paths and/or filenames).


Perhaps it's slightly subjective, since I would place FreeBSD ports in between rpm and deb in terms of difficulty.


Debian packaging isn't too awful, though I don't like how many files are involved (I prefer a single spec file, plus patches, as found in RPM). But, apt repository management is truly terrible. The documentation is laughably bad and disjointed, and it points to several different tools and processes that may or may not work together; my Debian/Ubuntu repo generation script is a ridiculous mishmash of stuff that kinda sorta works if I squint and don't look at it too closely (and a lot of the repo is manually created and maintained because I couldn't figure out any tools to automate it).

Contrast that with creating yum/dnf repos: run 'createrepo' on the directory of RPMs, and you're done! Signing RPMs is much better documented, as well. There were like three or four (conflicting and mutually incompatible) processes that come up when you google signing Debian packages and repositories, and I still don't know with certainty what the "right way" is.
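
For anyone who hasn't seen it, the yum/dnf side really is that short (paths and names made up):

    createrepo /srv/repo/rpms/

    # clients then just need a .repo file:
    # /etc/yum.repos.d/myrepo.repo
    [myrepo]
    name=My repo
    baseurl=http://repo.example.com/rpms/
    gpgcheck=0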

I don't know why Debian never got this part of things right; apt/deb has been around for about two decades, you'd think someone would have looked at createrepo and said, "Oh, hey, we should do something like that so it's easier for people to distribute packages for Debian/Ubuntu."

All that said: I'm rarely a fan of "new package managers". I like having one standard package manager on my system (I don't really care whether it is apt or yum/dnf), and I strongly believe everyone ought to be using the system's standard package manager across all of their systems, even for custom software.

I don't like the proliferation of language-specific package managers, in particular. npm, gems, crates, PyPI, eggs, pear, pecl, composer, etc. Shit's gotten ridiculous. Even on my own systems, I can't keep up with what's installed from where. And the problem has gotten much worse over the years; I used to package all my own stuff as RPMs, and all of the CPAN or Python dependencies as well, using automated tools for building RPMs from Perl or Python libraries (the old cpanflute2 and the distutils RPM target; there are better ways in both cases now, I think, for those languages). But, now that I have a much more diverse set of systems with more languages and more dependencies, that's become more difficult. And, it's also become the norm for people to document only installation with oddball package managers (npm, in particular, is the way everyone documents everything these days), with no good mechanism to package for RPM or deb.

I dunno, I think we're going down a risky path with all these "package managers" that only work for one little niche and I believe security issues will be the major negative side effect. I mean, I have great tooling to let me know when my RPMs or debs are out of date across my couple dozen servers; it's much harder to know when packages installed via all those other systems are out of date; it becomes a manual process in some cases, and it's also risky because dependencies are not always very reliable in those systems. I do a lot of, "Will this break some other thing when I upgrade this library?" Which I don't worry about with RPM updates from CentOS (because of the compatibility assurances of that system), or from my own company repos (because we do our own testing and understand the parts we're distributing).

In short: Yes, it's hard to package and distribute for RPM or deb. But, not as hard as dealing with a half dozen different package managers, containers, and a wide variety of other things being updated via a wide variety of other mechanisms. The former is hard but quantifiable and manageable. The latter is a nightmare for security and reliability.


From an application developer point of view, distribution package managers are niche: apt or rpm only works for a specific group of users on those Linux distros, whereas npm/pip/gem will work for all their users, including Mac and Windows users, who often outnumber Linux users.


> But, apt repository management is truly terrible.

Aptly is a relative newcomer to the scene that makes repo creation/management much easier. It's actively developed.

http://www.aptly.info/
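
The basic flow is pleasantly close to the createrepo experience the parent describes (repo and distribution names made up):

    aptly repo create -distribution=stable myrepo
    aptly repo add myrepo ./debs/*.deb
    aptly publish repo myrepo    # signs with your GPG key
    # (or pass -skip-signing for internal-only repos)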


Is this of help to you? https://github.com/spotify/debify



