In case the author reads this, and as I can't comment there: Thank you.
Debian is an awesome and somewhat under-rated distribution. Being a maintainer always seems like a thankless and slightly forgotten role. Thanks for having the persistence to keep going so long.
Had I known the text would end up shared here, I'd surely have spent more than 15 minutes writing it! I'm hardly the most active contributor in Debian, but I guess that's part of how one can manage so long.
Is it under-rated? Perhaps its marketing isn't as strong, but it's the "father" of a lot of distros today. Personally when I want a server distro, I still go with Debian. Simple and solid. No nonsense.
I think Debian and its children are so much nicer to use from a server perspective. RHEL and its derivatives have definitely improved over the years, but it seemed like Debian had the right idea early on. Things like how they managed Apache modules and an out-of-the-box emphasis on 'conf.d'-style configuration directories are just a couple of things that come to mind. The modular thinking lends itself well to learning, because changes are easy to back out, and to automation, because it's easier to compose configuration.
Personally, I feel like Ubuntu gets a lot more attention, perhaps more on the client-side. Sometimes, the foundational work Debian provides isn't fully acknowledged.
Yeah, Ubuntu probably is the best-marketed distro today. And I agree that Debian probably doesn't get the praise it deserves – IMO the nicest things about Ubuntu come from Debian.
I installed Debian for the first time almost 13 years ago and have enjoyed the "Debian way" every second.
But as the saying goes, all good things must come to an end. Due to various decisions by the Debian community, Debian Wheezy will be the last version I'm going to install, and for the last few years I have been in the process of migrating thousands of servers away from Debian.
To FreeBSD, so the migration is not just away from Debian but also away from Linux. Of course problems always arise in such operations, but in general we have been very happy with the change, and I'm just as excited about FreeBSD as I was about Debian before.
Did you migrate your workstation to FreeBSD too, or only your servers and the like?
In any case, I want to take this opportunity to note that FreeBSD is quite nice as a daily driver on your workstation. The only missing thing is bug-free suspend/hibernate, which works for some machines and not for others.
In my experience, you're also rolling the dice when you suspend Linux. My X99 workstation, Z77 workstation, and XPS 13 have all failed to resume before. But Windows has done the same. I guess ACPI is a mess.
> But Windows has done the same. I guess ACPI is a mess.
Of course, you don't mention macOS. I never had issues with suspend on any of my MBPs. If I thought I did, it turned out that my battery had simply run empty; the few times this happened I did suspect suspend had failed, but it turned out I was wrong.
Look, I'm sure the macOS implementation of ACPI is great.
I said "I GUESS ACPI is a mess", because most implementations I've used (and I've only used it; I know nothing low-level about it at all) have had some problem or another.
How wrong of me not to have any experience with your preferred platform, and leave it out of the discussion.
I went into systemd with an open mind, hoping the changes would be worthwhile to learn.
I've come out the other end looking for alternatives... In the meantime I'll stick to an OS that uses init, and hope systemd gets better given enough time.
I looked into this a little bit. Alternatives include OpenRC (complements but does not replace init), runit, upstart (looks like most distributions that used upstart are switching to systemd), and GNU Shepherd.
I am looking at trying out GNU Shepherd as it is the init system of the Guix distribution, so you get both Nix-style package management and an init system that is not systemd, both written and configured in Guile Scheme.
Let me recommend Void Linux. Its init system is runit, which is djb daemontools based and super simple.
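"Super simple" isn't an exaggeration: a runit service is just a directory containing an executable run script. A minimal sketch (the service name and daemon flag are invented for illustration):

    #!/bin/sh
    # /etc/sv/mydaemon/run  (hypothetical service)
    exec 2>&1
    exec mydaemon --foreground

Symlink the directory into the active service directory (/var/service on Void) to enable it, and `sv up mydaemon` / `sv down mydaemon` control it.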
I switched a couple of months ago from Arch, and was surprised at how easy it was, and how completely useless (at least for my use case) all the systemd steaming mess had proved in retrospect to be.
As someone who basically started getting into Linux with Systemd (I had dabbled before, but not in-depth) I love it. Mind expanding on what you don't like about it?
One thing I do remember before Systemd though was the fact that every service was essentially a glorified bash script, and I hated that. With Systemd, there seems to generally be a much more clear-cut definition of how things operate, without all the cruft.
One of the big issues a lot of people have (or at least had, back when it was first becoming a thing) was that systemd uses binary log files rather than traditional text ones, which makes them harder to deal with (e.g. you can't `tail` or `grep` them, at least not directly). Another common gripe is that while it started out as an init system, a rather large portion of userland on a typical system depends on one or more systemd components directly or indirectly, making it somewhat difficult to avoid it if you'd prefer not to use it.
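For completeness, the journal's own tool covers the common workflows, even if you can't point text utilities at the files directly (the unit name below is just an example):

    journalctl -f                               # roughly 'tail -f' for the whole journal
    journalctl -u nginx.service                 # logs for a single unit
    journalctl -u nginx.service | grep error    # output is plain text, so grep works downstream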
My biggest issue with systemd is that when something goes wrong, it's pretty impossible to tell why. As an example, virtually every system I've used with systemd after a few months starts to have different services fail to stop on shutdown, causing a timeout of (by default) one-and-a-half minutes. From searches that I've done online, I don't seem to be the only person who runs into this, but I haven't found any good solution, leaving me with the options to reduce the timeout to something more manageable, force shutdown my laptop literally every time I'm done using it, or sit through 90 seconds of systemd trying and failing to stop whichever service is failing that time. Maybe I'm just lucky, but it's fairly rare that any of my non-systemd systems is unable to shut down properly once, let alone every single time consistently.
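For anyone else bitten by this, the 90-second default can at least be shortened; a hedged sketch (it treats the symptom, not whichever service is actually hanging):

    # /etc/systemd/system.conf
    [Manager]
    DefaultTimeoutStopSec=10s

A per-unit TimeoutStopSec= in the offending service's unit file does the same thing more surgically.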
But journald lacks way more than simple text files. I tried to "mute" a process and redirect its output to a file, because it was counting against the maximum journal size, and found it to be impossible "by design". I had to use an actually good log daemon instead, but it seems quite hard to just disable journald without losing any logs.
Having just set up a Debian server this weekend (Windows dev diving into Linux for the first time), can I ask what the main differences between init and systemd are? Mostly from an operational/security standpoint.
Init scripts are a bunch of file system conventions and shell scripts; it's an imperative way of bringing a system into a particular state (networking, services, etc.). Init itself is the very first process the OS executes; all processes in *nix are created by forking. When using init scripts, init is extremely simple.
Systemd replaces init and uses a declarative approach for the system and its services, and the dependencies between the services (replacing init-scripts). Systemd is more complicated but can do more stuff, like initialize things concurrently. Functionality that used to be implemented in services themselves (e.g. restarting, recovery) is migrating into systemd, and systemd is acquiring more and more logic. Some people feel that this is contrary to the *nix philosophy and is not architecturally sound.
Operationally, instead of using shell scripts and symlinks designed to be sorted in a particular order (normally maintained using other tools), you use descriptions of how the service should start and what it depends on.
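To make that concrete, here is a rough sketch of the two styles for an imaginary daemon called food (both files are invented for illustration). The imperative style:

    #!/bin/sh
    # Old style: /etc/init.d/food, a script run by the rc system
    case "$1" in
      start) start-stop-daemon --start --quiet --exec /usr/sbin/food ;;
      stop)  start-stop-daemon --stop --quiet --exec /usr/sbin/food ;;
      *)     echo "Usage: $0 {start|stop}"; exit 1 ;;
    esac

versus the declarative unit:

    # New style: /etc/systemd/system/food.service, a description of the service
    [Unit]
    Description=Example food daemon
    After=network.target

    [Service]
    ExecStart=/usr/sbin/food --no-daemonize
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target

systemd reads the description and handles ordering, supervision, and restarts itself, whereas the script spells out each step imperatively.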
Security-wise, systemd is a bigger hairier ball, so it probably has bugs. But it also implements stuff once, whereas before implementations were distributed and of variable quality. So the variance in security level is probably lower with systemd, but depending on your mix of services, the mean may be higher or lower. And you don't get to control it.
You've made the mistake of assuming that the old system couldn't start up stuff in parallel. That's simply not true; Debian had insserv/startpar to create a global dependency graph and run the scripts in parallel while satisfying all dependency constraints.
(As an ex Debian developer myself, I spent many many hours working on this stuff while I was one of the sysvinit maintainers.)
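For readers who haven't seen it, the dependency information insserv and startpar consume is declared in the LSB header at the top of each init script; a sketch with illustrative values:

    ### BEGIN INIT INFO
    # Provides:          food
    # Required-Start:    $network $remote_fs
    # Required-Stop:     $network $remote_fs
    # Default-Start:     2 3 4 5
    # Default-Stop:      0 1 6
    # Short-Description: Example food daemon
    ### END INIT INFO

insserv builds the boot ordering from these headers, and startpar runs the scripts concurrently wherever the constraints allow.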
One of the most informative pieces I've read on the topic of systemd, its benefits, and how it compares with other tools was written by Russ Allbery as part of his contribution to Debian's decision to adopt it [1]:
> I did a fairly extensive evaluation of both upstart and systemd by converting one of my packages [...] to a native configuration with both systems. [...] I tried to approach each init system on its own terms and investigate what full, native support of that init system would look like, both from a Debian packaging perspective and from an upstream perspective. I also tried to work through the upgrade path from an existing init script with an external /etc/default configuration file and how that would be handled with both systemd and upstart.
> I started this process with the expectation that systemd and upstart would be roughly evenly matched in capabilities. My expectation was that I would uncover some minor differences and variations, and some different philosophical approaches, but no deeply compelling differentiation.
> To my surprise, that's not what happened. Rather, I concluded that systemd has a substantial technical advantage over upstart, primarily in terms of available useful features, but also in terms of fundamental design.
The essay goes on to elaborate on the details. I personally found this and other writings a compelling argument in favor of the approach. Another useful article was "Why systemd?" [2]. There's also the blog post series "Systemd for Administrators" [3].
What systemd strives for makes a lot of sense to me. It allows you to describe the startup of services with declarative configuration in a simple and easy-to-understand format. Systemd is natively integrated with OS namespaces, cgroups, and the process hierarchy. Russ gives an example in his essay of how this allows systemd to track and display more information about daemons than the alternatives. It might be more complicated than the alternatives in one sense, but that buys you the power to do things like: activate services on-demand, when requested by a client; automatically launch services per user session, and clean them up on logout; concurrently start services for a fast boot; consign managed services to an OS namespace. By supporting these features in the init and process management system, it frees individual services from redundantly building this logic in their shell scripts and daemonization routines. That reduces complexity, and makes the entire system more feature-rich, more secure, and easier to manage.
As for security, systemd makes it substantially easier to employ kernel namespaces, cgroups, capabilities, and other isolation facilities with simple configuration switches. Let's say we'd like to run some daemon with a private /tmp directory for isolation. That is as simple as adding "PrivateTmp=yes" to its configuration. What if we want to change the run-as user, or even launch the service in an isolated user namespace? Perhaps we want the daemon to have a private network or a private /dev? It's as simple as setting User=, PrivateUsers=, PrivateNetwork=, PrivateDevices=, etc. respectively:
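(A rough sketch; the daemon name and path below are invented for illustration.)

    # /etc/systemd/system/food.service (hypothetical)
    [Service]
    ExecStart=/usr/sbin/food --no-daemonize
    User=food
    PrivateTmp=yes
    PrivateDevices=yes
    PrivateNetwork=yes
    NoNewPrivileges=yes

A `systemctl daemon-reload` followed by restarting the unit applies the confinement.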
Take a look at all the options you can apply in [4]. CPUAffinity=, CapabilityBoundingSet=, IOSchedulingPriority=, etc. It's great to be able to set all of these options in a single consistent place for all daemons.
Thank you for the informative and neutral response. I knew there was a larger debate over the two, but I didn't have a good enough understanding of the two to know why.
barrkel talked about systemd as a replacement for init, but that's not the goal of its authors. Nor was there a debate over the two.
There was a debate in Debian Land over at least four choices: systemd, upstart, OpenRC, and sticking with van Smoorenburg rc with Debian's various enhancements.
The stated goal of the systemd authors pretty much from the start was not to "replace init", or even to replace both init and rc. What barrkel wrote could be said of daemontools from 1997, after all. That, too, encouraged a move of common procedures and mechanisms out of bespoke daemon programs and scripts and into a common daemon management system.
systemd, rather, was to provide a common layer, beneath everything else and above the kernel, used on all Linux operating systems. Its authors saw differences such as /etc/sysconfig/network versus /etc/HOSTNAME versus /etc/hostname versus /etc/conf.d/hostname and wanted to unify all that, so that all Linux distributions worked the same.

They didn't just write a process #1 program. They wrote a name-lookup server with a local protocol to replace the DNS protocol (and the protocol that GNU libc uses to talk to its lookup helper processes), a network interface setup/teardown utility, a whole bunch of service utility programs such as a program to save/restore randomness from /dev/urandom across system restart, a centralized log writer, a centralized login session manager, and a whole bunch of programs that provided RPC interfaces, over a centralized system-wide Desktop Bus instance, to GUI tools running on user/administrator GUI desktops, for things like setting the default timezone and the pretty-print hostname.

To that they added rules about where to find different sorts of stuff, from administrator-written unit files to /etc/machine-id; guarantees about "API filesystems"; rules about /run, /run/user, and a whole bunch of related memory filesystems; deprecation of things like /var/run/lock; rules about what sockets old syslog programs had to change to using, in place of what they had been; per-user service manager instances and a whole extra set of PAM hooks that connected them with the new runtime directories and the login session manager; and requirements such as that /usr be already available at the point that /sbin/init is invoked from the initramfs. They got some additions made to Linux in support of this, such as subreapers; and failed to get others, such as kdbus.
"systemd replaces init" is both superficial and a blinkered Debian world-view. In the world outwith Debian, in Ubuntu Land and Fedora Land, systemd replaced upstart, which had been the Fedora and Ubuntu system and service manager for a number of years before systemd was invented. The world has never been van Smoorenburg rc scripts versus systemd, not even when the whole Debian debate was had.
Without realizing it, you casually strolled into one of the biggest points of contention and controversy in computing history.
Systemd came out a few years ago, and wars were fought over it. It fixes a lot of bugs that perennially came up with run scripts, but it's also a huge monolithic program that is in charge of nearly everything on your system.
That's about all I'll say on it, because it's almost as divisive as the Israel/Palestine conflict.
I fully migrated my personal laptop to FreeBSD (as TrueOS) a few weeks ago, after using it as my second OS at home.
And because I need Linux for work, I migrated to Devuan on my workstation, as I need to use Ansible and Docker in a stable way as part of my DevOps job.
Even though I use Ubuntu on my desktop, it is derived from Debian, and I ran Debian on my servers for years. Debian has been the most important software for me for decades. For that I thank you, Riku, and everybody else who contributes to Debian!
What prevents you from using Debian on the desktop? I've been using Debian testing for quite a while already, and it works rather well. The only annoyance is the freeze period, but it has gotten shorter lately.
Too many software development environments target Ubuntu out of the box these days, so I follow that to save some setup time. If I used the desktop just for surfing or office needs, then yes, Debian would be my preference.
I never had that kind of problem when using Debian, but if you need to target a specific distro, you can always use a VM, chroot, or container for it. There is no need to replace the desktop OS. Linux is quite good at that kind of thing.
Thanks Riku, Debian has become an excellent distro thanks to the hard work of people like you.
At the same time, it is sad to see how much work you guys need to do for things that should have been automated or done by the original developers... Let's hope AppImage changes that
The problem with a distribution-agnostic approach is that tossing distribution policy into /dev/null means that, instead of maintainers and admins and users and devs having to learn the one, probably fairly complicated, policy, or more likely numerous policies for various distros, MS-DOS-era anarchy is unleashed on the world.
Sections 6 through 12 of that policy are what it means to be "a Debian" rather than, say, FreeBSD packaged into .deb files.
For some purely end-user applications that don't interact in any way with anything else (games, perhaps), that works pretty well, until you run into something that does need to interact with other components also operating under the same anarchy.
The problem with not having a closed system or standard or method of operating a system (aka an operating system) is that you end up with the deployed machines having an Apocalypse Now quote of a conversation: "Are my methods unsound?" "I don't see any method at all, sir." If you're a system administrator, what does it mean to administer mere anarchy?
Packaging is a complicated task and isn't as automatable as you'd think. It is important to consider how changes impact other tools. AppImage depends on maintainers too. Consider, for example, how AppImage recipes depend on wget and bash, and thus on openssl and libidn and glibc and libdl and git too, which brings with it libz and, and, and ...
PS: Thank you, Riku. I never understood the sheer amount of effort involved in being a packager / maintainer until I started doing the same task for NixOS. I regularly depend on Debian's excellent patches and CVE details to do my work. May you not be bogged down by nirvana fallacies :)
I don't see the connection. I didn't say that they have more packages, just that they have very good packaging tooling that makes it simple to make consistent packages.
When you package 24000 things, you find significant diversity in build systems, languages, source code, testing etc. It isn't easy to package all of that in simple ways.
I don't know exactly what the 24000 number refers to. Solus Project already has 5000 packages in its repos. I think that is quite an accomplishment for a pretty small team of 3-4 people with some help from the community.
What I find complicated about Debian packaging is not how to use the tools and how to arrange the package - that is certainly complicated, but not intractable. What I don't understand is why, once you have a package, it is such a long and complicated process to get it into Debian.
You can't just upload it and forget: you have to sign up and become associated with that work, you have to distribute your keys and get other people to trust you, you have to make a bug and file the package against that bug, etc. There is this whole social dynamic within Debian that I just don't understand at all.
If it was more like software development; make the package, git commit, push, post it to software like reddit and have people approve (upvote) it, then I'd be a maintainer already. Instead there's this whole process behind becoming a maintainer, finding a "mentor" etc, that for years I've just found to be a complete roadblock.
Leaving aside the specifics of Debian's tooling and social process (undeniably, these are arcane and complex), I suspect this has a lot to do with why Debian is such a reliable environment over time in a way that few software collections manage.
It would almost certainly be better if it were easier than it is to participate in Debian. I've thought about contributing for years, but I certainly haven't had the energy to clear those hurdles. On the other hand, there's something really important in the distance between typical modern software publication and packaging for an ecosystem like Debian.
I think we'd be a lot better off if more people were committed to the hard, tedious process stuff that renders software accessible to users, a good neighbor to other projects, and maintainable over the long term.
"Upload and forget" leads to packages which are uploaded once and then abandoned. This then becomes a burden. While the Debian process is time consuming and arcane it does select for people who are prepared to commit to maintaining packages for the longer term.
That said, the Debian processes are over 20 years old. I find contributing to the FreeBSD ports and MacOS X homebrew package collections simpler, and without the same level of jumping through hoops. Homebrew's git-based submission, review and CI testing is simple to use. Likewise submitting a patch for the FreeBSD ports. Debian could do something similar, but its workflows predate this significantly. Were Debian to adopt a similar process, I think it would make the process significantly more transparent. The existing practice is still oriented around single individuals maintaining and uploading single packages (though it can also be done by groups with their own private version control for the package/group). The newer methods are significantly more open with much lower barrier to entry.
These days there are several very active sponsors so it should not be a problem to get new packages in.
Upload and forget leads to dead packages that get removed next time there is a library transition. Debian is about long-term maintenance.
The social stuff (as well as the social contract and DFSG) is also what creates trust within and towards the Debian community and holds it together, which is the main reason it has lasted so long.
I wouldn't mind a large percentage of end-user applications coming directly from the upstream developers themselves. However, I hope it doesn't take away end users' ability to hack/tinker/modify/rebuild applications installed from AppImage/Flatpak.
Already, a big part of Debian packaging work is often making the software compile outside the developer's personal environment without manual steps - or, at the other end, decoupling it from the developer's CI loop.
How should Flatpak change that? These two (DEB and Flatpak) are, at least currently, entirely parallel, each with its own set of advantages and problems. I don't think it would make any sense to migrate a distribution such as Debian to Flatpak entirely.
I don't think having developers doing the packaging is an ideal situation. I often hear developers complaining about the multitude of GNU/Linux distros because they think it's somehow their responsibility to provide binaries. It's not. The role of the upstream developer is to make their build system easy to use so that other people (like distro maintainers, but also just "regular" users) can compile from source without feeling like they're pulling teeth. If your software is easy to build, it will naturally flow into the distros when its users want it. I package a lot of software and a lot of software is difficult to build without tons of hacks.
For interested readers, here's some best practices for being a good upstream:
- Just use the GNU autotools. Users expect `./configure && make && make install` to work. Too often people roll their own configure scripts and Makefiles and they always miss something important. Distros expect there to be certain knobs to tweak, and configure scripts and Makefiles generated by the autotools have all of them.
- Don't bundle third-party dependencies. For security (and for better documentation of the true dependency graph) distros often must go through extra trouble to unbundle third-party libraries when present. Some projects even add their own custom patches to their bundled source. Resist the urge to do this.
- Include accurate copyright information. Put a license header at the top of every source file. Any serious distro will need to do at least a cursory inspection of licensing info to make sure it meets requirements.
- Make proper source release tarballs. Do not depend on your version control tool being available at build time. Do not depend on the autotools being available at build time. Use 'make distcheck' to make a fully bootstrapped tarball to distribute.
- Do not make any use of the Internet during a build. That means no downloading third-party libraries, pre-built binaries, etc. It's imperative that a build can succeed without network access, and some distributions isolate builds from the network to ensure they don't misbehave.
- Do not hardcode absolute paths to binaries. No /usr/bin/bash or the like. Your assumption will surely fail on a non-trivial number of systems. Find the location of a binary at configure time by inspecting $PATH. GNU autoconf can do that and substitute the absolute file name where it's needed, such as in a script's shebang (there's a small sketch of this after the list). The same advice applies to anything else you need an absolute file name for.
- Do not assume /usr exists. The Filesystem Hierarchy Standard is not as popular as it used to be. This is a more generalized form of the previous point. Again, if you use the Autotools you will be doing the right thing by default.
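To illustrate the path-substitution point, here's a hedged autoconf sketch (project and script names are invented) that locates bash at configure time instead of hardcoding /usr/bin/bash:

    # configure.ac (fragment)
    AC_INIT([food], [1.0])
    AM_INIT_AUTOMAKE([foreign])
    AC_PATH_PROG([BASH_BIN], [bash])
    AS_IF([test -z "$BASH_BIN"], [AC_MSG_ERROR([bash not found in PATH])])
    AC_CONFIG_FILES([Makefile food-helper.sh])
    AC_OUTPUT

The first line of food-helper.sh.in then reads `#!@BASH_BIN@`, which configure rewrites to the detected path, and the standard `./configure && make && make install` knobs (--prefix, DESTDIR and friends) come along for free.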
There's surely more, but that's what I can think of right now. Surely a Debian developer or someone else has compiled a more thorough list. Anyone know of one?
I think that today's software being so difficult to build is making practices that are frowned upon by distributions (for very good reason) seem like acceptable solutions, which leads us to the growing popularity of Docker, Snappy, and Flatpak. "This software is nearly impossible to build, so just use my {Docker,Snappy,Flatpak} bundle!"
tl;dr - Make your software easy to build, don't just package up a mess.
> tl;dr - Make your software easy to build, don't just package up a mess.
You've pretty much hit the nail on the head. You forgot one additional bit though, please for the love of god don't have a crazy web of dependencies.
I see a lot of Node and Ruby apps online that I think would be incredibly useful in the Fedora package collection and have considered contributing them on more than one occasion. What always stops me is the 50+ dependent NPM packages or Gems they require that aren't already packaged by someone else.
The incredibly annoying part is most of these packages provide minimal functionality that you could have just implemented yourself, or that shouldn't in turn need another 3-10 transitive dependencies of their own. I get that not re-inventing the wheel is generally a good idea, but please try to pick your dependencies wisely if you want to see a distribution include your package - because a volunteer maintainer likely doesn't want to be responsible for your package + a dozen or more dependencies if they can avoid it.
As a Fedora user, the same problem used to exist for Perl packages: some you could install via yum, and some you'd have to get from CPAN. It's been probably a decade since I did anything serious in Perl, so I'm not sure if the issue still exists, but I suspect it does, because I have seen the same issue with Python packages and the pip tool.
I see more and more programming languages trying to bundle their own dependency-management tools. Ten years ago I thought CPAN was great; nowadays I'm not as sold. It's basically reinventing distro-style package management, but in a way that's unique to each programming language.
To be fair, all of these language-specific package managers make distribution packaging easier as well. It's super easy to make a package for anything distributed as a CPAN package, Ruby Gem, NPM package, distutils/setuptools package, etc.
The real problem comes when developers start using these package managers with reckless abandon and letting their dependency tree grow out of control. I don't mind packaging an extra library or two, but a dozen or more is pushing my patience.
CPAN packages can be translated into distro packages automatically in many cases, or with small modifications. The same goes for Python, Ruby, and Node packages. See fpm [1] for an example of one such tool.
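For anyone who hasn't seen it, fpm's usage is a one-liner per package; a couple of hedged examples (the gem and module names are arbitrary):

    # Ruby gem -> .deb
    fpm -s gem -t deb json
    # CPAN module -> .rpm
    fpm -s cpan -t rpm Regexp::Common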
That's great, but distributions rightfully don't allow fpm generated packages. For all of these languages we've already got easy to use infrastructure, but maintaining a dozen or more packages just to get a single app in the repository is a huge commitment.
fpm is a bad example. Sure, it makes .deb or .rpm formatted things, but they are not proper packages by any means. Bundling up something pre-built from another packaging system is not what packaging is about. To do it right you need to build your own binaries from source code using only your own packages to provide the dependencies.
Of course, automatic translation of binary packages is a bad thing which will produce the wrong result in a lot of cases, but automatic translation of source packages, with build instructions and meta-information, is a time saver. For RPM, I will have a .spec file, which I can then edit further, or use tool options to fill fields with proper values. When the .spec is ready, in most cases I will only need to update the version and changelog to upgrade to a newer version.
Got any advice for ways to bundle stuff written in Rust, Go, Ruby, or Node? All of these languages come with package-managers that encourage reusing packages from their respective ecosystems.
I packaged a GitHub clone called Gogs (written in Go) for Debian/Ubuntu, complete with Lintian support. But I had to compromise on the 'rules' file and add a "get-orig-source" target that uses Go's package manager to grab all of the dependencies. I used that rule to grab all of the source files required to create the source package (which can then be built in isolation).
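Not the actual Gogs packaging, just a rough sketch of what such a get-orig-source target can look like (the version number and layout are placeholders, and recipe lines are tab-indented in a real debian/rules):

    # debian/rules fragment (illustrative only)
    VERSION = 0.9.0
    get-orig-source:
            GOPATH=$$(pwd)/_gopath go get -d github.com/gogits/gogs/...
            tar -C _gopath -czf ../gogs_$(VERSION).orig.tar.gz src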
But if I understand Debian's official packaging rules, this is verboten because it winds up including a bunch of interconnected third-party libraries. Since I didn't write Gogs or any of its dependencies, I can't exactly go through and eliminate all external dependencies. And even if I could, much of Go's standard library exists only in ecosystem form.
How should a prospective package maintainer handle these kinds of ecosystems? Trying to distro-package every library (Perl-style) would be a Herculean effort, and could conceivably be met with hostility by the upstreams.
There is so much software being written in Go/Rust/Ruby/Node/etc. How can we go about packaging it?
There's no easy road. We need to convince upstream to change their ways. The proliferation of language-specific package managers is a problem that many people don't yet understand is a problem. I often get hostile reactions when I advocate for general-purpose systems package managers over language-specific ones.
In the meantime, we can use the information available in these language package managers to help bring that software to the systems package managers. How easy it is all depends on the language. If the language/package manager is sane and the build system isn't conflated with the package manager, we can make quick progress by writing importer scripts that automate most, but not all, of the work. Node, Go, and Java are utter nightmares for various reasons. Python is pretty good. Ruby is somewhat annoying but doable. It seems that Rust is decent but I haven't used it. All I know is that someone recently wrote a Crate package import for GNU Guix (the package manager I contribute to and recommend highly) that seems to work. [0]
I watched the Rust packaging effort via the mailing list, and that seems to be going quite well.
Would you happen to know a good video or writeup on why language-specific package managers are a bad idea? I mean, the situation with C and C++ libraries seems significantly worse to me, and I personally really enjoy having the full Crates.io index at my disposal on any box that runs Cargo.
I think language-specific package managers are just fine for sharing source code for that language with other developers of that language. But as soon as you need to do more than that, they become extremely problematic. The dependency tree for a Crate (or a package in any other similar system) ends where the Rust dependencies end. It cannot describe the full dependency graph for any package that depends on a program or library written in another language. I'm more familiar with Ruby, so here's a real-life example: the mysql2 gem needs to compile a C extension and link against libmysqlclient. However, the 'gem' utility only handles Ruby dependencies, so in order to 'gem install mysql2' you need to use your system package manager to provide libmysqlclient. There's always going to be this disconnect, and you'll have to use multiple package managers to get anything working. It's very error prone. Wouldn't it be great if a single package manager could describe the complete dependency graph? This is a big reason why I advocate for GNU Guix. I do devops for a living, and much of the difficulty I face comes from trying to glue multiple package managers together.
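The dance looks roughly like this on a Debian-ish system (the dev package name is from memory and varies across distros and releases):

    gem install mysql2                        # fails: the native extension can't find libmysqlclient
    sudo apt-get install libmysqlclient-dev   # the C-level dependency comes from the system package manager
    gem install mysql2                        # now the extension compiles and links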
Not your parent, but I personally believe that using "package managers" to describe both of these things conflates the issue. That is, both are valuable, for different reasons. The shortest way that I can describe it is this: I use my system package manager to install things for software that I'm not developing, but language-specific managers for software that I am actively developing. When I used to write a lot of Ruby, I had my own install, but now that I mostly write Rust, I have Ruby installed via apt-get.
The two styles of managers just have different goals. That is, a package manager such as apt has the goal of creating a single, harmonious system from stuff written in many languages. But a language-specific package manager like Cargo has the opposite goal: to provide a good way of writing software written in one language across multiple systems. This is where most of the tension comes from. The rest of it is from the same general structure, but with different specifics: the goals of these kinds of systems are very different, and conflicting.
There's no need for two styles of package managers. GNU Guix and Nix can serve both purposes (and more) very well, for example.
I think language-specific package managers are fine for easily sharing source code amongst developers using the same language, but they shouldn't be used in a production system.
If you're on Windows, then fine use whatever is available. A language specific package manager is about as good as it gets there. There are no good package managers for Windows, and I don't think there can be. I don't even know if you can isolate builds in a container like you can on GNU/Linux. That's a crucial OS feature. Besides, I aim to liberate users, not enslave them, so I develop for the GNU system, not Windows.
The third sentence is not a contradiction. I'm just saying that I can live with people using language-specific package managers, but really they would be better off with a general-purpose one.
Debian has a package for rustc and cargo in Stretch, specifically for helping package stuff written in Rust. They also have a way of converting crates.io packages to Debian packages for this purpose. Asking about this on http://lists.alioth.debian.org/pipermail/pkg-rust-maintainer... is probably the best way to get advice, that's where the people doing this work congregate.
The problem is, my experience is that "use autotools" and "don't package third-party dependencies" make my software much harder for my users to build, and distributions are generally going to have out-of-date versions.
I'm not saying packaging is easy, and I do try to make it friendly for distributions, but don't pretend that doesn't make it worse for general users in the process.
You don't have to use autotools, but it makes doing the right thing easy. If you want to use CMake (I much prefer it myself!) just make sure you use pkg-config, same with scons or whatever. These are all included in Fedora's (my distro of choice) package collection and there's no trouble using them to build.
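For example, the pkg-config route in CMake is only a few lines; a hedged sketch (the project and library names are just an example):

    # CMakeLists.txt fragment: find a system library via pkg-config
    cmake_minimum_required(VERSION 3.0)
    project(food C)
    find_package(PkgConfig REQUIRED)
    pkg_check_modules(GLIB REQUIRED glib-2.0)
    add_executable(food main.c)
    target_include_directories(food PRIVATE ${GLIB_INCLUDE_DIRS})
    target_link_libraries(food ${GLIB_LIBRARIES})

Because the flags come from pkg-config, the build picks up whatever the distro has installed rather than a bundled copy.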
But please, if you decide to bundle third party libraries make sure you can build without them and use ones provided by the system instead. It's a political nightmare to include packages with bundled libraries because it makes security updates a huge headache since we can't simply rely on Anitya (https://release-monitoring.org/distro/Fedora/) to send us notifications that a new release of the library is available, not to mention the extra work of actually updating the bundled library once we do find out an update has been published.
It is best not to bundle third-party libraries; instead, have your build system check whether the libs are installed and, if they aren't, download and build them. rleigh mentions below that CMake can do that.
It absolutely does not make it worse for general users in the process. The whole point is to help the user! I'm not saying there aren't problems on the distro side. Distributions like Debian do have the problem of moving much too slowly, and apt and the other "imperative" package managers are severely flawed, but the basic best practices I outlined make things better for all users.
And I'm not saying that you should never provide some prebuilt binary to your users if their distro is lagging behind. And if you really feel the need to bundle third-party libs then just make sure there are configure switches that can be flipped so that system libs are used instead. The best thing for users is for them to be able to get all of their software from their distro, and that requires distros and upstreams to each do their part.
> I don't think having developers doing the packaging is an ideal situation. I often hear developers complaining about the multitude of GNU/Linux distros because they think it's somehow their responsibility to provide binaries.
It is in many scenarios. If I have a little app I want to package, then it becomes my responsibility. For commercial software I make, it always is, and this is part of the reason that Linux sucks for commercial software.
And then there's issues like security patches. Developers need to know what branches are used downstream.
>It is in many scenarios. If I have a little app I want to package, then it becomes my responsibility. For commercial software I make, it always is, and this is part of the reason that Linux sucks for commercial software.
You are talking about proprietary software, where developers have unjust power over users. If you want to distribute such software then yes, you have to do the work of making binaries for each distro you want to support by yourself. I would argue that it's not GNU/Linux that sucks here. If instead you gave your users freedom by using a free software license on the source code, then others may package the software for use on the system of their choosing.
> - Don't bundle third-party dependencies. For security (and for better documentation of the true dependency graph) distros often must go through extra trouble to unbundle third-party libraries when present. Some projects even add their own custom patches to their bundled source. Resist the urge to do this.
If any of the dependencies aren't currently packaged in Debian, how would one follow this guideline?
Install them separately, but don't embed. CMake provides features like the external project stuff which lets you fetch and build other sources. But even then, you don't need to embed that in your source tree either--do it at a higher level which builds all the dependencies plus your own sources. This keeps your sources free of embedded junk, giving it the flexibility to be packaged, or used standalone.
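A hedged sketch of that "higher level" superbuild idea (the URLs and names are placeholders, not a real project):

    # superbuild/CMakeLists.txt: build a dependency, then your project against it
    cmake_minimum_required(VERSION 3.0)
    project(superbuild NONE)
    include(ExternalProject)
    ExternalProject_Add(somelib
      URL https://example.org/somelib-1.2.tar.gz              # placeholder URL
      CMAKE_ARGS -DCMAKE_INSTALL_PREFIX=${CMAKE_BINARY_DIR}/stage)
    ExternalProject_Add(myapp
      SOURCE_DIR ${CMAKE_SOURCE_DIR}/..                       # your real sources, unmodified
      CMAKE_ARGS -DCMAKE_PREFIX_PATH=${CMAKE_BINARY_DIR}/stage
      DEPENDS somelib)

A distro packager can simply ignore the superbuild and point the inner project at system libraries instead.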
If I wanted to package my third-party dependency, the first thing I would do is "learn about personal interests of sponsors" and see if my third-party dependency and a sponsor's interests intersect. There's a link to a page describing the sponsoring process, where apparently I'd file a bug against a "sponsorship-requests" pseudo-package and then, I guess, wait.
Next (or perhaps concurrently) I'd file a separate "Intent to package" bug against the "Work-Needing and Prospective Packages" pseudo-package. There's a whole page about WNPP and format guidelines for submitting said bug using the "reportbug" tool. Those format guidelines are longer than the JSON spec.
Then I'd still need to make the package, after all. That link you gave lists five important reference materials, one of which is said to be "must read" and has 12 chapters and 7 appendices. There's also a "New Maintainer's Guide".
Then I need to publish my package. There's an account to sign up for. Plus I'll need to create, keep up with and sign stuff with a GPG key because uploads are http/ftp only.
Once that is finished I apparently get an email response. Finally... I am done!
Now it's time to find a sponsor.
There's a whole section on what to do if you can't find a sponsor. The first is to follow up on the WNPP request I was supposed to make six paragraphs ago. The other is apparently to look up sponsors in a sponsor search-engine on the Debian website and bother them.
Then there's another section on actually getting the package into debian through an ftpmaster. (Both the sponsor and the non-Debian Debian-package maintainer are ominously reminded here that the ftpmaster's _opinion_ on inclusion is binding.)
And then maintaining it.
I would be, for the life-time of my application, maintaining the Debian package of one of my third-party dependencies. This, in response to my query about how to be a good upstream citizen in the hopes that downstream maintainers can more easily package my application! :)
If you are doing something unrelated you probably aren't interested in packaging some dependency for Debian, so you may as well just manually compile and bundle your deps into a container format like docker/appimage.
As a Debian Developer I often find myself throwing away the upstream Debian packaging and starting from scratch. It's not that it doesn't work but that it doesn't fit with Debian policy and so is not easily included or modified without basically starting from scratch anyway.
In the cases (which happens more than you might think) where an upstream developer has actually got really good packaging, I've usually taken the approach of tidying up the last few details and committing back and then mentoring them for a while with them doing the majority of the work and I just double check and upload it.
The level to which people are willing to work on it varies from accepting patches that affect the distribution package version (e.g. hardcoded paths or porting issues) to actually doing the packaging.
MongoDB's Debian packaging is very different from Debian's own packaging. The MongoDB people decided almost a year ago that Debian users only get to use one init/rc system each for Debian 7 and Debian 8.
Also, upstream may have different concerns. For example, they want to be built on various versions of the distribution while Debian will usually only target unstable.
I am upstream for some of my packages and I don't provide the same debian/ directory upstream as I use for Debian.
Now that we have faster moving packages and especially where we have packages that depend on external systems (like cloud services), I think more and more Debian Developers need to be making use of backports to ensure that stable versions remain up to date.
I've heard time and time again that there are issues with Debian's version being out of date, but this doesn't have to be this way, at least not by policy. If there is a backport maintainer (doesn't even have to be the same person working on unstable) then the latest version can be installable within a stable system giving you the latest and greatest of the applications that need it with a stable base underneath.
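Mechanically, using a backport is just two steps on a stable box (jessie is used as the example release, and "some-package" is a placeholder):

    # /etc/apt/sources.list.d/backports.list
    deb http://ftp.debian.org/debian jessie-backports main

    # install a package from backports explicitly
    apt-get update
    apt-get install -t jessie-backports some-package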
I am always hoping there are enough people who take up maintenance as a sort of hobby, because I don't have the time and skill and want a just-works (tm) solution.
That's very nice. Although I wonder what Debian's 'official' container format will be... almost everyone else seems to be standardizing on Flatpak, so it'd be nice if Debian did as well, especially since that would mean Ubuntu (hopefully) switching to it in the future as well instead of Snappy, same as the whole Upstart -> systemd story.
Frankly, more and more it seems that whatever comes out of GNOME/Freedesktop these days does so to fix perceived problems with Fedora but ends up being applied across the Linux ecosystem for "reasons"...
I considered becoming a DD about 10 years ago, but a friend was going through the process and it took them over a year, with at least one restart from scratch because something had been lost in the bureaucracy.
> Regis NM did start somewhere in 2003. There was a period of him being on hold, in 2006 we did continue the process, which used some time, but most of the delay up to now is, again, my fault. Seems like all the few NMs left in my AM queue do have some huge level of patience available somewhere...
I wonder if Debian has improved their process or replaced the ineffective people since. Do they still have the second-class-citizen system (whose name I can't remember offhand)?
While such delay is frustrating, do bear in mind that these "ineffective" people are entirely unpaid volunteers doing this on their own time and at their own expense. Which is not to say that the process could not be improved, of course. They did introduce the "maintainer" status, which makes it simpler to get some level of privileges, which removed some of the blockers.
I went through a similar experience with the NM process some ten years ago except I ended up giving up on the whole thing. After being told I absolutely had to answer a load of questions within a week for a week that happened to be very inconvenient, my answers then got completely ignored. I carried on maintaining the one package until a few years ago when I gave up because nobody would sponsor an upload until the freeze at which point I was being told I had to backport bug fixes to the old release. I actually ended up jumping to FreeBSD before the whole systemd mess hit and I've found it to have a friendlier community.
These days the questions thing has been replaced by a more streamlined process, basically if you have been contributing for a while you'll be accepted.