Debian Handbook for Debian 10 Buster (debian-handbook.info)
312 points by rlsph on June 1, 2020 | 111 comments



In terms of Linux documentation generally, I have two recommendations. One: install Arch Linux in a VM and mess around with it a bit. Not because I particularly like it or use it for any production purposes, but because the Arch wiki is such a wealth of information on individual daemons and subsystems.

Two: make use of the Arch wiki itself. By knowing at least the basics of how Arch is organized, you can better discern which technical reference material is specific to Arch's quirks and which applies equally to the same daemons running on Debian or CentOS (Postfix or Dovecot, for instance).

Examples of generically useful references:

https://wiki.archlinux.org/index.php/Network_Time_Protocol_d...

https://wiki.archlinux.org/index.php/Postfix

https://wiki.archlinux.org/index.php/Systemd


You do know that Debian supports an Arch-like install workflow via debootstrap, right? See https://wiki.debian.org/Debootstrap on the Debian Wiki for details.
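For anyone curious, a minimal sketch of that workflow (device and suite names are illustrative; the wiki page has the full procedure):

  # from any running Linux system, with the target root partition mounted at /mnt
  sudo apt install debootstrap
  sudo debootstrap --arch amd64 buster /mnt http://deb.debian.org/debian
  sudo chroot /mnt /bin/bash
  # then set up fstab, hostname, users, a kernel (linux-image-amd64) and a bootloader (grub-pc or grub-efi-amd64)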


I definitely wish this install method had a good guide. Having figured it out myself and installed Sid in the past, it really makes for a nice rolling-release distribution experience when combined with apt-listbugs. It's definitely not for the faint of heart!


Does https://www.debian.org/releases/stable/amd64/apds03.en.html fit the bill for you? It's a bit outdated (does anyone still use LILO? It also talks about MBRs and grub-pc instead of grub-efi) and an updated version would be great, but you can substitute the modern replacements for the old stuff and it'll do the trick. Nonetheless, I've installed dozens of systems that way: clean install of the new release on a second partition, set everything up, then reboot for an upgrade with less than 5 minutes of downtime and the option to go back to the old install at any time with a simple reboot (which is a requirement for these machines). It works fantastically.

I also use sid (with apt-listbugs) on my personal machines and can't remember the last time I had something break. You should definitely know your way around a Debian system in case something does break, but it's not any more prone to breakage than Arch in my opinion.


The Arch wiki is today's Gentoo wiki. I don't use either distro, but end up finding my solutions on the Arch wiki more often than not.



I've never installed Arch, but regularly end up finding the info I need on the Arch wiki. Amazing resource indeed!


If you want to understand how the parts of a working Linux system come together, build Linux From Scratch once, fully manually, going through every step.

Arch Linux is like every other binary Linux distribution, only more difficult to administer for no reason. Hell, if you squint hard enough, even Gentoo has more of a reason to exist: a package manager that is source-first.


Seconded. Before Arch, my desktop Linux experience always ended with me breaking the system and not having a clue how to recover (a few attempts at Slackware and one attempt at Debian).

Arch's install process hit the right balance between teaching me how various low-level components work together (bootloader, kernel, X server) and overwhelming me the way LFS would.

I did eventually migrate to ubuntu LTS server + awesomewm for a few reasons:

  * at the time I was using dozens of AUR packages (ROS) that depended on boost and some other libraries, which made each update a several-hour endeavour of recompiling those packages,
  * always running the latest and greatest version of each package got painful, as something I was working on relied on older versions of those packages (maybe some codebase that needed a particular gcc version or something like that).
However, I am forever grateful for the things I learned during my few years of using Arch.


I once encouraged everyone looking to learn Linux to install Gentoo. The Gentoo forums contained a wealth of troubleshooting and debugging information covering not just system configurations, but software build configurations.

Then I encouraged them to uninstall it once they realized they were fighting Gentoo eccentricities rather than learning about Unix build systems. But you learn a lot about Unix build systems in that time, and the journey to that understanding is what's important.

I think Arch has taken over that role in many respects, except instead of forum posts it's a wiki, and a lot more of its quirks are design decisions optimizing for something different, rather than eccentricities kept because "that's how it was done".


I tried to install Arch a month ago, and after about 20-30 minutes of making some progress I gave up. It did not feel like time well spent for a (at the time) non-Linux person.


Wait, I see how Arch Linux is the first recommendation, but what's the second, exactly?


I somehow forgot to type that in and have now added it. The second point was to use the Arch wiki as a general, non-Arch Linux reference.


I installed Arch, and it was so much faster that it is now my favorite distribution.

Ubuntu by comparison is a slow intrusive distribution.

I do use something Debian-ish by way of Raspbian.


Whether a distro will seem "slow" or "fast" is almost entirely down to which desktop is installed by default on that distro. But users should know that they are not limited to that default desktop, and they can just type a few commands and switch to some lighter desktop if their hardware doesn’t smoothly run the default. Therefore, it is rather a waste of time to completely switch to a different distro just for the sake of speedups.


I had Gnome on Ubuntu and Gnome on Arch, and Arch was significantly faster. I should try a Debian install.


Probably the background services and the fact that Arch doesn't come with AppArmor or SELinux running OOTB.


I've done some testing, and if you install minimal Gnome on Arch and minimal Gnome on Debian Testing/Sid (the Gnome version on Stable is a few versions behind), you get pretty much the same speed and memory usage. Much faster than Fedora and even Ubuntu.

To install minimal Gnome on Debian, use the netinst ISO, untick everything at the last step where it asks you to choose a DE, log in after installation is complete, run apt install gnome-core, and reboot into a very minimal Gnome. I'm writing this from that setup.
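For reference, that boils down to something like this (a hedged sketch; gnome-core is the real Debian metapackage, the sudo setup is assumed):

  # after a netinst install with every desktop task unticked:
  sudo apt update
  sudo apt install gnome-core
  sudo reboot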


The desktop is relevant, but so is the default selection of background services. On Debian, choosing a lightweight desktop is trivial (no need to download a separate image, unless you're specifically doing a live install) and the background services are a lot more streamlined as well.


I love Arch compared to Ubuntu, but I haven't noticed a speed difference. What kinds of things do you see a difference in?


Pacman is way faster than apt (or used to be, when I was using it).


If you want to make pacman feel really fast, try DNF.


brb while I install Fedora or CentOS 8 and wipe out all my stuff. In other words, both of those distros on the desktop are the same good idea as Arch on the server. Speaking from experience.


Oh right, it does feel that way. Somehow the package manager didn't come to mind when I said that :-) Anything else you've noticed?


Getting to the desktop is instant. Shutting down is instant.

I have an Arch server. I just tested: after "sudo reboot" it becomes unpingable within 2 seconds, and the turnaround from shutdown to SSHing back into it is just over 20 seconds. (I'll bet it would be faster if I didn't have the 5-beep no-keyboard sound at boot.)


Arch comes with a desktop? :-) (Which desktop are you talking about?)


The one with the desktop was Gnome. I think I installed gnome and gnome-extra.


Debian used to be a necessity. The informational complexity of an OS installation was managed by the skill of the package maintainers. You would rely on them to make smart decisions, which were codified in the debian/ directory alongside the original source, and in the dpkg sources themselves.

All that feels like it has fallen by the wayside nowadays. The information about my site installation is all in a set of ansible playbooks. They could blast their way through an installation of any modern Linux OS, and the result would look like a horrendous mess to a sysadmin from the 90s, but who cares? It's still as reproducible as the carefully tended garden that was a Debian installation of the old days, but with the advantage that whenever the install drifts into instability you can just nuke everything and rebuild from scratch.

In fact, it’s all so automated, if you aren’t regularly blasting everything away and reinstalling, you’re doing it wrong.

No longer do I have a gigabyte or more of backups representing my ossified Debian install as a fallback. Instead I have a few kB of YAML and ansible to build everything.

Case in point: Debian 10 ships with LXC 3. If I want LXC 4, I just script the build and installation. The script is what needs backing up; the OS is disposable. In the past I might have taken care to ./configure it to install into /opt, but why bother? It's more work (LD_LIBRARY_PATH needs setting), so just blast it into /usr/bin and don't lose sleep over treading on Debian's carefully pedicured toes. If it doesn't work out, reimage and try something else. It's wonderfully transient.
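To make that concrete, here's a hedged sketch of such a build script (the version number, download URL, and dependency list are illustrative rather than checked against a specific LXC 4 release):

  # build LXC from upstream source and let it overwrite the OS copy
  sudo apt install build-essential pkg-config libcap-dev libseccomp-dev
  curl -LO https://linuxcontainers.org/downloads/lxc/lxc-4.0.0.tar.gz  # URL/version illustrative
  tar xf lxc-4.0.0.tar.gz && cd lxc-4.0.0
  ./configure --prefix=/usr   # /opt/lxc would be tidier but needs LD_LIBRARY_PATH etc.
  make -j"$(nproc)" && sudo make install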

I miss the old days, but I embrace the controlled chaos, volatility, and dynamism of the future. I can do what I want instead of what, in the past, was carefully curated for me by expert Debian maintainers. A bazaar to their cathedral, sort of.


I think a lot of devs and organizations these days take the "cattle" idea too far, to the point they're like industrial factory farms where all the animals are sickly, and the pools of effluent occasionally break out of containment and kill everything in the local watershed. Yeah, cattle, sometimes servers die and you replace them. But these organizations get too accustomed to just re-launching a whole new autoscaling group because the old one went bad for some reason that no one really wants to figure out. I get the impression they don't realize how unusual it is for a quarter of the cattle to die in a week. Meanwhile, my 100 or so "pets" are managed using an appropriate level of automation and monitoring, and I get a random bad-hardware incident about every 6 months. Compare that to monthly whole-cluster outages from the full-best-practices all-cattle people.


> Compare that to monthly whole-cluster outages from the full-best-practices all-cattle people.

I'm trying to phrase this as nicely as I can: this statement does not reflect the reality of modern day operations.

edit: For people downvoting

1) How exactly do you think operations at companies like Google, Facebook, Amazon, Netflix, Uber, Spotify, etc. work?

2) How often are those companies having monthly outages?


Well at a low level we all know that AWS works akin to a Rube Goldberg machine, where a failure in some forcibly dogfooded service means even their own fucking status page can’t be updated, not to mention the dozens of seemingly unrelated services that suddenly don’t work.


HN is weirdly hostile towards modern distributed systems and DevOps ideologies. The number of people voicing ill-informed opinions in K8s threads is just staggering for a technically inclined crowd.


Debian adds a lot of value in tracking and patching security issues.

Debian adds a lot of value in dependency management.

Debian adds a lot of value in community.

All of these things can be and are done by other organizations, but Debian is definitely doing them.


I have great respect for the project, and IMHO you missed the headline feature: Debian's Free Software Guidelines and the commitment to true freedom in main are an important contribution to the ethos of a universal operating system.

And you’re right about Debian Security Advisories. They are an important part of a stable base OS for anything internet facing. (The days of local users attacking each other via vulnerabilities seem less relevant in 2020.)


The days of hacked routers and IoT devices harken back to that, though.


Debian adds another two things which are priceless IMHO:

- Set and forget servers (enable automatic security updates and just let it run).

- Painless and guaranteed stable to stable upgrade paths.
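For the first point, a minimal sketch using the stock unattended-upgrades package:

  sudo apt install unattended-upgrades
  sudo dpkg-reconfigure -plow unattended-upgrades
  # answering yes writes /etc/apt/apt.conf.d/20auto-upgrades and enables the periodic runs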


Debian adds negative value on security issues. The Debian ssh key vulnerability was among the worst vulnerabilities ever seen in general-purpose computing, and also a predictable result of Debian policies which remain in place to this day. It will happen again sooner or later.


The ssh key vulnerability was as much a fault of the upstream as it was Debian: the Debian maintainer who made the patch explicitly asked the OpenSSL mailing list if it was okay, and they indicated at least acquiescence in the change.

Given that upstream OpenSSL itself would have the Heartbleed bug revealed a decade later, with all the revelations of its source code quality made more notable as a result, I'm willing to place more blame on OpenSSL than Debian here.


> The ssh key vulnerability was as much a fault of the upstream as it was Debian: the Debian maintainer who made the patch explicitly asked the OpenSSL mailing list if it was okay, and they indicated at least acquiescence in the change.

IIRC they asked a list that was not the main project list and not intended for security-critical questions. Code that actually goes in to OpenSSL gets a lot more scrutiny from the OpenSSL side, and other big-name distributions either have dedicated security teams that review changes to security-critical packages, or don't modify upstream sources for them. Debian is both unusually aggressive in its patching (not just for OpenSSL; look at e.g. the cdrecord drama) and unusually lax in its security review.

> Given that upstream OpenSSL itself would have the Heartbleed bug revealed a decade later, with all the revelations of its source code quality made more notable as a result, I'm willing to place more blame on OpenSSL than Debian here.

Heartbleed was a much more "normal"/defensible bug IMO. Buffer overflows are a normal/expected part of using memory-unsafe languages; every major C/C++ project has had them. Not using random numbers to generate your cryptographic key is just hilariously awful.


> Debian ssh key vulnerability was among the worst vulnerabilities ever seen in general-purpose computing

interesting, I wasn't aware of the history of this vulnerability. For anyone else curious, here's an analysis of what happened: https://research.swtch.com/openssl


Also, AFAIK, no telemetry.


You can opt-in to sharing some info with the popcon tool.
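For reference, opting in (or back out) after installation is quick, assuming the standard popularity-contest package:

  sudo apt install popularity-contest      # the installer also asks this during setup
  sudo dpkg-reconfigure popularity-contest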


Yeah, the biggest problem I have with Debian is that they tend to take popcon statistics as evidence.

No corporate installation ever adds popcon. Nobody sensitive to their privacy adds popcon. It's a small, self-selected group, and massively overrepresents desktops and laptops at the cost of servers.


That's the main problem with telemetry, and the reverse is also true: software that has it enabled by default will mostly get data from novices, because more technical users will disable it. (Looking at you, Firefox.)


> that they tend to take popcon statistics as evidence

Please provide a source to back that claim.


Sure. Go read debian-devel on the topic of systemd.


...


If your systems are cattle [0], your case holds. If not, your method won't work in the long term.

Also, preseeding can do much more than ansible/salt/puppet. It can produce 100% reproducible systems in under 5 minutes.

Case in point: I manage a cluster and our deployments are managed by XCAT [1]. I just enter the IP addresses, the OS I want (CentOS or Debian), and some small details, which takes 5 minutes. With three commands I power up the machines, and in 15 minutes ~200 servers are installed the way I want, with the OS I want, identically and with no risk of drifting into anything.

The magic? XCAT generates Kickstart/Preseed files and, instead of pushing a minion and blasting through the installation, it configures them properly. Some compilation and other tasks are done with off-the-shelf scripts we have written over the years. It's more stable and practical than trying to stabilize a Salt recipe/pillar set or an Ansible playbook.

I only re-install a server if its disks are shot or it somehow corrupts itself (which happens about once a year per ~500 servers due to some hardware glitch or similar).

The new ways are cool, but they don't replace the old methods. You're free to choose the one you like most.

[0]: https://www.engineyard.com/blog/pets-vs-cattle

[1]: https://xcat-docs.readthedocs.io/en/stable/


When your user data / logs / spools / db files are on a filer or confined to one local place that’s easily backed up and can be deployed — I tend to use /data — there’s no reason to have ‘pets’ any more.

What’s the non-cattle use case you are referring to? Systems that can’t be wiped because of important persistent user processes? That’s the only volatile state I can think of that would be lost by reimaging.


In normal enterprise operations, that's easily attainable. I presume your servers are not under extreme load and that they mount a remote storage system.

In our system, there are tens of storage servers under heavy load (while they are redundant, they are generally actively load-balancing) and more than 700 nodes which are cattle but under maximum load 24/7/365. The whole thing basically has no time or space to breathe.

While losing nodes doesn't do any harm, we lose some processes and hence lose time, and we don't want that.

Even if we can restore something under 15 minutes, avoiding this allows us to complete a lot of jobs. We don't want to rewind and re-execute a multi-day, multi-node job just because something decided to go south randomly.

Our servers are rarely rebooted and most reboots mean we have a hardware problem.


> In fact, it’s all so automated, if you aren’t regularly blasting everything away and reinstalling, you’re doing it wrong.

The road is the goal? Some people actually use installed systems.

If everything improved so much, how come the Internet is worse than in 2006?


It’s good practice to ensure your code still compiles after a make clean.

Wiping and redeploying is the same idea with infra: just another part of disaster recovery, or even just regular recovery.


> If everything improved so much, how come the Internet is worse than in 2006?

What facts are you basing this opinion on?


> The OS is disposable.

Hmm. You can install a basic userland with a custom kernel, with no particular dependency on a distribution (Let's use openSUSE today!) in just a few KB of YAML?

Have to admit--I'm skeptical.


I do it with a few text files.

https://github.com/pauldotknopf/darch-recipes


This seems to explicitly reference ubuntu, i.e. a distro?

> sudo darch images pull pauldotknopf/darch-ubuntu-$IMAGE

So your "build" is actually just grabbing what the distro already built and then using its infrastructure.

Your description sounded more like you were actually doing things from scratch or at least weren't relying on a distro.


The point is that the distro is a commodity. Yes, you need one somewhere, but it doesn't really matter which particular distro it is, which packaging format/repos they use...


So why did you just happen to pick one of the largest, with lots of manpower, large repos, lots of software support, etc., and not, for example, RebeccaBlackOS (yes, that exists, and no, it is not what you think)?

There has been quite a bit of standardization and improved compatibility, so you can choose relatively freely between the "big" distros. But that does not mean they are a commodity; rather, all (five or so) of them are good.


People tend to go for the big names everyone has heard of because they're the big names everyone has heard of. A no-name distro is probably fine too though.


Your last sentence is specifically what's being called into question.


This seems to be leveraged on top of a distribution, and IMO is a good way to do things. But definitely you're relying on the work that the distro people do.


Debian itself does support automated installation/deployment. You can run the ordinary debian-installer while "preseeding" answers to every question it would normally ask during the install. It's not clear what your scenario is adding there.
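For illustration, a couple of representative preseed answers plus a quick local syntax check (the keys are real debconf questions, but this is only a fragment, not a complete preseed file):

  printf '%s\n' \
    'd-i passwd/root-login boolean false' \
    'd-i pkgsel/include string openssh-server' > preseed.cfg
  debconf-set-selections --checkonly preseed.cfg  # validate the format before feeding it to d-i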

FWIW, most stuff that you download from upstream sources will install under /usr/local/ by default, which is standards-compliant for your use case. (You don't generally need to use the /opt/ hierarchy.) Overriding that default and putting stuff in /usr/ is just breaking the OS for no apparent reason.


Preseeding is a beautifully built technology but it isn’t very relevant today.

It helps answer questions like partitioning and encryption that are hard to handle on a running OS, but really, everything else can be done once the OS is installed and running. The development cycle of creating a preseed config that actually does everything you want is painfully slow compared to writing shell scripts / playbooks / cookbooks for a running system.


> Preseeding is a beautifully built technology but it isn’t very relevant today

I think preseeding is still relevant with the advent of container / immutable operating systems such as CoreOS, and perhaps Nix too. The technology has changed and overlaps with configuration management tooling, but it only handles a small part of a server's lifecycle, leaving room for a proper CM tool.


Can you compare the time required to generate a preseed file and a playbook?

Also, how often do you rebuild a playbook and a preseed file?

Honestly asking, no traps here.


When you’re at the end of the week and you know what you want to do, the two are equivalent.

When it's Wednesday afternoon and you are still riffing on some ideas as to how to configure a new service, the preseed edit-test-edit cycle is on the order of minutes instead of seconds, compared to a script run via ssh on a stable running system.

That makes a huge difference to productivity, for me.


Oh, I understand now.

I generally do service configuration at the post-install stage, and if I have a working configuration, I just pull it from our central storage, or write a small script and add it to XCAT's post-boot steps to run the commands and configure that stuff.

I configure the service by hand on one server, polish it, grab the file (or the steps), and roll it out.

So the preseed file stays very stable; we only change a line or two over the years.

Thanks for the answer BTW.

Edit: It's late here. There's packet loss between my brain and fingers.


Direct link to online version (as I had to click on three different links to get to it): https://debian-handbook.info/browse/stable/



That's probably on purpose so you make a donation.


I can live with that.

I think if you are benefiting from a FOSS project day in and day out, donating "one cappuccino" once in a while would not be a major spending decision.


How many "cappuccinos" do you spend on closed software? Why do you spend less on software that gives you additional rights?


Cappuccinos are not the upper limit.

It's just a good start.


I thought the effort behind this book died, given the lack of one for Debian 9. This was always an excellent resource for me in the Debian 6-8 days, and I'm glad to see it back!

Nitpick: the cover still says "Debian Jessie"


I tried to use this handbook yesterday to answer the question: “how do I start the SSH server on Debian?” and I couldn’t answer it. Compare this to something like the FreeBSD handbook [1], where the information is very easy to find.

[1]: https://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/


I’d imagine it doesn’t cover that specifically because almost all services start automatically when they’re installed on Debian.
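For the record, the answer on Debian boils down to the following (the service unit is named ssh):

  sudo apt install openssh-server   # installs, enables, and starts the service
  systemctl status ssh              # confirm it is running
  sudo systemctl enable --now ssh   # only needed if it was explicitly disabled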


It doesn't cover it generally, either. What you have just stated appears nowhere in the headlined Handbook.


I've been using Debian for about 4 years now. Before that I was an Ubuntu user. Before that, Windows. Never looked back. Great distro. Highly recommend it.


What are your thoughts on Ubuntu vs Debian with respect to functionality and hardware support (laptops in particular)?

I have used Ubuntu at work and at home for a while now and things have been surprisingly free of hassle. I don't mind moving to Debian if things work as smoothly as they do now.


From my experience, as long as you choose the "nonfree" ISO [1] to install Debian from, you should be fine. On the default "free" ISO my Wi-Fi didn't work, while on the nonfree ISO everything was fine out of the box. Debian these days also has a "live and nonfree" ISO, so you can test everything out before installing.

[1]: https://cdimage.debian.org/images/unofficial/non-free/images...


This! Some computers have proprietary parts like modems and graphics cards. Non-free takes care of all that.


How much non-free would one need on something like a ThinkPad? I guess the Wi-Fi chip; what else?


I've owned Dells for quite some time now, and on Dell machines you need it for the Wi-Fi and graphics cards (if they have an Nvidia GPU, for instance). I don't know much about ThinkPads, sorry, but lots of people from /g/ use them, and they usually run Gentoo, meaning they compile (almost) everything from source.

You can use the free version of Debian and install what you need later. But as I was saying, if you're installing on a laptop with no ethernet cable, non-free is the way to go.


Complementary to this handbook: if you maintain one or more Debian servers, it is useful to learn and adopt a configuration management tool -- assuming you are not already using another state management technique that gives you even more control.

The time invested in learning and automating repetitive configuration or admin tasks (upgrades, installing dependencies, deploying a new version of your app, etc.) through ansible and ansible-playbook pays for itself pretty fast. For common tasks there are often high-level modules that provide a simple declarative interface for non-interactively configuring the desired state, compared to using the underlying command-line tools directly. E.g. installing/upgrading packages with apt: https://docs.ansible.com/ansible/latest/modules/apt_module.h...
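As a hedged illustration, the same module can be exercised ad hoc before committing it to a playbook (the inventory group and package name here are made up):

  ansible webservers -i inventory.ini --become \
      -m apt -a "name=nginx state=present update_cache=true"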


This seems really useful!

Related question for all the Debian people I guess will visit this page: is there a version of this for all the small stuff in Debian?

Background: a few days ago I installed Debian under WSL. Normally I would install Ubuntu.

I've set it up to be comfortable (bash completion etc.), but I think someone here mentioned some other guide for setting up Debian machines; this handbook seems to be more about large-scale systems administration and less about configuring it as a developer workstation.

(I also did some cursory searching to see if I could find something about bash completion in this handbook, but couldn't find it.)


Debian has ~60,000 packages and it'd be pretty impractical-to-impossible to cover them all in a single book. This handbook is about the really common tasks; even for most of the specifics it goes into, such as mail servers, LDAP, etc., there are alternative packages and implementations in Debian that aren't covered.

Thankfully, most packages are very well documented; you can usually browse around /usr/share/doc/$PACKAGE to see the upstream documentation and possibly some Debian-specific information. HTML and PDF versions of documentation tend to live in a separate $PACKAGE-doc package, so as to keep the main package rather small.


> Debian has ~60,000 packages and it'd be pretty impractical-to-impossible to cover them all in a single book.

I'll admit my question wasn't the best, so I'll try again:

I'm talking about a how-to that describes how to apply a number of the small conveniences that Ubuntu has by default but that seem to be optional on Debian:

- bash completion

- man pages (at least they are lacking in the Debian wsl edition)

- etc

I can figure this stuff out; it only takes 5 minutes every time I notice something is missing. But I'm fairly sure I've seen a Debian enthusiast here saying there's a short how-to on configuring the end-user aspects of the Debian CLI, and that is the thing I'm looking for :-)
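Not a full guide, but a hedged sketch of the first two items on a stock Debian install (whether the WSL image strips man pages by default is an assumption on my part):

  sudo apt install bash-completion man-db manpages
  # if completion still isn't active, source it from your shell startup file:
  echo '. /usr/share/bash-completion/bash_completion' >> ~/.bashrc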


Odd, I find the docs to be routinely missing, often substituted by some kind of text file with gz compression on it (?)

first look on an old system:

  /usr/share/doc/amd64-microcode
  /usr/share/doc/anacron
  /usr/share/doc/apache2
  /usr/share/doc/apache2-bin
  /usr/share/doc/apache2-data
all fit that pattern


Gzipping documents is a long tradition. Most common text-reading tools (cat, more, less, grep, diff, and a few more) have "z" variants which work on gzipped files directly, without decompressing them first.
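For example (the package and file names here are just illustrative):

  zless /usr/share/doc/apache2/changelog.Debian.gz
  zgrep -i ssl /usr/share/doc/apache2/README.Debian.gz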

Most tools' bigger documentation comes as man and info files. If there's even bigger, additional documentation, it comes in a -doc package (like apache2-doc).

Otherwise, every package comes with some basic documentation and a Debian-specific changes file, most of the time.

Debian's packaging requirements are very strict. You just cannot package something any which way and publish it in the main repo.


It was always somewhat annoying though, particularly when it broke inter-document links, and it is a legacy of older times. Today, it's way past time to drop individual file compression. I mean, we have transparent compression with filesystems like ZFS, so there's no space penalty in dropping the half-assed per-file gzip.


If Linux were a big-iron-only OS I could happily agree, but this stubborn thing runs on everything from Raspberry Pis to mainframes and everything in between. So it's not always sitting on a CPU which can drive ZFS on a multi-disk enclosure.

I'd rather have individually compressed files than run a heavier FS layer on smaller/less powerful systems.


> Odd, I find the docs to be routinely missing, often substituted by some kind of text file with gz compression on it (?)

They're just compressed. Have you tried zless?


On some systems, less can automatically detect gzip compression via lesspipe: https://github.com/wofr06/lesspipe
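A sketch of how it is typically enabled (Debian ships a lesspipe wrapper with the less package; whether your default ~/.bashrc already contains this line varies):

  # in ~/.bashrc: set LESSOPEN so less pipes files through lesspipe first
  [ -x /usr/bin/lesspipe ] && eval "$(lesspipe)"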


All packages have their man pages, but many packages also have optional doc packages (i.e. if you have installed foo, look for a foo-doc package).

I still remember the HOWTO text files


No, I meant that to save space, more complete docs are replaced by one or two terse files (which are also gzipped).


Try zcat on the gzipped files.


What kind of small stuff? I may either try to help myself or just point you in the right direction.

In fact, Debian has a structure and, once you get the hang of it, you can often just guess and do the thing you want.


It really depends. Sometimes it's a matter of figuring out which startup script controls a particular default setting.


I got curious about how it was published. I see that it has been published using Publican.

It looks like the source is all in XML. Is it hand-written in XML, or is there some kind of UI for writing it? Writing everything in XML looks painful.


It isn't painful. I do doco in DocBook XML myself. I write it with a text editor and view it with a WWW browser or (when I am using a terminal) a TUI DocBook XML viewer that I wrote.

Too-little known fact: with a small amount of CSS, mainstream WWW browsers can view most DocBook XML directly. Witness:

* http://jdebp.uk./Softwares/nosh/guide/commands/linux-vt.xml

* http://jdebp.uk./Softwares/nosh/guide/commands/linux-console...

* http://jdebp.uk./Softwares/nosh/guide/commands/console-docbo...


If you (like me) enjoyed the previous version, which discussed Debian 8 Jessie, and are mostly interested in checking out what's new, the commit history might be a good place to start: https://salsa.debian.org/hertzog/debian-handbook/-/commits/b...


So, does anyone even use Debian anymore? And if you do, what for? (No, RPis don't count.)


1st order of business if you're looking into Debian Buster:

Install Devuan (www.devuan.org) Beowulf instead. It's Debian Buster without systemd. Almost all of the administrator's handbook applies verbatim.

If enough of the Debian userbase chooses Devuan, its minor diffs will probably be implemented in Debian mainline and we can get past this sad affair.


Couldn’t you just install Debian, choose sysvinit and uninstall systemd?

Also, I personally won’t be dropping systemd any time soon because the alternatives on Debian are:

Go back to Sysvinit and have to write sysvinit scripts myself; or

Use some even less common init system and have no package provided init scripts/units/what have you.


One system has, at last count, over 600 provided service bundles in a Debian package. (It's somewhere around 670 in the development version.) These range from "accounting" through "keepalived" and "swift@container-auditor" to "ypbind".

* http://jdebp.uk./Softwares/nosh/debian-binary-packages.html#...

One can also pull in other people's run program collections, of which the world has several.

* http://jdebp.uk./Softwares/nosh/guide/creating-bundles.html

And there's a handy tool for what's left.

* http://jdebp.uk./Softwares/nosh/guide/commands/convert-syste...

* http://jdebp.uk./Softwares/nosh/worked-example.html

Note, for the sake of completeness, that van Smoorenburg rc scripts changed format on Debian back in 2014. Most of the boilerplate has been eliminated, and writing them is a lot closer to how one would write a Mewburn rc script on FreeBSD/NetBSD or an OpenRC script.

* https://manpages.debian.org/buster/sysvinit-utils/init-d-scr...


Sorry, but a slightly better script-based init system maintained by one person just isn't going to cut it.


It is fortunate, then, that neither of the ones that I referenced are that. The several run program collections are, as we can see, provided by a range of different people from Wayne Marshall to Glenn Strauss; and van Smoorenburg rc on Debian is maintained by several people, including Petter Reinholdtsen who introduced the aforementioned 2014 change.


No, that's actually impossible. Many/most of the core packages depend on systemd, directly or through intermediate dependencies. If that weren't the case, there would be no motivation to fork the distribution.

Part of the arguments that led to the fork was the "viral" nature of systemd use, i.e. it's not just an opt-out option, but more integral than that.

At the same time, breaking the systemd dependency was not extremely involved technically. Very little code had to be written, and most of the work is tying things together at the distribution level.


> Couldn’t you just install Debian, choose sysvinit and uninstall systemd?

Maybe, but Debian policy reserves the right to break your programs in future updates, right?

> Use some even less common init system and have no package provided init scripts/units/what have you.

If you just want to use the most common thing with everything being easily packaged, why would you be using Debian rather than, say, Windows? The things that traditionally set Debian apart from Windows are the same things that set something like Devuan apart from modern Debian, IME.


> Maybe, but Debian policy reserves the right to break your programs in future updates, right?

I'm pretty certain packages wouldn't see a change such as removing sysvinit support files unless you upgrade to a new major version (i.e. upgrade from Buster to Bullseye). In that scenario, sure, all bets are off. But in that scenario you could also just find the package missing entirely, so no amount of sysvinit scripts will help you if the binary itself is gone.

> If you just want to use the most common thing with everything being easily packaged, why would you be using Debian rather than, say, Windows?

Wat? A pretty basic Debian Buster box I set up recently has close to 200 systemd unit files from just 24 Debian packages. Are you seriously suggesting that I should remove systemd, install... some other declarative service manager, and then write/find appropriate unit files for all of those services?

> The things that traditionally set Debian apart from Windows are the same things that set something like Devuan apart from modern Debian, IME.

I'm glad you made sure to classify that as your opinion.


A dordy affair, then?



