The official documentation on these directives was of great value when I started looking into unit file hardening. There were a few minor cases where I had to, or felt the need to, go elsewhere for deeper explanation, but for the most part it was readable and comprehensive.
I was able to understand the changes I made, and careful testing turned up few unexpected problems.
The changes I applied as a result have meant the unit files now score around 1.5 from "systemd-analyze security" (where a lower score means less exposure). Considering I approached the process with almost no knowledge of systemd, this speaks volumes about the quality of the documentation, its timeliness and practical relevance, and the fruit that can be borne of excellent documentation.
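For anyone curious what that ends up looking like, here's a sketch of the kind of hardening drop-in I mean. The service name is a placeholder; the directives are all standard systemd options, though which ones a given service can tolerate varies.

    # /etc/systemd/system/myservice.service.d/hardening.conf
    # (create/edit it with: systemctl edit myservice.service)
    [Service]
    NoNewPrivileges=yes
    ProtectSystem=strict        # whole file system read-only except /dev, /proc, /sys
    ProtectHome=yes             # hide /home, /root and /run/user
    PrivateTmp=yes              # private /tmp and /var/tmp
    PrivateDevices=yes          # minimal /dev, no access to physical devices
    ProtectKernelTunables=yes
    ProtectKernelModules=yes
    ProtectControlGroups=yes
    RestrictAddressFamilies=AF_UNIX AF_INET AF_INET6
    SystemCallFilter=@system-service

Then re-check the exposure score with "systemd-analyze security myservice.service", and check that the service still actually works.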
I have to admit, having recently spent some time learning it and updating my Linux skills, I really don't understand why systemd seems to have provoked so much hate and controversy in the Linux world. So far I really like it. It makes a Linux server feel more like something designed rather than incrementally patched together out of hacky shell scripts written 20 years ago.
The unit config files are small and simple. Every line makes sense. Things are coherent - learning how to start a service at bootup means you've partly learned how to configure the new equivalent of cron jobs. The same configuration works at the system and per-user level. You can quickly locate logs. I didn't quite like the command line interface at first (e.g. why "systemctl" and not "service"? And I always forget whether the service name goes in the first or second position). But the basic functionality is all there.
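To make the cron comparison concrete, here's a minimal timer sketch - the backup.service/backup.timer names are hypothetical, but the directives are standard systemd:

    # ~/.config/systemd/user/backup.timer (or /etc/systemd/system/ for system-wide)
    [Unit]
    Description=Periodic backup

    [Timer]
    OnCalendar=Mon *-*-* 05:00:00   # wall-clock: every Monday at 5am
    Persistent=true                 # run after next boot if the machine was off

    [Install]
    WantedBy=timers.target

It pairs with a backup.service describing what to run, and "systemctl --user enable --now backup.timer" turns it on - the same unit syntax you already learned for boot-time services.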
I realise Docker is all the rage at the moment but I've found it kind of flaky and complicated. For my own purposes systemd feels about the right level of abstraction. It's not picky about where software comes from, it just manages it. You can unzip a tarball and make it run isolated, depend on other services, and a whole lot of other neat things. If you've learned it on one distro you've learned it for the rest, unlike SysV init. And it seems to constantly gain more useful features that are all pretty easy to configure as well.
>having recently spent some time learning it and updating my Linux skills
It has gotten better over time. If you're just updating it now you're probably not having nearly as rough a time of it as you would back when it was first being pushed. The documentation was pretty terrible and things would break randomly a fair bit. I've had multiple occasions where my networking just stopped working due to systemd components.
One thing I still find annoying is how "enterprise-ey" it is. There's no easy-to-look-at "put all your unit files here" folder; you first need to use some systemd-specific command to find out exactly which of the many, many subfolders the unit file lives in. There's also a tendency to split configuration across many different files, which I guess is how we always did it but somehow seems more annoying when coupled with the whole "you don't get to know where the unit file is" system. A complicated service could have 3 or 4 different unit files if it relies on socket activation or mount points or timers or anything like that.
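For the record, the incantations in question (nginx.service is just an example name):

    systemctl cat nginx.service                   # prints the unit file(s), paths included
    systemctl show -p FragmentPath nginx.service  # just the main unit file's location
    systemctl list-dependencies nginx.service     # related units (sockets, mounts, ...)

They work, but you do have to know they exist.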
One of my biggest (practical) complaints these days is simply that everything is spread out all over the filesystem.
> I really don't understand why systemd seems to have provoked so much hate and controversy in the Linux world
A few major reasons:
* It was originally written by the person who also wrote Pulseaudio, so any vitriol people have left over from Pulseaudio's configuration issues gets poured into systemd as well.
* The project takes a monolithic approach to the problem of configuring services, and many people think that's wrong. [1]
* It's change, and change means work for people who have their old setups.
[1] This criticism is usually phrased as "it breaks the Unix model", a formulation I don't like because it's meaningless - an appeal to authority instead of an articulation of the actual issues. Watch how people try to explain how the similarly monolithic Linux kernel doesn't break the Unix model; it can be funny sometimes.
Right, what does a monolithic approach to configuring services even mean? I seem to have hundreds of systemd services on my box, all of which appear to do a tiny part of system startup. That doesn't come across as very monolithic; if anything, service overload is one of the things I dislike about systemd (but it wasn't better previously).
> When your system boots up the kernel is executing a given binary in its known namespace. To see the only tasks the application running as pid 1 has to do, see sinit: just reap child processes and run some other init scripts.
I strongly disagree. PID1 is a single point of failure for the entire system. If it crashes, the kernel will immediately panic. Try it out yourself and see. A good design would move every last bit of complexity out of PID1, leaving PID1 doing the absolute bare minimum required.
There is no need for the vast majority of systemd's functionality and complexity to be physically present in the PID1 image. They could fork off another process to do that. I've always found that aspect of its design to be utterly bizarre. Jamming additional complexity into PID1 is fundamentally wrong, it's obviously a poor engineering choice, and there are plenty of better ways of architecting the system.
Purely my perspective here, as a 25-year user of Linux and BSD (for context). On the one hand, I very much agree with you that Systemd brings a lot to the table. The files are much easier to work with, the service ordering and integration is logical and works well (to the extent I've beaten on it), and I can't deny that a faster boot sequence is helpful for things that boot often like container images. It's been a lot easier writing systemd config files for my new services than it was to write init.d boot scripts, for sure, and the integration of systemctl is really nice: one command does all the service things from info to disabling.
The flip-side for me, the one that continues to get under my skin, is the approach of the systemd project. It's the habit systemd has of simply subsuming all other system functions into itself (DHCP client? Sure! DNS client? Mine now! Logging? We handle that now. Firewall control? ALL SHALL BE ASSIMILATED.). If the systemd versions of those functions were both obvious in presence and easy to replace with more fully functional replacements when I wanted, I'd feel better about it. But I keep running across cases where system functions that have worked the same way since Linus was in knee-pants have suddenly been replaced behind the scenes by a systemd module whose configuration files and knobs aren't obvious or well-documented, which is difficult-to-impossible to uninstall, and which causes cascading failures when I try to disable and replace it.
Worse, I keep seeing security issues brought up to the systemd devs and then tossed aside with "well, just don't do that" or "how is that even a problem". It's not pervasive or constant, but it's steady enough to be worrying. Obviously not every security issue raised will be top priority, but it concerns me how much of my systems are being subsumed by a project that seems to prioritize "do all the things now" over "do things securely".
Compare that to the approach taken by, say, OpenBSD, which has also been steadily replacing long-standing system bits with their own custom-developed pieces. Their approach has been "we will provide basic functionality that is iron-clad secure", while leaving you the ability to swap in something else for stuff like OpenSMTPd without breaking your system. And yes, Theo can be just as much an <unprintable> as Poettering, but I'm a lot less worried about the outputs of his work for the above reasons.
Ultimately I think systemd is a good way forward, but it needs someone else to take over the project, rein it in, and keep it focused on being good at what it does rather than trying to be all things everywhere. Or, alternately, it needs to just implement its own kernel and go off to be SystemdOS v1, which seems to be the trajectory it's on right now.
>Worse, I keep seeing security issues brought up to the systemd devs and then tossed aside with "well, just don't do that" or "how is that even a problem". It's not pervasive or constant, but it's steady enough to be worrying. Obviously not every security issue raised will be top priority, but it concerns me how much of my systems are being subsumed by a project that seems to prioritize "do all the things now" over "do things securely".
I would be even harsher than that. It's not just security issues that earn "don't do that" from systemd devs - it's everything that doesn't fit their narrowly imagined use cases. You don't even get "do all the things now" - you just get "do this particular thing now". Generally with no regard for POSIX. And if you want the old behaviour back, expect to boil the oceans. Exhibit A: https://news.ycombinator.com/item?id=19023885
To be fair, if I had to pick the heavily-used specification I'd most like to see ground into dust and rewritten from scratch, it's POSIX. There are several misfeatures that can't be easily undone (fork, and its maddening interaction with file descriptors, for one).
I also strongly dislike the shell-based model of development that people usually appeal to for POSIX. Shell makes for a crappy language (witness how you effectively have to ban spaces in your filesystem paths to make things work). Stringification of identifiers makes time-of-check-time-of-use attacks possible. I suspect it's also a driving factor for some of the misfeatures, because terminal programs and the shell need to implicitly share a lot more OS resources, so programs end up doing weird things like passing all open files to your children by default.
Were I to write my own operating system in 2020, I'd not think at all about POSIX until I finished the design, and relegate it to a compatibility layer for people who want to write programs as if it were 1970. Amusingly, when I looked up Fuchsia last week, it does seem that they designed the OS APIs along some of the ideas I had (e.g., ditching signals; handle-based API), so maybe there is some hope for a better-than-POSIX future world.
> The flip-side for me, the one that continues to get under my skin, is the approach of the systemd project. It's the habit systemd has of simply subsuming all other system functions into itself (DHCP client? Sure! DNS client? Mine now! Logging? We handle that now. Firewall control? ALL SHALL BE ASSIMILATED.).
well, it's in the name. systemd is a system daemon. system is... well, everything that isn't your text editor and music player mostly.
I mean, technically correct (the best kind of correct!), but I have to wonder: if they'd started the project with "this daemon is intended to take over all functionality on the system not currently occupied by specific applications" instead of "this is intended to replace init as the core process handler", would people have been quite as quick to jump on board with it over one of the alternatives?
- "systemd starts up and supervises the entire system (hence the name...)."
- "timer shall provide functionality similar to cron, i.e. starts services based on time events, the focus being both monotonic clock and wall-clock/calendar events. (i.e. "start this 5h after it last ran" as well as "start this every monday 5 am")"
- "More importantly however, it is also our plan to experiment with systemd not only for optimizing boot times, but also to make it the ideal session manager, to replace (or possibly just augment) gnome-session, kdeinit and similar daemons. "
so, it's not like this came as a surprise, did it?
Gee, I don't see a mention in there of DHCP, DNS, firewall control, or indeed any network management whatsoever. I see mention of gnome-session and kdeinit, which are client-level equivalents to initd, but I don't see any mention of network-level functionality, much less all the other things systemd has folded in lately. Indeed, looking through that blog post, the only mentions of any of those network functions (or even the word "network" itself!) are mentioning other system components doing them. So yes, I'd say adding all that in really did come as a surprise, and from the thread here I'm guessing I'm not the only one.
Remember: I don't think systemd's poison. As an init manager, I think it's a good redesign. My concern is that they're busy borging the rest of the system into themselves and (just as importantly) prioritizing "MOAR STUFF NOW" over "how about we do the core things people adopted us to manage more securely".
According to "IP Accounting and Access Lists with systemd" [0], posted ~2.5 years ago:
> With v235 another kind of resource can be controlled per-unit with systemd: network traffic (specifically IP).
> ...
> IPAccounting= is a boolean setting. If enabled for a unit, all IP traffic sent and received by processes associated with it is counted both in terms of bytes and of packets.
> IPAddressDeny= takes an IP address prefix (that means: an IP address with a network mask). All traffic from and to this address will be prohibited for processes of the service.
> IPAddressAllow= is the matching positive counterpart to IPAddressDeny=. All traffic matching this IP address/network mask combination will be allowed, even if otherwise listed in IPAddressDeny=.
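Putting those three together, a sketch of what the quoted post describes - the unit itself is hypothetical, but the directives and the special values "any" and "localhost" are documented systemd syntax:

    [Service]
    IPAccounting=yes
    IPAddressDeny=any          # drop all IP traffic by default...
    IPAddressAllow=localhost   # ...except to/from the loopback addresses

With IPAccounting= enabled, "systemctl status" on the unit shows the byte/packet counters.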
You're correct that "firewalld has no relation to systemd" -- this systemd functionality doesn't use iptables/nftables/NetFilter -- but the commenter never claimed that; he did mention "firewall control" but the meaning was clear (to me).
Honestly, maybe systemd should manage the network and firewall? With Ubuntu 18 my experience of firewalls has been quite painful. In fact I'm pretty sure that trying to make that work right has taken up about as much time as all the other tasks I was doing on that machine. The experience has sucked far more than the systemd experience has done and I was doing nothing complex at all.
The problem is that simple firewall configs are really about per-service access control, and services are defined in systemd. On Ubuntu they have this thing called the "uncomplicated firewall" which is ... OK, it's better than iptables. But. It has its own notion of apps and profiles, and frankly the CLI isn't really intuitive (e.g. the notion of apps seems bolted on). To bring up a service and then ensure it can only be reached from localhost like a local nginx I have to configure systemd, and then separately configure ufw, and then wonder why it doesn't work because this machine is old and was upgraded from an older Ubuntu which for some reason had the netfilter-persistent package installed, and those two were fighting over the kernel firewall oblivious to each other.
It took me many unhappy hours because there was no logging or errors or really any indication anything was wrong at all. Of course it did, because this is Linux and nothing is integrated or works right, it's all just a collection of random distro specific scripts thrown into a cauldron and replaced every couple of years with a new bunch of hacky shell scripts - except, apparently, for systemd! Ohhhh how I would have liked to just write
DisallowRemoteAccess=true
in a .service file and be done with it. Sounds like IPAddressDeny/IPAddressAllow would essentially let me do that.
And don't get me started on netplan. Of course what I want Ubuntu to do after an apt-get upgrade is forget about the network entirely until I find a monitor and keyboard to plug into it then hand-copy a magic YAML file from a random website. If systemd can make that stuff work right then I'm all for it.
> Ohhhh how I would have liked to just write DisallowRemoteAccess=true in a .service file and be done with it. Sounds like IPAddressDeny/IPAddressAllow would essentially let me do that.
You could set the
PrivateNetwork=
option which, ironically, is documented in the article we're commenting on:
> Provides a network namespace for the service with only a loopback interface available. A great option for applications that do not require external network communication.
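So, sketching it (the service name is again a placeholder):

    # systemctl edit myservice.service
    [Service]
    PrivateNetwork=yes   # own network namespace with only a private loopback;
                         # no external traffic in or out

One caveat: that private loopback is separate from the host's, so for the "nginx reachable only from the host's localhost" case upthread, the IPAddressDeny=any plus IPAddressAllow=localhost combination is the closer fit.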
People forget that Systemd was crafted by Lennart Poettering and Kay Sievers while working for Red Hat. Red Hat is a multibillion-dollar company that sells "Linux solutions". If their solution is the standard, then they control the market. This is lifted straight from the Microsoft 101 handbook: Embrace. Extend. Extinguish (going back to "DOS ain't done till Lotus 1-2-3 won't run").
> Or, alternately, it needs to just implement its own kernel and go off to be SystemdOS v1, which seems to be the trajectory it's on right now.
SystemdOS already exists: Red Hat Enterprise Linux. That's the gold standard. All other penguins shall follow. This is the extinguish phase.
The hate for Poettering is misplaced Red Hat hate. He's just a hired gun with terrible ideas, along with his sycophants. How many blog posts did those creeps write boasting of how it would help the desktop community, which Red Hat doesn't give a damn about? Red Hat had them lie about that to push it through and further their control over the Linux server. It's enterprise-grade bullshit.
This. While systemd has a lot of issues that are obvious from a distance, everyone who hates it has had it break something because of a really stupid design decision it made.
I don't really mind Systemd as a replacement for init; it's just that other, unrelated systems that probably shouldn't require it explicitly now do (e.g. Gnome), and Systemd keeps creating more and more subprojects that replace even more standard Unix functionality. Let's see:
1. Systemd-networkd - a whole replacement for Network Manager, which itself was bloated compared to old ifcfg files and other simple tools that manage just single aspects of the network (dhcp, wifi, etc.)
2. Systemd-resolved - replaces everything that deals with DNS. This really bothered me when it was discovered that it was ignoring certain standard configurations, such that users using VPNs / encrypted DNS were leaking DNS lookups. That could get someone tossed in prison or worse in countries like Iran or China.
3. Systemd-logind - replaces various tools used for logging in to both X and the console. Seems like it's a mixed bag, though I know there were a lot of complaints because it broke standard behavior with tools like nohup.
4. Systemd-boot - replaces Grub and other similar tools. I hate Grub, so this doesn't really bother me.
5. Systemd-localed, -hostnamed, -timedated - these replace traditional tools dealing with the obvious features their names describe. This isn't a big deal to me as it's just swapping one simple tool for another, though I'm sure the dependencies will end up pulling in a lot more unrelated systemd libs, whereas the original tools were completely stand-alone.
Seems like more and more are coming out every year - encrypting and exporting home directories, partition managers, Cron, etc.
Even if the replacement libs are better than the originals, it seems the developers purposefully make sure that the more of these projects your distro depends on, the more you're locked into all the other Systemd projects.
This will be the end of the BSDs and of any Linux distros that don't want to use Systemd, or even just a few of the smaller tools, because everything will eventually require the DBus and Systemd / Journald main libs.
I'm not a Linux expert at this level, but it certainly reminds me of a few notorious Java libraries - you import a single library to use just a few methods, and all of a sudden your app has grown by 100MB of other dependencies that have no relation to your requirements, and it breaks everything because it adds all sorts of Servlet libraries that override and break things everywhere else. Usually the only way to keep things clean is to NOT use these libs, or to pull out just the source for the things you want.
Interestingly, the libraries that have caused me the most trouble over the years were created by Red Hat - you end up with all sorts of JBoss-specific modules that, for no apparent good reason, override your logging, validation, web, etc. libs with their own versions.
Ah yes, Systemd-resolved puzzled me too. From what I could tell this is actually the fault of Golang. Historically Linux implemented DNS control using config files read by the C library. On Linux, for no really good reason, the C library is perceived as an optional gateway to the kernel rather than the primary OS interface as it is on other operating systems like macOS or Windows, so statically linked programs or programs that don't use the C library ended up bypassing your configured DNS resolvers.
Arguably they shouldn't have done that, and directly invoking syscalls is a bad idea, but Go does it and Linus encourages it. So making DNS configurable again means putting the resolver behind a local userspace daemon - a stub that even programs bypassing the C library end up talking to - which can then apply/reapply the configuration.
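(For what it's worth, Go does expose a knob for this. Assuming the binary was built with cgo enabled, you can force resolution through the C library - the program name here is hypothetical:

    # netdns=go  -> Go's built-in resolver, which parses resolv.conf itself
    # netdns=cgo -> resolve via the C library (nsswitch.conf, local stub daemons, etc.)
    GODEBUG=netdns=cgo ./some-go-program

Pure-Go binaries built without cgo can't take the cgo path, which is exactly the failure mode described above.)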
I read the nohup thread linked to elsewhere in this discussion. Poettering doesn't seem like the best communicator, but his point was that systemd had to choose between a security-oriented default and a compatibility-oriented default, and they chose security. I can see that the security argument here is rather theoretical (how many systems got hacked over the years because of nohup?!) while the compatibility argument is real, and Poettering's assumption that distros are either security- or compatibility-focused and would know about this setting is a bit off. But this isn't really a systemd-specific issue: macOS and Windows have also upset devs and users a lot with security-oriented backwards-incompatible changes.
> Historically Linux implemented DNS control using config files read by the C library. On Linux, for no really good reason, the C library is perceived as an optional gateway to the kernel rather than the primary OS interface as it is on other operating systems like macOS or Windows, so statically linked programs or programs that don't use the C library ended up bypassing your configured DNS resolvers.
I'm not a systems programmer and haven't touched C/C++ since college. Though I'd like to learn more.
Do you have any links that explain how different operating systems rely on standard C libs vs direct system calls for things like DNS? I guess I don't really remember how system calls are made without importing stdlib and other posix libs like pthreads.
In the early days of systemd it broke systems on the regular, and instead of humility, Lennart Poettering and his homies tended to react with ego and arrogance.
This sandboxing for services provides isolation similar to various container runtimes. Plus, thanks to the integration with systemd, things like live updates without dropping a single connection are possible to implement with straightforward application code.
If I understand Docker correctly, it's not actually intended to be a sandbox and wasn't designed as such (e.g. the daemon runs as root, or at least used to). It's not clear to me what the threat model for running untrusted Docker images is, or how you'd know what the expected set of permissions were except by reading a README.
Whereas this feature is explicitly a sandboxing feature, and the needed permissions are enumerated by the service file.
Not that it's exactly relevant to this article, but on RHEL 8, at least, Docker isn't supported, and instead they use their own container runtime called Podman along with Buildah for building them.
Podman does not run as root, and thus neither do the containers.
I tested it out on my development backup laptop; I usually use Docker-CE on my main MBP. Podman and Buildah were able to deal with all my individual containers, but their replacement for Docker-Compose failed on all my compose environments, and the errors were not helpful. I ended up installing an unsupported version of Docker-CE, and everything worked fine.
Cgroups limit the impact anything inside the container can do to anything outside the container.
It doesn't matter that the daemon runs as root; it starts processes in a way that prevents them from interacting with other daemons, filesystems, and other resources.
It's not cgroups, but rather namespaces and seccomp (and apparmor/selinux on some distros) that sandbox the processes inside the container.
cgroups are used mostly for resource limits, not for sandboxing (aka namespacing).
docker by default does have a slightly more lax security posture than systemd or lxc (i.e. a default set of capabilities that isn't explicitly enumerated and a focus on UX over tweaking them, no user namespaces by default, etc), though you're right that it is largely meant to be a secure sandbox for untrusted containers, as long as you know what you're doing.
To quote Jessie's blog post [0]: "containers were not a top level design, they are something we build from Linux primitives [Linux namespaces and cgroups]".
cgroups can be used without namespaces, and the reverse is also true. Both of them are part of linux container implementations (like lxc and docker), but for an easy example, systemd uses cgroups for every service, and only uses namespaces for ones you very explicitly turn them on for.
Don't quote me on this, but I also think cgroups landed in the kernel many years before namespaces did.
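To make the split concrete, in a unit file the two mechanisms show up as separate groups of directives (all standard systemd options; the values are just illustrative):

    [Service]
    # cgroup-backed resource control, applied to every service:
    MemoryMax=512M
    CPUQuota=50%
    TasksMax=64

    # namespace-backed sandboxing, off unless you opt in:
    PrivateTmp=yes
    PrivateNetwork=yes
    ProtectHome=yes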
It's not enough that a system has the capability to do something; ideally it needs to be well documented, easy to use correctly, difficult to use incorrectly, repeatable, and have its correct usage verifiable. With logging and monitoring available.
When you have a piece of software you want to sandbox.. how exactly are you going to do it? What are the steps? Are they going to be easy for other people to follow and understand what is going on? How do you know it's working correctly?
These sorts of things matter. Not just in terms of usability, but also security. Having the same limitations and kernel hooks underneath doesn't make sandboxing implementations identical; it's still very much possible to have one that is objectively better than another.
I don't know if this is the best implementation, but it's certainly nice that if you are using Linux you probably have it available already. Out of the box.
You're correct at a very narrow level considering only the mechanism used to apply the sandbox but think of the larger picture and especially how container runtimes are not created equal. For example, dockerd involves a running daemon with access control issues which many people handle by handing out root access. podman is better but far less common outside of the Kubernetes world. If you're trying to give generic advice, systemd avoids needing to drag in that extra discussion about which launcher you're using and how it's configured.
An interesting question would be integration with other features like SELinux or seccomp, since those are commonly punted on but make a huge difference in security.
No, he's saying that isolation via systemd is basically the same kind of isolation that you get with runtimes like docker. It's just Linux namespaces, which conventional wisdom says to treat as a relatively modest security boundary at best. The key takeaway here is that you can get this added security with minimal to no performance impact, in a way that's simple and straightforward for the sysadmin to configure.
It's actually more mature for desktop sandboxing than the other solutions, because it's been effective for years and has a lot of community work behind it.
Not every app is well sandboxed with flatpak, but this is the goal and they are making very good progress. It's also a great way to track a larger project that your distribution doesn't keep as up to date as you'd like.
just to note -- flatpak/flathub use bubblewrap under the covers, which is very promising, but I don't see community profiles like firejail's yet.
I think the effective answer is you don't; this is for services. However, sandboxes for this specific purpose exist too: Flatpak, for example. If you want to sandbox programs that are part of the system installation, Firejail can provide pretty useful sandboxing. SELinux and AppArmor can also be configured directly, and some distros come with default configuration for system apps. No one solution will ever be a panacea for either security or use cases, but together they can help build a setup with defense-in-depth.
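For a taste of the Firejail side, usage is a one-liner (the directory in the second example is hypothetical):

    firejail firefox                               # uses the bundled firefox profile, if present
    firejail --private=~/sandboxes/admin firefox   # run with that directory as a private home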
Something I'd like to see is a simple way to define “containers” on my desktop that would allow me to run sandboxed versions of my standard apps in bundles.
The plan would be something like the following.
You have a simple GUI that would allow you to create new containers, for which you could define what each one has access to (specific folders, internet, sound, etc).
You could then add apps to your container, and they would only be able to play with each other in the container with the restrictions given.
I think that would work with a simple app essentially based on bubblewrap.
For example:
* A "torrents" container, where the only apps would be firefox, deluge and vlc, and access to no folder in my home directory, but the container would have its own home directory.
* An "admin" container, with only firefox and thunderbird and libreoffice, say, and access to my ~/Downloads and ~/Documents folders.
You should be able to run the admin.firefox and torrents.firefox side by side, since they'd have different profiles.
By default, each container would have its own "virtual filesystem" with no access to anything outside (modulo what's really needed), and only by toggling "links" would it be able to access your actual fs tree.
The GUI would be easy enough for computer "illiterate" people to work with it.
And the GUI would be smart enough to create desktop files with each new application I add to a container, with customized icons.
I don't expect it would be too complicated (essentially bookkeeping on top of bubblewrap - see the sketch below of what the launcher might emit).
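As a rough sketch of what the generated launcher for the "torrents" container above could look like - the paths are hypothetical, but the flags are standard bubblewrap:

    #!/bin/sh
    # torrents.firefox: firefox with a private home, network on, real $HOME hidden.
    #   --ro-bind / /      read-only view of the host filesystem
    #   --tmpfs /tmp       fresh /tmp; the X11 socket is re-bound so the GUI works
    #   --bind ... $HOME   the container's own home overmounts the real one
    #   --unshare-all      new namespaces; --share-net keeps networking available
    exec bwrap \
      --ro-bind / / \
      --dev /dev \
      --proc /proc \
      --tmpfs /tmp \
      --bind /tmp/.X11-unix /tmp/.X11-unix \
      --bind "$HOME/containers/torrents" "$HOME" \
      --unshare-all \
      --share-net \
      firefox

The GUI's job would then mostly be writing out scripts like this plus the matching .desktop files.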
If anyone is interested, I'd happily discuss it more!
It takes some fussing about but you can do it in more or less the obvious way. I regularly use firefox in a systemd-nspawn container, the only annoying thing at this time is needing root to start the container but there's already a plan for how to fix that and I believe someone has also posted a PR for supporting starting "machines" (in the systemd-machined sense) as a regular user.
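In case it saves someone the searching, the shape of it is roughly this, assuming an OS tree already installed under /var/lib/machines/browser (e.g. via debootstrap) - the flags are standard systemd-nspawn:

    sudo systemd-nspawn -D /var/lib/machines/browser \
        --bind=/tmp/.X11-unix \
        --setenv=DISPLAY=:0 \
        firefox

(Hence the root annoyance mentioned above.)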
Not to start a flamewar, but I'd like to point out why I think snap is not only inferior to flatpak technically but actually a threat to the linux desktop:
Snap is very deliberately centralized, with a single hard-coded repo URL. The server is also closed-source. This is because snap's somewhat transparent primary goal is to give Canonical central control of app installation across all linux distributions. The plan from there will include taking cuts of sales revenue and publishing fees. The pieces for this (like DRM) are falling into place.
Flatpak, because it isn't born out of such a business model, supports an arbitrary number of user-defined repos, which are trivial to host because they are static folders and can be installed with a single click via a `flatpakrepo` url.
This is on top of other advantages such as upstream support from the likes of GNOME, support for sharing code between apps using "frameworks", supporting themes, using namespaces instead of modified AppArmor, p2p support, a better permission system, etc.
Snapcraft is about Canonical control over devices; the sandbox is just a means to an end. Snapcraft leaves you with no control over which code runs on your device.
What I'd like to know is whether it's possible to run GUI applications in their own containers. From what I understand about X, if a GUI app runs in the same context as the DE, it will have access to all other windows, the clipboard, etc.
That makes me think that Xephyr is mandatory in order to run an app in a container, but I haven't found a satisfactorily easy way to do so. Would systemd be the easy solution I'm looking for?
Firejail, mentioned elsewhere in the thread, should do that correctly. Personally though I've been doing docker (substitute with systemd-nspawn or whatever you like) with xpra; not sure it's as secure, but it should block accidental snooping while still supporting clipboard transfers.
It appears that the relevant developers are pushing towards using Wayland for more secure remote windowing, but I do not know what state it's in.
Yes, these mostly look to be features in mainline systemd and should work in any distro using systemd (assuming that version of systemd has these options.)
Also, you can update and upgrade without an RHN subscription. Technically, it's a complete recompile of the source code used for RHEL into CentOS rpm packages.