> Like many long-time Unix users, I’m not a big fan of systemd. Then again, I’m also waiting for the whole “windows, icon, mouse, pointer” fad to die down.
Nobody knows what the end game of systemd even is. It started out as an init replacement; now it's doing all kinds of stuff. I had to bail on Debian because systemd failed hard when it took over user directories.
The Windows-Service-Manager-like UI of systemctl's exploratory mode is one thing to tackle.
The lack of visibility into dependencies that aren't obviously direct, or even relevant, is another (like ssh depending on key generation at first boot, which depends on time, which depends on time sync, which depends on the network target... which causes ssh to not get started, without any logs, if you don't have network... despite it not being the first boot... :mindblown).
I won't bother with the whole list because 1. there's no alternative, and 2. "I'm just another systemd denier", as if I wouldn't have had to be using it from early on to accumulate so many grievances.
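For the dependency-visibility complaint above, the stock tooling can at least trace such a chain after the fact; a sketch, assuming a Debian-style ssh.service (unit names vary by distro):

```
# What the unit pulls in (Wants=/Requires=), recursively.
systemctl list-dependencies ssh.service

# What the unit is ordered after (pure ordering, no pull-in).
systemctl list-dependencies --after ssh.service

# The slowest chain of units this service waited on during boot.
systemd-analyze critical-chain ssh.service

# All dependency-related properties in one dump.
systemctl show ssh.service -p Wants,Requires,After,Before
```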
Oh boy. I'm currently battling with Kubernetes, and I mean it.
Compared to k8s, systemd is simple and easy.
I can wholeheartedly say I hate k8s and its guts.
Everything is so overly complicated.
A bazillion configurations for every single little detail.
And she's a touchy little princess.
Help is hard to find, the courses are expensive, and the documentation site isn't great: it sort of explains the components, but then again not really in detail, and there's no complete configuration reference.
And no matter which setup method you choose, something is always wrong.
It's a tool to drive you to expensive public cloud offerings.
For the small price of only ~$2600 per month you can have your 5 node k8s cluster on GCP, cheap cheap.
Burn money money burn money money burn.
Managing and maintaining k8s is a full-time job.
In comparison, systemd is well documented and you don't really need to ask people for help.
You can easily use the shell, and you don't have to battle with broken autogenerated nginx configurations, because you wrote yours and you know what you're doing.
Fleet was cool, but Red Hat bought CoreOS and killed Fleet. Can't have a simple, effective system; it has to be complex and enterprise so you can sell services and tutelage.
Fucking IT people.
Many, if not most, companies don't need the things Kubernetes is designed for, though. It's interesting tech and I can see why people are drawn to it, but I feel like some people pick it more because they want to use Kubernetes rather than it solving a real problem a company or organisation is facing.
To this day, I cannot tell what Kubernetes is designed to do. I hear about it constantly from this website, and based on the conversations you would think it was designed to do anything and everything, and all at the same time.
There's a ton of hype around k8s and tons of people positing it as a solution for everything under the sun. It's not. It's a good building block for everything under the sun, if and when you need the scale of "More than one team that doesn't want to get paged in the middle of the night". A ton of the solutions built on top of it are terrible.
What K8s is good at - Running collections of stateless application servers. If you have a dozen copies of the same process that are all identical, K8s is right for you. There's a lot more it can do, but that's the one that is most common.
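A minimal sketch of that common case, assuming kubectl access to a working cluster (the deployment name and image are placeholders):

```
# A dozen identical, stateless copies of one container image.
kubectl create deployment web --image=registry.example.com/web:1.0 --replicas=12

# One stable in-cluster address load-balancing across all of them.
kubectl expose deployment web --port=80 --target-port=8080

# Kubernetes restarts crashed copies and spreads them across nodes.
kubectl get pods -o wide
```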
If you need to host (hundreds of) thousands of servers across the globe with an uptime indicated by more nines than sense, Kubernetes makes a lot of sense. You still need to write the tool that configures Kubernetes (you can't get away with manually written YAML at scale), but it solves a lot of problems, like "what happens when the master node goes down" and "how do we redirect to a fallback server without overloading the nodes", at the cost of spare, reserved capacity and compute+networking+memory overhead.
If you're hosting things like emergency services support, the extra spend and complexity can be worth it. It can even be worth it as a band-aid if your application isn't particularly stable and you want to increase uptime while you fight for developer capacity to fix the underlying design problems. If all of your IT team already understands Kubernetes, it may even be worth it to run it in scenarios where you want to set up and tear down quick development/test environments, assuming your company doesn't mind spending extra on Kubernetes specialists once the current IT team leaves.
It kind of was designed to be able to do anything and everything if you plug enough components into each other. That's probably why it's complex to the point of unusability; a framework that's designed to support an IRC server network as much as it's designed to support MRI machines is very demanding on the people configuring it.
I think the problematic part is that many people portray it as "just fancy Docker that does most of the work for you". Once the cluster is set up, that's practically what it does, but the first time setup of Kubernetes is YAML spaghetti hell, and learning about tools upon tools upon YAML.
Yep. Needlessly complicated and completely dev-centric, it loses sight of the goal. Pretty hilarious, as the whole argument for outsourcing infrastructure to developers was to make delivery faster, simpler and easier. What a joke.
CoreOS switched its attention to K8s over Fleet fairly early in Fleet's life span, so it never really had the chance to develop into a seriously used tool. Red Hat killed most things CoreOS did after Fleet was dead. Etcd lives on and so does Fedora CoreOS. It's not quite the same distribution CoreOS was, but it follows the same principles and kept some really talented engineers. I wish it got more attention.
Fleet's code is still around if the pain of Kubernetes outweighs the benefits. The problem, in this case, is that systemd is not exactly minimalist, and Fleet built on top of it. I've used it in the past and it felt complex as well, especially when debugging problems.
For self-hosting I've found https://k3s.io from the SUSE people to be really good. It works on basically any Linux distro and makes self-hosting k8s not miserable.
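For reference, the quick-start flow is a one-liner per node; these are the commands as I recall them from the k3s docs, so double-check against k3s.io before trusting them:

```
# On the first (server) node:
curl -sfL https://get.k3s.io | sh -

# Read the join token it generated:
sudo cat /var/lib/rancher/k3s/server/node-token

# On each additional (agent) node:
curl -sfL https://get.k3s.io | K3S_URL=https://<server-ip>:6443 K3S_TOKEN=<token> sh -
```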
I honestly fail to see in what respect Kubernetes is poorly documented. It is complex, yes, but just about every aspect I've come across is documented. I think one reason the documentation at kubernetes.io is kept in a rather short format may be to avoid it becoming overwhelming.
That's a low bar. I've never had any issue doing this with any other tool (supervisord, inetd scripts, docker-compose, Kubernetes... running a command isn't hard).
> Like it or not, systemd is here and probably here to stay for the foreseeable future.
Which is why I love the extreme compatibility and openness of Linux. systemd is free to stay and I'm free to just never use it. This fact only seems to bother one of the groups.
Debian maintainers changed sshd to notify systemd when it is ready. This notification is only a few lines of code, but it's even _fewer_ lines if you call the sd_notify() convenience function in libsystemd.so.
So now you're linking to libsystemd.so. What's also in libsystemd.so? Logging functionality, for programs that need to read systemd logs. That could be in a separate library, but this is systemd, so of course it's not. Everything's in one library. To read compressed systemd logs, libsystemd.so requires a bunch of compression libraries, including liblzma.so.
Anyone linking to libsystemd.so, e.g. to notify at startup, ends up loading liblzma.so, the backdoored version of which abuses glibc ifunc functionality to replace functions in libssl.so in order to take over sshd.
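On a distro whose sshd was patched that way, the transitive chain is visible directly; a sketch (path and output differ per system, and an unpatched sshd won't link libsystemd at all):

```
# sshd -> libsystemd -> liblzma, all mapped at process startup.
ldd /usr/sbin/sshd | grep -E 'libsystemd|liblzma'

# Everything in that list is loaded before sshd ever calls sd_notify(),
# which is why load-time code in liblzma (ifunc resolvers, constructors)
# runs no matter which of its functions sshd actually uses.
```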
No. Anything in the dependency tree could call that lib; systemd just happens to be the first. systemd also provides the libs used to write the things the exploit targets. But the important part you missed again is that the exploit is executed on symbol loading; it merely chooses to do part of its work when checking for a key. Once the code has simply started, everything is already lost.
Nothing needs to call any code in liblzma; just linking against it is enough to run the exploit.
That's hilarious. No, you can not.
Why is there a "WantedBy" for multi-user.target when it is started "After" network.target and auditd.service? If you understand systemd the answer is obvious, but if you don't this should confuse you.
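For anyone still in the confused camp: the two directives live in different sections and answer different questions. A hypothetical unit showing both:

```
[Unit]
# Ordering only: if these units are being started in the same transaction,
# start them first. This does not pull them in and does not require them.
After=network.target auditd.service

[Service]
ExecStart=/usr/sbin/mydaemon

[Install]
# Pull-in: "systemctl enable" links this unit under multi-user.target,
# which is what makes it start at boot in the first place.
WantedBy=multi-user.target
```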
>It can replace inetd, syslog, and many other traditional services.
On Ubuntu it replaced the fstab.
>This is a benefit or a drawback, depending on your point of view.
I don't know from which perspective it is good to have one program extend itself into random unrelated areas and absorb their functionality into itself. Certainly no university course or anyone I ever worked with had that perspective. Usually people talked about defining and limiting scope and having a clear vision of what your software should do.
I think the "Unix Philosophy" debate is largely silly, mostly because it misses the point. Regardless of what some people in the 80s thought about UNIX system programming, it is bad software engineering to not have a defined scope for your software and let it sprawl endlessly.
Whether software should do one thing only is neither here nor there, but it certainly shouldn't do a couple dozen unrelated things while replacing perfectly functional existing system software.
systemd is nothing but Red Hat market capture. You cannot sell Linux certification if you must teach people to code, as was the case with other inits.
Nobody writes code to get an MCSE; there's a reason they were called "mouse clicking solutions experts". Red Hat wants that too: faster and cheaper certification.
And yes, it's a stupid plan. But everything noteworthy on Linux was contributed by companies: either when nobody was looking at what a programmer was doing, or when some old code got donated, or thanks to stupid plans like this.
So we took it. It's not worse than before, and hopefully Mr. systemd will get bored counting money at Microsoft now and let the project evolve into something sane.
Yes of course.
If you can't read a shell script you shouldn't be doing any of that.
As a private individual you can obviously do whatever you want with your system, but if your job is technical support you need to be able to understand and write code at a basic level.
Nowadays everybody who has a technical job has to learn how to write code. It doesn't matter what the technical role is; it will only become more and more of a prerequisite for doing anything.
Or perhaps declarative services are superior to shell-scripted init scripts for entirely different reasons you never thought about, and it never had anything to do with whether or not anybody can write bash scripts in the first place.
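To make the "entirely different reasons" concrete, here is a rough sketch of a declarative unit (service name and paths are made up); restart policy, ordering, privilege dropping and sandboxing are single fields instead of hand-rolled script logic:

```
[Unit]
Description=Example daemon
Wants=network-online.target
After=network-online.target

[Service]
ExecStart=/usr/local/bin/exampled --foreground
# Supervision an init script would have to reimplement by hand.
Restart=on-failure
# No start-stop-daemon/pidfile/su plumbing needed for these.
User=exampled
PrivateTmp=yes
NoNewPrivileges=yes

[Install]
WantedBy=multi-user.target
```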
> The reason systemd has succeeded in becoming an SysV init replacement is simple: it did the work. Not only did it put together a lot of good ideas regardless of their novelty or lack thereof but its developers put in the time and effort to convince people that it was a good idea, the right answer, a good solution to problems and so on. Then they dealt with lots and lots of practical concerns, backwards compatibility, corner cases, endless arguments, and so on and so forth. I want to specifically mention here that one of the things the systemd people did was write extensive documentation on systemd's design, how to configure and operate it, and what sorts of neat things you can do with it. While this documentation is not perfect, most init systems are an order of magnitude less well documented.
[snip]
> You can call this marketing if you want, although I don't think that that's a useful label for what is really happening. I call this 'trying' versus 'not trying'. If you don't try hard and work hard to become a replacement init system, it should be no surprise when you don't.
[snip]
> Since that may not be clear, let me be plain: systemd is a better init system than the alternatives. It does more to solve real problems and it does it better. That alone is a good reason for it to win in the practical world, the one where people care about getting stuff done. That systemd is not necessarily novel or the first to come up with the ideas that it embodies is irrelevant to this. Implementation matters more than ideas.
Which system that systemd replaced was so bad that it needed to be replaced by the init system?
I am not against new software, but if using a new piece of software requires replacing dozens of other system components, then something is going very wrong.
The thought process of the people at Red Hat who think "We need a new system logger; you know, the init system is the perfect place to develop that" is just inexplicable to me.
I like a writer with perspective :)