
> For example, how do you like the fact that init now comes with "hidden" timers? After you've scoured every place that a "traditional" Linux might put scheduled tasks, you've come to the realization "Ooooh, now my init has its own cron!"

They're not "hidden"; they're managed with systemctl like other services, and you can still use regular cron if you'd like.
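For what it's worth, they're as discoverable as any other unit (commands as an illustration; which timers exist depends on the distro):

    $ systemctl list-timers --all     # every timer, when it last ran and when it fires next
    $ systemctl cat fstrim.timer      # the timer unit itself, if your distro ships one
    $ journalctl -u fstrim.service    # logs from the runs it triggered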

As to the rationale of including timers, it is for stuff like mounting and unmounting remote disks etc. at boot, which I personally would include under the responsibilities of a modern init system and 'system manager' in general, same as taking care of TRIM every so often. I appreciate that it isn't for everyone, though.
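To give a concrete picture, a timer is just a small pair of units; a minimal sketch, with a hypothetical unit name rather than anything systemd ships by default:

    # /etc/systemd/system/weekly-trim.timer  (hypothetical name)
    [Unit]
    Description=Run fstrim weekly

    [Timer]
    OnCalendar=weekly
    Persistent=true            # run on next boot if the machine was off at the scheduled time

    [Install]
    WantedBy=timers.target

    # /etc/systemd/system/weekly-trim.service
    [Unit]
    Description=Trim all mounted filesystems

    [Service]
    Type=oneshot
    ExecStart=/sbin/fstrim --all

Enable it with `systemctl enable --now weekly-trim.timer`; from then on it shows up in `systemctl list-timers` like anything else.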

> Or the subtle way it breaks existing SysV compatibility.

The SysV init scripts would regularly break in subtle ways by themselves, to be honest. I find systemd a much saner option.

> How about the systemd-hostnamed service? Why on earth would we need a service to change the hostname? And why should it care about the "chassis type" of the machine?

The systemd network stack is entirely optional and intended for scenarios where you can't afford/don't need the 'fatness' of NetworkManager; just because it's there doesn't mean you have to use it.

> P.S. berate him over it.

Plenty of people already did so, over and over, but have fun, considering you'd be buying him a beer...




> They're not "hidden", just managed with systemctl like other services.

So far in the history of *nix, services have never been the same thing as periodic tasks, to my knowledge. In that sense, it's a hidden surprise that will sooner or later bite everybody who hasn't been initiated into systemd, and it makes things harder to debug.

> As to the rationale of including timers, it is for stuff like mounting and unmounting remote disks etc, at boot

How do you get from that need to "and that requires an internal crond implementation in your init system"? Why not atd, or regular cron, or sleeping processes? I'm genuinely curious, so if you're aware of public discussion of it, please point me in that direction.

> The SysV init scripts would regularly break in subtle ways by themselves, to be honest. I find systemd a much saner option.

I appreciate that the chaining and dependencies of SysV init were horrible. That doesn't make it OK for systemd to introduce more subtle breakages in even a very basic use case.

> The systemd network stack is entirely optional and intended for scenarios where you can't afford/don't need the 'fatness' of NetworkManager

In the scenario where you do not need NetworkManager or a GUI, hostnamed would be quite worthless as well. /etc/hostname and the `hostname` command are more than enough to handle that case, thank you.
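To spell that out (illustrative commands; hostnamectl shown only for comparison):

    # the traditional way, no daemon involved
    # echo myhost > /etc/hostname      # persists across reboots
    # hostname myhost                  # takes effect immediately

    # the hostnamed way does the same, plus metadata like the chassis type
    # hostnamectl set-hostname myhost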

I guess my main issue with systemd is that it introduces (mostly?) unnecessary complexity, which makes me waste more time debugging problems. Of course it is better than SysV init, and I'm very happy with the syntax of its unit files. Yet upstart showed that you can have those niceties without an excess of complexity.


> As to the rationale of including timers, it is for stuff like mounting and unmounting remote disks etc.

The implementation is retarded. Last week my provider's iSCSI fabric suffered a glitch, which hosed a whole bunch of servers for hours because systemd refused to boot when it found the volumes not present. These were in no way critical to the operation of the stack. However, some moron somewhere decided that locking the system for 5 minutes on boot, and then simply refusing to boot properly when everything required to boot is actually in place and OK, is the correct course of action. I have enough of that kind of deranged thinking to deal with coming from Windows; I don't need it from my Linux machines.

Systemd sucks for servers. It might do a lot of nice fancy tech stuff, but it is extremely poorly thought out for use on the server.


> systemd refused to boot when it found the volumes not present

This is not normal systemd behaviour. It waits for the resource, but only for 1 min 30 s by default, and then continues booting (unless the unit is considered critical for reaching a certain target), logging a failed unit. Someone must therefore have explicitly configured a custom behaviour in your situation.
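If it were my box, the first thing I'd check is what the generated mount units actually look like (illustrative commands; the unit name follows the mount point, /mnt/data is hypothetical):

    $ systemctl --failed                  # anything that failed during this boot
    $ systemctl cat mnt-data.mount        # the unit systemd generated from your fstab line, options included
    $ journalctl -b -u mnt-data.mount     # what actually happened while mounting it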

> The implementation is retarded.

Maybe, or maybe whoever configured the server this way is; who knows?

> I have enough of that kind of deranged thinking to deal with coming from Windows

As I said above, this is not standard systemd behaviour.

I am not saying that systemd is perfect, but your case seems like misconfiguration, rather than "deranged thinking" from the systemd devs.

I'd encourage you to read more on its configuration; it's actually fairly flexible, and this[1] is a solid starting point.

1 - https://wiki.archlinux.org/index.php/systemd


Bailing out and dropping to the rescue shell when a mount point in /etc/fstab fails is DEFINITELY the normal systemd behavior. One has to mark it as nofail, otherwise systemd will assume it is required for booting the system.
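Concretely, something along these lines in /etc/fstab (device and mount point are illustrative) turns "required for boot" into "mount it if it shows up, give up after 10 seconds":

    # /etc/fstab
    /dev/disk/by-label/data  /mnt/data  ext4  defaults,_netdev,nofail,x-systemd.device-timeout=10s  0  2

nofail drops the hard dependency on the mount succeeding, and x-systemd.device-timeout caps how long systemd waits for the device to appear.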

(This was originally meant as a reply to the above comment but was mistakenly posted to the grandparent.)


Standard Ubuntu 16.04. From my reading, it waits 90 seconds, then some more, and then even more.


Not the case on Arch, but you can customize the timeout anyway[1] using TimeoutSec, TimeoutStartSec and TimeoutStopSec, or even the global setting[2]; a quick sketch follows the links.

1 - https://www.freedesktop.org/software/systemd/man/systemd.ser...

2 - http://stackoverflow.com/questions/33776937/how-to-change-th...
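A minimal sketch of both, assuming a hypothetical myapp.service:

    # per-unit override, e.g. via `systemctl edit myapp.service`
    [Service]
    TimeoutStartSec=15s
    TimeoutStopSec=15s

    # global defaults, in /etc/systemd/system.conf
    [Manager]
    DefaultTimeoutStartSec=30s
    DefaultTimeoutStopSec=30s

After editing unit overrides, `systemctl daemon-reload` makes systemd pick up the changes.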


Bailing out and dropping to the rescue shell when a mount point in /etc/fstab fails is FOR SURE the standard systemd behavior. One has to mark it as nofail, otherwise systemd will assume it is required for booting the system.


> The systemd network stack is entirely optional and intended for scenarios where you can't afford/don't need the 'fatness' of NetworkManager; just because it's there doesn't mean you have to use it.

Currently.



