That's a really interesting question, actually. (This was posted a day ago so I'm not sure anyone will read this any more, but...)
If the job of a process "supervisor" is to launch you, wait(2), launch you again, repeat... who has the role of doing zero-downtime restarts?
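To make the framing concrete, here's a minimal sketch of that launch/wait(2)/relaunch loop in Python (a real supervisor would loop forever and run your actual daemon; `CHILD_CMD` here is just a short-lived stand-in):

```python
import os

# Stand-in for the supervised program; a real supervisor would exec your daemon.
CHILD_CMD = ["/bin/sh", "-c", "exit 0"]

def supervise(max_restarts=3):
    """Fork a child, exec the program, block in wait(2), relaunch.
    Returns how many times the child was run."""
    runs = 0
    while runs < max_restarts:
        pid = os.fork()
        if pid == 0:                       # child: become the program
            os.execv(CHILD_CMD[0], CHILD_CMD)
        _, status = os.waitpid(pid, 0)     # parent: block until the child dies
        runs += 1
    return runs
```

Note there is exactly one live copy of the program at any moment, which is precisely why this model has no natural place for a handoff between old and new instances.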
I'll define a zero-downtime restart here as: a new process is launched while the old process is still running, the two negotiate a handoff of responsibility (migrating ownership of the listening port and draining in-flight connections, as nginx and haproxy do), and the old process dies only once the handoff is complete.
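One simple way to get the "migrate who owns the port" half of that handoff, sketched in Python, is SO_REUSEPORT (Linux 3.9+). This is not nginx's actual mechanism (nginx passes listening fds across execve and gracefully shuts down old workers), just an illustration of two generations sharing a port:

```python
import socket

def bind_reuseport(port):
    # With SO_REUSEPORT, a second process can bind the same port while the
    # first still holds it; the kernel spreads new connections across both
    # listeners, so the old generation can stop accepting and drain what it
    # already has before exiting.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
    s.bind(("127.0.0.1", port))
    s.listen(16)
    return s

old_gen = bind_reuseport(0)            # old process's listener (port 0 = pick one)
port = old_gen.getsockname()[1]
new_gen = bind_reuseport(port)         # new process binds the same port, no EADDRINUSE
old_gen.close()                        # old stops accepting; new keeps serving
```

The draining half (waiting for in-flight requests on the old process to finish) is the application's job either way.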
If you wanted this behavior with systemd/upstart/etc. as a supervisor (where processes run foregrounded and monitored), the supervisor would need a special case for "restart" that starts a new process, begins monitoring that one instead, and doesn't kill the old one at all, leaving it to exit on its own once the handoff finishes.
I have no idea whether systemd can accommodate this without switching to a non-supervised process-management mode (which is definitely possible). I don't have much familiarity with advanced systemd or upstart, but I do have plenty of familiarity with Mesos schedulers, which are in some sense a datacenter-level version of systemd. There we do "rolling restarts": new instances are launched, we wait for them to pass health checks, and then we drain the old ones while a load balancer routes incoming connections to the healthy instances. It's an interesting question what a single machine's process supervisor should do in this case.
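For what it's worth, and this is my assumption rather than something I've deployed: systemd's socket activation gets partway there. systemd itself owns the listening socket and passes the fd to the service (see sd_listen_fds(3)), so across a service restart the socket stays open and new connections queue in the kernel backlog instead of being refused. That's restart-without-refused-connections rather than a negotiated handoff, but it's close. A sketch of the unit pair, with "myapp" as a hypothetical service name:

```
# myapp.socket -- systemd holds this listening fd across service restarts
[Socket]
ListenStream=8080

[Install]
WantedBy=sockets.target

# myapp.service -- receives the already-bound fd from systemd
[Service]
ExecStart=/usr/local/bin/myapp
```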