The persistent state is really just the data, not the process. With Kubernetes you store the data on a persistent volume (which could be EBS, iSCSI, etc.) and the process runs in the container. Host dies? Kubernetes can re-attach that volume on another host, start up a new container, and you're back in business.
*Note: looks like in this example they are not setting up a persistent volume.
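For anyone curious what that looks like, here's a minimal sketch of a claim plus a Pod that mounts it (all the names and sizes below are made up; on AWS the underlying volume would typically be dynamically provisioned from EBS):

```yaml
# Hypothetical example: a PersistentVolumeClaim and a Pod that mounts it.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-db-data            # made-up name
spec:
  accessModes:
    - ReadWriteOnce           # block storage like EBS: one node at a time
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: my-db
spec:
  containers:
    - name: postgres
      image: postgres:13
      volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: my-db-data   # ties the Pod to the claim above
```

If the node dies, the Pod can be rescheduled elsewhere and the claim re-attaches the same volume there.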
Ideally, yes. In practice it's not always the case, though. Again: ideally.
Well-tested databases have evolved to survive arbitrary power cuts, even in the middle of a transaction, and so have slowly become reliable enough to trust to recover from on-disk state alone.
Good f-in' luck bringing your db flavor of the month back online from disk state alone. Hope you didn't have any transactions open to your DB before it got rescheduled to a different node in your cluster.
We just played with Elastic File System mounted into a Pod via `nfs` and it worked like a charm, with the additional "oh, wow" of being able to attach the same EFS to several Pods at the same time. I was also thrilled that they mounted with the root uid intact so there wasn't any kind of dumb permission juggling.
I did say "played with" because we haven't beaten on it enough to know whether one could run Jira, GitLab, Prometheus, that kind of workload. I wouldn't try Postgres on it at this point, but maybe it'd work.
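The shared-mount setup is roughly the following (the server name and path are placeholders; EFS exposes an NFS endpoint per file system, so from Kubernetes' point of view it's just an `nfs` volume):

```yaml
# Hypothetical sketch: mounting an EFS file system into a Pod as an nfs volume.
apiVersion: v1
kind: Pod
metadata:
  name: efs-consumer
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: efs
          mountPath: /shared
  volumes:
    - name: efs
      nfs:
        server: fs-12345678.efs.us-east-1.amazonaws.com  # placeholder EFS DNS name
        path: /
```

Because it's NFS rather than block storage, the same volume can be mounted read-write by several Pods at once, which is the "oh, wow" part.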
I wonder how suitable EFS is for Postgres. It's supposedly low-latency, high-throughput and supports Unix locking semantics and so on. On the other hand, it's NFS (one of the worst protocols out there), and there have been reports of less than impressive latencies. EFS is also a lot more expensive than EBS.