> Kubernetes can re-attach that volume on another host, start up a new container and you are back in business.

There are still some bugs with this, particularly on AWS. Getting better with every release, though.



We just played with Elastic File System mounted into a Pod via `nfs` and it worked like a charm, with the additional "oh, wow" of being able to attach the same EFS to several Pods at the same time. I was also thrilled that they mounted with the root uid intact so there wasn't any kind of dumb permission juggling.
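For the curious, the wiring is just a PersistentVolume that points the `nfs` driver at the EFS DNS name, plus a claim that Pods can share. A minimal sketch (the filesystem ID, region, and names are invented):

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: efs-shared
    spec:
      capacity:
        storage: 100Gi        # EFS is elastic; this figure is a placeholder
      accessModes:
        - ReadWriteMany       # this is what lets several Pods mount it at once
      nfs:
        server: fs-12345678.efs.us-east-1.amazonaws.com
        path: /
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: efs-shared-claim
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 100Gi

Any Pod that mounts `efs-shared-claim` sees the same filesystem, which is the "several Pods at the same time" trick above.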

I did say "played with" because we haven't beaten on it enough to know if one could run Jira, GitLab, Prometheus, that kind of workload on it. I wouldn't try Postgres on it at this point, but maybe it'd work.


I wonder how suitable EFS is for Postgres. It's supposedly low-latency and high-throughput, and it supports Unix locking semantics and so on. On the other hand, it's NFS (one of the worst protocols out there), and there have been reports of less-than-impressive latencies. EFS is also a lot more expensive than EBS.
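If anyone does benchmark it under Postgres, note that NFS behavior is sensitive to mount options. A sketch of pinning AWS's published EFS recommendations on the PersistentVolume (shown with the beta annotation; newer clusters have a `spec.mountOptions` field instead, and the filesystem ID is invented):

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: efs-tuned
      annotations:
        # AWS-recommended NFSv4.1 options for EFS
        volume.beta.kubernetes.io/mount-options: "nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2"
    spec:
      capacity:
        storage: 100Gi
      accessModes:
        - ReadWriteMany
      nfs:
        server: fs-12345678.efs.us-east-1.amazonaws.com
        path: /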


> EFS is also a lot more expensive than EBS.

That may be true, but getting a k8s cluster unwedged from EBS volume state mismanagement is expensive, too.

What I really want is the chutzpah to run GlusterFS, but I am not yet brave enough to be in the keeping-a-production-FS-alive business.
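If I ever work up the nerve, the Kubernetes side at least looks straightforward: the `glusterfs` volume type wants an Endpoints object listing the Gluster servers and a PersistentVolume that names it. A sketch with invented addresses and volume name:

    apiVersion: v1
    kind: Endpoints
    metadata:
      name: glusterfs-cluster
    subsets:
      - addresses:
          - ip: 10.0.0.10     # Gluster server IPs (invented)
          - ip: 10.0.0.11
        ports:
          - port: 1           # required by the schema, not actually used
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: gluster-pv
    spec:
      capacity:
        storage: 50Gi
      accessModes:
        - ReadWriteMany
      glusterfs:
        endpoints: glusterfs-cluster
        path: myvolume        # the Gluster volume name (invented)
        readOnly: false

The hard part, of course, is the Gluster cluster itself, which is exactly the keeping-a-production-FS-alive business.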


I run my Postgres on EBS, though.


Can you elaborate? I run my Postgres on k8s with EBS.


Probably the most severe issue: https://github.com/kubernetes/kubernetes/issues/29324. You won't encounter it until a pod is rescheduled, though.
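For context, this is the standard single-attach EBS pattern, where a reschedule forces a detach from the old node and an attach on the new one; that cycle is where the linked issue bites. A minimal sketch (storage class name and sizes are illustrative; the annotation form depends on cluster version):

    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: ebs-gp2
    provisioner: kubernetes.io/aws-ebs   # in-tree AWS EBS provisioner
    parameters:
      type: gp2
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: postgres-data
      annotations:
        volume.beta.kubernetes.io/storage-class: ebs-gp2
    spec:
      accessModes:
        - ReadWriteOnce      # EBS attaches to a single node at a time,
                             # so moving the Pod means detach + re-attach
      resources:
        requests:
          storage: 20Gi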



