> Anti-Pattern: Direct Use Of Pods

> Kubernetes Pod is a building block that itself is not durable.

Kind of... but you can set `restartPolicy: Always` and it will always restart in case of failure.
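A minimal sketch of where that field lives, with a placeholder image:

  apiVersion: v1
  kind: Pod
  metadata:
    name: example-pod
  spec:
    restartPolicy: Always    # applies to the containers in this Pod
    containers:
    - name: app
      image: nginx:1.25      # placeholder image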




No. If the node the pod is scheduled to goes away, the pod will not be restarted. `restartPolicy: Always` applies to the containers in the pod, not the pod itself. Deployments (or DaemonSets, Jobs, ReplicaSets, or ReplicationControllers) actively maintain the pods, ensuring that if the pods are deleted they are replaced.
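For example, a minimal Deployment that keeps three replicas running and reschedules them if a node disappears (name, labels, and image are placeholders):

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: example-deployment
  spec:
    replicas: 3                  # the Deployment's ReplicaSet keeps 3 Pods alive
    selector:
      matchLabels:
        app: example
    template:
      metadata:
        labels:
          app: example
      spec:
        containers:
        - name: app
          image: nginx:1.25      # placeholder image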


Other posters have already commented on this, but if you are not using deployments (or at least replication controllers), stop what you are doing right away and fix that. Otherwise you lose one of the biggest advantages of k8s.


Deployments are key. One annoyance is that some of the lower-level units now feel more clunky. Is there a roadmap to, say, let you manage a StatefulSet like a Deployment? Obviously a bit tricky, and a rolling deploy could never surge...


WRT StatefulSets, I assume you are talking about the fact that they are immutable (minus a few fields)?

If so, the hack we've applied for StatefulSets is to delete the StatefulSet with `--cascade=false`. This keeps the Pods of the StatefulSet online, but removes the StatefulSet itself. We can then deploy the StatefulSet with the new configuration values, and manually delete the Pods one-by-one so the changes are applied.
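Roughly this, where the StatefulSet name and manifest file are placeholders:

  # remove the StatefulSet object but leave its Pods running
  kubectl delete statefulset example-ss --cascade=false

  # recreate the StatefulSet with the new configuration
  kubectl apply -f example-ss.yaml

  # delete Pods one at a time; the new StatefulSet recreates them with the updated spec
  kubectl delete pod example-ss-0
  kubectl delete pod example-ss-1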

Graceful: Nope. Needs Improvement: For sure. Gets the job done: Yep!


Hmm, I have deployed a StatefulSet with just delete and create. No need for a cascade flag. Maybe that is the default?

But yeah, like I said, not as nice as deploying a Deployment.



