
How is Kubernetes not declarative?

I don’t disagree about the exposed complexity; that’s a fundamental decision Kubernetes made about openness and extensibility. Everything is on a level playing field, and there are no private APIs.




As I recall, running "kubectl edit deployment..." doesn't do anything except edit the definition of the config. To have it take effect, you seem to have to manually kill pods, and the new pods then come up with the edited config. If it were declarative, it would detect what needs to be changed and update automatically. Same thing with editing a config. It's possible this was just the workflow my local DevOps forced on me (and I lacked the needed permissions at every turn), but my experience was that if you removed deployments, configs, etc. from the next deployment, nothing would be cleaned up; you had to remove them manually. Again, that's not declarative.

In my experience, Terraform and CDK are much more declarative: you never issue commands to delete a pod or a load balancer or similar. Instead you describe what you want, and their engine figures out what it needs to add, remove, or change to get to that state.
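The whole workflow is "edit the config, let the tool compute the diff" (a rough sketch; the contents of the .tf files don't matter here):

    $ terraform plan    # shows what would be added, changed, or destroyed to match the config
    $ terraform apply   # converges the real infrastructure to the declared state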


That’s not accurate: kubectl edit (or an apply on an existing resource) does immediately detect what needs changing.

For example if you edit a deployment, it will create a new ReplicaSet and new pods and do a gradual rollout from the old one.
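A rough sketch of what that looks like (the deployment/container name "web" and the image tag are made up):

    $ kubectl set image deployment/web web=nginx:1.25   # change the desired state
    $ kubectl rollout status deployment/web             # the controller rolls new pods in and old ones out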

There are corner cases where a controller won’t let you edit certain fields of a resource because they didn’t cover that case, but that’s relatively rare.

Deleting a pod, which IME isn’t too common day to day but can be useful to recover from some failure conditions (usually low-level problems with the node, storage, or network), is also a demonstration of declarative reconciliation at work: if the pod was created by a controller, it will be immediately recreated. Pods are meant to be ephemeral.
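For instance (the pod name here is hypothetical):

    $ kubectl delete pod web-7d9c4b6f5-abcde   # hypothetical pod name
    $ kubectl get pods -w                      # a replacement appears almost immediately, created by the ReplicaSet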

Terraform certainly is declarative, but it isn’t typically used as an engine that enables high availability and autoscaling by scanning its declared state and comparing it to the real world. This is what Kubernetes excels at: continually scanning and reacting to changes in the world. Terraform I have found tricky to run continuously; any out-of-band state change can lead to it blowing away your resources.


That's not been my experience at all. Have had to manually delete pods all the time. Is it possible that this was something fixed in newer versions?

Example case: DevOps pushed out a new version of Istio (without talking with anyone), and even though the container configs referenced the new version of Istio, only half of the pods in the namespace got restarted, so we got paged because a number of services couldn't make any network connections to the other services. We had to manually delete all the pods, and then the new pods all came up with the right version of Istio and were able to communicate again.

On a side note: how is it at all acceptable to have a networking "mesh" that isn't backwards compatible? I can count on no hands the number of times my Fargate/Lambda services couldn't communicate because half of my fleet was running a different version of VPC. Thus far my experience with Istio is that it has never added any business value (for projects I've been involved in), and only adds complexity, headaches, and downtime.

Back to the declarative thing: I'm fairly confident I've edited service configs, added service configs, edited the container image and container environment variables, and never saw Kubernetes restart anything automatically; I had to delete things manually.


Istio is a whole different and very advanced beast, maintained outside of the Kubernetes core, and not for the faint of heart.

The issue there is that it literally needs to rewrite the pod YAML to inject the sidecar Envoy proxy. So say you want to upgrade Istio. Well, Istio needs to change the Pod spec, and it doesn’t do this automatically. Look at the upgrade instructions here: https://istio.io/latest/docs/setup/upgrade/in-place/#upgrade...

Step 6 is “After istioctl completes the upgrade, you must manually update the Istio data plane by restarting any pods with Istio sidecars:

$ kubectl rollout restart deployment”
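As an aside, if I remember right, istioctl can show which workloads are still running an old sidecar, which helps catch the half-upgraded state described above:

    $ istioctl proxy-status   # lists each workload's sidecar with its version and sync state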

Istio can be useful (most security teams want it for auto-mTLS, it can also save you from firewall hell with layer 7 authorization policies, and it does failover across DCs pretty well), but it is crazy to use on its own as unsupported vanilla OSS without a distro like Solo, Tetrate, Tanzu, Kong, etc., or without significant automation to make upgrades transparent. Istio is often very frustrating to me because of cases like yours: it’s too easy to make a mess of it. There are much easier approaches that cover 80% of the need (an ingress controller like Contour or nginx, plus cert-manager).

On editing configs, one area Kubernetes does NOT react to is ConfigMaps and Secrets being updated. Editing an Image or Env var in a ReplicaSet or Deployment will definitely trigger a pod recreate (I see this daily).
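The usual workaround when you do change a ConfigMap or Secret is to force a rollout yourself, e.g. (the deployment name is made up):

    $ kubectl rollout restart deployment/web   # recreated pods re-read the referenced ConfigMap/Secret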

Though take a look at Kapp (https://carvel.dev/kapp/), which provides clearer rollout visibility and can version ConfigMaps and trigger reactions when they update. There is also Reloader: https://github.com/stakater/Reloader
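If I remember Reloader's docs correctly, you opt a workload in with an annotation and it takes care of the restart when a referenced ConfigMap or Secret changes (the deployment name is made up):

    $ kubectl annotate deployment web reloader.stakater.com/auto="true"   # Reloader then rolls the deployment when its ConfigMaps/Secrets change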



