> What you're saying is pretty much the result of my biggest gripe with Kubernetes, though it's one I don't have a lot of ideas of how to fix; there's too much damn boilerplate. 1000 lines of YAML to store maybe 100 relevant lines.
I think that's more a helm issue than a k8s issue. I've been using helm in production for over a year and k8s for almost three years. Prior to adopting helm we rolled our own yaml templates and had scripts to update them with deploy-time values. We wanted to get on the "standard k8s package manager" train so we moved everything to helm. As a template engine it's just fine: takes values and sticks them in the right places, which is obv not rocket science. The issues come from its attempt to be a "package manager" and provide stable charts that you can just download and install and hey presto you have a thing. As a contributor to the stable chart repo I get the idea, but in practice what you end up doing is replacing a simple declarative config with tons of conditionally rendered yaml, plug-in snippets and really horrible naming, all of which is intended to provide an api to that original, fairly simple declarative config. Add to that the statefulness of tiller and having to adopt and manage a whole new abstraction in the form of "releases." At this point I'm longing to go back to a simpler system that just lets us manage our templates, and may try ksonnet at some point soon.
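To give a flavor of what I mean, here's roughly what a stable-chart-style deployment template ends up looking like. This is a simplified, made-up sketch (no real chart, invented value names), but the pattern is the same: what could be fifteen lines of plain yaml becomes:

```yaml
# templates/deployment.yaml -- hypothetical, trimmed-down chart template
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-myapp
  labels:
    app: myapp
    release: {{ .Release.Name }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: myapp
      release: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app: myapp
        release: {{ .Release.Name }}
    spec:
      containers:
        - name: myapp
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          {{- if .Values.extraEnv }}
          env:
{{ toYaml .Values.extraEnv | indent 12 }}
          {{- end }}
          {{- if .Values.resources }}
          resources:
{{ toYaml .Values.resources | indent 12 }}
          {{- end }}
```

And that's before the helper templates, the ingress toggles and the "plug in your own snippet here" blocks. Multiply by every field someone wanted to expose and you get the thousand-line charts the parent is complaining about.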
The stable chart thing is so weird. Internally we use some abstractions, but I look at the stable charts and it takes so much time just to understand what's going on. Everything is a variable pointing at values, and you can't reason about any of it.
It seems like the hope is that you just ignore it all, trust that the docs are good, and follow them, but I don't live in any kind of world where I can do that.
And the commits, and the direction they're all heading, seem to be toward more and more impossible-to-read conditionally rendered templating.
I've had such a challenge understanding and using helm well enough. Small gotchas everywhere that can just eat up tons of time. This doesn't feel like the end state to me.
> It seems like the hope is that you just ignore it all, trust that the docs are good, and follow them, but I don't live in any kind of world where I can do that.
Yep, agreed, we've used very few charts from stable, and in some cases where we have we've needed to fork and change them, which is its own special form of suck. The one I contributed was relatively straightforward: a deployment, a service and a configMap to parameterize and mount the conf file in the container at start. Even so I found it a challenge to structure the yaml in such a way that the configuration could expose the full flexibility of the binary, and in the end I didn't come anywhere near that goal. Take something like a chart for elasticsearch or redis and it's just so much more complicated than that.
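For reference, stripped of the templating, the core of that chart was basically the following (names and values made up, heavily trimmed):

```yaml
# A ConfigMap holding the conf file, mounted into the container at start,
# plus the Deployment and Service that use it. All names are hypothetical.
apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp-conf
data:
  myapp.conf: |
    listen_port = 8080
    log_level = info
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myorg/myapp:1.0
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: conf
              mountPath: /etc/myapp
      volumes:
        - name: conf
          configMap:
            name: myapp-conf
---
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
  ports:
    - port: 80
      targetPort: 8080
```

Exposing "the full flexibility of the binary" means turning nearly every line of that conf file and spec into a chart value, which is exactly where the simplicity goes to die.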
Right, I'm working on charts for ELK in particular, and it's just a mess. I just took down all my data (in staging, so no harm done) due to a PVC issue: the charts won't update without being deleted when particular parts of the chart change, but if you delete them, you lose your PVC data.
So I found a note in an issue somewhere stating that this is... intentional?... and that of course you need some annotation to change that behavior.
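For anyone else who hits this: if I'm remembering the issue right, the annotation in question is Helm's resource-policy annotation, which tells helm to leave a resource in place rather than delete it with the release. Something like this (the claim name is a placeholder):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-elasticsearch-0          # hypothetical PVC name
  annotations:
    "helm.sh/resource-policy": keep   # helm will not delete this resource when the release is deleted
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 30Gi
```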
Let alone the number of other things: xpack, plugins, the fact that Java caches DNS so endpoints don't work for logstash, on and on.
It seems like everyone is saying operators are going to be the magical way to solve this, but if anything an operator just looks like one more set of codified values that doesn't address any of the complexity.
You're using a statefulset? Here's a tip: you can delete a statefulset without deleting the pods with `kubectl delete statefulset mystatefulset --cascade=false`. The pods will remain running, but will no longer be managed by a controller. You can then alter and recreate the statefulset and as long as the selector still selects those pods the new statefulset will adopt them. If you then need to update the pods you can delete them one at a time without disturbing the persistent volume claims, and the controller will recreate them.
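Concretely, the dance looks something like this (statefulset and pod names are placeholders):

```sh
# Delete only the statefulset object; --cascade=false leaves its pods running as orphans
kubectl delete statefulset mystatefulset --cascade=false

# Recreate the (modified) statefulset; as long as the selector still matches,
# the new controller adopts the running pods
kubectl apply -f mystatefulset.yaml

# Roll the pods one at a time; the controller recreates each one and the PVCs are untouched
kubectl delete pod mystatefulset-0
kubectl delete pod mystatefulset-1
```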