Kubernetes 1.14 released (kubernetes.io)
173 points by gtirloni on March 26, 2019 | 54 comments



The inclusion of Kustomize[1] into kubectl is a big step forward for the K8s ecosystem, as it provides a native solution for application configuration. Once you really grok the pattern of using overlays and patches, it's something you'll want to use everywhere (even outside of the k8s ecosystem).

I'm excited to see that the kubectl docs[2] are actually recommending -k as the default solution (vs -f).

  Though Apply can be run directly against Resource Config
  files or directories using -f, it is recommended to run
  Apply against a kustomization.yaml using -k. The 
  kustomization.yaml allows users to define configuration
  that cuts across many Resources (e.g. namespace).
Really amazing work from everyone on this release.

[1]https://kustomize.io

[2]https://kubectl.docs.kubernetes.io/pages/app_management/appl...
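
For anyone who hasn't tried it yet, a minimal sketch of the layout the docs are describing; the file names and the "prod" overlay below are illustrative, not prescriptive:

  # base/kustomization.yaml
  resources:
  - deployment.yaml
  - service.yaml

  # overlays/prod/kustomization.yaml
  namespace: prod
  bases:
  - ../../base
  patchesStrategicMerge:
  - replica-count.yaml

  # apply the whole overlay instead of individual -f files
  kubectl apply -k overlays/prod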


In which situations would you consider using Kustomize over helm? Helm seems like a decent option for defining your app, especially if you’re using other helm dependencies.

I have yet to deploy anything to a production k8s cluster, so I'm looking for opinions from people who know what they're talking about.


Helm is great right up until you run into a chart that doesn't support what you need, e.g. if you want to run on tainted nodes, the chart needs to support tolerations. Want to add a sidecar to a pod? No.

It also lies about the actual status of the installation. A deployment that applies successfully but has issues with the new pods coming up can still show as successful.

The larger charts are a nightmare which isn’t helm’s fault but it doesn’t make things appreciably easier either.

Kustomize has a lot of warts too. If the type definition in k8s source doesn’t have a merge key defined, it’s going to overwrite the entire section of the resource. You can use JsonPatches to get around that problem but that is super gross too.
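
For the record, that JSON patch escape hatch looks roughly like this (the Deployment name and the toleration are made up for illustration):

  # kustomization.yaml
  patchesJson6902:
  - target:
      group: apps
      version: v1
      kind: Deployment
      name: my-app
    path: add-toleration.json

  # add-toleration.json
  [
    {"op": "add",
     "path": "/spec/template/spec/tolerations",
     "value": [{"key": "dedicated", "operator": "Equal",
                "value": "db", "effect": "NoSchedule"}]}
  ]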

The entire ecosystem is trying to solve the same problem and no one has come up with the definitive way of doing it (because it's a HARD problem). It seems like Helm templating and then passing the output to Kustomize to add in the one-off changes is the direction a lot of people are heading, but it's more of a least-bad solution than a good one.

I will say, if you're using helm templates for your own stuff, you're very likely going to be happy with it.


This is a problem we got sick and tired of at $work, so we made kr8: https://github.com/apptio/kr8

You can define a component which you install into many clusters and then slightly differentiate them based on cluster parameters, kind of like Puppet or Chef (without the application stage).

Alongside this, you can actually patch helm charts. An example component can be found here: https://github.com/jaxxstorm/kr8-cfgmgmt-example/blob/master...

The patches.jsonnet file allows you to add a command-line flag that wasn't in the helm chart at the time.
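
For a flavour of what that kind of patching looks like, here is a generic jsonnet sketch of appending a flag to a rendered Deployment; this is not kr8's exact schema (see the linked example for the real thing), and the file name and flag are made up:

  // patch a rendered Deployment by appending a flag to every container
  local deploy = import 'rendered-deployment.json';  // hypothetical helm template output

  deploy {
    spec+: {
      template+: {
        spec+: {
          containers: [
            c { args: (if 'args' in c then c.args else []) + ['--enable-feature=foo'] }
            for c in super.containers
          ],
        },
      },
    },
  }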


Oh no, jsonnet!

Seriously though, gonna look closer at this tomorrow at the office. Thanks for the heads up!


You're just running helm charts directly from the public repo? I assume so since you say customization is a nightmare.

I use helm fetch and pull the charts with dependencies into my repo. Running directly from the public repo is a little bit too much like piping the contents of a URL to bash without checking first.

You can then customize however you like.

When I want to update the charts, I run helm fetch again and merge the changes.
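
In case it helps anyone, that workflow is basically this (chart name and versions are just examples):

  # vendor the chart into the repo instead of installing straight from stable/
  helm fetch stable/nginx-ingress --version 1.3.1 --untar -d charts/
  git add charts/nginx-ingress

  # later: fetch the newer version somewhere else and diff/merge it in
  helm fetch stable/nginx-ingress --version 1.4.0 --untar -d /tmp/chart-update
  diff -ru charts/nginx-ingress /tmp/chart-update/nginx-ingress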


So you are forking and maintaining the helm templates.. :-/


Not really. I'm essentially just pinning the version and then checking the updates before upgrading.


The pattern of Helm template + Kustomize is operationalized with Ship: https://github.com/replicatedhq/ship plus some other functionality to automate pull requests from upstream updates.


Isn't it a native use case as well? This is linked from the kustomize website as a valid use case: https://testingclouds.wordpress.com/2018/07/20/844/

(Title = Customizing Upstream Helm Charts with Kustomize)


> Helm seems like a decent option

If you want to: have a PITA with RBAC; forget about manual changes, because there is no 3-way merge in Helm 2 (that's what kubectl apply does; one can only hope it arrives in Helm 3), so after a manual change helm cannot apply the desired state; and live with some of the biggest pains, which lasted for years and were only resolved in 2.13 with the --atomic parameter.

Aside from dependency resolution, everything that Helm does could be done by any template engine + API caller.
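
For anyone on 2.13+, the flag in question looks like this (release and chart names are placeholders):

  # roll the release back automatically if the upgrade doesn't come up healthy
  helm upgrade my-release stable/some-chart --install --atomic --timeout 300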


My team uses Helm extensively and will probably stick with it for the time being as it seems to have the most traction at the moment. As an example, MSSQL isn't a part of my core stack, but occasionally data engineers get databases from folks that they need to extract data from. Helm allows me to get one running in approximately 2 minutes or less. For many of the core components of our infrastructure we can deploy similarly quickly. We do our best to track stable for those charts. This is the tremendous advantage of helm.

Unfortunately, many of those stable charts for core infrastructure are barely POC-worthy and take a great deal of effort to get into production state. For example, there is no helm chart for Kafka with TLS support. Maybe not necessary for every production implementation, but dammit, this is 2019.

That said, helm is not safe. A helm "release" is not an artifact. It's a deployment of a helm chart of a certain version. That chart version may or may not correlate to the underlying version of the software you're running, since the actual artifact is presumably defined by an "image" in values.

For example, I've run the same aforementioned MSSQL helm chart with 2017-CU7-ubuntu, 2017-CU8-ubuntu, and 2017-CU9-ubuntu tags. A quick "helm list" might show you the same chart version across three different environments but they're still very different.
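
That happens because the image tag is just a value, so two clusters on the same chart version can be running different software after something along these lines (chart and value names from memory, treat as illustrative):

  helm upgrade mssql stable/mssql-linux --reuse-values --set image.tag=2017-CU9-ubuntu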

Helm is not idempotent. Helm upgrade is a nightmare. One has to perform a helm upgrade to set values. Helm upgrades often fail. Even though they report failure, a look under the hood reveals that it actually succeeded.

Occasionally a helm upgrade blows away the persistent volume attached to a stateful thing. That's always the best.

tl;dr helm is the most popular, but has some massive shortcomings, so we're all looking for something to replace it. Kustomize is exciting because it's now, for better or worse, part of the core.


I really wish they hadn't included this. Whilst kustomize is infinitely better than helm, neither is a good tool. That said, if there were a choice between the two, kustomize wins hands down, I guess.


What's a good tool then?


I really like kubecfg: it's an unopinionated, powerful tool that uses jsonnet to describe k8s resources. I really tried to make kustomize work, but I found it really, really rigid.

I am, however, not sure what kubecfg's future is: ksonnet (its way more complicated cousin) is being deprecated.
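
A small, illustrative example of what the jsonnet input can look like (the image name is made up); kubecfg just evaluates the file and works with whatever k8s objects fall out:

  // guestbook.jsonnet
  {
    deployment: {
      apiVersion: 'apps/v1',
      kind: 'Deployment',
      metadata: { name: 'guestbook' },
      spec: {
        replicas: 2,
        selector: { matchLabels: { app: 'guestbook' } },
        template: {
          metadata: { labels: { app: 'guestbook' } },
          spec: {
            containers: [
              { name: 'app', image: 'gcr.io/example/guestbook:1.0' },
            ],
          },
        },
      },
    },
  }

Then kubecfg diff guestbook.jsonnet to preview and kubecfg update guestbook.jsonnet to apply.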


Re: ksonnet deprecation: I heard from some of the CoreOS team at a Prometheus Meetup recently that ksonnet is not exactly being deprecated.

Heptio (the owners of ksonnet) were bought by VMware and will no longer maintain it. But it's possible the CoreOS guys will take ownership, since they use it heavily in some of their tools.


Is Kustomize a native alternative to Helm then?


Helm provides different capabilities. The chart functionality allows you to install and manage deployments with the intent that it takes little work. It also has a templating engine that can work with or without the charts.

Kustomize is purely hierarchical templates.


Yes it is. You can do more with Kustomize.


Interesting, these two responses seem to contradict each other, no?


I’m liking two other projects in the same space:

- kosko

- pulumi


The big news here is Local Persistent Volumes hitting GA, which is the key prerequisite for running production databases on Kubernetes and envisioning Kubernetes-only production environments.

Big kudos to SIG Storage!


The Local Persistent Volumes are not quite ready for things like DB workloads. They still require manual work before they can be used. So if you plan on using them, you need to configure your servers and manually create the PV objects, and you need to do some configuration to make sure the disks are the correct sizes, since there is no dynamic packing for smaller DBs.

There was talk earlier about using LVM to be able to really provide a local PV that behaves like the network-attached PVs, but it doesn't look like that has landed yet.
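
To make the "manual work" concrete, this is roughly what you end up writing by hand today, one PV per disk per node (names, sizes, and paths are illustrative):

  kind: StorageClass
  apiVersion: storage.k8s.io/v1
  metadata:
    name: local-storage
  provisioner: kubernetes.io/no-provisioner
  volumeBindingMode: WaitForFirstConsumer
  ---
  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: db-pv-node1
  spec:
    capacity:
      storage: 100Gi
    accessModes: ["ReadWriteOnce"]
    persistentVolumeReclaimPolicy: Retain
    storageClassName: local-storage
    local:
      path: /mnt/disks/ssd1   # you format and mount this yourself
    nodeAffinity:
      required:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values: ["node1"]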


Is the conventional wisdom to still avoid self-hosting? I'd like to adopt k8s but managed hosting isn't feasible in our aws region or on-premise.


Yes, you will spend more time maintaining Kubernetes than running your infrastructure if you are a small team.


This is not necessarily true. I learned Kubernetes and converted a complex network of services over from Docker about a year ago. It took me about a month. I barely have to do anything for ongoing maintenance. It's taking up less of my time than Docker Swarm did, and Swarm took up less of my time than the Ansible mess I had before that. More stuff is automated (e.g. by using k8s cron jobs). The scheduler is pretty good now and handles almost all situations by itself.

As a sole developer, I couldn't run what I do WITHOUT such an orchestration platform.

And yes, I administer my own cluster on a bunch of Vultr VMs. I've had fewer problems with this over the last 3 years than it seems people have had on GCS (recent news of outage fresh in memory).


What are you using to orchestrate the VMs? (provisioning new VMs, taking down VMs and so on)


Nothing. Occasionally I upgrade the specs on one of the machines (vertical scaling) with one click. I'm not so bothered about having a bit of extra capacity since the VMs are so much cheaper than on GCS/AWS/Azure.

Eventually I might want to add more machines to the cluster. That would take me a few minutes. I use sup scripts to do the setup. I like it much better than Ansible.

https://github.com/pressly/sup


Keights[1] has a snapshot version available for running 1.14.0 in AWS, in case you can't wait for EKS :).

1. https://github.com/cloudboss/keights


Ok, stars are not a direct measure of anything, but in this crowded space, where a Kubernetes-org-maintained project like Kops exists, why should I even come near a tool like this?

Also, shameless plug?


How does Keights compare to Kops? https://github.com/kubernetes/kops


Where Kops is a command line tool with subcommands like create-cluster and update-cluster, Keights uses Ansible roles with CloudFormation underneath. This works well to manage the cluster from CI/CD, where updating the cluster is just updating the version in requirements.yml and retriggering the build with Ansible. Creating or updating the cluster is the same process.
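
For readers who haven't used that pattern, requirements.yml is the standard ansible-galaxy pin file, so an upgrade is just bumping a version and re-running the build; the src, role name, and playbook below are placeholders rather than keights' actual layout:

  # requirements.yml
  - name: keights-stack
    src: https://github.com/cloudboss/keights
    version: v1.14.0

  # in CI:
  ansible-galaxy install -r requirements.yml --force
  ansible-playbook cluster.yml   # hypothetical playbook that builds the CloudFormation stacks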

Keights also relies on kubeadm, which is released along with Kubernetes, so it can be up to date much quicker than Kops, which is several releases behind. Kops is a surprisingly large amount of code and probably needs significant changes between each release, whereas changing Keights to support a new release usually just means modifying the Kubeadm config files and changing dependency versions.

At the last two companies where I worked, they couldn't use Kops because it checked for an internet gateway on the VPC and failed if one was not found. A lot of companies have centrally controlled Amazon accounts with locked down VPCs and no ability for most teams to modify them. Keights was created to work even in such environments and assumes that your network is already set up how you want it.


Thanks for the input on kops. There's a lot of functionality in kops - e.g. we can manage CNI providers, etcd etc. In some places this manifests as more code, but my gut is that if you add up all the code you're relying on it turns out to be a wash. And you can definitely use kops from CI, driving it from yaml files that are checked in to git - that seems to be a great configuration particularly for people managing large numbers of clusters.

The long-term strategy is to get most of the kops functionality upstreamed into standalone community projects - and we're making progress with etcdadm, addon-operators, cluster-api etc. Then it will be easy to write your own tooling if you don't like some of the kops decisions, but still benefit from the community investment in e.g. etcd management etc. kops itself becomes a thinner shim around those shared common pieces. A lot of the decisions that are now generally agreed (e.g. dynamically attaching etcd volumes) weren't as well accepted when we started off, so it was harder to get them going as community efforts!

We do have support for "phases" in kops which should allow you to use a provided VPC, but to be honest it's still not as easy as the rest of kops is. We also have a few PRs in-flight to allow you to specify an alternative to an IGW, e.g. a VPN, but it's hard to reach consensus (though I guess we should, based on your input!). The big trade-off here is that once you start allowing arbitrary configuration, you lose the ability to validate things, and so for some fraction of people there are going to be mistakes. That works great for small community projects, and it is really great if your business model is paid support, but for a large community project it really can be problematic. I don't think we've got the balance totally correct in kops, but that's the trade-off we wrestle with.


Hi Justin, thanks for the feedback. I'll take another look at kops as it's been a while. I do think it could benefit from having a knob to turn off some of the pre-validation, or for it to do functional checks instead of checking for the existence of specific AWS resources. Many companies treat their AWS VPCs as an extension of the private corporate network, and they come up with lots of creative ways to route traffic to the internet (or select portions of it) without the VPC necessarily having an IGW or VPN. AWS is also making a growing number of services available from private endpoints within the VPC, so there is often no need for internet access. I understand the value of validating the environment, but an "expert mode" toggle would also be nice.


Thanks for explaining.

It sounds like Keights is the same as Kubespray then, isn't it? Kubespray also uses kubeadm underneath, as far as I'm aware.


They both use kubeadm, but the design is very different. Kubespray runs Ansible against the hosts in the cluster, whereas Keights uses Ansible to build CloudFormation stacks, and all the instances then bootstrap themselves with kubeadm.


Kops is (intentionally) very out-of-date. This is a blog post about K8s 1.14 being released, but the latest stable Kops is still based on K8s 1.11.x.


I don't think they intend to be as out-of-date as they are. They are supposed to be around a version behind. So they should really be releasing 1.13 soon rather than 1.12.

They have had a few big tech changes in the 1.9/1.10/1.11 timeline, so hopefully they will be able to catch up a little.


Yes, the reality is probably somewhere in between. We've heard that users like the idea that kops releases are more ready, whereas the .0 releases of k8s might have more issues, particularly with ecosystem components. So we do lag a little behind - ideally one release.

1.12 has been a particularly tricky release getting everyone from etcd2 -> etcd3, but we've finally turned the corner on that one, so that should now let us catch up a bit.

Finally, we've also heard that users want more of a choice, so we're going to start doing e.g. 1.13-alpha and 1.14 alphas much sooner. We'll still wait to do kops 1.14.0 until everything is ready, but for users that want to run k8s 1.14 sooner, they will have an option that isn't building from source. And hopefully this also gets more people using pre-release versions of kops (in non-production environments) and also helps stabilize the releases more rapidly.


Ah yes, OK. What about Kubespray? https://github.com/kubernetes-sigs/kubespray


Kubespray SSHes in and does what you'd do manually, while Kops deploys a prebuilt image. That means Kubespray is pretty much cloud-agnostic and much easier to customize, but it's also not really compatible with auto-scaling groups and the like.


And right when you thought proprietary, non-standard OSes were finally going away...


What do you mean? Kubernetes is open source and the de-facto standard in container orchestration.


Might be referring to this news item from a few days ago: https://news.ycombinator.com/item?id=19435854 "The Cloud is Just Another Sun"


I was referring to Windows support, not k8s itself--sorry that my language was confusing.


I guess that was a reference to Windows nodes


[flagged]


Infrastructure developers and web developers are worlds apart. We're all branded "programmers", but we couldn't know less about each other.

Moving from big data centers in Texas to small startups in San Francisco was, plainly, culture shock. Seeing Kube documentation not work on mobile, if anything, makes more sense than the alternative.

We're still in the stone ages. I can tune you one hell of a MySQL installation, but I would almost certainly also mess up making a page mobile-friendly. Half the people critiquing K8s right now don't know what OpenStack is, and half the people leading successful software startups aren't comfortable with sh. It's all madness, and on the other hand - is K8s documentation on mobile actually important?

Forgive the tangent. What knowledge is and isn't applicable, and when, is endlessly fascinating to me.


I'm one of the leads for docs. We've got an issue open for the blog CSS: https://github.com/kubernetes/website/issues/13412


They used Bootstrap, which is responsive by default, but they disabled the responsiveness with body{min-width: 1200px;}. They are actually fighting their framework, disabling the accessibility features that are already there.


This is super weird because that min-width has no positive effects. It just makes things worse on smaller screens. Removing it makes the page fully responsive and usable on all screen sizes.


Can you submit a PR?


Created by developers for developers (on 27” monitors)



Approved. Thank you!



