Kubernetes 1.23 Released (kubernetes.io)
65 points by ecliptik on Dec 8, 2021 | 32 comments


Sad to see flexVolume deprecated. CSI is a more powerful interface and I knew this was coming, but the simplicity of the flexVolume exec model for node-local volumes was an amazing feature.

With very little effort, you could use them to create volumes managed by any binary or script: for example, a volume that fetches internal build artifacts or a volume similar to configmaps that pulls from external data sources on pod startup.
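To illustrate the exec model: a flexVolume driver is just an executable that kubelet invokes with a subcommand (`init`, `mount <dir> <json-options>`, `unmount <dir>`) and that prints a JSON status. A minimal sketch of such a driver (the `content` option and the fetch logic are hypothetical placeholders, not part of the flexVolume spec):

```python
#!/usr/bin/env python3
"""Minimal flexVolume driver sketch: materializes data into the volume on mount."""
import json
import os
import sys

def handle(argv):
    op = argv[1] if len(argv) > 1 else ""
    if op == "init":
        # Advertise that this driver doesn't implement attach/detach.
        return {"status": "Success", "capabilities": {"attach": False}}
    if op == "mount":
        mount_dir, options = argv[2], json.loads(argv[3])
        os.makedirs(mount_dir, exist_ok=True)
        # A real driver would fetch an artifact or external data here;
        # "content" is a made-up option for this sketch.
        with open(os.path.join(mount_dir, "artifact"), "w") as f:
            f.write(options.get("content", ""))
        return {"status": "Success"}
    if op == "unmount":
        # Any cleanup would go here.
        return {"status": "Success"}
    return {"status": "Not supported"}

if __name__ == "__main__":
    print(json.dumps(handle(sys.argv)))
```

Drop that on the node as an executable and kubelet drives it directly; no daemon, no socket, no protobuf toolchain.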

The new CSI interface requires operating a gRPC server on a local unix socket. It'd have been great if it were at least HTTP+JSON. gRPC really raises the barrier to entry in all the languages that can't use the golang CSI driver libraries, and it's hard to see the advantage for a low-volume API on localhost.


But you could just create a CSI driver once that execs a local script. I don't see the issue.


That still significantly raises the barrier to entry until someone writes a generic CSI implementation that translates to the flexvolume API.

And then it's still yet another daemon running on every kubernetes node 24/7.

Regardless, in our case the scripts are full-fledged python applications so inevitably we'll just turn them into grpc servers. We're an advanced kubernetes shop so this isn't a real hurdle, but not everyone is.

My complaint isn't that it is impossible, it's that it's much more work compared to the previous API. By selecting gRPC as the protocol instead of HTTP, they are largely disregarding the trivial language interoperability there once was, for little gain. gRPC+protobuf is a PITA in non-Go languages and build systems.


IPv4/IPv6 Dual-stack Networking graduates to GA


Surprised to see that IPv6 still isn't the default/standard after all these years


Ohh, that's interesting


Yes. When I get a clear signal this is in GKE (I believe there is some lag between the k8s release and adoption in public service) I'd be very keen to try it. (selfish observation: I'm mostly hands-off so the pain will be felt by others)

I also believe the blocker(s) on this were both k8s and the Google infrastructure. It's possible GKE can host dual-stack but the network can't forward it painlessly, and it's still edge-proxied in over IPv4.

Always made me wonder why they didn't do ULA internally and avoid some issues. I guess it's the hidden pain points at the traefik or other visible boundary which got in the way, although things other people say suggest the kubernetes people just didn't even think IPv6 when they designed their initial network stack. It always felt to me (as a non-implementer) like the obvious fit: guaranteed-unique, private address space which can handle billions of addressed things on the "inside".


GCP still doesn't support IPv6 except on the external interface in like four small regions, and only as the primary interface afaik, so GKE is unlikely to get this any time soon


Plus ça change.


It was only in the last few weeks that AWS announced IPv6-only VPCs (although they've supported dual stack for a while)


Think my organization is going to be stuck on 1.21 for a while. That swath of deprecated API versions that were removed in 1.22 (specifically Ingresses and RBAC stuff) means that lots of helm charts are just unusable.

Part of that is Helm's fault; most templates are not flexible enough to convert a resource to the new api version without forking the whole chart.

If there are tools that make this easier, like some kind of post-processor for helm, I'm all ears.


> like some kind of post-processor for helm

If you don't use Helm's "advanced" features (like helm upgrade etc.), try tanka [1].

[1] https://github.com/grafana/tanka


They got rid of Ingresses?


No, the API just changed slightly: https://kubernetes.io/docs/reference/using-api/deprecation-g...

Edited the phrasing to make that clearer.
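Concretely, the Ingress change is mostly mechanical: the flat v1beta1 backend fields moved under a nested service block, and pathType became required. A hypothetical before/after manifest for illustration:

```yaml
# Removed in 1.22 (networking.k8s.io/v1beta1):
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: web
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: web
          servicePort: 80
---
# Replacement (networking.k8s.io/v1, GA since 1.19):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web
            port:
              number: 80
```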


Awesome stuff to look forward to. I wonder how long it will take GCP and AWS. Wish there was an easier way to provision dedicated hosting providers like Hetzner and OVH and create a federated Kubernetes environment.


I was wondering that too for EKS and saw on the EKS Release Calendar [1] that 1.22 isn’t even targeted until March 2022. So I’m assuming 1.23 isn’t available for EKS until at least the summer.

EKS releases were starting to almost catch up with upstream, but I wonder if the version churn and support may have been too much for AWS customers to keep up with and they slowed it down some.

1. https://docs.aws.amazon.com/eks/latest/userguide/kubernetes-...


version churn is a real issue on EKS. my team had to dedicate someone almost full-time to EKS upgrades due to versions constantly EOLing. with small clusters, sure, it may be easy to upgrade, but with multiple large clusters it's a lot of very dangerous ops

this felt like a big step back. on EC2 you do some server management here and there; with EKS we needed a nearly full-time devops engineer. you would think serverless would reduce the need for devops


As long as you can rapidly update your dependency stack with Argo or Helmfile and you have good TF or CF for the cluster upgrade, EKS upgrades are a breeze, I've found! The biggest time sink is rolling nodes, which I think managed node groups can do automatically if you have those going.


Have you tried KubeOne? Supports both OVH & Hetzner, including their CSI drivers. And it seems to support some type of federation:

> Generally, the KKP architecture supports large-scale multi-cloud deployment of Kubernetes clusters where the master cluster, the seed clusters, and the customer clusters can all be deployed in a different region. https://www.kubermatic.com/blog/getting-started-with-kuberma...


I have not, but when I looked in the past these products seemed to focus on OVH and Hetzner cloud products, which are different from the bare metal offerings.


Yeah, that would be a killer product. Kubernetes on bare-metal machines, for a reasonable price.


Isn’t that what https://metal.equinix.com/ sells?


Reading that page, I have no fucking idea what the product actually is.


Helps us kick ass with a guitar I guess?

Edit: In all seriousness, it's basically Packet (was bought up by this company).


I really don't want it to be limited to a single hosting provider, really just any dedicated hosting provider that allows you to provide a distro image.


I think that would be packet.net, but they were acquired and who knows how much investment they put in the product now.


I've had good luck with microk8s on similar hosting providers.


Windows Server 2022 support was targeted for 1.23, but it's not clear if it made the release...


Ugh, would have made me happy a month ago. We are stuck on Nomad now.


What landed in 1.23 that would have meant not going to Nomad for you?


Nomad isn’t so bad – or is it?


I looked at Nomad hoping to find a more lightweight alternative, but compared to Kubernetes options like k3s, Nomad is a complete resource hog. No idea why GP dislikes Nomad, but for me it's always either missing built-in equivalents for k8s features (requiring more engineering work to make it an acceptable option) or it requires more system resources than is acceptable.





