... Until they hire you back because entropy exists :P
I'm on the DevOps side as well, going through the same transition. k8s also allows insane customization, and I have some colleagues who are unintentionally delaying our rollout so they can play around with developing more deployment tooling, which is really frustrating. The k8s scene seems to be filled with constant scope creep and refactoring to get everything just perfect before use. Either way, I agree the benefits far outweigh this annoyance. I'm excited to spend my time developing tooling instead.
However, I don't think we're entirely free from managing servers the old way with Chef / Puppet / Ansible. Unless you're purely hosted, there's still the rule of thumb that you shouldn't run services that hold state in k8s. With persistent volumes I do see that changing, though I'm not sure everyone agrees that's a good idea.
"I have some colleagues that are delaying our rollout unintentionally so they can play around with developing more tooling for deployments which is really frustrating."
My impression is that the primary purpose of Kubernetes is to give SRE teams political air cover to rewrite a lot of their existing processes. Whether Kubernetes is actually required for that, or is even net superior, seems questionable. The unsexy work becomes justifiable because it's coupled to a mainstream, accepted tech modernization.
You see the same phenomenon with database migrations, where what the team really needed to do was rewrite an app to use the existing database properly. But no one is going to approve that work. So what happens is people convince themselves that the existing tech sucks and use that to rationalize doing the rewrite. The result doesn't always end up net superior, because sure, you did the rewrite, but you are also eating the operational cost of integrating a new technology into the org.
Yes, the number of proposed db switches I've seen is remarkably high. I once interviewed for a role as a database developer and was confused to find out they didn't actually have the database the role pertained to. One of the early questions in the interview was how quickly I could migrate production from MS SQL to Postgres. Needless to say, that was a gigantic red flag, and I hope they found the right person for that job.
I’ve also seen a switch from rdbms to Hadoop because a company had “millions” of rows. Luckily on this one I only had to rewrite a handful of queries.
I've got relatively modestly specced SQL Servers handling tables with hundreds of millions and even billions of rows without breaking a sweat. Somebody either just really wanted a new toy to play with, or has no idea what indexes are.
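For anyone who hasn't seen it firsthand, the fix is usually that boring. A minimal sketch with pyodbc, against a made-up dbo.Events table (all the table, index, and column names here are hypothetical):

    # Hypothetical sketch: one covering index is often the difference between
    # a scan over a billion rows and a sub-second seek.
    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=localhost;DATABASE=Sandbox;Trusted_Connection=yes;"
    )
    conn.execute("""
        CREATE INDEX IX_Events_CustomerId_CreatedAt
        ON dbo.Events (CustomerId, CreatedAt)
        INCLUDE (Amount);  -- covering column, so the query never touches the base table
    """)
    conn.commit()

    # This seek-friendly query now reads a handful of pages, not the whole table.
    rows = conn.execute("""
        SELECT CreatedAt, Amount
        FROM dbo.Events
        WHERE CustomerId = ? AND CreatedAt >= '2019-01-01'
    """, 42).fetchall()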
Exactly. I've seen SQL Servers handle billions of rows with two thousand columns. I also think people who work too long at one company don't realize how problems were solved elsewhere.
> I’ve also seen a switch from rdbms to Hadoop because a company had “millions” of rows. Luckily on this one I only had to rewrite a handful of queries.
Wat. That's gross. It probably costs them more per query now than the rdbms did.
> So what happens is people convince themselves that the existing tech sucks and use that to rationalize doing the rewrite.
That certainly is a thing that happens, but you could use that to dismiss any technology at all. In the case of Kubernetes, it makes operations a lot easier, to the (important) effect that development teams can do much of their own operations work. This matters because they're the ones empowered to solve operations problems, and it eliminates the blame game between ops and dev. It also removes a lot of coordination overhead: dev teams aren't competing for time from a separate ops team; they can solve their own problems, especially the most common ones. And it frees the SREs to work on high-level automation, including integrating tools from the ecosystem (e.g., cert-manager, external-dns).
Kubernetes certainly isn't the final stage in the evolution, but it's a welcome improvement.
> but you could use that to dismiss any technology at all
No, you can't; you need three (-ish) factors:
1. The technology is sufficiently incompatible with what you're currently using that you need a rewrite to use it (e.g., this generally doesn't happen with gcc -> llvm).
2. The technology is sufficiently (faux-)popular that it's possible to convince a pointy-haired boss that you need to switch to it (e.g., this won't work with COBOL anymore, though unfortunately its successor Java is still going).
3. The technology sucks.
And really, if you want to dismiss a technology, point 3 ought to be enough all on its own (particularly since that's presumably the reason you want to dismiss that technology).
I think in your eagerness to 'gotcha' me, you missed my point. :)
Anyway, we're trying to assess Kubernetes' value proposition (i.e., to answer "does it suck?"). If your system for answering that question depends on already knowing the answer, it's not a very useful system.
> we're trying to assess Kubernetes' value proposition (i.e., to answer "does it suck?").
Well, I'm not, since I already know that, but if you don't know that yet, then your position makes more sense. (That is, using "dismiss" in the sense of finding out that it sucks, rather than (as I read it) in the sense of justifying a refusal to use technology that you already know sucks.)
Unfortunately, due to market-for-lemons dynamics, it's usually not possible to convey the knowledge that a particular technology sucks until things have already gone horribly wrong. See, e.g., COBOL or (the Java-style corruption of) Object-Oriented Programming.
> you are also eating the operational cost of integrating a new technology into the org.
And in short order you will reap the savings of being able to hire people who already know your devops/infra tech stack and can hit the ground running. Not to mention being able to benefit from the constant improvements that come from outside your org.
Maybe. I don't buy that just because people are on Kubernetes they won't still kludge it up with custom in-house scripts or "extensions". Give it time.
Ha, are you me? I really pushed for us to follow the "change-as-little-as-possible and ship to prod quickly" route. Prod is where things get hard, and it's better to find out what's hard sooner rather than later.
We are running a handful of stateful services in K8s (things like MongoDB, for which GCP doesn't have a compelling and affordable managed offering). It's definitely more complex than transitioning a stateless service, but so far our experiences with StatefulSets and PersistentVolumes have been good. And this allows us to sunset Puppet/OS management completely. I should note that we _are_ being extremely careful about backups. We also run each stateful service in a dedicated node pool for isolation; a rough sketch of the shape is below. Who knows, maybe a year from now we'll be shaking our heads and saying "that was a TERRIBLE idea," but for now, so far so good.
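To make that concrete, here's roughly the shape of such a deployment, sketched with the official Kubernetes Python client. The names, image, storage size, and node-pool label are placeholders for illustration, not our actual config:

    # Minimal sketch: a 3-replica MongoDB StatefulSet pinned to a dedicated
    # node pool, with a per-pod PersistentVolume via volumeClaimTemplates.
    from kubernetes import client, config

    config.load_kube_config()  # assumes kubectl already points at the cluster

    mongo = client.V1StatefulSet(
        metadata=client.V1ObjectMeta(name="mongodb"),
        spec=client.V1StatefulSetSpec(
            service_name="mongodb",
            replicas=3,
            selector=client.V1LabelSelector(match_labels={"app": "mongodb"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "mongodb"}),
                spec=client.V1PodSpec(
                    # Isolation: schedule only onto the dedicated stateful pool.
                    node_selector={"cloud.google.com/gke-nodepool": "stateful-pool"},
                    containers=[client.V1Container(
                        name="mongodb",
                        image="mongo:4.2",
                        volume_mounts=[client.V1VolumeMount(
                            name="data", mount_path="/data/db")],
                    )],
                ),
            ),
            # Each replica gets its own PersistentVolumeClaim (and thus its own PV).
            volume_claim_templates=[client.V1PersistentVolumeClaim(
                metadata=client.V1ObjectMeta(name="data"),
                spec=client.V1PersistentVolumeClaimSpec(
                    access_modes=["ReadWriteOnce"],
                    resources=client.V1ResourceRequirements(
                        requests={"storage": "100Gi"}),
                ),
            )],
        ),
    )

    client.AppsV1Api().create_namespaced_stateful_set(
        namespace="default", body=mongo)

A real deployment would also want resource requests, pod anti-affinity, and an update strategy, but volumeClaimTemplates plus a node selector is the core of it.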
We're running on GKE, so lots of things that would be hard in on-prem environments (ingress, networking, storage) are easy.
> We're running on GKE, so lots of things that would be hard in on-prem environments (ingress, networking, storage) are easy.
Agreed. The on-prem story is still really messy, but there's a lot of third-party work going into on-prem distributions that aim to be cut and dried. Unfortunately, there are lots of them right now, and it's not clear what the advantages and pitfalls of each are. Things will settle and this problem will be solved with time, but for now it's quite a pain point.