This is a good thing; Microsoft will increase competition in this space by applying its expertise in dev tools to Kubernetes.
I just hope that MS doesn't add in too many platform-specific pieces that would encourage vendor lock-in.
For example, k8s ingress right now is based on controllers, and the GCE controller supports (or doesn't support) a wide variety of features compared to the nginx ingress controller... areas of Kubernetes like this make me worry about potential future switching costs.
Note that Microsoft Azure was a launch partner this month for the Certified Kubernetes conformance program, which is explicitly designed to reduce the chance of forking and lock-in by defining the standard Kubernetes APIs that all platforms and distributions must support.
It mainly involves replying to HN threads on the weekend.
More seriously, this slide deck gives an overview of all the areas in which CNCF is involved, from the Certified Kubernetes program, to providing marketing and other services to our projects, to offering training and certification.
I am familiar, ironically. I spend my downtime listening to SE Daily, Newstack Makers, and other things to make car rides less boring. CNCF comes up a non-trivial amount. I will check your personal site. Thanks.
Fortunately there's a bunch of ingress controllers you can use: Traefik, Voyager, HAProxy, and probably several others. And it's surprisingly trivial to write your own.
So ingress is not currently a weak point where vendor lock-in is concerned. And Kubernetes already supports plenty of non-Google tech; as an open source project, Kubernetes is refreshingly non-Google-focused (there are a bunch of players, notably Red Hat and Microsoft, ensuring this).
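To illustrate, a plain Ingress with no vendor annotations (names made up) works unchanged whether a GCE, nginx, or Traefik controller is watching the cluster:

    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: my-app                  # hypothetical name
    spec:
      rules:
      - host: app.example.com
        http:
          paths:
          - path: /
            backend:
              serviceName: my-app   # any Service in the same namespace
              servicePort: 80

The lock-in risk only creeps in via controller-specific annotations, which is the aside below.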
As an aside, the current ingresses (including the Nginx one and Google's own GLB one) all have annoying deficiencies, with subpar support for TLS and per-route timeout settings among the biggest. Ingress support is, and has been for a long time, Kubernetes' weakest point. For example, GLBs max out at 20 TLS certs (ridiculous if you're hosting many customers on a SaaS solution) and default to a timeout of 30 seconds, which doesn't work for big streaming requests and WebSockets; you can't control the timeout using ingress annotations, so you have to manually go in and edit the backends via the API or UI. These are also fairly trivial problems compared to the complex ones being solved by big new features in Kubernetes proper, so it's a bit surprising that ingress implementations are lagging to this extent.
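(For completeness, the manual fix for the timeout is something along these lines -- the backend name here is made up, you'd grab the real one from the list command:

    gcloud compute backend-services list
    gcloud compute backend-services update k8s-be-30000--deadbeef --global --timeout=3600

and then hope nothing reconciles the backend and resets it.)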
>> As an aside, the current ingresses (including the Nginx one and Google's own GLB one) all have annoying deficiencies
Agreed, this is along the lines of what I meant. For example, the GCE ingress lets you reference a global static IP, while this is not possible with the nginx ingress alone due to limitations of the TCP load balancer. There are separate ingress annotations for GKE/nginx/haproxy, etc. If I want to use the global-static-ip GCE annotation, that will make it a little harder to move to Azure.
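Concretely, it's the difference between annotations like these (IP name made up):

    metadata:
      annotations:
        kubernetes.io/ingress.class: "gce"
        # GCE-only: binds the LB to a pre-reserved global static IP
        kubernetes.io/ingress.global-static-ip-name: "my-reserved-ip"

Switch the class to "nginx" and the static-ip annotation is silently ignored; you'd have to re-plumb the IP at whatever layer the new platform exposes.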
Totally fair points, and vendor lock-in is something to be highly vigilant of.
For some context, though: the ingress functionality is relatively recent in Kubernetes and is still being fleshed out as demands from the market and from different cloud providers become clear. Another year or two will do a lot for parity, and for saner networking solutions for small-scale and bespoke K8s deployments.
I got a bit of a shock when I found I had to set up an ingress controller on each of my nodes (on our VMware cluster), but after going through the nuts and bolts of it I can see why it's lagging a bit behind the rest of the product. I think the devs are being smart by letting the ecosystem mature a little instead of pushing out something half-baked and prematurely limiting.
The problem with Ingress "lagging" is that it was designed to be a lowest-common-denominator API - it only absorbs logic that exists in the majority of realistic implementations. For better or worse, cloud LBs are vastly more limited in feature-set than Nginx or Envoy, so Ingress is too.
This is a big topic for debate, and will be on the agenda at KubeCon in O(days).
This is understandable, of course. In hindsight, I suspect the current concept of an abstract, one-size-fits-all ingress was, and is, going in the wrong direction.
With CRDs, we could have each ingress controller provide its own, native ingress object ("nginx-ingress") that had the exact features it supported (with schema validation). The ingress controller would then create or delete cloud-specific CRDs ("google-loadbalancer") based on what flavour of cloud you're running under, which Kubernetes could pick up and use. Or something like that.
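A rough sketch of the idea, with an entirely hypothetical group and schema:

    apiVersion: apiextensions.k8s.io/v1beta1
    kind: CustomResourceDefinition
    metadata:
      name: nginxingresses.nginx.example.com
    spec:
      group: nginx.example.com
      version: v1alpha1
      scope: Namespaced
      names:
        kind: NginxIngress
        plural: nginxingresses
        singular: nginxingress
      validation:
        openAPIV3Schema:
          properties:
            spec:
              properties:
                # nginx-specific knobs as first-class, validated fields
                # instead of stringly-typed annotations
                proxyReadTimeoutSeconds:
                  type: integer
                sslRedirect:
                  type: boolean

The API server rejects what the controller can't express at admission time, rather than silently ignoring an unknown annotation.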
But as you say, some of the friction exists because cloud LBs are limited in the first place. The arbitrary cert limit on GCP is particularly egregious. We run a SaaS solution with about 100 vendor domains, which means we've been forced to use the Nginx ingress controller and terminate TLS there, instead of at the GLB level where it arguably belongs. (We could run 10 GLBs, but that would require splitting our config into 10 separate ingresses, with all the duplication and potential for copy/paste errors that would ensue.)
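(At least with the Nginx controller the certs are just data; roughly, with made-up names:

    spec:
      tls:
      - hosts: [customer1.example.com]
        secretName: customer1-tls
      - hosts: [customer2.example.com]
        secretName: customer2-tls
      # ...one entry per vendor domain, no hard cap like the GLB's
      # (rules omitted for brevity)

so adding customer 101 is a Secret plus one more list entry, not an eleventh load balancer.)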
But thirdly, it's also true that several of the ingress implementations are just a bit sloppy. Traefik, Voyager and haproxy-ingress all have issues with using TLS certs (all of them have open issues about serving both HTTP and HTTPS at the same time, I believe). A lot of today's ergonomics could be solved by polishing up these projects.
First, Ingress had a purpose to serve, and it has served that purpose - it is relatively easy to handle low-complexity apps with generic Ingress.
But low-complexity apps don't stay that way.
What you're describing is very much the way my brain has gone. In my experience, most users end up using at least one non-portable annotation on Ingress. The logical conclusion, then, is that people care about features MORE than portability in this facet of the API.
This is not surprising to me, given how religious the debate tends to be...
Microsoft should write an ingress controller for Azure? Not sure how you can get into a vendor lock-in situation with k8s. Maybe if Microsoft extends the API with its own black-box controllers and forces those instead of kube primitives?
I seriously doubt it’s a deliberate strategy on their part in this case, but Microsoft has been known to pull this type of maneuver in the past:
Step one: “We love Kubernetes! Run it on Azure! It’s great”
Step two: “Kubernetes experience on Azure is best enjoyed using Microsoft Enterprise Ingress, Microsoft Custom Resources for Business, and developing in Visual Studio. Use those!”
Step three: “Oh, the CTO wants to migrate workloads to AWS instead? Too bad you used so many of our custom and proprietary add-ons; that’s going to make migrating incredibly expensive. Just stay on Azure and everything will be just fine...”
And Deis Workflow was marked EOL later on, something like July of this year, ostensibly so those devs could devote more time to Azure and other OSS offerings.
A lot of the complexity of getting serious deployments running on Kubernetes is just neatly abstracted away by Deis, so much so, IMHO, that I'm now not sure how to get my team to take up Kubernetes without encouraging them to use Deis. To be clear, we are using Deis, but in a very limited capacity, in large part because of the perceived risk associated with those yellow construction triangles[1]. (We are so lightly invested in K8s that I don't think we really have another part of the plan as of yet. The plan is to spend 6 months on ECS and not explore Kubernetes until after; I can't wait to hear what news comes this week from re:Invent, or whether this strategy will even be realistically possible to follow once the "EKS" news hits.)
Some of us really felt like MS making this move shortly after acquiring the Deis team was a bit like throwing the baby out with the bathwater, but those devs have assured us they are not just taking directions from corporate, and that they actually are going where the demand is (that there's not enough demand for Deis Workflow specifically for it to be strategically important to Microsoft, but there is plenty of demand for Kubernetes at large and for more Kube-native tooling around K8s issues.)
Not sure how closely you've been following this story, so apologies if I'm telling you things you already know. And I'm not one to complain about the mode of delivery my free stuff is received in, but this change really came out of nowhere for me and has thrown a massive wrench into my own Kubernetes adoption strategy. Some of us are trying to make sure the OSS Deis Workflow tools are not lost to bit rot, and development of the project now continues under the name "Hephy"[2].
Also, just a minor nit: although Helm was originally made by Deis, Helm[3] the package manager has not been "taken" under the Microsoft umbrella, as it was adopted as part of Kubernetes proper before the sale of Deis.
> I just hope that MS doesn't add in too many platform-specific pieces that would encourage vendor lock-in.
I watched one of their videos demonstrating the product, and they clearly understood that people want 100% compatibility with the open-source k8s system and no special Azure/MS features, so they made that one of their selling points, which is good.
> I just hope that MS doesn't add in too many platform-specific pieces that would encourage vendor lock-in.
That's Microsoft's main strategy though. Example: Users can't even install Firefox or Chrome on Windows 10 S.
Microsoft announced: "Apps that browse the web must use the appropriate HTML and JavaScript engines provided by the Windows Platform."
I wouldn't be the slightest bit surprised to see similar things happen to their cloud platform as soon as they can do them without users noticing.
I've lived through the experience of using non-MS products since the early 2000s, and all of those things are related. It has been one of their core strategies and still is, though the techniques are shifting.
> "Apps that browse the web must use the appropriate HTML and JavaScript engines provided by the Windows Platform."
This seems like a thing that platform owners are doing more and more (iOS has a similar clause, and ChromeOS, well, uses Chrome).
Is this just purely a move to stifle competition from web apps that could threaten control of the platform? Or perhaps I am missing some nuance behind these kinds of decisions? I'm honestly struggling to find any room to give them benefit of the doubt here.
A JavaScript JIT requires memory that is writable and executable. W^X is a security feature and required for the security model of walled gardens. Otherwise you could pull down unapproved code and make the walled garden useless.
They don’t want shitty JavaScript interpreters making their platform look bad, so they force you to use their system JavaScript engine.
Yeah, the excuse for taking away freedoms usually is "security".
Walled gardens are one of the problems. They should be rendered useless.
It's a company that caused many people to suffer through Internet Explorer for 20 years. I don't think that security[1] or making the platform look bad[1] is the primary motivation with blocking competing browsers. MS saw that other companies got away with it using different tactics, so they just changed their approach.
> This seems like a thing that platform owners are doing more and more (iOS has a similar clause, and ChromeOS, well, uses Chrome).
Shame on all of them. If it continues, the next generation will not know what it's like to have technology freedom like we currently do (in the US, for the most part).