This really helps with the dev-to-production story for containers.
When people first started using Docker containers, we were promised things would run identically in dev and production - no more "but it worked on my laptop" issues. Then the rise of orchestrators meant that there again became a significant difference between running an app locally (in compose) and in production (on Kubernetes). Docker for Mac/Windows will now bridge that gap, giving me a k8s node to run against in dev.
Whilst Kubernetes has provided a great production orchestration solution, it never provided a great solution for development, meaning most users kept developing with Docker and Compose. It's great to see these worlds now coming together and hopefully leading to a first-class solution all the way from dev to prod.
in order to test your app in a prod-like env you need to run a prod-like env locally, i.e. a k8s-cluster that is close enough to prod. for that you will have to at least simulate a multi-node-setup and run all cluster-addons like in production.
i am excited about this move from docker but i don't think it will solve all the problems. i think once you have a bigger team it is worthwhile to run a second k8s-cluster besides prod where people can just test things on it. otherwise it is actually not that hard to run a local k8s-cluster with vagrant, not sure how docker wants to top that - i think there is no need to top vagrant.
I believe when you say "simulating multi-node setup and addons" - you're seeing it from an operations perspective. Thing is, those concerns don't need to be repeated for every single application. When a consumer says "test", they mean testing functionality. Not testing operations like network I/O bandwidth, sysctl parameters, rebalancing, etc. The expectation is that, operational folks (kubernetes integration tests, ops integration tests, GKE tests, etc) already have tested and verified all of that.
This is not like ops vs dev. When you use a library or framework (say, Spring) - you don't test whether HTTP MIME types are working correctly in Spring. You assume the library already has all that tested and covered, and as a consumer, you write tests for what you code. The library's code (and tests) are abstracted from you. This is similar, except for operational stuff. In fact, there is no major difference between them. It's just layering and separation of concerns.
There is no such thing as 100% prod except prod. Even for rocket launches. 90+% is good enough for majority of cases, and is already on the higher side.
Not necessarily, we have been using k8s for a while now where I work and what we've been doing is running a simple minikube setup locally, with the production add-ons (DNS, nginx ingress controllers, etc.) and circumventing what is AWS-related.
It's been working quite well, no need for multi-node-setup so far that I'm aware of.
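If it helps anyone, the gist of that setup is just a few commands (addon names come from minikube's built-in list and may differ by version):

```shell
# Start a single-node local cluster (requires minikube and a VM driver)
minikube start

# See which production-like add-ons are available, then enable what prod uses
minikube addons list
minikube addons enable ingress   # nginx ingress controller; DNS is on by default

# Point kubectl at the local cluster and verify
kubectl config use-context minikube
kubectl get pods --all-namespaces
```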
It will be possible to run all your add-ons in the local setup, including networking. Multi-node is essential for some use-cases, but arguably not critical for most people, and it is coming in the future.
I mostly run it for compatibility reasons, to integrate with it or allow things to run alongside it. It works, but has major annoyances, of which the shares are one of the bigger ones. I always need to restart containers, as they are started before the share is mounted properly. And it often loses the connection...
Minikube (and its derivative, MiniShift) have been very helpful for my team in bridging the gap between local development and production for Kubernetes and OpenShift.
Yeah, not to pile on with the comments, but more to help someone else who might not have used minikube, but it's been great for the past year we've been using it. As simple as simple gets.
There are a few minor UX flaws that make it frustrating to use, e.g. having to set the Docker host, poor shared filesystem performance, and broken networking in enterprise desktop environments (just to name a few of the top issues).
Also, a lot of folks end up running Docker for Mac and minikube VMs, why should they have to run two VMs?
Additionally, minikube is completely different from production-grade deployments: it runs as a single binary, which means a rewrite of the main function for etcd and all control plane components. Basic performance issues in the control plane are hard to debug - there is one large process and you don't know what is wrong - and there is no way to use your favourite network add-on.
Additionally, minikube is based on the legacy Docker libmachine, which is not really maintained anymore.
Certain things are not possible. However, we try to match functionality as much as possible with localkube (and soon kubeadm).
Shared folders, especially using 9p and/or cross-platform, have been an issue; I personally also experience this in the fork Minishift, and this is likely the performance issue you meant.
So in reading this link where information is scarce, this seems like an alternative to minikube (e.g., bundling Kubernetes as part of Docker CE/EE). Is that the right interpretation?
FWIW, we've found minikube a bit wonky. It's resource intensive, so if you want to run more than a couple of services, your laptop starts to melt. One of our open source projects is Telepresence, which relies heavily on Kubernetes networking, and we definitely see more weird networking issues with Telepresence/minikube than with regular K8S clusters.
Docker for Mac and Windows will include a single node k8s cluster, so yes, effectively a replacement for minikube. Docker EE will include full support for k8s as an orchestrator as well as Swarm mode.
I like minikube for some things, but I'm hoping this will be better by allowing for multiple K8s nodes to be spun up which is handy for some learning/training scenarios
i guess the easiest way would be to run a registry in minikube.
another approach would be to run docker inside a k8s-pod (docker-in-docker), that way you can run images without having to push them to a registry but still test it in k8s-environment (at least to some extent).
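a third option that avoids both a registry and docker-in-docker is to build directly against minikube's docker daemon (image name is just a placeholder):

```shell
# reuse minikube's docker daemon so locally built images are
# immediately visible to the cluster - no registry push needed
eval $(minikube docker-env)

# build the image straight into the cluster's image cache
docker build -t myapp:dev .

# run it; the pull policy must not force a pull of a local-only image
kubectl run myapp --image=myapp:dev --image-pull-policy=Never
```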
Disregarding the swarm compatibility bit (which is irrelevant because swarm is kind of irrelevant), I don't really like what this "support" really means. As others mentioned minikube and k8s-cluster is already providing dev-to-prod compatibility.
kubectl already provides Docker-CLI-style commands like "exec", "logs", etc. So now you can execute some of these commands on a k8s cluster with the docker binary too? And why would you do that?
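For reference, the overlap I mean (same verbs, two CLIs):

```shell
# Docker CLI, against a single engine:
docker exec -it <container> sh
docker logs -f <container>

# kubectl, against a cluster:
kubectl exec -it <pod> -- sh
kubectl logs -f <pod>
```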
All I see is struggling for relevance, duplication of functionality and a very unnecessary vendor lock-in vector.
I see the same thing too: a struggle for relevance. The center of gravity has already moved onto k8s, even if it will take a while for the rest of the developer community to catch up.
There is a lot more exciting work around Helm (packaging for K8S), third-party extensions (plugins for K8S), and Operators (embedded managed services). The K8S community already has more contributions going towards the stack, and the three technologies I just mentioned reduce enough friction for contributing innovations that k8s will likely phase-shift.
I just don't see Docker catching up. Sure, developers know them and think it has a good developer story. It doesn't. Docker for Mac and Windows is practically useless when you have to struggle with file mounting for dev work. Docker Swarm is just not as robust as K8S, and the source of innovation going into Docker Swarm is coming from ideas from K8S.
Someone said it. ... K8S, not Docker, is the Linux of the cloud native platform.
From a production Swarm operator without much experience with k8s:
When you create a new Swarm cluster with `docker swarm init`, Swarmkit creates PKI that is used for everything onward: gRPC cluster state communication, encrypted overlay networks, secrets, and so on. Keys are automatically rolled every 12 hours by default. This is done automatically and transparently to the operator, without any effort on their part. Adding workers or managers after that is as done with `docker swarm join --token <token>` on the new node, that's all.
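For the curious, the whole bootstrap sequence being described is just:

```shell
# On the first manager: creates the CA, issues certs, starts the cluster
docker swarm init --advertise-addr <manager-ip>

# Print the join token for workers (there is a separate one for managers)
docker swarm join-token worker

# On each new node: join with the token printed above
docker swarm join --token <token> <manager-ip>:2377
```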
I believe starting a k8s cluster and maintaining it is something that's going to get a lot easier soon from Moby and k8s joining forces.
Secrets. Swarm got encrypted secrets in January, and you create one with `docker secret create <secret-name> <file/stdin>` or the API. It looks like k8s got it as an alpha feature at the end of June and it doesn't look easy: https://www.twistlock.com/2017/08/02/kubernetes-secrets-encr...
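The Swarm workflow is literally this (secret and service names are placeholders):

```shell
# Create a secret from stdin (stored encrypted in the Raft log)
echo "s3cret-value" | docker secret create db_password -

# Grant a service access; the secret shows up in the container
# as a file under /run/secrets/<secret-name>
docker service create --name app --secret db_password myimage:latest
```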
They are making secrets functionality pluggable, to match the flexibility in networks, volumes, and logging.
As far as development power goes, 9k contributors and 9k pull requests in the last year is nothing to scoff at. It's also a misrepresentation, because the effort put forth for CNI and container standards makes a lot of the work interchangeable.
k8s has a lot of nice stuff and a great, evolving ecosystem, but I think you are being a bit harsh on Swarm. I think Moby (and derivatives) and k8s will benefit mutually from this, as they have over the past year already.
My team pays for GKE. The keys you are talking about are done with K8S + GKE so I don't see how that is much of an advantage other than cost. I want to add a new node, I go to the UI and increment a number.
As far as secrets go, there are a couple of approaches the K8S community is trying for encrypted secrets. The one I am rooting for is the integration with Hashicorp Vault. Chances are, that area will be fairly extensible.
I agree, 9k contributors and 9k pull requests are not to be scoffed at. I wasn't scoffing at them. I'm simply saying that (1) the center of gravity and the influence have already shifted off of Docker, and (2) just like Github.com phase-shifted open-source development (in a way that Sourceforge never did), there are three technologies in K8S that will likely phase-shift K8S development. This isn't about development power, but about a phase-shift.
I don't think I am being harsh on Swarm (and since when was this about being harsh? At the core of this is whether the technology is fit for purpose and helps people achieve their mission - and Docker is a commercial enterprise). The tipping point has come and gone. I saw Docker squander a lot of developer goodwill over the past two years. I remember when CoreOS announced they were creating rkt, there was a lot of backlash against CoreOS. I liked CoreOS but thought at the time it was a weird move. If that had happened this year, there would not be as much of a backlash.
Not really sure I see the vendor lock-in here. If you use Docker EE, sure, you're going to be locked in to their solutions to an extent, but that's true of adopting any commercially supported solution that provides layers on top of the base k8s clustering tech (e.g. Openshift).
The API is still k8s and the YAML files are identical, so migration at a technical level off that platform should be easy enough.
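For example, a bog-standard Deployment manifest like the one below is plain Kubernetes API whichever vendor platform applies it (names are illustrative; the apiVersion varies with cluster version):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myapp:1.0
          ports:
            - containerPort: 8080
```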
I think this move is about Docker maintaining their trajectory in the enterprise, where people want the management GUIs and extra features, but where Kubernetes, and particularly Openshift, is making progress at the expense of Docker EE.
Everything in openshift is open source at https://github.com/openshift/origin. The commercial version is long term support, security response and errata, and the stability around that. There's nothing that is withheld from the open source project. It is not open core.
Edit: I forgot, the logo is not open source. So the logo is withheld :)
Pretty similar. The main difference from the architecture standpoint is that Enterprise DC/OS uses Mesos+Marathon for container orchestration and OpenShift uses K8s. The other functionalities are mostly overlapping.
There is also an OSS version of DC/OS minus some security features. Also, historically Mesos has had better support for running stateful workloads but K8s is catching up too.
Recently, DC/OS announced support for K8s too which is similar to the K8s announcement from Docker.
I've not looked too much at Mesos and DC/OS, so I could be wrong, but my understanding is that where Openshift is focused on managing containerized workloads using Kubernetes and Docker or CRI-O, Mesos and DC/OS are more widely focused on managing a variety of workloads, which could include containers but also VMs etc.
For me, the benefit of Openshift over vanilla Kubernetes is the additional management tooling and the strong default settings for production use.
Openshift has a lot of focus on things like manageability and security, which make it well suited to production workloads in enterprises, and anecdotally it seems to be taking off quite well with enterprise customers.
Great news for everyone except for VMWare (this is a simple compelling operating system for data centers that spans both windows and mac) and Openshift (which was one of the few viable ways of actually purchasing Kubernetes support). A lot of egos on both sides had to be suppressed to make this happen. Docker Swarm was a key driver in making Kubernetes popular because everyone realized that they needed swarm, but the implementation was so poor, no one could use it. That kicked K8s up into hyperdrive. Parts of the K8s community have been particularly partisan in doing everything they can to minimize docker.
Hopefully both sides _now_ come together and sing kumbaya, and we don't see a continuing KDE-versus-Gnome war, an embrace-and-extend attitude by Docker, or a continuing push to marginalize Docker by the Kubernetes folks.
Suppressing egos usually means one side has found a joker to beat the other in the fight for leadership. I don't think that battle is over yet, though. It's a very strong move Docker is making here, but at the same time k8s is considering choosing another container engine as its main component. Currently, at least in the enterprise, k8s has a lot more traction than Docker (I personally love Docker more, but every day I need to focus 99% of my effort on k8s because of that).
And given enterprise support, Openshift is still the best solution out there. They are afaik the only ones that offer a complete set of answers to most questions you can have in the PaaS space. Everybody else is like "here's an API, choose one of 3 billion plugins" (just thinking of CNI here). In the end it doesn't matter for the customer, though. Customers just want things to run smoothly and, if possible, to reduce their maintenance workforce. They don't want choices, they want solutions.
Yes, but guess what: Docker CNI is now going to be supported, so CNI is no longer an issue. Ingress will still be an issue, but Openshift is still doing their own random route thing there anyways.
Being low in the stack is a power move. It's like an NFL lineman: the lower player has considerably higher leverage than the higher player. Docker can go in, run kubeadm legitimately, but use Docker-based CNI and volume plugins and displace Openshift.
Interesting idea, but reality looks different. Everything underneath the PaaS layer becomes less and less important. With container engines it may be hard to see for most people yet, I have to admit that. But with OS and hardware you can see it. E.g. think about what OS you run your PaaS on. It doesn't matter. The only limit here is integration with Docker/Kubernetes. If these are available on the OS then it doesn't matter which one you choose. That's also why many people now start to use complete unmodified OS images that don't update individual packages anymore but the whole OS layer together or nothing. Then hardware. Would you say anybody running k8s has an advantage when running on a super computer compared to a cluster of hundreds of desktop computers? Probably not.
> or a continuing push to marginalize docker by the Kubernetes folks.
It's not like the Kubernetes folks were pushing to marginalize Docker for bad reasons. The Docker runtime was one of the bottlenecks of Kubernetes in production, and Docker Inc didn't feel like improving it. Hopefully, that is changing now.
There are some technical decisions in Docker where alternatives could provide similar functionality with better production support. I am no expert and am just relaying what I heard from a dev working on it: for example, btrfs would work better than overlayfs as a COW filesystem, and Docker would be better off using parts of systemd rather than implementing everything from scratch.
That's always been the struggle the Docker engine has had, though - people demand that it be "production ready" and stable, but also criticize it for not "innovating" enough technically. Docker didn't always do the best job prioritizing stability over features, to be sure, but they did do a good job pushing critical innovations like registry v2 forward.
VMWare's k8s play is Pivotal Container Services, or PKS (blame Google). It's a three-way joint project between Pivotal, Google and VMWare.
I've said before and I'll say again: Red Hat, Microsoft and Google (and Pivotal and VMWare) are going to wind up making more money from Docker than Docker Inc does. Kubernetes has swept the field at the container-orchestrator level, the rest is a fight for the upper part of the stack.
Disclosure: I work for Pivotal, though not on PKS.
This seems a lot to me like Docker Inc. caving in to what has been painfully obvious for a while: K8s won and Swarm/Mesos lost the battle for hearts and minds in container orchestration. We can argue about why it happened, but I got the impression Docker Inc. were desperately trying to wish it away.
Now reality has intruded and I am glad, though I predict they'll continue to maintain that Swarm is a first class platform for a while, then quietly let it wither on the vine until one day it's forgotten about.
Also of note is Rancher 2.0 dropping support for Swarm and Mesos and focusing solely on K8S going forward.
Not sure why you would use Rancher if you have Mesos/DC/OS - Mesos eclipses it on every significant feature, though I'd say Rancher is easier to set up initially.
We run a DC/OS+Traefik stack here and can only praise it. Shame it doesn't get the same amount of love the other projects enjoy, but so far it's rock-solid and we are more than happy with it. :)
There is some real meat here, and things that should have been done long ago. A key thing is that the Docker network drivers (libnetwork) are becoming CNI compatible. This will vastly simplify one of the worst aspects of setting up Kubernetes, and ensure a consistent network space across containers, even if a given container is not in kube. That's nothing but awesome.
Yeah setting up networking has been one of the worst parts of using Kubernetes in my limited experience -- just using default Docker networking would be sweet.
Networking is a topic where you really need debugging functionality. That's why it hits you the hardest there. Same experience here. It's terrible. But the underlying problem is that there's basically no debugging functionality in cluster environments until you are skilled enough to set up your own (ELK stack etc). But you can't even get there in a reasonable amount of time.
The same incomplete debugging exists in kubeadm. It often hangs at the "waiting for control plane" step without any additional info. Helm also has such problems, reporting networking errors when there's no networking problem (if it checks IPv6 first but then switches to IPv4, for instance). It's also possible for a Helm deployment to fail, give no real reason why, and then be impossible to uninstall without restarting the k8s master.
It's maybe even more general - a problem across the whole Go programming language world. Every time I see a tool written in Go I immediately cringe and already know that there will be debugging problems. No idea why nobody in this community realises it, or how they debug. I suspect they don't really debug and live in the illusion that others know something better, when actually the others don't know it either.
Yeah, I've run into that same issue with kubeadm. It was hanging because of the network not being set up.
I'm not really sure where your original comment is going, but I don't really feel the problem is endemic to Go. Using/debugging most software is an exercise in frustration. Just look at Linux on the desktop (which I use btw, I'm not criticizing). Fixing things is usually reduced to tribal knowledge, IRC, and Googling.
I agree, a lot of software NOT written in Go also has this problem. I don't know how it is in the C world, but in many programming languages writing good activity reporting (i.e. logging) is considered a core skill.
It is sometimes hard to read logging messages and understand how they came to be. But just having a different status report for each different problem is already so helpful. For instance, if kubeadm fails with "I need cheeseburgers" when you actually forgot to configure your proxy correctly, and with "I need more minerals" when you forgot something else, then the first debugging session is quite frustrating. But after that you know "cheeseburger means proxy" and you can continue. If, instead, you hit "waiting for control plane" for ALL the problems, your brain can't even remember what to check for. I'm the best example: I've already forgotten the other five things that can go wrong, and I would need to check our internal wiki for them.
I think that's the main reason logging exists: to shorten the path from hitting a symptom to discovering what's actually going wrong. And Go in general, and k8s specifically, simply go in the other direction the whole time. They don't report any errors, and sometimes even report errors when there is no error. This is systematic in some way, but I would need to study the community to tell you more specifically what's wrong.
Have to say, I don't see the value in being able to have Swarm and k8s in the same cluster.
"Docker: powered by Kubernetes" seems to be more of a marketing thing to move down the value chain, and not be seen as a basic piece of infrastructure.
Agreed, just seems like marketing vaporspeak. Not exactly a trustworthy source, too.
For example, containerd has been moved to the CNCF and made a broader project. Fine, but Docker runs a separate fork of containerd anyways. A neat marketing sleight-of-hand, but to what end?
You can disable Swarm or Kubernetes at will in each Docker EE cluster. Hybrid is very useful for enterprises who already have to manage both - it was a highly requested feature.
I think I can understand "We have both and need to manage them more easily" as a request, because it's about pain right now.
The thing I'm unsure about - and it would be really interesting to get your perspective on - is what this means for Swarm longer-term. Is there still going to be a reason why people will want hybrid? Is it a migration play?
In a hybrid, over time I'd want to two to behave the same, and getting k8s up and running is a one-time cost and likely decreasing maintenance. It seems like a point solution.
Swarm has a very special role, because it's custom-built to integrate in the Docker platform. Because it's so specific, it has a smaller standalone community than Kubernetes, but it makes up for it in focus and speed. You should expect a lot of bleeding edge features to ship in Swarm first, and a generalized version to land in kubernetes later. That's already been the case in the past: Windows support, secrets, node identity & promotion - those all shipped in Swarm first, then made their way to Kubernetes. Not because Swarm developers are smarter, but because they can focus on a narrower, more integrated problem set.
Longer term, I think all orchestrators will converge to look more and more the same. Orchestration will become a commodity, and it will matter less and less which orchestrator you use, especially to developers. But this process will take a long time, and in the meantime enterprises (our primary customers) need to deal with the situation on the ground, which is a lot of Swarm and Kubernetes living side by side because of historical decisions made in 2015-17.
> You should expect a lot of bleeding edge features to ship in Swarm first
The development of those bleeding edge features is well hidden. The contributions graphs seem to indicate that Swarm is at best a ghost town. Perhaps the action is happening somewhere else and/or Docker will start investing in Swarm once more.
I think that's an overstatement. I talk to a lot of professionals in this area and never seen Swarm deployed to production (from small to big corps). Anecdata warnings apply.
A good starting point is the archive of Dockercon talks, we've had quite a few enterprise customers come on stage to describe their production deployments.
K8S is looking for all the world like it's becoming the One True Container Orchestration Platform. Obviously Docker can't admit that because it has investors, and one thing you can't afford to do with investors is be honest. But reading the runes, it sure looks like Docker are providing a migration path from Swarm to K8s.
So now the last question is: how long does it take for AWS to finally abandon ECS and formally support K8S as a service? I think this makes it kind of a slam dunk, but it is forcing AWS to give up a lot of proprietary lock-in.
I'm going to be there front and center. I know that they had something they chose not to announce last year... hopefully they make up for it full tilt this year ;-)
I see them stating "...for developers using Windows and macOS" but not mentioning Linux. I feel like I'm missing something in how I'm reading that page. How can I make use of this on Linux?
We're going to support Linux also. But we want to be careful not to disrupt users of the original container engine as we transition it to Moby. In the future there will be a cleaner separation between "Docker CE, the developer tools" and "Moby engine, the open-source container engine". The last thing we want is for someone to upgrade their production Linux engine and find an unexpected and unwanted Kubernetes distribution wedged in.
That separation is already in place for Windows and Mac, so we're starting there.
That sounds like a shoehorned explanation. You're leaving developers on linux out to dry because people aren't paying attention to their production systems?
Don't get me wrong, the work you guys do is cool and all, but that isn't a valid explanation from my point of view. Any company should have some sort of staging environment to test updates before rolling them out - it isn't up to the developers of the software to take care of this.
And not only that - the switch will come at some point or another either way, so it doesn't make sense to hold that back from CE on linux so that someone doesn't 'find an unexpected and unwanted kubernetes distribution wedged in'. Those who would find that now would also be surprised by that later on.
To add to that - containers are tested using CI/CD tools anyhow, which are predominantly powered by linux machines, which again makes this decision less convincing. The build may be fine on the developers machine and in production, but the CI/CD environment wouldn't reflect both of these environments.
This looks more like a facade for selling more Docker EE licenses rather than wanting to protect users. Which is fine, of course - but then please say that.
Yes, but think about the whipping we would (and have) given them in the past for bundling these things directly into the engine without sufficient time for testing, or good sep of concerns.
There has been plenty of hue and cry in the past about the rapid rate of change of Docker, and new features bundled in when many users would have preferred a more deliberate and planned change in what is to them a critical piece of infrastructure. You're assuming quite a lot.
If they were to plan this over a longer period of time for all distributions CE is available for I wouldn't have said anything. However they are specifically leaving out the platform most people are using docker on. And not only that - they _are_ providing k8s support with EE on linux. That pretty much deliberately points towards 'buy docker ee if you want this specific feature'.
Tbh, I wouldn't even have said anything if they were making it an EE-only feature. Thing is, they want to make money with this move and they're not honest about it. And in the process they're throwing in the mud the larger demographic using Docker on the bleeding-edge side of things - the people who try the new features on their own servers in their own time.
> There has been plenty of hue and cry in the past about the rapid rate of change of Docker
Yes, well - that is what happens when a company decides to use bleeding-edge hipster software. With Puppet, one minor version may not work with a server that's a few minors behind; with ELK in pre-5 versions, the cluster may have keeled over if the version migration hadn't been planned meticulously; with Consul you may get better performance (DC-local speaking) than with etcd on one release and way worse on the next.
Crying to the devs not to produce good software so quickly shouldn't be the solution.
Seems that people think Docker has given in, but I am not so sure. If you can switch between Swarm and Kubernetes transparently, then why wouldn't you start with Swarm? (I'm talking about small companies who're just starting with containers.)
Why wouldn't you start with the one with the strongest mindshare and reputation? Even a cursory amount of googling shows there's a strong sentiment bias towards K8S over Swarm, it's not surprising that's what most people pick.
But the thing is that as a developer, it doesn't matter what you start with - you have both and you're using compose files, etc. So Docker is trying to make the orchestrator less relevant now, and says that 'don't worry about the orchestrator'. You, as a developer, are using Docker and Compose.
(Or at least that's what Docker is trying to do here, I think.)
I don't think having support for multiple orchestrators is valuable except as a transitional thing. In this case, presumably, transitioning from Swarm to K8S. Nobody is gonna want the hassle of supporting both outside a transition.
It's extremely valuable for large IT organizations. Most of them have at least one deployment of each orchestrator, because of the proliferation of container projects in their various teams. They are very eager to adopt a single platform to manage it all. Now Docker EE can be that platform.
In general enterprises don't like to be told "throw away your existing system to adopt my platform".
Because K8s adds large amount of potentially unnecessary complexity?
- Setting up Swarm is trivial, setting up Kubernetes is not so much (kubeadm is still not recommended for production, and there are valid reasons for this). I guess the only thing that's probably easier on K8s is (totally unsupported) multiarch cluster - it's somewhat messy with Swarm[1]. Although my experiments with multiarch K8s had failed (got issues with CNI stuff and postponed research for later).
- Compose file format is significantly simpler and concise. Less code to write is good.
- Debugging failing Swarm is significantly easier than debugging failing Kubernetes. Well, that's probably subjective and I haven't truly deeply debugged either, but at least I believe so - on occasions I was able to find my way through moby, swarmkit & libnetwork source, and K8s feels a very different beast.
____
Update[1]: I re-checked multiarch status for Swarm and found out that now `docker service create --name test --placement-pref 'spread=node.id' --replicas 2 --no-resolve-image ubuntu sh -c 'while true; do uname -a; sleep 1; done'` just works on a freshy set up mixed x86_64+armhf Swarm cluster (no multiarch alpine images yet, though - promised to come next week). Guess, Swarm had beaten K8s here.
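To the conciseness point: a complete, Swarm-deployable stack can be a compose file this small (image and names are illustrative):

```yaml
version: "3.3"
services:
  web:
    image: myapp:latest
    ports:
      - "80:8080"
    deploy:
      replicas: 3
      restart_policy:
        condition: on-failure
```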
I think several others have commented on this, but the learning curve of Swarm seems insignificant compared to the setup and implementation of a k8s cluster.
I work with a team of four total developers and realistically two of us handle the vast majority of "operations." As a result, what was important for us in orchestration was ease of setup, speed of initial implementation, and the lowest immediate and ongoing difficulty associated with whichever orchestration tooling we chose.
GKE, from the research I have done, and the small cluster I built for a side project, smooths out many of the particularly challenging aspects of a k8s implementation. I mean it is advertised as a managed service, I do not think that is particularly comparable to setting up a raw k8s cluster or Docker Swarm cluster yourself. Would you feel that is an accurate assessment?
My company is currently fairly locked into DigitalOcean (DO) for misc. reasons, and as a result would be deploying there, where you do not have the benefit of all of the automated tooling/management provided by GKE.
Absolutely. Running k8s on bare metal requires full-time dedicated staff. Running k8s with GKE, and possibly also Azure Container Service, though I have no direct experience, is a one-off half-day effort.
What people usually don't realize is that once you're in the cloud, all your services talk TCP. A service in GKE and another one in AWS are just a network hop away. Two considerations are:
* Network egress costs, which are currently outrageously high and serve as a lock-in device of sorts. Depending on your actual workloads, it may or may not make sense for you.
* Security. Though there are numerous VPC solutions out there, some of them supported by the cloud providers themselves.
Anecdotally, we also run a small RDS database in AWS, though we only need ~100 small queries a day.
Yeah, thanks for sharing your experiences on that.
Unfortunately our egress costs would likely be significant.
I am honestly anxious to get my employer started on the path to k8s, but until the tooling reduces the hours required to maintain it successfully, Swarm seems like the superior solution if you are locked in on a non-GKE/Azure Container Service provider.
I have heard good things about Kops helping setup/enable production-stable orchestration, but they are not ready yet for Digital Ocean either, although I think it is on the list™.
Pivotal and Google built Kubo (recently renamed to the less memorable Cloud Foundry Container Runtime[0]) to make the deployment/management/upgrade thing easier. It's built on top of BOSH, which has a relatively long track record in managing stateful, long-lived, distributed systems on top of IaaSes.
I have built a prod K8S cluster by hand (back in the 1.1 days). It taught me a lot of the foundations of K8S, so I don't regret it. It is why we are using GKE.
With GKE, most of your focus will be around (1) tooling for generating manifests, such as Helm or (forgot the name of it). I had written something called Matsuri, but that is only useful if your team is a Ruby shop. (2) what to put into the containers and how they link up.
Because K8s has quite a lot of overhead, a steeper learning curve, and is more difficult to set up and maintain. I don't see this as a problem in bigger companies, but in companies with fewer than 10 IT people and no prior knowledge I'd take Swarm over K8s every day.
We have less than 10 people, 5 of them engineers. We are overbooked getting features out. We are using K8S in both dev and production, not Swarm.
A big part of that is because I have had experience running Docker (not Swarm), ECS, K8S, and building developer tooling (Vagrant), in addition to being a regular developer.
The flip side: I had the opportunity to try out these different tech in production and saw where the pain points are. The overhead of K8S exists to solve those pain points, though that is probably not that obvious to a small team without prior knowledge.
For example, I had set up a prod k8s by hand. I will never do that again. On the other hand, I know roughly what is going on when something breaks in our Google GKE cluster.
I am not sure if I can. Setting it up from scratch let me become familiar with some of the underlying mechanisms of how k8s is put together. After running through that as a kind of kata, it was easier to infer and troubleshoot when things go wrong. That transfer-of-learning happens only if you run yourself through these exercises.
I can share some things at a higher level though:
Label selectors are your friend. Master them. They are used everywhere.
Stateless is still easier than stateful. Start with putting stateless workloads in production before ever trying stateful.
If you have the expertise to mix your stateful pods with your stateless pods, make sure you master StatefulSet and things like persistent volume claims.
If you fake stateful pods like I did in production, then Kubernetes does not know how to cleanly shut them down. Automated maintenance involving kubectl cordon and drain no longer function well. You end up having to hand migrate stateful pods from node to node.
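A minimal sketch of the StatefulSet-plus-PVC pattern mentioned above (all names, images, and sizes here are made up). It also shows the label selectors from the first tip in action, and the volumeClaimTemplates are what let Kubernetes reattach each replica's volume elsewhere instead of the hand-migration described above:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db              # headless Service giving each pod stable DNS
  replicas: 3
  selector:
    matchLabels:
      app: db                  # label selector: the set finds its pods by label
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: db
          image: postgres:10   # hypothetical image
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:        # one PersistentVolumeClaim per replica
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```

With something like this in place, `kubectl cordon` and `kubectl drain` can evict a pod and the replacement re-attaches the same claim on another node.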
If "Docker == containers" continues to hold true in many people's minds, then it's possible that Docker Swarm could feel like the "vanilla" orchestration platform. Of course those of us familiar with the platforms know better, but that mindset could persist, especially with pseudo-technical decision-makers.
'docker == containers' is true in that people often use the terms interchangeably. I've heard many people say 'you should get into Docker' and then talk about Kubernetes.
Could you elaborate what alternatives to docker are worth checking out? I'm unfamiliar with containers and out of my head I can't name anything besides docker.
rkt is likely going to replace Docker as the default engine in Kubernetes and/or OpenShift (I vaguely recall hearing that it was coming, but can't find a source to cite): https://coreos.com/rkt/
This will give IT organizations the option of getting an Enterprise supported distribution of Kubernetes from Docker.
Historically, most IT orgs requiring supported k8s have either gone cloud with something like Google Container Engine or gone with OpenShift and gotten support from Red Hat. OpenShift is a fork of Kubernetes, though, and lags a year or so behind. It also adds opinionated features such as Image Streams.
Docker's announcement said they were using "real" Kubernetes, not a fork or a wrapper. I've set up Kubernetes by hand before and it is no easy feat. I'm looking forward to evaluating Docker's solution and its maintenance/upgrade process.
UPDATE: My goal with this post is not to sell people one way or another, but rather to explain where some of Docker's reasoning for this integration is coming from.
Don't take this the wrong way, but I love how we are already talking about "historically" in the context of Kubernetes enterprise support when the first release of Kubernetes was in 2014 and AFAICS pro-support is something from the last 1.5 years. Crazy world!
This. It's hard not to see Red Hat (and to a lesser degree VMware) as the big loser of today's news. I absolutely want to see how this is implemented. OpenShift's pricing is highway robbery.
Disclosure: my company is (was?) a big Openshift consumer...
Yes, there are other supported distributions out there; I believe Canonical also has one. IT orgs are very comfortable with Red Hat, though, so that has been the majority of what we've seen IT orgs go to for Kubernetes.
I'll be interested in how they go too; productionising k8s is surprisingly tricky and Red Hat have made a heavy investment to do so.
I have skin in this game as well, as my other comments on this post demonstrate. We (Pivotal & Google) kinda skipped the hard bits of keeping up with k8s by packaging it as a BOSH release, so we'll always be up to date. That's actually one of the project goals: parity with vanilla k8s, because it is vanilla k8s, operated by a lower-level system.
How are they doing this? The big difference in swarm and kubernetes is the ingress. If this is seamless, then it has to be a batteries-included version of kubernetes with ingress, overlay network choice, etc already mapped out.
What are the details here?
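For context on what "ingress" means here, this is a rough sketch of a Kubernetes Ingress resource (hostname and service name are made up); an Ingress is only a routing declaration, so the open question is which ingress controller and overlay network a batteries-included Docker distribution would ship to actually satisfy it:

```yaml
apiVersion: extensions/v1beta1   # the Ingress API group at the time of writing
kind: Ingress
metadata:
  name: web
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: web   # routes to a Service named "web"
              servicePort: 80
```

Swarm, by contrast, builds its routing mesh in, which is why a seamless integration here would be interesting.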
Docker Swarm is beyond awesome and a great path for someone to scale up at the lower end of the scale spectrum (two containers). I really hope that this brings more people into Swarm.
I'm also keen to see what it means for the Kompose project.
From The Information's article "When Docker Said No to Google":
> In 2014, Google approached a startup called Docker proposing the two collaborate on software each was developing to help companies manage lots of complex applications, according to people with knowledge of the proposal. But Solomon Hykes, Docker’s founder and CTO, said no. He wanted to go it alone.
> Three years later, the cost of Mr. Hykes’ previously unreported decision is becoming apparent. The software that Google was developing was Kubernetes, an open-source product that now dominates its segment of the cloud software market. Docker’s rival software, Swarm, is also open-source but isn’t anywhere near as popular, two former Docker employees say.
That was my initial thought as well. CRI-O hits 1.0, and then this. To me, it comes across as an attempt to stay in the news. Possibly to start changing the narrative from Docker vs Kubernetes to Docker <3's Kubernetes.
Docker's conference (Dockercon) is happening right now so announcements coming from them are no surprise. A Kubernetes integration has probably been in the works for a while.
It seems more likely to me that the CRI-O 1.0 announcement was a tactical move from Red Hat to hijack the conversation during Docker's own conference. CoreOS did the same thing 3 years ago when they announced rkt, trying to capitalize on Dockercon as a time to make a bunch of noise for themselves. Docker themselves have been no perfect angels in this regard, for instance with their infamous "accept no imitations" shirt at Red Hat's conference, I'm just calling it as I see it.
Disclaimer: I worked for Docker, Inc. for 3 years.
The only problem I haven't solved yet is debugging Python code running in Kubernetes using PyCharm. If I run a container in PyCharm using the "Debug..." dialog, it launches inside the "docker context" rather than the "kubernetes context". For example, I can't connect to Kubernetes services via their ClusterIP - the container launched via PyCharm does not see it. The only solution I found is using Docker Compose to set up an environment similar to Kubernetes and using Docker Compose from PyCharm. Hopefully this announcement from Docker will simplify this story.
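A sketch of the Compose workaround described above (service names and the environment variable are hypothetical): give each dependency the same DNS name its Kubernetes Service would have, so the code running under the debugger resolves its dependencies the same way in both environments:

```yaml
# docker-compose.yml - mimic in-cluster service names for local debugging
version: "3"
services:
  app:
    build: .
    environment:
      - REDIS_HOST=redis   # same hostname the k8s Service would provide
    depends_on:
      - redis
  redis:
    image: redis:4
    # Compose's default network already resolves the service name "redis",
    # matching what in-cluster DNS would give the app in Kubernetes.
```

This only papers over the gap, though; it doesn't reproduce ClusterIPs or anything else cluster-specific.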
Yes, but it will be a while, I think, before they actually offer something based on it. I don't see any service based on it at this point. AWS ECS is based on Docker.
Kubernetes currently has a lot of traction in Enterprise world. Even the CEOs of the big corps currently know what Kubernetes is. That's why you need to support it in some way if you want to continue to compete. At least for the time being.
I think technically, and in some regards also politically, the Docker people are smarter though. To the most pro-Docker comment I replied "don't start to ignore k8s yet", and here I feel like saying "don't start to ignore Docker (Swarm) yet".
The battle is ongoing and these two are both possible leaders.
Ehh. I think the writing is more than on the wall for Swarm. K8s is a better solution at this point, and Docker + K8s should be the standard. Fragmentation is bad.
Go with OVH: unlimited resource usage (including traffic), and they allow you to create/own your own private network (vRack) of dark fiber. Look into using the multiple points of presence that they offer. If you don't need it right now, wait for OVH to offer local US machines rather than just geolocated IPs.
You can do this well with bare metal servers or with a number of whatever dedicated and/or shared cloud stuff they offer.