Kind - run local Kubernetes clusters using Docker (k8s.io)
216 points by supdatecron on March 16, 2020 | 72 comments



We recently used Kind for a k8s workshop. We provisioned a beefy cloud server and ran 15 3-node Kind clusters on it, so everyone had their own k8s instance without having to install anything locally. It worked absolutely great for this purpose.

I wrote some scripting around it so people can claim their own cluster via SSH. I'm planning to write a post about it soon and make the code available.


Please do, I'd like to give a talk like that once Covid is over.


I’ve tried minikube, microk8s, the one bundled with Docker Desktop for Windows, k3s and Red Hat CodeReady. Of these I had the best experience with Kind (by far) and the worst experience with CodeReady (also by far).

The thing I like most with Kind: Being inside Docker makes Kind very ephemeral. Every time I start it up I get a fresh cluster. I know where everything is and it doesn’t contaminate my machine.
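
For anyone who hasn't tried it, the whole lifecycle is just a couple of commands (the kubectl context name assumes the default cluster name "kind"):

    kind create cluster                          # brand new cluster in a container
    kubectl cluster-info --context kind-kind     # verify it's up
    kind delete cluster                          # gone without a trace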

Since some of the authors are on the thread I would like to say thank you. I really appreciated the recent improvements to kubectl integration and the addition of local storage.

In the future I would like it to be easier to play with pod and network policies, and to see reduced cluster startup time and smaller node images.

Keep up the good work!


Out of curiosity, as I'm thinking of looking at CodeReady for OpenShift, what problems did you have there?


OP is right about CodeReady - I'll unhesitatingly say it's a POS. It's way too heavyweight for even a high-end laptop. It's single-node only. It falls over if you enable monitoring unless you can give it 8 cores and 12GB; then it sort of works but is too slow. The 3 times I tried deploying the provided samples, they didn't work out of the box. It also requires you to download a new release every month, I think - no in-place updates.


It used too many resources for my computer. It took about 10 minutes to start a cluster, and once the cluster was up and running my computer had a hard time performing any additional tasks - like having an IDE open and compiling source code.

In comparison k3s takes seconds to start a cluster. Kind takes about a minute. Neither will consume resources to a point where my computer becomes unusable.

I reported my experience to Red Hat and they replied that it was to be expected.

EDIT: Found the issue https://github.com/code-ready/crc/issues/617.


I'm a great fan of kind, it's made my life so much easier for a couple of use cases.

1) I run a training course on container security. We moved from using straight kubeadm on the students' VMs to using kind clusters, the advantage being that we can customize different clusters for different scenarios by providing a kind config file on start-up (see the sketch after this list). We can also easily have multiple clusters running on a single VM with no interference between them.

2) When evaluating software or trying out a feature, it's really nice to be able to spin up a test cluster in < 2 minutes and try it out; then it's just "kind delete cluster" to get rid of it again.
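
To give an idea of the config files mentioned in 1), a minimal multi-node config is just a few lines (the apiVersion shown matches current kind docs and varies by release):

    kind: Cluster
    apiVersion: kind.x-k8s.io/v1alpha4
    nodes:
    - role: control-plane
    - role: worker
    - role: worker

and you pass it at start-up with "kind create cluster --config cluster.yaml".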

When I compare it to other options (e.g. minikube, microk8s etc.) it subjectively feels less "magic" to me, in that it's just one or more Docker containers running kubeadm, so as long as you understand those two things, you can get a picture of what's going on.


I've recently started prototyping our move to k8s - and my recommendation is to stay away from minikube, k3s and kind. Kind looks the best on paper, but Canonical has done great with https://microk8s.io/

I'd love to hear why anyone prefers any other solution for local development/experimentation.


Hi, kind author :-)

microk8s is really cool! We wanted kind for development of kubernetes itself and I don't think microk8s was around at the time.

One difference, besides being able to build & run arbitrary Kubernetes versions, is being able to run on Mac, Windows, and Linux instead of only where snap is supported.
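
For anyone curious, building and running an arbitrary version from a kubernetes source checkout is roughly this (the kindest/node:latest tag is the default for locally-built images; check the docs for your release):

    # run with a kubernetes source checkout in the expected location
    kind build node-image
    kind create cluster --image kindest/node:latest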

We're paying more attention to local development of applications now, expect some major improvements soon :-)


That's great news. In my experience kind was a bit resource-heavy - but more importantly, it didn't seem to have clear documentation geared towards local testing (for users/consumers of k8s).

Great news if improvements are coming.


I've spent 2 years now developing a pretty decent sized stack on k8s. I've used microk8s, minikube, and Docker Desktop's built-in kubernetes distro for a while. I feel like docker desktop worked the best for me.

However, I've wasted so much time over the past 2 years trying to figure out why something wasn't working, only to find out it was because there were differences between the k8s distro I was using, and our production system. Ultimately I found the best solution was deploying exactly what we run in production on some spare bare metal I had laying around (after adding a hundred gigs of RAM).

Luckily we have a production setup that is designed to run on-prem, so this was an option for me. Regardless, I think having as close to production as possible will make your life easier.

That being said, I still might try this project out.


What problems led you to that preference?


Trying to find a dev setup that was feature complete and as similar as possible to production, while still running locally.

As a sibling comment mentions, there are a number of differences between distributions/implementations - and especially when new to k8s it's way too easy to waste time trying to figure out why something doesn't work.


I've had the same experience, absolutely love Microk8s, though I am hopeful about k3s and k3d (k3s in Docker). As of right now they "mostly" work, but unfortunately that's enough to break things.

Also, you can't beat the one-line snap install for Microk8s.


Could you talk about what breaks in k3s/k3d ?

We have been considering having a desktop -> production cluster on k3s.


I just tried to set up the same stack of ~7 services I had running on Microk8s locally, and a few things went wrong in the process. Couldn't get it running.

My kubectl-fu is not strong enough to fix it, so for me that was a dealbreaker.

Though I am super passionate about k3s and support the hell out of everything Rancher Labs does, so by no means did it leave a bad taste in my mouth.


One caveat on my previous comment - if you use any one of these in production (I guess k3s is the most likely one there) - then I think using it for dev should be fine. The biggest issue is differences between versions - we're deploying to managed k8s in azure, and need a dev environment that works similarly.


Hmm..do you mean that k3s does not track the cloud deployments closely ?

So the dev -> production experience is less than ideal ?

We are planning to use k3s for local development and deploy to EKS...so this is interesting.


Yes, that's been my experience so far - trying things (now) on microk8s and managed azure k8s.

The main thing is that documentation for microk8s and the design seems aimed at "behave as/pretend to be a real k8s" - including things like ingress.


I use a Minikube cluster w/ KVM as the driver for my self-hosted Gitlab CI/CD and it's worked flawlessly. Wonder what issues you encountered to recommend against it.


Why do you recommend staying away from k3s? There are now cloud services for hosted production clusters on k3s - e.g. Civocloud.


If you use k3s in prod, you might be well off with k3s for dev.

But I still think microk8s is easier to spin up on a workstation.


Wondering why. Is it because of the command line? Or anything else?

I'm new to k8s (I'm on swarm right now) and looking for ease of dev setup more than anything else.


I'll just say that I found end user documentation for microk8s to be nice and friendly. And that k3s (the little I looked) felt maybe a little too much like administering and running a production k8s cluster. We're not planning to do that; what I needed was something that worked easily for prototyping and experimenting - and could be run mostly via kubectl (and/or helm) just like a managed cluster.


Would love to know why you are saying "stay away from x"? Based upon what?


Because microk8s is only for Linux. So Win/Mac developers cannot use it at all.


You mean that's not a feature? ;)

Seriously though, this is a valid point. Still, I believe you could run it in a vm? I'm not on win/Mac, and I'm not sure if that would make sense.


K3s is a lot easier to get working on RHEL and Fedora. Canonical tends to build things in ways that make them barely work on Ubuntu and completely fail everywhere else. Same with lxc. I was a bit upset RHEL dropped it in 8, but then I tried to get it running, saw the horror show, and decided to look at rootfs podman instead.


Might take a look at the native systemd-machined if you want full-blown OSes.


I did, but podman with --rootfs is nicer IMO.


So does this mean you can run containers in containers orchestrating other containers? Containers must really be the holy grail of serverless and cloud "nativeness".


Seems reasonable to me. Outside of some weird edge cases and some "technically..."s, a container is just a process with its own namespace and file system, and maybe its own IP. If we didn't have shared-filesystem, shared-namespace, shared-ports processes for historical reasons, who would be clamoring to add them? Why wouldn't you run everything in a container, container-scheduler included?


Isn't it more accurate to say, rather than just a process, a process group with its own process numbering?


Technically you can define which namespaces to inherit and which ones to create "from scratch" at process initialization time. (Actually there's an unshare() syscall that does it, but clone() is the standard way to create new namespaces and new processes in them, plus there's setns() to put a thread into some other namespace given a fd pointing to that NS.)

So, namespaces are task level things in the kernel. (Every thread is a task, and by default every process has one thread, so every process is also at least one task.)

https://elixir.bootlin.com/linux/latest/source/include/linux... (That's where the task_struct starts and it has an nsproxy member.)
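
To make that concrete, here's a minimal Go sketch (Go being what most of this tooling is written in) of starting a process in fresh namespaces via the clone() flags; Linux-only, and it needs root unless you also create a user namespace:

    // Launch a shell in new UTS, PID, and mount namespaces.
    package main

    import (
        "os"
        "os/exec"
        "syscall"
    )

    func main() {
        cmd := exec.Command("/bin/sh")
        cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
        // Cloneflags are passed straight through to clone() on Linux.
        cmd.SysProcAttr = &syscall.SysProcAttr{
            Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
        }
        if err := cmd.Run(); err != nil {
            panic(err)
        }
        // Inside, the shell sees itself as PID 1 of the new PID namespace.
    }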


I mean it's probably not something you want to do in production, but it's a godsend for developing and testing configs.


kind is incredible. It's the best option for local multi-node clusters and very fast. No hypervisor needed, only docker.


Appreciate the feedback :-)

I think the "best" depends on what you're doing to be honest, (e.g. if you only develop on ubuntu, check out microk8s too! they have some good ideas, eg focusing on straight-forward support for a local registry instead of side-loading) and there's a _lot_ of room to improve kind, but the vote of confidence is still very nice to see :-)


It's the best in my book :) KIND + CAPI are truly magical. I appreciate all your hard work on the project, thank you!


"No hypervisor needed, only docker", same as k3s


I like KinD, but find k3s much faster to bootstrap and lighter-weight too. Rancher have gone GA with it and provide commercial support, Darren Shepherd also tracks the latest k8s release very closely.

Linux -> k3s (build a cluster or single node via https://k3sup.dev)

MacOS/Windows -> k3d (runs k3s in a Docker container, and is super fast)

That said, if you're hacking on K8s code, then KinD has nice features to support that workflow. KinD is only suitable for dev; k3s can run in prod also. Try both and compare - they are both easy to use.


k3d (https://github.com/rancher/k3d) runs k3s in Docker. k3d is more lightweight than kind.


Used kind + skaffold for 6 months and it was pretty solid. However, we eventually switched to k3d and Tilt, and this combo feels amazing. A cluster takes 2 seconds to create now.


Sweet; glad you like it. I'd love to hear more. (Disclaimer: Tilt CEO here)


Kind has been a godsend for me. When you've got a 16GB MBP with both Docker and K8S running, re-using the Docker virtual machine makes a big difference in memory and CPU usage. Thanks to the team!


I really wanted to use kind, but the fact that it loses all the data after a restart/sleep of the computer keeps me from using it.

I'm developing Kubernetes controllers, and the Custom Resources represent bits of cloud infrastructure (https://crossplane.io). So when I lose the kind cluster, I have to go and delete each and every resource in AWS :( I am unhappily forced to use minikube until support comes to kind.


loses all the data because you have to start a new cluster? The data should be persisted...

If this refers to https://github.com/kubernetes-sigs/kind/issues/148, the good news is that we're most of the way there and I'm going back to work on this now, ideally out in a v0.8 in the next week or so.


what are the advantages over minikube?


I switched my local "lab" setup from minikube (which was in use for a long time) to kind recently. The main reasons were:

1) None of us run on Linux, which means we're all using VMs for our containers, and we all use Docker Desktop for various things. That meant we were running extra local VMs for no good reason. With kind I can just use the one VM for all the container things.

2) The real reason for the actual switch was that I just kept running into things that minikube couldn't do and kind could, as well as things I had decided to ignore, like the fact that minikube does everything on one node, which is 100% unnatural for kubernetes; I had multiple cases where this setup blinded me to problems that would occur in a real cluster.

3) I've also found I prefer the configuration/customization approach of kind over minikube, though admittedly that's kind of a small thing.

Ultimately I find kind is a better simulator for prototyping future cluster changes, as well as for use as a local "lab" for diagnosing services in a "production-like" environment 100% under your control.


We use Kind. I think some of the best things about Kind are that:

- You can run it on GitHub Actions, so you can test in your CI pipeline (see the sketch after this list).

- You can run any recent version of Kubernetes.

- Kind can start a Kubernetes cluster in under a minute on a developer machine.
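
For reference, a minimal workflow along these lines might look like the following; the kind version, download URL, and the assumption that kubectl is preinstalled on GitHub's ubuntu-latest runners are mine, so double-check against the current docs:

    name: e2e
    on: [push]
    jobs:
      kind:
        runs-on: ubuntu-latest
        steps:
        - uses: actions/checkout@v2
        - name: Create kind cluster
          run: |
            curl -Lo ./kind https://github.com/kubernetes-sigs/kind/releases/download/v0.7.0/kind-linux-amd64
            chmod +x ./kind
            ./kind create cluster --wait 5m
        - name: Smoke test
          run: kubectl get nodes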


You can technically get Minikube running in Actions, but it's a bit annoying.


author here, some background:

kind was originally built for developing kubernetes itself, as a cheaper option for testing changes to the core components.

it wasn't really meant to compete with minikube et al., but to complement them for differing usage; you may now find it useful as a lightweight option with a slightly different feature set.

it's also the only local cluster that is fully conformant as far as I know, because conformance tests involve verifying multi-node behavior. At the time, minikube did not support:

- building kubernetes from a checkout and running it

- docker based nodes

- multiple nodes per cluster

These days they've gotten more similar, we're both shipping docker and podman based nodes.

I think one of the most interesting things about kind is that the entire kubernetes distro is packed into a single "node" docker image; it's very easy to work with fully offline.
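
A side effect is that pinning a cluster for offline use is just a matter of pre-pulling one image (the tag here is illustrative; each kind release lists its supported kindest/node tags):

    docker pull kindest/node:v1.17.0
    kind create cluster --image kindest/node:v1.17.0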


FWIW, I did a comparison of k8s in Docker, KinD, and minikube last week ... https://seroter.wordpress.com/2020/03/10/lets-look-at-your-o...


I really wish you had used a regular service definition when testing KinD. The omission reduces the usefulness of your comparison. I want to choose a local k8s cluster that is as close to production as possible. And I want my local deployment configs to be as close as possible to production.

You say that "ingress in kind is a little trickier than in the above platforms" with no explanation.

I feel disappointed and frustrated. :(


Sure. I cheated. I specifically didn't feel like setting up extraPortMappings (https://github.com/kubernetes-sigs/kind/issues/808) and then creating an ingress controller (https://kind.sigs.k8s.io/docs/user/ingress/). Not difficult, just not turnkey like the other two.
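
For anyone landing here, the port mapping itself is only a few lines of config (field names per the kind docs linked above; the apiVersion varies by kind release) - it's the ingress controller setup on top that adds the extra steps:

    kind: Cluster
    apiVersion: kind.x-k8s.io/v1alpha4
    nodes:
    - role: control-plane
      extraPortMappings:
      - containerPort: 80
        hostPort: 80
        protocol: TCP
      - containerPort: 443
        hostPort: 443
        protocol: TCP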


Read through it. Do you have a sort of TL;DR summary of pros/cons? A matrix I can use to make a decision as a new k8s user.


Good suggestion.

For me, use Docker if you want k8s started up every time you start Docker, and easy ingress. I don't love having a cluster always running, so I'm keeping the k8s function off by default.

Use kind if you want multi-node clusters, and a production-like simulation of your environment.

Use minikube for a straightforward dev experience, where you have control over k8s version, resource allocation, and don't need meaningful configuration of the control plane.


AFAICT, minikube requires a full VM, whereas this doesn't seem to. So it should, in theory, be much more lightweight.


Minikube now has both a docker[1] and a native driver[2]. The docker driver is derived from Kind.

1. https://minikube.sigs.k8s.io/docs/reference/drivers/docker/

2. https://minikube.sigs.k8s.io/docs/reference/drivers/none/
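
For what it's worth, opting into the docker driver is a single flag (flag name per current minikube docs):

    minikube start --driver=docker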


That is pretty neat, but it is nice to have a tool that has it built in natively, rather than having to search through 30 different provider flags with Minikube. Hopefully their documentation has gotten better, but I'd generally trust the innovation behind the Kind tool more than the Minikube devs, who are late to the game.


this is simply a container with a docker runtime in it; it's heavy in that sense


actually we run containerd, and I think you will find it is lightweight and fast compared to most options :-)

it's certainly heavier than _not_ using Kubernetes


You can simulate multiple nodes, for example; if you want to experiment with node selectors or test a multi-node setup, it's quite easy with kind.
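
For example, after "kind create cluster" with a multi-node config, something like this schedules a pod onto a specific worker (the node name assumes the default cluster name; the label is made up for illustration):

    # first: kubectl label node kind-worker tier=test
    apiVersion: v1
    kind: Pod
    metadata:
      name: selector-demo
    spec:
      nodeSelector:
        tier: test
      containers:
      - name: nginx
        image: nginx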


I tend to recommend minikube to newcomers, as it provides easy addons like ingress, an image registry, and a load balancer via minikube tunnel. You can run minikube with 2GB easily when only dealing with one node. Once you're familiar, I recommend Kind when you need more than one worker node, need to test scenarios that require multiple nodes, or if you know your way around installing and configuring addons yourself.
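
For newcomers, the addons mentioned above are one-liners (addon names per the minikube docs):

    minikube addons enable ingress
    minikube addons enable registry
    minikube tunnel    # gives LoadBalancer services a reachable IP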


As a chromebook user with crostini, I'm wondering if this is going to work for me - as neither k3s, nor minikube, or minishift did (due to limitations in crostini).


k3s author here. With user namespace and rootless support it is getting closer to k3s running in crostini, but nobody is really working on this. I was a big fan of crostini when it came out, but the insistence on LXC and user namespaces makes it too limited, and I wouldn't recommend it if you work with containers.


unfortunately not.

figuring this out would make a lot of people happy, but it doesn't rank highly for our current use cases versus other work.

https://github.com/kubernetes-sigs/kind/issues/763


Be warned, Kind is incredibly heavyweight. It wants over 8GB on my laptop just to start, and pins cores for 10 minutes.


This shouldn't happen; can you please file a bug with more information about your environment and specific usage?

I run kind on the minimum Docker for Mac spec, which is one core / 1GB, and it performs just fine. We've worked hard to make it lightweight, including a KEP upstream for slimming down the binaries.


That's not my experience with kind at all. Running in a Linux VM on a 3-year-old laptop, my kind clusters start in < 2 minutes.


It doesn't need that much memory; it's just that it consumes less memory, for fast development of Kubernetes and extensions/CRDs/controllers.


Are you sure you are talking about Kind and not minikube, for instance? For me Kind is the most efficient way of running a real cluster on my machine; a bare cluster merely takes 600MB of RAM in my case, and creation only takes long the first time, because it downloads the docker images.


And people complain about Electron! 600MB for a REST API and a bunch of networking hoopla over containerd; what a mess Kubernetes is...



