I'm getting lost in this orchestrator world. Can somebody explain the use case for Minikube? Or MicroK8s? All claim to be certified as "perfectly like Kubernetes" and say they can be used in production, but are people actually using them? Why?
Kubernetes is considered too complex for mere mortals to deploy.
Here's a rough list of things I had to do, because I'm doing exactly that right now: trying to deploy Kubernetes.
1. Prepare the server by installing and configuring containerd. A few simple steps.
2. Load one kernel module and configure the ip_forward sysctl.
3. Install the Kubernetes binaries from the apt repository.
4. Run kubeadm init.
5. Install Helm (this part is optional, but it makes things simpler): download the binary or install it from snap.
6. Install a network plugin like Flannel or Calico.
7. Install the nginx ingress controller.
8. Install a storage provisioner plugin. You should also have some storage service, like an NFS server, that Kubernetes can request storage from.
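The steps above can be sketched as a shell session. This is a sketch, not a definitive recipe: it assumes a Debian/Ubuntu host, the current upstream package repo layout, and Flannel's default pod CIDR; versions and paths may differ on your system.

```shell
# Steps 1-2: kernel module + sysctl that containerd/kubernetes need
sudo modprobe br_netfilter
echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/k8s.conf
sudo sysctl --system

# Step 1: install containerd and write out its default config
sudo apt-get install -y containerd
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
sudo systemctl restart containerd

# Step 3: kubernetes binaries from the upstream apt repo (v1.30 as an example)
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | \
  sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | \
  sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update && sudo apt-get install -y kubelet kubeadm kubectl

# Step 4: bootstrap the control plane (pod CIDR must match the CNI you pick)
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# Step 6: network plugin (flannel shown; calico works similarly)
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

# Step 7: nginx ingress controller via helm
helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace
```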
If you want single-node Kubernetes, I think that should be enough. Maybe #6 is not even necessary. If you want a real cluster, you'll need to tinker with a load balancer, which I don't have a clear picture of right now. I'm using an external load balancer.
If you're using Docker, my understanding is that containerd is already installed.
I actually spent a few weeks trying to understand those parts, and I have only a shallow understanding so far.
With that in mind, simple Kubernetes solutions probably have their place among those who can't or don't want to use managed Kubernetes from the popular clouds.
I have no idea about those simple Kubernetes distributions, though.
My opinion is that vanilla Kubernetes is not that hard, and you should have some understanding of its moving parts anyway. But if you want the easy path, I guess they're worth considering.
Flannel and Calico are responsible for assigning pod IPs, so you need one of them even on a single node.
One main reason you'd want to run minikube or kind is that these clusters are easy to reproduce and don't pollute your system's network namespaces and sysctls.
For a load balancer in your case, you'd probably provision MetalLB in place of the cloud-specific LB solutions that cloud providers deploy. It's somewhat straightforward, though I believe the steps are specific to each network provider (Flannel, Calico, etc.).
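A MetalLB layer-2 setup is roughly two small custom resources after installing the controller. A sketch only: the address range is an assumption (it must be spare IPs on the nodes' LAN), and the pinned manifest version will age.

```yaml
# install first, e.g.:
# kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.14.8/config/manifests/metallb-native.yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.240-192.168.1.250   # spare IPs on the node LAN (assumption)
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - default-pool
```

With this in place, Services of type LoadBalancer get an IP from the pool, answered via ARP from whichever node MetalLB elects.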
Maybe a bit too hacky, but if you only plan to use nginx-ingress + HTTPS (and don't have spare /24s of IPs around), you can set up nginx on each node and run a script that regenerates the nginx config every few minutes (use the stream module to forward ports 80 and 443 TCP/UDP to the ingress nginx).
Then add the IP addresses of the nodes as a wildcard DNS record.
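The stream-module forwarding described above might look like the fragment below. The node IPs and NodePort numbers are hypothetical; the script mentioned would regenerate the upstream lists from the ingress controller's actual NodePorts.

```nginx
# /etc/nginx/nginx.conf fragment (outside the http block)
stream {
    upstream ingress_http {
        server 10.0.0.11:30080;   # ingress-nginx NodePort for port 80 (assumption)
        server 10.0.0.12:30080;
    }
    upstream ingress_https {
        server 10.0.0.11:30443;   # ingress-nginx NodePort for port 443 (assumption)
        server 10.0.0.12:30443;
    }
    server {
        listen 80;
        proxy_pass ingress_http;
    }
    server {
        listen 443;               # TLS passthrough; ingress-nginx terminates TLS
        proxy_pass ingress_https;
    }
}
```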
If you used Docker Compose and got frustrated with the lack of some important features, these lightweight Kubernetes distributions are actually great. Blue/green deployments, support for a whole bunch of storage volumes, load balancing with automatic Let's Encrypt support, and great secret management (the ability to mount secrets as files/directories inside a pod is a killer feature) are the reasons I use Kubernetes instead of Docker Compose for my side projects, even though I ignore the rest of Kubernetes' features.
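The secrets-as-files feature mentioned above looks like this in a manifest (all names here are hypothetical): each key of the Secret appears as a file under the mount path.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: app-credentials          # hypothetical name
stringData:
  db-password: s3cret            # becomes the file /etc/app-secrets/db-password
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: nginx:alpine
      volumeMounts:
        - name: creds
          mountPath: /etc/app-secrets
          readOnly: true
  volumes:
    - name: creds
      secret:
        secretName: app-credentials
```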
> All claim to be certified as "perfectly like kubernetes"
because they are. they are built directly from the go sources; all are wrappers around the meat of k8s (the various control loops packaged into services like api-server, kubelet, scheduler, controller-manager, etc ... and etcd itself).
minikube does one big monolithic build for convenience. (it can do this because all the components involved are pure go.)
microk8s is also a distribution of k8s.
almost all distributions have convenience features to help you with installation/setup, but all they do is what "the hard way" setups do: fetch/copy binaries, generate/sync keys, set up storage (devicemapper, btrfs volumes, whatever), set up wrappers that start the binaries with the long list of correct arguments, and arrange for them to start when the node boots (usually by adding systemd services or something).
In a sense they are distributions. Ubuntu and Fedora can also both do it all, and it's not clear where you'd use one vs. the other. You're in the hands of different people.
Well, also: minikube is meant to facilitate development and testing. We use minikube for local dev, tests, etc., and everything transfers over well to the production k8s cluster.
um, they aren't missing anything (but see below). they are k8s, just as you rarely run the Linux kernel without a userspace.
so if you want the genuine original mainline experience, you go to the project's github repo: they have releases, and the detailed changelog has links to the binaries. yeey. (https://github.com/kubernetes/kubernetes/blob/master/CHANGEL... .. the client is the kubectl binary, the server tarball has the control plane components, and the node tarball has the worker node stuff.) you then have the option to set those up according to the documentation: generate TLS certs, specify the IP address range for pods (containers), and install dependencies like etcd and a CNI-compatible container network provider. if you have set up overlay networking (e.g. VXLAN or geneve, or something fancy with openvswitch's OVN), then the reference CNI plugin is probably sufficient.
at the end of this process you'll have the REST API (kube-apiserver) up and running, and you can start submitting jobs. (they get persisted into etcd, eventually picked up by the scheduler control loop, which calculates what should run where and persists that back to etcd; then a control loop on the particular worker notices that something new is assigned to it and does the thing: allocates a pod, calls CNI to allocate an IP, etc.)
of course if you don't want to do all this by hand you can use a distribution that helps you with setup.
microk8s is a low-memory, low-IO k8s distro by Canonical (the Ubuntu folks). they run dqlite (distributed sqlite) instead of etcd to lower I/O and memory requirements. many people don't like it because it uses snaps.
k3s was started by the Rancher folks (and is mostly still developed by them?).
there's k0s (for bare metal ... I have no idea what that means, though), kind (kubernetes in docker), and also k3d (k3s in docker).
Worth mentioning there's a middle path, namely kubeadm. That's the "sanctioned" way to bootstrap clusters without going fully from scratch, and many other distributions actually use it internally.
I don't see why you would use Minikube in production (nor have I ever heard anyone do this), but Minikube is exceptionally helpful for local development, when you want to test against a real Kubernetes API server (as well as test any of your desired orchestration for your component).
This is a great use case I've found as well. If you have a product that is deployed to K8s, the ability to create clusters on demand for testing, whether local or otherwise, is awesome.
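For the on-demand clusters mentioned above, kind makes this a couple of commands (the cluster name here is arbitrary; kind and Docker are assumed to be installed):

```shell
# create a throwaway cluster, point kubectl at it, then tear it down
kind create cluster --name ci-test
kubectl --context kind-ci-test get nodes   # kind names the kubeconfig context "kind-<name>"
kind delete cluster --name ci-test
```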
Also edge of the network deployments where you want consistency with datacenter deployments but don't have a lot of local compute resources or are otherwise limited.
I am adding another executor to a workflow engine. Minikube is a huge help for my dev work (I always test against a Local Real Instance, which is what minikube is).
It's helped on more than one occasion to show that a prod k8s instance lacked a feature or was misconfigured.
I've only used Minikube, kind, and k0s as sandboxes for production kubernetes deployments in the cloud (i.e. EKS). Given I'm already using Docker Desktop on my mac laptop though, the easiest thing to do is just use its built-in kubernetes. It works pretty well, and obviates the need for any of these micro-kubernetes distros.
Minikube is in a different category, alongside kind. These clusters are meant to be disposable for development and, at least for kind, can't be updated easily.