I have been knee-deep in the deployment space for the past 4 years. It is a pretty hard problem to solve to the n-th degree. Here's my 2 cents.
Single-machine deployments are generally easy; you can do them DIY. The complexity arises the moment you add another machine to the setup: scheduling workloads, networking, and provisioning, to name a few, start becoming complicated.
From my perspective, Kubernetes was designed for multiple teams working on multiple services and jobs, making operations largely self-service. So I can understand the anti-Kubernetes sentiment.
There is a gap in the market between simple VM-oriented deployments and Kubernetes-based setups.
IMO the big draw of running K8S on my home server is the unified API. I can take my Helm chart and move it to whatever cloud super easily and tweak it for scaling in seconds. This solution from the post is yet another config system to learn, which is fine, but is sort of the antithesis of why I like K8S. I could see it being theoretically useful for someone who will never use K8S (e.g. not a software engineer by trade, so will never work a job that uses K8S), but IMO those people are probably running VMs on their home servers instead, since how many non-software engineers are going to learn and use docker-compose but not K8S?
Anecdotal, but everyone I know running home lab setups who isn't a software guy is doing vSphere or Proxmox or some equivalent for their home use cases. But I know a lot of old-school sysadmin guys, so YMMV.
I agree with you. It is an antithesis; that is why it is marketed as an anti-Kubernetes toolset.
You cannot avoid learning k8s; you will end up encountering it everywhere, whether you like it or not. It has been the tech buzzword for the past few years, along with cloud native and DevOps.
I really think that if you wish to be a great engineer and truly appreciate the newer generation of tools, you have to go through the route of setting up a Proxmox cluster, loading images, building those VM templates, etc. By jumping directly to containers and the cloud you kind of skip steps. That is not bad, but you do miss out on a few foundational concepts around networking, operating systems, etc.
The way I would put it: a chef who also farms their own vegetables (i.e., setting up your own clusters and deploying your apps) vs. a chef who goes to a high-end wholesaler to buy premium vegetables and doesn't care how they were grown (i.e., developers using Kubernetes, container orchestration, and PaaS).
I’ve been working on using k3s for my home cluster for this exact reason. I run it in a VM on top of Proxmox, using Packer, Terraform, and Ansible to deploy. My thought process here is that if I ever want to introduce more nodes or switch to a public cloud, I could do so somewhat easily (either with a managed k8s offering, or just by migrating my VMs). I’ve also toyed with the idea of running some services on public cloud and some more sensitive services on my own infra.
I have been running k3s on a DigitalOcean droplet, and I would say k3s has really given me an opportunity to learn some k8s basics without truly having to install and stand up every single component of a usable k8s cluster (ingress controller, etc.) on my own.
It took a bit to figure out how to set up an HTTPS cert provider, but then it was pretty much off to the races.
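For anyone curious, getting started is basically the official one-liner (plus whatever server flags you want, e.g. to disable the bundled Traefik ingress):

```sh
# Official quick-start installer; worth reading the script before piping it to sh.
curl -sfL https://get.k3s.io | sh -

# k3s bundles kubectl, so you can sanity-check the single-node cluster right away.
sudo k3s kubectl get nodes
```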
I use kind with podman running rootless; it only works on systems with cgroup2 enabled, but it's very cool. Conventional k8s with Docker has a number of security gotchas that stem from it effectively running the containers as root.
With rootless podman k8s, it is easy to provide all your devs with local k8s setups without handing them root/sudo access to run it. This is something that has only recently started working right as more container components and runtimes started to support cgroup2.
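The basic flow looks roughly like this, assuming cgroup2 and a reasonably recent rootless podman are already set up (the provider variable is still marked experimental by the kind project, last I checked):

```sh
# Point kind at the (rootless) podman provider instead of Docker.
export KIND_EXPERIMENTAL_PROVIDER=podman

# Create a local cluster as an unprivileged user -- no root/sudo needed.
kind create cluster --name dev

# Check the cluster from the same user account.
kubectl cluster-info --context kind-dev
```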
Agreed, but I made this because I couldn't find a simple orchestrator that used some best practices even for a single machine. I agree the problem is not hard (Harbormaster is around 550 lines), but Harbormaster's value is more in the opinions/decisions than the code.
The single-file YAML config (so it's easy to discover exactly what's running on the server), the separated data/cache/archive directories, the easy updates, the fact that it doesn't need pre-built images but builds them on the fly: those are the big advantages, rather than the actual `docker-compose up`.
What is your perspective on using multiple Docker Compose files, where you can do `docker-compose -f <file name> up`? You could organize it so that all the files live in the same directory. Just wondering.
That's good too, but I really like having the separate data/cache directories. Another issue I had with the multiple Compose files is that I never knew which ones I had running and which ones I decided against running (because I shut services down but never removed the files). With the single YAML file, there's an explicit `enabled: false` line with a commit message explaining why I stopped running that service.
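To illustrate (a simplified sketch, not the exact schema; the key names here are approximations, see the Harbormaster docs for specifics), the file ends up reading like an inventory of everything on the box:

```yaml
# Simplified, illustrative single-file app inventory; key names are approximate.
apps:
  gitea:
    url: https://github.com/example/gitea-compose   # repo containing the Compose file
  grafana:
    url: https://github.com/example/grafana-compose
    enabled: false   # kept in the file, with the commit message explaining why it's off
```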
Maybe I'm missing something, but I often go the route of using multiple Compose files and haven't had any issues with using different data directories; I just mount the directory I want for each service, e.g. `/opt/acme/widget-builder/var/data`.
I understand your problem. I have seen people solve that with `docker_compose_$ENV.yaml`: you set the ENV variable and the appropriate file gets used.
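Something like this is the usual shape of that pattern (file and project names are just placeholders):

```sh
# Pick the Compose file based on an environment variable, e.g. ENV=staging or ENV=prod.
ENV="${ENV:-dev}"

# -f selects the file; -p namespaces the project if several environments share one host.
docker-compose -f "docker_compose_${ENV}.yaml" -p "myapp_${ENV}" up -d
```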
Secondly, there's also Hashicorp Nomad ( https://www.nomadproject.io/ ) - it's a single executable package which allows similar setups to Docker Swarm, integrates nicely with service meshes like Consul ( https://www.consul.io/ ), and also allows non-containerized deployments to be managed, such as Java applications and others ( https://www.nomadproject.io/docs/drivers ). The only serious downsides are having to use the HCL DSL ( https://github.com/hashicorp/hcl ) and their web UI being read-only in the last versions that I checked.
There are also some other tools available, like CapRover ( https://caprover.com/ ), but many of those use Docker Swarm under the hood and I personally haven't used them. Of course, if you still want Kubernetes but implemented in a slightly simpler way, there's also the Rancher K3s project ( https://k3s.io/ ), which packages the core of Kubernetes into a smaller executable and uses SQLite by default for storage, if I recall correctly. I've used it briefly and the resource usage was indeed far more reasonable than that of full Kubernetes clusters (like RKE).
Wanted to second that Docker Swarm has been an excellent "middle step" for two different teams I've worked on. IMO too many people disregard it right away, not realizing that it is a significant effort for the average dev to learn containerization+k8s at the same time, and it's impossible to do that on a large dev team without drastically slowing your dev cycles for a period.
When migrating from a non-containerized deployment process to a containerized one, there are a lot of new skills the employees have to learn. We've had 40+ employees, all of whom basically have full plates, the mandate comes down to containerize, and all of these old-school RPM/DEB folks suddenly need to start doing Docker. No big deal, right? Except... half the stuff does not dockerize easily and requires slightly-more-than-beginner Docker skills. People will struggle and be frustrated.

Folks start with running one container manually, and quickly outgrow that and move to Compose. They almost always eventually use Compose to run stuff in prod at some point, which works, until that one server is full. This is the value of Swarm: letting people expand to multi-server and get a taste of orchestration, without needing them to install new tools or learn new languages. Swarm adds just one or two small new concepts (stack and service) on top of everything they have already learned. It's a godsend to tell a team they can just run swarm init, use their existing YAML files, and add a worker to the cluster.

Most folks then start to learn about placement constraints, deployment strategies, dynamic infrastructure like reverse proxies or service meshes, etc. After a bit of comfort and growth, a switch to k8s is manageable and the team is excited about learning it instead of overwhelmed. A lot (all?) of the concepts in Swarm are readily present in k8s, so the transition is much simpler.
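For anyone who hasn't seen it, the "just run swarm init" step really is about this small (stack and file names are placeholders):

```sh
# On the first server: turn it into a (single-node) swarm manager.
docker swarm init

# It prints a join token; on each additional server, paste the join command, e.g.:
#   docker swarm join --token <token> <manager-ip>:2377

# Deploy the team's existing Compose file as a "stack" of services across the cluster.
docker stack deploy -c docker-compose.yml myapp

# See what's running and where.
docker stack services myapp
```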
We currently have one foot in Docker Swarm (and single-node Compose), and are considering k8s. One thing I'm uncertain of is the state of shared storage/volumes in Swarm - none of the options seem well supported or stable. I'm leaning towards trying NFS-based volumes, but it feels like they might be fragile.
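What I have in mind is declaring the NFS mount via the local driver's options in the stack file, roughly like this (the server address and export path are placeholders), though I'd want to test failover behaviour carefully:

```yaml
version: "3.8"

# Rough sketch: an NFS-backed named volume shared by replicas across nodes.
volumes:
  appdata:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=192.0.2.10,rw,nfsvers=4"   # placeholder NFS server
      device: ":/exports/appdata"          # placeholder export path

services:
  app:
    image: example/app:latest
    volumes:
      - appdata:/var/lib/app
```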
Sure. Our solution so far has been both simple and pragmatic: the main DBs do not live inside containers. It's a bit of manual ops, but it works for us. All the 'media' in the stacks I am dealing with is minor enough to serve over a custom API, i.e. no massive image/audio/etc. datasets where files need to be first-class citizens.
We generally avoid mounting volumes at all costs. The challenge of mapping host uid:gid to container uid:gid (and keeping that mapping from breaking) proved painful and not worth the effort
Nomad also scales really well. In my experience, Swarm had a lot of issues with going above 10 machines in a cluster: stuck containers, containers that are there but Swarm can't see them, and more. But still, I loved using Swarm with my 5-node ARM cluster; it is a good place to start when you hit the limit of a single node.
> The only serious downsides are having to use the HCL DSL ( https://github.com/hashicorp/hcl ) and their web UI being read-only in the last versions that I checked.
1. IIRC you can run jobs directly from the UI now, but IMO this is kinda useless. Running a job is as simple as 'nomad run jobspec.nomad'. You can also run a great alternative UI ( https://github.com/jippi/hashi-ui ).
2. IMO HCL > YAML for job definitions. I've used both extensively and HCL always felt much more human-friendly. The way K8s uses YAML looks to me like stretching it to its limits, and it's barely readable at times once templates get involved.
One thing that makes Nomad a go-to for me is that it is able to run workloads pretty much anywhere: Linux, Windows, FreeBSD, OpenBSD, Illumos, and of course Mac.
Recently I've been moving both my personal stuff and work stuff to Nomad and HCL is just SO much nicer than the convoluted YAML files that Kubernetes needs.
I know HCL gets a lot of hate around here, but I find it just dandy and I'd rather write HCL over YAML any day of the week.
Also, just to put another thumbs up for Nomad: it's been absolutely fantastic for us and has been so much easier to manage and deploy. We've begun testing edge deployments with it as well, and so far it looks very promising.
Consul and Vault are two other Hashicorp products I can't live without at this point. Vault is probably one of the first things I deploy now.
Anyway, I'm rambling, but I think a lot of people just immediately jump to Kubernetes without ever really giving Nomad a look first.
Hmm, what are your thoughts on the Compose format (that's used in Docker Compose and Docker Swarm)?
I agree that the format Kubernetes uses is probably a bit overcomplicated, which is why I've also sometimes used the aforementioned Compose format with tools like Kompose to save some of my time.
In that regard, it's perhaps not the fault of YAML itself (even though the format has its own complexity problems), but rather of the tool using it.
What personally keeps me away from HCL is that a lot of the time you can find Compose files for various pieces of software online and only have to change some parameters to get stuff running, vs. having to rewrite all of it in a different format.
Yeah, you are right. It is not exactly YAML's fault but how it is used. I am OK with using it with Ansible/Salt/Compose. But k8s is stretching it to its limits and makes it cumbersome to use.
And yeah, there are not many examples of HCL files for different software, but most of the time there is not much to adapt. Usually when I need to run something, I just take some random HCL file I have and substitute the Docker image, edit the amount of resources, and add storage and config via templates. And done. It might be confusing/intimidating at first, but like with anything, after some time you should be fine.
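For illustration, a bare-bones Docker job is short enough that "adapting" one mostly means touching the image, ports, and resources (the names and sizes below are just placeholders):

```hcl
# Minimal Nomad job sketch; job/image names and resource sizes are placeholders.
job "whoami" {
  datacenters = ["dc1"]

  group "web" {
    network {
      port "http" {
        to = 80   # container port to expose
      }
    }

    task "server" {
      driver = "docker"

      config {
        image = "traefik/whoami:latest"   # swap in the image you actually want
        ports = ["http"]
      }

      resources {
        cpu    = 100   # MHz
        memory = 64    # MB
      }
    }
  }
}
```

Then it's just `nomad run whoami.nomad` and the scheduler takes it from there.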
I have been toying with the notion of extending Piku (https://github.com/piku) to support multiple machines (i.e., a reasonable number) behind the initial deploy target.
Right now I have a deployment hook that can propagate an app to more machines also running Piku after the deployment finishes correctly on the first one, but stuff like blue/green deployments and database migrations is a major pain and requires more logic.
Yes, I did look into Nomad. Again, specifying the application to deploy is much simpler than with Kubernetes. But from an operational point of view, I think you still have the complexity: it has concepts and abstractions similar to Kubernetes when you operate a Nomad cluster.
I think Juju (and Charms) really shine more with bare-metal or VM management. We looked into trying to use this for multi-tenant deployment scenarios a while ago (when it was still quite popular in the Ubuntu ecosystem) and found it lacking.
At this point, I think Juju is most likely used in place of other metal or VM provisioning tools (like Chef or Ansible) so that you can automatically provision and scale a system as you bring new machines online.
Sadly, there is very little activity aimed at bare metal and VMs nowadays. If you look at the features presented over the past couple of months, you will find mainly Kubernetes, and the switch from charms to operators. But kudos to the OpenStack charmers for holding on and doing great work.