It's super simple and it works for you - yay, great!
If it continues to work for you - double yay!
However, as soon as I read the post, I saw a lot of red flags:
- Do you really want to copy from a development computer to production? No staging at all? ("go test" doesn't mean that you have 0 bugs.)
- Are you really sure that everything works exactly the same on different versions of Go? (Hey, a new guy in your company just installed the unreleased Go 1.17, built on his laptop and pushed to production.)
- That VM with systemd died at 1am. No customers for you until 7am (when you wake up).
BTW, I am not saying that you should do Docker or CI/CD. What I am saying is that when you cut too much out of your process, you increase risk. (As an example, you didn't remove the unit-test part. Based on "Anything that doesn’t directly serve that goal is a complication", you probably should have. However, you decided that it would be way too much risk.)
Exactly my reasons: I don't use Docker because it's great that I can (could?) scale the universe.
In my case, I simply use Docker because it is so easy to set up n HTTP services listening on ports 80/443 on the same server and then put a reverse proxy (Traefik) with Let's Encrypt in front of them. I don't need to worry about port conflicts. I don't need to worry about running Nginx and Apache on the same host. I don't need to worry about multiple versions of Go/PHP/Dotnet/[insert lang here].
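For illustration, a minimal sketch of that setup, assuming Traefik v2 and docker-compose; the image names, domain, and email are placeholders, not the parent's actual config:

```yaml
# Hypothetical docker-compose.yml: Traefik terminates TLS (Let's Encrypt) and
# routes by hostname, so the services never fight over ports 80/443.
version: "3.8"
services:
  traefik:
    image: traefik:v2.4
    command:
      - --providers.docker=true
      - --entrypoints.websecure.address=:443
      - --certificatesresolvers.le.acme.tlschallenge=true
      - --certificatesresolvers.le.acme.email=admin@example.com
      - --certificatesresolvers.le.acme.storage=/letsencrypt/acme.json
    ports:
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./letsencrypt:/letsencrypt

  app1:  # one of the n services; add app2, app3, ... the same way
    image: registry.example.com/app1:latest
    labels:
      - traefik.enable=true
      - traefik.http.routers.app1.rule=Host(`app1.example.com`)
      - traefik.http.routers.app1.entrypoints=websecure
      - traefik.http.routers.app1.tls.certresolver=le
```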
Still, I can't scale (single machine) but I don't need to. I don't have a failover, because I don't need one. But I have so much simpler management of the dozen services I run on the host. And that's worth it IMHO.
I think it's always about the right tool for the job. And I think if the OP does work with an automated script and scp, there's nothing wrong with that. Because that also adds reproducibility to the pipeline, and that's just such an important point. As long as nobody ssh's into prod and modifies some file by hand.
100%. For the startup we're starting/working on now, we're running microk8s on bare metal (dedicated servers).
What you describe is a big reason for it. Once you have k8s set up, services can be very easily deployed with auto TLS, and basic auth or oauth2 is really simple as well.
So we're big believers in vertical scaling, but still use k8s (microk8s or k3s) for these kinds of benefits. An additional benefit of this is that it makes scaling/transitioning to a bigger/managed k8s easy down the road.
It might sound like overkill, but it takes about 10 minutes to set up microk8s on an Ubuntu LTS server. It comes with an nginx ingress. Install cert-manager (another 5 mins) and you don't even need traefik these days. All configs are kept in Git.
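For a sense of what "auto TLS" means once cert-manager and the nginx ingress are in place, a hedged sketch of a single Ingress; the app name, hostname, and issuer name are assumptions:

```yaml
# Hypothetical Ingress: cert-manager sees the annotation, solves the ACME
# challenge, and keeps the certificate in the named secret.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx   # microk8s' bundled ingress may use another class name, e.g. "public"
  tls:
    - hosts:
        - app.example.com
      secretName: my-app-tls
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80
```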
After you've spent weeks/months reading blogs, installing dependencies, tweaking parameters, playing around with deployments, figuring out new names for the same old stuff, and debugging that extra whitespace in the YAML file, only to realize that, oh, you could have used Helm charts for deployments, which look eerily similar to how JSP looked 15 years ago, along with everything that was wrong with templating deployments rather than scripting them.
Then it takes about 4 mins.
And now you get to do brown bag sessions with your baffled team members! Yay!
But only until Kubernetes evicts your pods for being too resource "hungry". Gotta keep prod up, folks. Better grab a coffee and kubectl ("kube-cuttle", is it?) into prod to check why the pods have restarted 127 times.
All these “it takes 10 minutes” should really be “it takes 10 minutes plus several hours/days/weeks actually learning how to run and maintain all this stuff”.
Except now your system can handle the inevitable server going down without taking the entire site offline. It DOES solve the problem of a single host failure causing an outage. Yes, there are other types of outages you can have, but it certainly does reduce the occurrence of outages significantly.
Are you really trying to suggest that people can't use Kubernetes to increase their reliability?
Yeah, I guess I am. It's adding whole layers of complexity and configuration to the system. I understand that those layers of complexity and configuration are designed to make the system more resilient, but it depends on everyone getting everything right all the time. The "screw-up surface" is huge.
Ever seen a large system that has its own job server and scripts for orchestration/deployment? Application code that checks the status of its peers and runtime env to determine what should run? All glued together with decades-old Perl and Bash with no documentation.
Leave your nice clean K8s deployment paradise to cruft up for decades, and will it be any better? I doubt it - there'll be old Dockerfiles and weird bits of yaml that shouldn't work but do, and upgrading a version of anything will break random things.
So yes, I think I would prefer the decades of crufty perl and bash to decades of crufty outdated yaml. At least the bash scripts have a hope of doing what they say they do, and are likely to still execute as intended.
One can certainly create an HA cluster over some infrastructure set up by kubernetes, just as well as one can take a bunch of physical servers, set them up by hand, and create an HA cluster with them. K8s isn't adding anything to the availability.
> Docker alone cannot solve this category of issues anyway.
Docker does come with an orchestrator out of the box; it's called Docker Swarm. You may not use it, but it's there, and it's up to you whether to use it or not. It's extremely simple to set up: a single command on the manager and another one on the worker. It supports healthchecks, replication, etc., all super simple to set up too.
Sure, doing all of this will take, what, 30 minutes instead of the 5 he took for his deployment? But it does solve that issue, natively, out of the box.
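Roughly, and hedging on exact flags, addresses, and image names (all placeholders), those 30 minutes look like:

```sh
# On the manager:
docker swarm init --advertise-addr 10.0.0.1

# On each worker (docker swarm init prints the exact join command and token):
docker swarm join --token <worker-token> 10.0.0.1:2377

# A replicated service with a healthcheck:
docker service create --name web --replicas 2 \
  --publish published=80,target=8080 \
  --health-cmd 'wget -qO- http://localhost:8080/healthz || exit 1' \
  --health-interval 10s \
  registry.example.com/app:latest
```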
Oh, and my Docker image always has the "blessed" [insert environment here] version, so everyone always uses it while testing locally. If you need to update it, anyone can do it easily, without any knowledge of the build server environment, nor any special access to it.
- Staging: The world ran well with people pushing PHP files onto live environments, and it will continue to run well long after Docker gets replaced with something else.
- Versioning: It's pretty easy to ensure you have the same versions on the same platform.
- Systemd: None of this means he does not have Pagerduty or similar setup.
Why do I say all of this? Because I ran really good businesses with similar architecture to his back in the day. Sure, I run Docker now, but sometimes we tend to overcomplicate things.
If you have one app and one server, there is no good reason to run a layer of Docker on it. None.
Elixir, Ruby, PHP, Node -- if your business has a monolith and can run on one server, guaranteed there is less to worry about when you remove Docker.
> THe world ran well with people pushing PHP files onto live environments
No, it didn't. The world didn't fall apart, but it absolutely burned out lots of people who had to deal with this irresponsible way of doing things.
The way we do things is much, much better now. It is more complex, but that can be worth it. Don't romanticize a past that was absolute hell for a lot of people.
Source: inherited and maintained many of these dumpster fires.
No, but they will have some combination of declarative infrastructure, build scripts with error messages, and Docker images as a starting point.
I still maintain some of my 10-year-old code, by the way. Once I got it to build and deploy with modern tools, it has been much, much easier to keep it updated with patches and the latest server OS.
>Elixir, Ruby, PHP, Node -- if your business has a monolith and can run on one server, guaranteed there is less to worry about when you remove Docker.
For Ruby at least, you run into the problem of keeping all the development environments the same. It's not insurmountable by any means, but it's a constant nagging annoyance. Especially so once we start talking about working on multiple projects that may be using different versions of Ruby, Postgres, etc. Being able to do a docker-compose up and have the exact same environment as production is huge.
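As an illustration of that "exact same environment" point, a minimal docker-compose.yml sketch; the versions and service layout are assumptions, not the parent's actual setup:

```yaml
version: "3.8"
services:
  app:
    image: ruby:3.0          # pin the Ruby version the project expects
    working_dir: /app
    volumes:
      - .:/app
    command: bash -c "bundle install && bundle exec rails server -b 0.0.0.0"
    ports:
      - "3000:3000"
    environment:
      DATABASE_URL: postgres://postgres:postgres@db:5432/app_development
    depends_on:
      - db

  db:
    image: postgres:13       # pin Postgres per project too
    environment:
      POSTGRES_PASSWORD: postgres
```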
Their artifact is a single statically linked binary/executable, not a Docker container. They can build that binary once and pass it along a deployment pipeline, i.e. dev, test, prod, changing config parameters for each environment but running the exact same code.
Systemd, just like the various container runtimes, supports auto restarts + logging. You can have the same alerting tools hang off your logs etc. etc.
The fact they are not using Docker does not mean they can't have a proper build/deployment pipeline. The fact they are dealing with a single static executable makes building a pipeline and robust deployment far simpler as they have far fewer moving pieces.
If the author was deploying python, javascript, ruby apps where you don't get a static executable artifact or fat jar with all dependencies bundled then Docker would make sense.
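A rough sketch of that setup as a systemd unit; the unit name, paths, and EnvironmentFile are assumptions:

```ini
# /etc/systemd/system/myapp.service (hypothetical)
[Unit]
Description=myapp
After=network.target

[Service]
# The same static binary in every environment; only the env file differs.
ExecStart=/opt/myapp/myapp
EnvironmentFile=/etc/myapp/env
User=myapp
Restart=on-failure
RestartSec=2
# stdout/stderr go to the journal, so alerting tools can hang off journalctl.

[Install]
WantedBy=multi-user.target
```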
I've been struggling with this for years now. Every time I ask the question "why do I need Docker when Go produces static binary executables?" I get some reasonable answers, but nothing along the lines of "you can't do X without Docker".
I totally grok the need when your deployable is a few hundred script files for a very specific runtime and a large set of very exact dependencies. But that's not the situation with Go.
And now //go:embed too. Adding all my templates, static files, everything, to the binary. Ship one file. Awesome.
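A minimal sketch of what that looks like (Go 1.16+); the directory names are placeholders:

```go
package main

import (
	"embed"
	"html/template"
	"io/fs"
	"net/http"
)

//go:embed templates/*.html
var templateFS embed.FS

//go:embed static
var staticFS embed.FS

func main() {
	// Templates and static assets are compiled into the binary itself,
	// so `CGO_ENABLED=0 go build -o app .` still produces one file to ship.
	tmpl := template.Must(template.ParseFS(templateFS, "templates/*.html"))

	staticFiles, _ := fs.Sub(staticFS, "static")
	http.Handle("/static/", http.StripPrefix("/static/", http.FileServer(http.FS(staticFiles))))

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		tmpl.ExecuteTemplate(w, "index.html", nil)
	})

	http.ListenAndServe(":8080", nil)
}
```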
Yeah, there isn't really anything that you just can't do at all without Docker. The question is whether it's a net positive or negative on your architecture as a whole.
E.g., I deploy some Go apps using Docker (basically build the executable and then build a Docker image that just contains that executable). Looking at just the single application, it's pure overhead to add Docker vs just deploying the binary executable somewhere and running it.
But in the overall context of my setup, since I'm running other apps as well that are written in other languages/frameworks and have different characteristics, it's a huge net positive for me to have a single uniform interface for deploying and running them. Everything gets pushed to the same container registry the same way, is versioned the same way, can handle service discovery the same way (by listening on the docker port), can do canary deploys the same way, etc. I can use docker-compose to bring up a dev environment with multiple services with a single command. I can deploy the container to Cloud Run or ECS or an equivalent if that makes more sense than running servers.
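For context, the "Docker image that just contains that executable" pattern is usually a two-stage Dockerfile along these lines (tags and paths are assumptions):

```dockerfile
# Stage 1: build the static binary
FROM golang:1.16 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /out/app .

# Stage 2: ship only the binary
FROM scratch
COPY --from=build /out/app /app
ENTRYPOINT ["/app"]
```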
I've just been going through this process for my product now. I have a bunch of deployment scripts that control the test, build and deploy of the app.
I can't run them in a Docker container (the final hurdle was that Docker can't run systemd). So my choice was to either add Docker to my production servers, or drop Docker and use Vagrant instead as a localhost VM for dev.
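For what it's worth, the Vagrant route can be as small as this sketch (box, IP, and provisioning script are placeholders, not the commenter's actual setup):

```ruby
# Vagrantfile (hypothetical)
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/focal64"
  config.vm.network "private_network", ip: "192.168.56.10"
  config.vm.provision "shell", path: "scripts/provision.sh"
end
```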
Again, I couldn't see what Docker was adding to the mix that was of value - it would be an additional layer of configuration, complexity and failure on the production servers. It wouldn't save anything if the server went down, it would complicate attempts to restart the app if that crashed, and it gives us... what?
Again, I get it for Rails or Django builds (and similar) where the environment is complex and dependencies have to be managed carefully and can conflict horribly. But I just don't have that problem. And it's a real incentive to stick to the one language and not introduce any more dependencies ;)
In my opinion whether those are red flags depends entirely on the context. How many people work on the project, code base size and age, etc.
I feel these days projects are often starting out with way too much complexity and shiny tools. It should not be about cutting out things, but instead about adding things at the point they really add value.
None of the points you list have anything to do with Docker.
1. No, I don't. I still would potentially not use Docker in many cases (but I would use CI, which might or might not run in a Docker image; that's not the same as deploying Docker images).
2. Depends on the language I'm using; for some languages I would be afraid of accidental incompatibilities. For others I'm not worried, and would be fine if roughly the same OS is used in CI and production.
3. That can happen with Docker too; on the other hand, VM or non-VM auto restarts exist independent of Docker. I'm not sure why you mention systemd here - it has very reasonable "auto restart if not alive" features.
Though in the end I'm increasingly drifting toward using images, I really don't want to use Docker in production. But then I can do what people normally expect from "using Docker" without Docker, e.g. by using podman or other less "root"-heavy ways to run containers, with appropriate tooling for reliability (which, yes, can be systemd + rootless podman in some cases).
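A hedged sketch of that route, assuming rootless podman and a container named "app" (image name is a placeholder):

```sh
# Run the container rootless, then let user-level systemd supervise it.
podman run -d --name app -p 8080:8080 registry.example.com/app:latest

mkdir -p ~/.config/systemd/user
podman generate systemd --new --name app > ~/.config/systemd/user/container-app.service
systemctl --user daemon-reload
systemctl --user enable --now container-app.service

# So the user services keep running after logout/reboot:
loginctl enable-linger "$USER"
```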
They have a CI job that builds and pushes code. So it does not matter what the new guy did on his laptop.
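As a sketch of that kind of CI job, assuming GitHub Actions and an scp/systemd deploy; the hostnames, secrets, and paths are placeholders, not the author's actual pipeline:

```yaml
name: build-and-deploy
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-go@v2
        with:
          go-version: "1.16"
      - run: go test ./...
      - run: CGO_ENABLED=0 go build -o app .
      # The binary is built here, never on anyone's laptop.
      - run: |
          echo "${{ secrets.DEPLOY_SSH_KEY }}" > key && chmod 600 key
          scp -i key -o StrictHostKeyChecking=no app deploy@prod.example.com:/opt/app/app.new
          ssh -i key -o StrictHostKeyChecking=no deploy@prod.example.com \
            'mv /opt/app/app.new /opt/app/app && sudo systemctl restart app'
```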
I am not sure if you are going for “monitoring” or “redundancy” in your dead VM example, but docker by itself cannot provide either of those. You need some solution either way.
I use Docker daily because our app is big and has tons of system dependencies - I have no other choice. We need fast version rollback/update, and the regular approach of installing deb packages will not work well. I dream of NixOS and Sylabs, but my org is not going to switch to a radically new technology. But if someone can set up their system so they don't need Docker - more power to them, I can only be envious.
It is not NixOS vs Docker, it is NixOS vs Docker/Ubuntu.
Most of the day-to-day OS problems don't come from Docker; they come from the base Linux distribution. And Ubuntu was released in Oct 2004, and it is in big part Debian, which was released in 1993.
There is a big advantage when you run the same OS on developers' desktops and on the server. It is also great when all sorts of weird third-party packages already know and support your OS.
This is the advantage of Docker I suppose -- it does not "get in your way" and lets you use the same things you used to do before it.
> I am not sure if you are going for “monitoring” or “redundancy” in your dead VM example, but docker by itself cannot provide either of those. You need some solution either way.
You are the second one to say this here... That explains why Docker Swarm lost traction versus Kubernetes; it's a marketing issue.
I just set up a Docker Swarm cluster recently; in 5 minutes it was redundant. I didn't add any other software, just the basic docker swarm commands that ship with it.
I actually didn't need the redundancy part at all, but I wanted a second server that could ping the first one and send me an alert. I was going to simply put it on a machine with systemd, like him, but it was just as easy to run it on the machine using Docker. Hell, I could run it on both even more easily than doing that twice with systemd... I don't even know how to use systemd, now that I mention it.