I've wondered a lot about what the value prop of Docker is for simple Go applications and agree that if you can get away with it, ignoring Docker can be a good way to go.
One thing a Docker container gets you for a simple Go app is the ability to run it on any platform that runs container images (like ECS); that's more flexibility than you get with something that needs a proper VPS to run. Everything knows how to run a Docker image.
But if you don't care about stuff like ECS, and you're not trying to harden or isolate your runtime, and you don't do anything funky with networking --- this describes lots of apps --- sure!
For simple applications Docker doesn't provide much value at all. Unless of course you have several of these simple applications, each with different requirements, none of them large enough to efficiently utilize an entire VM.
Applications that can be distributed as a single artifact -- a Go binary, a Spring Boot jar, or a C program packaged as a static binary or a .deb/.rpm -- may not benefit much from Docker.
Where Docker does help is when it comes to distributing applications written in languages that honestly don't have great ways of distributing them. As much as I love Python, deploying to production can be a little tricky. Docker allows you to know exactly what is currently in production.
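As a hedged sketch of what that looks like for Python (the file names and app layout here are made up), a Dockerfile freezes the interpreter version and the dependency set into a single artifact, so what's in production is exactly what was built:

    # Hypothetical Python service; pin exact versions in requirements.txt
    # for a reproducible image.
    FROM python:3.12-slim
    WORKDIR /app
    COPY requirements.txt .
    RUN pip install --no-cache-dir -r requirements.txt
    COPY . .
    CMD ["python", "main.py"]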
For many developers and operations people, the thing they really want is the orchestration tooling Docker/containerization brings. Even if it's just docker-compose, you're now easily able to recreate entire stacks in different environments. That provides real value in almost all cases.
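For example, a minimal docker-compose sketch (the service and image names are made up) that recreates an app-plus-database stack with one command anywhere Docker runs:

    # docker-compose.yml -- hypothetical two-service stack
    services:
      web:
        image: example/myapp:v1
        ports:
          - "8080:8080"
        environment:
          DATABASE_URL: postgres://app@db:5432/app  # "db" resolves via compose DNS
        depends_on:
          - db
      db:
        image: postgres:16
        volumes:
          - dbdata:/var/lib/postgresql/data
    volumes:
      dbdata:

`docker compose up` brings up the same stack on a laptop, a CI runner, or a staging box.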
> One thing a Docker container gets you for a simple Go app is the ability to run it on any platform that runs container images (like ECS); that's more flexibility than you get with something that needs a proper VPS to run. Everything knows how to run a Docker image.
It is kind of bonkers that we've gotten to a stage where more things know how to run a container than know how to run a statically linked binary.
"Running a binary" is still super easy. All you need to do is take account of log management, process isolation, networking configs (port forwarding), service discovery, health checking, data storage folders, fault-tolerant rollbacks, binary placement (putting the x86 binary on the x86 machine & arm on the arm machine), config files, environment variables, the working directory the process needs, etc.
Anything that "runs a binary somewhere" will essentially become Docker given enough time. Docker is just "run this binary somewhere" with everything above packaged into it.
Docker attempts to do all of these things. It may not do them well, but it provides a framework/opinion on how this should all be managed:
1. Binary + configs (PWD, files, ENV): baked into the image via its Dockerfile.
2. Secrets: passed in via env vars at runtime.
3. Service discovery: docker-compose exposes each container's name as a hostname and provides a DNS record for it, available in all netns connected to the same network.
4. Fault-tolerant rollbacks: if you tag images correctly and you're upgrading from v1 -> v2, you can just point the deployment back at :v1 to roll back to the previous version, which will still be cached on the machine; even if your registry is down, you can still roll back (see the sketch after this list).
5. Process isolation: this is exactly what Docker is: process isolation implemented in the kernel (sometimes).
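A hedged sketch of the rollback point (the image and registry names are made up). Because the v1 layers are still in the local cache, the rollback needs nothing from the registry:

    # deploy v2 of a hypothetical service
    docker pull registry.example.com/myapp:v2
    docker stop myapp && docker rm myapp
    docker run -d --name myapp -p 8080:8080 registry.example.com/myapp:v2

    # roll back to v1 -- no pull needed, the v1 image is still cached locally
    docker stop myapp && docker rm myapp
    docker run -d --name myapp -p 8080:8080 registry.example.com/myapp:v1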
Docker doesn't have any infrastructure for managing configs. Nor secrets. It doesn't have log shippers or storage -- and "to file or syslog over TCP" are definitely not the recommended ways! The only thing it really "does" there is process isolation.
If everything compiled to a statically linked binary, sure. But most things don't compile to static binaries, and platforms are built to run things in all kinds of languages.
Meanwhile: Docker can be pretty insane as a runtime (we don't use it for that!) but it's not an insane packaging format.
Well, turning a static binary into a Docker image is basically just adding some metadata on how to run it, whereas turning an image into a static binary is much harder, so it makes sense the world standardized on the more flexible format.
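Concretely, a hedged sketch of that metadata (the binary name and port are made up); the image is the static binary plus a few instructions on how to run it, and nothing else:

    # wrap a hypothetical statically linked binary "myapp"
    FROM scratch
    COPY myapp /myapp
    EXPOSE 8080
    ENTRYPOINT ["/myapp"]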
This was exactly my thought. Sure you don't need Docker. But if you want to effortlessly ship your application to a number of cloud platforms where you don't need to concern yourself with the host-level security aspects, it's hard to beat a Docker image.