Now if we could just get some sort of hash consensus around what is in root.tar.xz. I feel like we are all blindly trusting large binary blobs as the core of our systems without any reproducible builds or peer auditing.
You might be interested in distroless[1] base images.
The repo links to a talk that goes into more depth, but the basic idea is to use a minimal language-specific base for your runtime instead of e.g. statically linking all of Ubuntu into your image.
The base images are built with bazel's docker rules[2], so you get reproducible builds.
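Roughly, the distroless idea is what you get from a multi-stage Dockerfile like this sketch (image tags and paths are my own illustration, not from the repo):

```dockerfile
# Build stage: compile a static Go binary from the project source.
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app .

# Runtime stage: the distroless base has no shell and no package
# manager, so the final image is essentially just your binary.
FROM gcr.io/distroless/static-debian12
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```

The attack surface of the final image is tiny compared to a full distro base, which is the whole point.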
I don't run Docker in production, but I'd say it's the infrastructure. Docker images seem to be turning into the universal package format for distribution, CI, orchestration, resource limiting, etc. If you need to run a Go service which you need to scale horizontally and mix with other projects (possibly dependencies), it's just easier to stuff your binary into a Docker image.
But let's think logically: with Go, you have a single binary file that will run on basically any distribution of Linux, with no external dependencies.
With Docker, you need a lot more than that, and in the case of a Go binary, you have no benefit.
I'd suggest reading through https://thehftguy.com/2017/02/23/docker-in-production-an-upd... for an idea of "Docker in production". Sure, we aren't all running HFT systems, but the issues he documents aren't really specific to HFT - they're more related to having a piece of software you can rely on to work.
Can containers in the generic sense be a useful tool for certain tasks? Sure.
Is Docker the "omg lets put bread around this meat and call it a sandwich" epic moment? No.
The rise in mindshare of Docker is IMO not coincidentally linked to the rise of the bad kind of DevOps: where management fires ops, and gets developers to run their infrastructure.
"I don't need to understand how <insert common Linux infrastructure software> works, I can just run 2 docker commands and it will download me a working image from the internet. What do you mean who created the image and can I trust it? This is the Internet, of course anything I download is trustworthy."
What you say is true for container images, and putting an ELF binary as the only file in an image doesn't make much sense if your only purpose is to run it without any other requirements.
However don't forget that Docker also gives the user an interface for running processes in a namespace and cgroup!
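The usual cgroup and namespace knobs are just flags on `docker run` (the image name here is made up, the flags are real):

```shell
# --memory / --cpus / --pids-limit apply cgroup limits;
# --network none puts the process in an isolated network namespace.
docker run \
  --memory 512m \
  --cpus 1.5 \
  --pids-limit 100 \
  --network none \
  myservice:latest
```

Doing the same by hand with cgroupfs and `unshare` is possible, but it's a lot more work than one command.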
People have been running servers and installing software for a long time before Docker came around, and will be for a long time after its flavour-of-the-month appeal wears off.
npm already installs dependencies locally by default.
I already generally avoid Java apps, for bigger reasons than packaging, but setting a custom $CLASSPATH for the JVM to load your dependencies isn't really that difficult.
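Something like this, assuming your dependency jars sit in a `lib/` directory (all names here are illustrative):

```shell
# Put the app jar and its dependencies on the classpath explicitly
# (the '*' wildcard matches every jar in lib/):
java -cp "app.jar:lib/*" com.example.Main

# or equivalently via the environment variable:
export CLASSPATH="app.jar:lib/*"
java com.example.Main
```

Hardly rocket science, and no container required.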
You're setting up a pretty big strawman. You asked why would someone use Docker for Go, I replied to that. I'm fully aware of the problems Docker has had and that's why I avoided using it when I set up our production SaaS platform, preferring to use just Ansible and Upstart.
And you write "in the case of a Go binary, you have no benefit", but completely failed to respond to the benefit I gave. It seems like you were just goading someone to justify a rant. That's not nice.
> but completely failed to respond to the benefit I gave
You mean this:
> Docker images seem to be turning into the universal package format for distribution, CI, orchestration, resource limiting, etc
That isn't a "benefit". That's "some people are using it for some things". The ridiculous part is that the way e.g. CI shops use Docker goes against the whole point of Docker. They use Docker to provide lightweight VMs that the project build/test/whatever script can run in (which usually involves installing build/test dependencies), when Docker's whole "thing" is one process per container.
Why CI shops don't use LXC/LXD for that is beyond me.
> It seems like you were just goading someone to justify a rant. That's not nice.
Not at all. You mentioned using Docker with Go. I asked why, and you responded "I don't use it in production... but". Which is what my whole point was about.
> That isn't a "benefit". That's "some people are using it for some things".
It's not something intrinsic to Docker, but you asked "what exactly is the point of docker with a golang project?" and being able to use those things out-of-the-box is absolutely a reason to use Docker. Something like Kubernetes is not easily replaced in-house.
> They use Docker to provide lightweight VMs that the project build/test/whatever script can run in (which usually involves installing build/test dependencies), when Docker's whole "thing" is one process per container.
Thanks to layering, you can install those build/test scripts and dependencies without affecting the base image.
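E.g. a CI image might look like this sketch (the package list and script are hypothetical), where the `ubuntu` base layer itself is never modified, only built on top of:

```dockerfile
# Each RUN/COPY adds a new layer on top of the shared base layer,
# so the base image is reused from cache across every project.
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y --no-install-recommends \
    build-essential git ca-certificates
COPY run-tests.sh /usr/local/bin/run-tests
CMD ["run-tests"]
```

Rebuilding after a change to `run-tests.sh` only recreates the top layers; the base and the apt layer stay cached.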
Also, they don't use LXC/LXD because everyone else is using Docker! Like I said, there's a real advantage to having a single standard image.
> Not at all. You mentioned using Docker with Go. I asked why, and you responded "I don't use it in production... but". Which is what my whole point was about.
Like I said, I don't think your previous post actually replied to mine. Only this one did.