Don't feel bad. You've discovered the charming little fact that the registry API will report compressed size and the Docker daemon will report uncompressed size.
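If you want to see both numbers for the same image, something like this works (assuming a reasonably recent Docker CLI; the tag is just an example):

    # what the registry serves: per-layer sizes are the compressed blobs
    docker manifest inspect -v ubuntu:24.04 | grep size

    # what the daemon reports after pulling: the unpacked size on disk
    docker image inspect ubuntu:24.04 --format '{{.Size}}'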
You're only paying that 40MB once, though. Multiple containers sharing the same parent layers will not require additional storage for the core OS layer.
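If you want to convince yourself of the sharing on a given host, a couple of read-only checks (output varies a bit by Docker version; app-a and app-b are made-up names):

    # per-image SHARED SIZE vs UNIQUE SIZE on this host
    docker system df -v

    # images built FROM the same base list identical layer digests here
    docker image inspect app-a:latest app-b:latest --format '{{.RootFS.Layers}}'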
Do you get all developers to agree on which base image to build all their services from?
I've heard about this "oh, it's shared, don't worry" thing before. It started with 40MB. Now that supposedly shared image is half a gig. "Don't worry, it's shared anyway." Except when it isn't. And when it is, it still slows us down in bringing up new nodes. And guess what, it turns out that not everyone is starting from the same point, so there is a multitude of 'shared' images now.
Storage is cheap, but bandwidth may not be. And it still takes time to download. Try to keep your containers as small as possible for as long as possible. Your tech debt may grow slower that way.
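For what it's worth, multi-stage builds are the cheapest lever for staying small; a minimal sketch, using a Go service purely for illustration:

    # build stage: the toolchain lives only here
    FROM golang:1.22 AS build
    WORKDIR /src
    COPY . .
    RUN CGO_ENABLED=0 go build -o /out/app .

    # runtime stage: only the static binary ships
    FROM alpine:3.19
    COPY --from=build /out/app /usr/local/bin/app
    ENTRYPOINT ["/usr/local/bin/app"]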
As it happens, you're describing one of the motivations for Cloud Native Buildpacks[0]: consistent image layering leading to (very) efficient image updates.
Images built from Dockerfiles can do this too, but it requires some degree of centralisation and control. Recently folks have done this with One Multibuild To Rule Them All.
By the time you're going to the trouble of reinventing buildpacks ... why not just use buildpacks? Let someone else worry about watching all the upstream dependencies, let someone else find and fix all the weird things that build systems can barf up, let someone else do all the heavy testing so you don't have to.
Disclosure: I worked on Cloud Native Buildpacks for a little while.
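For anyone who hasn't tried them, day-to-day usage is roughly this (the app name is made up and the builder is just one example, not a recommendation):

    # build an OCI image without writing a Dockerfile
    pack build my-app --builder paketobuildpacks/builder-jammy-base

    # later: swap in a patched run image without rebuilding the app layers
    pack rebase my-app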
In production, the smallest box has half a gig of RAM.
In development, it's indeed a single box, usually a laptop.
> all developers to agree on which base image to build all their services from
Yes. In a small org it's easy. In a large org devops people will quickly explain the benefits of standardizing on one, at most two, base images. Special services that are run from a third-party image are a different beast.
Technically it's closer to 8x, which is quite a bit. That said, even if it's very different in relative terms, in absolute terms it's 40MB, which is very little even if you have to transfer it up.
The base image may be small, but all the packages and metadata are large, and the dependencies are many. Alpine always leans to conservative options and intentionally removes mostly-unnecessary things. So average image sizes are higher with an Ubuntu base compared to Alpine.
As far as I can tell, the recommended way to run several processes in an Ubuntu container is under supervisord. The default Ubuntu containers don't even include an init system.
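A rough sketch of that setup, with made-up program names (exact paths depend on the Ubuntu release and the supervisor packaging):

    # Dockerfile
    FROM ubuntu:24.04
    RUN apt-get update && apt-get install -y supervisor && rm -rf /var/lib/apt/lists/*
    COPY app.conf /etc/supervisor/conf.d/app.conf
    CMD ["supervisord", "-n", "-c", "/etc/supervisor/supervisord.conf"]

    # app.conf
    [program:web]
    command=/usr/local/bin/web-server
    [program:worker]
    command=/usr/local/bin/worker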
I've only ever done it when I needed to run exactly two processes: porting an older piece of software that was not originally designed to run inside containers, where orchestrating the separate processes to communicate and run in separate containers didn't seem worth the effort.
I'll try not to be opinionated, but starting an app inside an Ubuntu container typically leaves you with 50+ processes.
In most cases with Alpine-based containers, the only process is the one that you actually want to run.
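Easy enough to verify against a running container, for anyone curious (the container name is made up):

    # lists every process running inside the container
    docker top my-service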
Add to that that modern Ubuntu uses systemd, which quickly exhausts the system's inotify limits, so running 3-4 Ubuntu containers can easily kill a system's ability to use inotify at all, across containers and the host system, causing all kinds of fun issues, I assure you.
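If you're debugging that, these are the knobs, plus a quick way to see who is holding instances; host-level, since containers that don't use user namespaces share the host's per-UID limits:

    # current limits
    sysctl fs.inotify.max_user_instances fs.inotify.max_user_watches

    # count inotify instances currently open, across the host and containers
    find /proc/*/fd -lname 'anon_inode:inotify' 2>/dev/null | wc -l

    # raise the instance limit if systemd-heavy containers keep exhausting it
    sudo sysctl fs.inotify.max_user_instances=1024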