Any virtualization solution is going to require you to manage an operating system. One of the goals of containerization is to let developers work with just the application.
This. Only 2G, not 200M. I try to get people to package the application plus its dependencies, yet this is what they do every time. Every single time.
Plus they always base it on images from the Internet, so basically we trust some stranger with root privileges to all our data. Not always the same image, of course.
Yes, this is a fundamental issue with Docker and other container systems that work with raw disk images as their basic unit of information. I have implemented a container system for the GNU Guix package manager that doesn't have this image bloat problem because it doesn't use opaque disk images. We store packages in a content-addressable storage system, which allows us to know the precise dependency graph for any piece of software in the system. Since we know the full set of software needed for any container, we are able to share the same binaries amongst all containers on a single host via a bind mount.
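To make that concrete, here is a toy Python sketch of the idea (not the actual Guix code; the package names, versions and recipe strings are made up):

    import hashlib

    # Toy content-addressable store in the spirit of Guix: a package's store
    # path is derived from a hash of its own definition *and* the store paths
    # of everything it depends on, so the full dependency graph is implicit
    # in the path itself.

    def store_path(name, version, build_recipe, dep_paths):
        """Return a /gnu/store-style path for a package (illustrative only)."""
        h = hashlib.sha256()
        h.update(f"{name}-{version}\n{build_recipe}\n".encode())
        for dep in sorted(dep_paths):   # dependencies are part of the identity
            h.update(dep.encode())
        return f"/gnu/store/{h.hexdigest()[:32]}-{name}-{version}"

    # Two containers that need the same glibc resolve to the exact same store
    # path, so the host can expose one read-only store to both via a bind
    # mount instead of duplicating the binaries inside each image.
    glibc = store_path("glibc", "2.35", "gcc -O2 ...", [])
    app_a = store_path("service-a", "1.0", "make install", [glibc])
    app_b = store_path("service-b", "2.1", "make install", [glibc])
    print(glibc)
    print(app_a)
    print(app_b)

Because the store is content-addressed and read-only, every container that refers to the same glibc path can safely share the one copy on the host.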
The binaries have to run somewhere. Container evangelists love to espouse the purity of running any container on any host OS, and this is about as true as being able to migrate VMs between any hosts: it still comes down to what the application/VM needs from the underlying OS/hardware.
Mostly it comes down to lack of experience with containers so far, and lack of tools.
Most apps need very little from the underlying OS if you actually take the time to, e.g., set up a toolchain in a build container and then move the build artefacts out of it into the final container. Instead you see a lot of containers that in effect include all the build dependencies and the nearly full OS they pull in.
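Something like this minimal multi-stage Dockerfile sketch (the Go app, paths, and image tags are placeholders, not anyone's actual setup):

    # Build stage: the full toolchain lives only here.
    FROM golang:1.22 AS build
    WORKDIR /src
    COPY . .
    RUN CGO_ENABLED=0 go build -o /out/app ./cmd/app

    # Final stage: only the artefact, no compiler, no package manager.
    FROM scratch
    COPY --from=build /out/app /app
    ENTRYPOINT ["/app"]

The final image carries just the binary and whatever runtime files it actually needs; the compiler and the rest of the distro never leave the build stage.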