Historically it was hard to distribute software. You can't just copy over ELF binaries, since they need their dynamic libraries. Dependency hell is real; people invented .deb to solve many of these problems, but debs were always intended to be installed at global, system-wide scope, making it hard to package per-user software.
Roll forward to today: with namespaces you can also "containerize" the filesystem, which means the shared libraries travel with the application. Docker images are a better delivery mechanism than raw ELF binaries, or even debs. Hosting Docker containers is inherently cheaper than running full virtual machines; I think Heroku was the first large service to realize that.
"Security" without context is ambiguous and vague. To the parent comment, which asks about shipping and paying for "M" processes: Docker is a reasonable (if not great) solution, since container/namespace/process isolation all, one way or another, share the same kernel and have mostly the same benefits and drawbacks.
I think "Security" in this context would roughly mean "able to run code from >1 user as securely (or more) than if they were running on separate VMs". Which AFAICT docker & linux cannot provide, but something like triton can.
Docker containers are just regular processes confined by a set of Linux kernel isolation mechanisms (namespaces, cgroups, seccomp, etc.), which means you're exposed to potential kernel exploits: a "neighbor" container could escape onto the host and then take control of yours.
There are ways of mitigating this, but the simplest is for the provider to run a separate VM for each container; then you get the security guarantees of regular VMs (though you still have to trust the provider to keep the host OS up to date).
Is a Docker container simply a process, or is it more heavyweight than that? It certainly can be, so isn't characterizing it as merely a "process" a bit disingenuous?
Processes, not process. I was talking in terms of security, but even in terms of performance, yes, it mostly is. Some Docker features can be more expensive (NAT and the layered filesystem), but they are optional. A "Docker container" itself is just a group of processes to which the kernel applies a different policy than the default.
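To make that concrete, here's a minimal C sketch (not Docker's actual code) of the core primitive container runtimes build on: clone() with namespace flags. The child is an ordinary process; the kernel just gives it its own hostname and PID view. This assumes Linux with glibc and root privileges (or user namespaces); cgroups, seccomp, and filesystem layering are separate policies applied on top.

    /* Sketch: a "container" is just a process started in new namespaces.
     * Needs root (or CAP_SYS_ADMIN); error handling kept minimal. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static int child(void *arg) {
        (void)arg;
        /* Same kernel as the host, but a private view of the hostname
         * and the process table. */
        sethostname("sandbox", 7);
        printf("inside: pid=%d\n", (int)getpid());  /* prints 1 */
        execlp("/bin/sh", "sh", (char *)NULL);
        perror("execlp");
        return 1;
    }

    int main(void) {
        const int STACK_SIZE = 1024 * 1024;
        char *stack = malloc(STACK_SIZE);
        if (!stack) { perror("malloc"); return 1; }

        /* New UTS, PID and mount namespaces; the child stack grows down,
         * so pass a pointer to its top. */
        pid_t pid = clone(child, stack + STACK_SIZE,
                          CLONE_NEWUTS | CLONE_NEWPID | CLONE_NEWNS | SIGCHLD,
                          NULL);
        if (pid == -1) { perror("clone"); return 1; }
        waitpid(pid, NULL, 0);
        free(stack);
        return 0;
    }

Nothing here is heavier than fork()+exec(); the "container" overhead only appears when you layer on the optional pieces (virtual networking, union filesystems, resource limits).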
I'm not sure what that link is supposed to show, can you be more clear?