My question is: why am I paying cloud providers for virtual machines with some imaginary virtual CPU count (or dynos or whatever) when I could be paying for M processes running my server executable (capped at N threads per process)? Why can't I just write a server, bundle up the exe and assets, and run it somewhere? Why do I have to futz about with admining, patching, and hardening the OS when that's not what I care about?
Edit, to clarify: it just seems like an OS with the capability to host a very large number of user processes, as described here, would allow an order-of-magnitude reduction in hosting cost, i.e. a machine that can host 1,000,000 paying accounts vs. 10 VPS/containerized apps.
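To make that concrete, here is a minimal sketch in Go of what one of those M processes could look like, with the process capping its own OS thread count. The cap of 64 and the port are arbitrary illustrations, not something any provider actually offers; a host-side supervisor would simply run M copies of it.

    package main

    import (
        "fmt"
        "log"
        "net/http"
        "runtime/debug"
    )

    func main() {
        // Crash (and let a supervisor restart us) rather than grow past 64 OS
        // threads; that's the "capped at N threads per process" half of the idea.
        debug.SetMaxThreads(64)

        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintln(w, "hello from one of M identical server processes")
        })
        log.Fatal(http.ListenAndServe(":8080", nil))
    }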
> I could be paying for M processes running my server executable (capped at N threads per process)?
Isn't this essentially AWS Lambda or shared hosting? The trade-offs, of course, are their limitations: max execution times, lack of flexibility, and vendor lock-in.
I found that AWS was too expensive because of their bandwidth pricing. It's cheaper to just get a dedicated server with 5-10x the capacity I need.
Doing what you said requires developer time, which is more expensive than hardware; that's probably why AWS is so expensive. Some day that price will probably come down, but for now it's probably cheaper to just throw hardware at the problem.
> do I have to futz about with admining, patching, and hardening the OS
I think this is a problem [partially] addressed by configuration management. For example, download some pre-made Ansible configuration for a LAMP server and change just the settings you care about. Or load up someone's Docker image (you'd still need to do updates).
Not to sound cheeky, but it sounds like you want to pay for a timeshare on a mainframe.
(And I agree, it'd actually be nice if cloud providers framed cost in terms of process count rather than vCPUs. Heroku kind of does that with their concept of "dynos".)
Given the incessant push towards treating the web as a front end for "cloud apps", timeshare is back, baby. Only now the terminal has been replaced by the web browser.
There is also Hyper.sh (https://hyper.sh/), which offers the same service as Azure Container Instances, and it seems to work pretty well for the side project I'm trying it on.
Last I checked (a while ago), Azure Container Instances were only available for testing through their web shell.
Historically it was hard to distribute software. You can't just copy over ELF binaries, since they need dynamic libraries. Dependency hell is real; people invented .deb to solve many of these problems, but debs were always intended to be installed at global scope, making it hard to package per-user software.
Roll forward to today: with namespaces you can "containerize" the disk too, which covers shared libs. Docker images are a better delivery mechanism than raw ELF files, or even debs. Hosting Docker containers is inherently cheaper than hosting virtual machines. I think Heroku was the first large service to realize that.
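If you're curious what "namespaces" means at the syscall level, here is a rough, Linux-only sketch in Go (it needs root or a user namespace, and real Docker adds image layers, cgroups, networking and so on). It re-executes itself in new UTS, PID and mount namespaces, which is the mechanism that lets a "containerized" process have its own hostname, its own PID 1, and eventually its own view of the disk.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "syscall"
    )

    func main() {
        if len(os.Args) > 1 && os.Args[1] == "child" {
            // Inside the new namespaces: we are PID 1 and can rename the
            // "host" without the real host noticing.
            syscall.Sethostname([]byte("container"))
            fmt.Println("pid as seen from inside:", os.Getpid())
            return
        }

        // Re-exec ourselves as "child" in fresh UTS, PID and mount namespaces.
        cmd := exec.Command("/proc/self/exe", "child")
        cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
        cmd.SysProcAttr = &syscall.SysProcAttr{
            Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
        }
        if err := cmd.Run(); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }

From inside, the child prints PID 1; from the host it is just another PID, which is the crux of the security discussion below.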
Security without context is ambiguous and vague. As for the parent comment, which asks about shipping and paying for "M" processes, Docker is a reasonable (if not great) solution: containers, namespaces, and plain processes all share the same kernel one way or another, and so have mostly the same benefits and drawbacks.
I think "Security" in this context would roughly mean "able to run code from >1 user as securely (or more) than if they were running on separate VMs". Which AFAICT docker & linux cannot provide, but something like triton can.
Docker containers are just regular processes limited by a bunch of Linux kernel isolation mechanisms, which means you're exposed to potential kernel exploits: a "neighbor" container could use one to run code outside its container and then take control of yours.
There are ways of mitigating this, but the simplest is for the provider to run a VM for each container; then you get the security guarantees of regular VMs (though you still have to trust the provider to keep the host OS up to date).
Is a Docker container simply a process, or is it more heavyweight than that? It certainly can be, so isn't characterizing it as merely a "process" a bit disingenuous?
Processes, not process, and I was talking in terms of security; but even in terms of performance, yes, it mostly is. Some Docker features can be more expensive (NAT and the layered filesystem), but they are optional. A "Docker container" itself is just a group of processes to which the kernel applies a different policy than the default.
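You can see the "just a group of processes" part from the host. A small sketch (Go, Linux, assumes you can read /proc): walk /proc and print each PID with the first line of its cgroup file. Anything started by Docker shows up like any other process, just attached to a docker-named cgroup, which is where the "different policy" comes from.

    package main

    import (
        "fmt"
        "os"
        "strconv"
        "strings"
    )

    func main() {
        entries, err := os.ReadDir("/proc")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        for _, e := range entries {
            pid, err := strconv.Atoi(e.Name())
            if err != nil {
                continue // not a process directory (e.g. /proc/meminfo)
            }
            cg, err := os.ReadFile(fmt.Sprintf("/proc/%d/cgroup", pid))
            if err != nil {
                continue // process exited, or we lack permission
            }
            comm, _ := os.ReadFile(fmt.Sprintf("/proc/%d/comm", pid))
            // First cgroup line: containerized processes sit in a docker-named
            // cgroup, everything else in the usual system/user slices.
            firstLine := strings.SplitN(string(cg), "\n", 2)[0]
            fmt.Printf("%7d %-16s %s\n", pid, strings.TrimSpace(string(comm)), firstLine)
        }
    }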
I'm not sure what that link is supposed to show; can you be more clear?
When FreeBSD moved from 16-bit PIDs to 5-digit PIDs back around 1999, I got the impression that one reason for not using the full 32-bit space was compatibility with tabular formatting in a lot of userspace tools.
> With the commits made today, master can support at least 900,000 processes with just a kern.maxproc setting in /boot/loader.conf, assuming the machine has the memory to handle it.
They are just four bits away from hitting a really big number (900,000 fits in 20 bits; four more bits gets you to 2^24, about 16.7 million).