What I notice about all your examples is that they are stack problems that an application could not, on its own, have fixed. In the current model, each part of the stack has an independent lifecycle, creating shear points and hidden security flaws.
If the application can control the whole stack, then the application author can fix it.
Automating test and install just puts you back where you started: with a gigantic test matrix that will impose non-trivial drag on the whole application's development.
And it's not necessary. It's just ... not. necessary.
> If the application can control the whole stack, then the application author can fix it.
You are right, but the other point is that it becomes the application author's responsibility to fix it.
If you're bundling apache httpd with your app, and there's a security flaw and a new version released, it becomes your responsibility to release a new version of your app with the new version of httpd.
If there are 1000 apps doing this, that's 1000 apps that need to release a new version. Instead of the current common situation, where you just count on the OS-specific package manager to release a new version.
Dozens of copies of httpd floating around, packaged in application-delivered VMs, means dozens of different upgrades the owner needs to make, after dozens of different app authors provide new versions. (And what if one app author doesn't? Because they are slow, or too busy, or no longer around? And how does the owner even keep track of which of these app-delivered VMs needs an upgrade?)
You're describing what you see as the advantages of the shared hosting scenario and in the blog post I linked, I explain why I think that business will be progressively squeezed out by VPSes and SaaS.
In any case, there's no difference in kind between relying on an upstream app developer and an upstream distribution. You still need to trigger the updates.
And you might have noticed that stuff is left alone to bitrot anyhow.
I am not talking about an app that is distributed to be installed "on" OS X, BSD, Illumos etc.
I am talking about an app that is packaged to run "on" Xen, VMWare, or maybe docker (LXC) for some cases. Or zones for others. Or jails. Whatever.
The point is that you, the application designer, ask yourself, "what happens if I have total architectural discretion over everything from the virtual hardware up?"
But, rather than the panacea you envision, what I think would actually happen is that you end up with a lot of people doing substandard OS release-engineering jobs, neglecting security patches, etc.
Or...
Cargo culting around a small number of "thin OS distributions", which is substantially the same as what we have today.
Heck, "total architectural discretion over everything from the (virtual) hardware up" is pretty much the definition of an OS distribution. Am I missing the point here? Is there something about this other than the word "virtual" slapped on there that's unique from what we have now?
Consider that a lot of applications, when shipped in VMs or containers, need very, very thin slivers of a full OS. Especially in something like an LXC container, which can easily be set up to share a subset of the filesystem of the host.
E.g. many apps can throw away 90%+ of userland. So while they need to pay attention to security patches, the attack surface might already be substantially reduced.
And LXC can, if your app can handle it, execute single applications. There doesn't need to be a userland there at all other than your app.
Now, it brings its own challenges. But so does trusting users to set up their environments in anything remotely like a sane way.
> Is there something about this other than the word "virtual" slapped on there that's unique from what we have now?
Yes: virtual machines and VPS hosting make it possible to bypass shared hosting. That means you needn't write apps that aim for the lowest common denominator.
Edit: I agree that the approach I'm advocating introduces new problems. But obviously I think that it's still better than the status quo, which is largely set by path dependency.
I think you just end up moving the work around. I'm not sure the current concentration of security at a few points (distros) has scaled. Most web application developers don't use much of a distro stack anyway. Most of the security issues in a distro apply to stuff you don't use, even though it may be installed. Traditional Unix was a much more minimal thing.