It's very expensive. Site isolation in Chrome explodes memory pressure on the system. Language-enforced confidentiality is a lot less resource-intensive.
Processes? They don't need to be expensive, as proven by Linux. Trivially cheap enough to do one per site origin in a browser, anyway.
> Site isolation in Chrome explodes memory pressure on the system.
How do you figure? Code pages are shared, after all. Only duplicate heap would be an issue, but shared memory exists and can mitigate that if there's read-only data to be shared.
So what memory pressure is "exploded"?
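For the read-only-data part, here's a minimal sketch of what I mean by sharing (POSIX mmap plus fork; the 4 MB size is made up, and ordinary code pages are already shared by the OS without any of this):

```cpp
// Toy illustration: a read-only blob shared between a parent and a forked
// child occupies one copy of physical memory, not one per process.
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>
#include <cstdio>
#include <cstring>

int main() {
    const size_t kSize = 4 * 1024 * 1024;  // stand-in for some shared table
    void* blob = mmap(nullptr, kSize, PROT_READ | PROT_WRITE,
                      MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (blob == MAP_FAILED) return 1;

    std::memset(blob, 0x42, kSize);        // populate once, before forking
    mprotect(blob, kSize, PROT_READ);      // now it's read-only for everyone

    if (fork() == 0) {
        // Child: sees the same physical pages; no extra 4 MB gets allocated.
        std::printf("child reads byte: %d\n",
                    static_cast<unsigned char*>(blob)[0]);
        _exit(0);
    }
    wait(nullptr);
    return 0;
}
```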
> Language-enforced confidentiality is a lot less resource-intensive.
Not at all clear-cut or self-supporting. What resource(s) is it less intensive on, and what are you using to support such a claim?
CPU time is a resource, too, after all. All these software-injected mitigations and maskings aren't free.
Yes, but various things that require relocations may not be. That can include code, but definitely includes data like C++ vtables, as a simple example. Just to put a number to this, for Firefox that is several megabytes per process for vtables, after some work aimed at reducing the number.
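Roughly what's going on with vtables, as a toy illustration you can compile (not Firefox's code; the comments carry the point):

```cpp
// Simplified mental model of why C++ vtables cost private memory per process.
#include <cstdio>

struct Element {
    virtual void Paint()  { std::puts("paint"); }
    virtual void Layout() { std::puts("layout"); }
    virtual ~Element() = default;
};

int main() {
    Element e;
    e.Paint();
    // The compiler emits a vtable for Element into the library's data segment,
    // roughly: [ &Element::Paint, &Element::Layout, &Element::~Element ].
    // Those slots hold absolute addresses. With ASLR the binary loads at a
    // random base, so the dynamic loader patches every slot at startup.
    // Patching writes to the page, copy-on-write kicks in, and the pages
    // holding vtables become private dirty memory in each process that loads
    // the library. A few MB of vtables times dozens of processes adds up.
    return 0;
}
```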
There are ways to deal with that by using embryo processes and forking them (hence after relocations have been applied) instead of starting each process directly; you end up with slightly less effective ASLR, since the forking happens after ASLR.
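A sketch of the embryo-process idea (what Android calls a zygote); the helper names are made up, and a real implementation also has to deal with IPC, sandboxing, and the ASLR trade-off above:

```cpp
// Pay for library loading and relocation fix-ups once in the embryo, then
// fork already-warmed children instead of cold-starting each renderer.
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

void LoadLibrariesAndApplyRelocations() { /* expensive: vtables, GOT, etc. */ }
void RunRendererForOneOrigin()          { /* per-site-origin work */ }

int main() {
    // Done once in the embryo. Children inherit the already-relocated address
    // space via copy-on-write instead of redoing the work per process.
    LoadLibrariesAndApplyRelocations();

    for (int origin = 0; origin < 3; ++origin) {  // pretend 3 origins show up
        pid_t pid = fork();
        if (pid == 0) {
            // Every child shares the embryo's randomized layout, which is the
            // "slightly less effective ASLR" trade-off.
            RunRendererForOneOrigin();
            _exit(0);
        }
        waitpid(pid, nullptr, 0);  // a real browser wouldn't serialize here
    }
    return 0;
}
```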
> So what memory pressure is "exploded"?
Caches, say. Again as a concrete example, on Mac the OS font library (CoreText) has a multi-megabyte glyph cache that is per-process. Assuming you do your text painting directly in the web renderer process (which is an assumption that is getting revisited as a result of this problem), you now end up with multiple copies of this glyph cache. And since it's in a system library, you can't easily share it (even if you ignore the complication about it not being read-only data, since it's a cache).
Just to make the numbers clear, the number of distinct origins on a "typical" web page is in the dozens because of all the ads. So a 3MB per-process overhead corresponds to something like an extra 100MB of RAM usage...
The experience of literally every browser vendor does not support your claim that it is 'trivially cheap'.
Problems worth thinking about:
Sharing jitted code across processes (including runtime-shared things like standard library APIs) - lots of effort has gone into optimizing this for V8.
Startup time due to unsharable data. Again lots of effort goes into optimizing this.
Cost of page tables per process. (This is bad on every OS I know of even if it's cheaper on some OSes; there's a quick way to eyeball it sketched after this list.)
Cost of setting up process state like page tables (perhaps small, but still not free)
Cost of context switches. For browsers with aggressive process isolation this can be a lot.
Cost of fetching content from disk cache into a per-process in-memory cache. This used to be very significant in Chrome; they did something recently to optimize it. We're talking 10-40 ms per request from context switches and RPC.
Most importantly, the risk of having processes OOM-killed is significant and goes up the more processes you have. This is especially bad on Android and iOS but can be an issue on Linux too.
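On the page-table point, a Linux-only sketch: the kernel reports per-process page-table memory as VmPTE in /proc/self/status, so you can see what each extra process costs even when its code pages are shared.

```cpp
// Print this process's page-table footprint (Linux-specific accounting).
#include <fstream>
#include <iostream>
#include <string>
#include <vector>

int main() {
    // Touch some memory so the page tables actually have entries to hold.
    std::vector<std::vector<char>> chunks;
    for (int i = 0; i < 16; ++i)
        chunks.emplace_back(16 * 1024 * 1024, 1);  // 16 x 16 MB, written to

    std::ifstream status("/proc/self/status");
    std::string line;
    while (std::getline(status, line))
        if (line.rfind("VmPTE:", 0) == 0)  // e.g. "VmPTE:     612 kB"
            std::cout << line << '\n';     // multiply by your process count
    return 0;
}
```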
ASLR and other security mitigations also mean you're touching some pages to do relocation at startup, aren't you? You're paying that for dozens of processes now.
Those are all costs of doing multi-process at all. Once you've committed to that (which every browser vendor did long before spectre was a thing), doing it per site-origin doesn't significantly change things.
As for the actual problems, many of those are very solvable. Startup time, for example, can be nearly entirely eliminated on OSes with fork() (and those that don't have a fork need to hurry up and get one) - a trick Android leverages heavily.
Cache round-trips in Chrome were historically 10-40 ms; you could see it in devtools. I have old profiles. You're thinking of the optimal cost of a round-trip, not the actual cost of routing 1 MB assets over an IPC pipe.
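For reference, the floor of that cost is easy to feel out with a toy socketpair echo (a sketch, not Chrome's actual IPC path; an idle machine will show far less than 10-40 ms, since the real number also includes scheduling, serialization, and contention in a loaded browser):

```cpp
// Push a 1 MB "asset" to a forked child and time the echo back.
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>
#include <chrono>
#include <cstdio>
#include <vector>

// Loop because a single read()/write() may move fewer bytes than asked.
static void xfer(int fd, char* buf, size_t len, bool writing) {
    size_t done = 0;
    while (done < len) {
        ssize_t n = writing ? write(fd, buf + done, len - done)
                            : read(fd, buf + done, len - done);
        if (n <= 0) _exit(1);
        done += static_cast<size_t>(n);
    }
}

int main() {
    int fds[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, fds) != 0) return 1;
    const size_t kAsset = 1 << 20;  // 1 MB payload
    std::vector<char> buf(kAsset, 'x');

    if (fork() == 0) {                            // child: echo it back
        xfer(fds[1], buf.data(), kAsset, false);
        xfer(fds[1], buf.data(), kAsset, true);
        _exit(0);
    }

    auto t0 = std::chrono::steady_clock::now();
    xfer(fds[0], buf.data(), kAsset, true);       // send the asset
    xfer(fds[0], buf.data(), kAsset, false);      // wait for the echo
    auto t1 = std::chrono::steady_clock::now();
    std::printf("round-trip: %lld us\n",
        (long long)std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0).count());
    wait(nullptr);
    return 0;
}
```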