Part of the point of OpenGL is to offload as much computation as possible to the GPU. There's a lot of very cool stuff you can do with barely any CPU usage at all, so even if you're using JavaScript it shouldn't make much difference.
(High-end games still need a lot of CPU power too, of course)
Well, two things. First, because it has to be safe in the browser, there's no direct memory access, which means that if you need to dynamically write CPU data to buffers, WebGL is going to add a lot of overhead. The trend with newer APIs (DX12/Metal/Vulkan) over the last few years has been direct access to avoid API overhead, and it's unlikely the browser + JavaScript will ever allow that kind of memory usage (since it's unsafe). So that's one limitation that's unlikely to be addressed.
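To make the overhead concrete, here's a minimal sketch (plain JavaScript, not real WebGL calls; `bufferSubDataLike` and `driverBuffer` are illustrative names) of what a safe API must do on every dynamic update: validate and copy from the script-owned array into driver-owned storage, rather than hand out a raw pointer the way DX12/Metal/Vulkan mapping can.

```javascript
// Illustration of the extra work implied by no direct memory access:
// every dynamic update goes CPU array -> bounds check -> full copy,
// instead of writing GPU-visible memory in place.
function bufferSubDataLike(dst, src) {
  // A safe API must validate sizes; it cannot trust the caller.
  if (src.length > dst.length) throw new RangeError("update too large");
  dst.set(src); // one full copy per update, every frame
  return dst;
}

const driverBuffer = new Float32Array(1024); // stands in for GPU-side storage
const perFrameData = new Float32Array(1024).fill(0.5);
bufferSubDataLike(driverBuffer, perFrameData); // repeated each frame
```

With per-frame dynamic geometry, that validate-and-copy cost is paid on every single upload, which is exactly the overhead the newer native APIs were designed to eliminate.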
There are other limitations too. For example, Windows has no native OpenGL ES support, so all your API calls get translated to Direct3D 9 (via ANGLE in Chrome and Firefox), which means you're actually going through two APIs.
Another consideration is that, excluding pure viewers, you're likely to be doing a lot of other expensive calculations in a 3D scene, and JavaScript is poorly suited to that kind of work.
"The trend with newer APIs (DX12/Metal/Vulkan) the last few years has been direct access to avoid API overhead"
Hmm, is that really true? I haven't used those latest APIs yet! I thought good practice was still to keep the data on the GPU as much as possible, and avoid fine-grained sharing with the CPU. There's plenty you can do with static data.
I agree that JS/WebGL is not quite there yet for big-name games, but it's surely pretty close. And asm.js and eventually WebAssembly can help claw back CPU performance.
Rather than technical concerns, as a game developer I'm more concerned by the fact that nobody is willing to pay for anything on the web. If and when WebGL goes mainstream for games, it'll be even more of an ad-encrusted "freemium" race to the bottom than the various mobile app stores already are.
Well, with regard to non-technical concerns, the only reason I can think of to compile your game for a web browser (as opposed to native) is convenience: no install, and access just by navigating there. If you're putting up a paywall, you're killing that easy-access advantage anyway. If you want the traditional pay-for-game -> get-game model, the browser doesn't do a lot for you.
You could argue it's easier to target one platform (the browser) rather than many, but I would argue that: A) engines like Unity can already handle that for you, and B) having worked for a company that used asm.js + WebGL for a product, I can tell you the browser is a really annoying platform to target when you're using C++ and OpenGL. It works, but it's incredibly hard to debug, and browser updates tend to break things randomly, a lot more frequently than you might think. I'd only use that stack if it gave you a competitive business edge.
Maybe go back to 'demos', as in the days when they were distributed on (collection) CDs. I remember playing the Diablo II demo over and over...
But this time the demo (of a single level, for instance) is offered in the browser, and when it's over you can "go to the next level" (popping up a payment form) or "play the demo again".
Get people hooked before even asking them to buy.
"Dynamic buffer data is typically written by the CPU and read by the GPU. An access conflict occurs if these operations happen at the same time; [...] For the processors to work in parallel, the CPU should be working at least one frame ahead of the GPU. This solution requires multiple instances of dynamic buffer data"
It's still zero copy, but they're not literally peeking and poking the same buffer at the same time.
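The "CPU works at least one frame ahead" scheme from that quote can be sketched in plain JavaScript (names like `writeFrame` are illustrative, not from any real API): keep one buffer instance per in-flight frame and rotate through them, so the CPU is never writing the buffer the GPU is still reading.

```javascript
// Multiple-buffering sketch: the CPU writes this frame's buffer while
// the GPU is assumed to still be reading the previous frames' buffers.
const FRAMES_IN_FLIGHT = 3; // triple buffering

// One dynamic buffer instance per in-flight frame.
const buffers = Array.from({ length: FRAMES_IN_FLIGHT },
                           () => new Float32Array(4));

function writeFrame(frameIndex, data) {
  // Pick this frame's slot; the other slots may still be in use by the GPU.
  const buf = buffers[frameIndex % FRAMES_IN_FLIGHT];
  buf.set(data);
  return buf;
}

const b0 = writeFrame(0, [1, 1, 1, 1]);
const b1 = writeFrame(1, [2, 2, 2, 2]);
// Frame 3 reuses frame 0's slot, which is safe only because the GPU has
// finished with frame 0 by then.
const b3 = writeFrame(3, [9, 9, 9, 9]);
console.log(b0 === b3); // true: same underlying buffer, reused
console.log(b0 === b1); // false: in-flight frames never share a buffer
```

So "zero copy" here means no extra staging copy, not the CPU and GPU touching the same bytes simultaneously.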
So WebGL "just" needs to offer zero-copy access to buffers from JavaScript. That seems at least possible...? TypedArray would be the key component, and that already exists. Whether all those pieces have been put together in the right order yet, I dunno.
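Why TypedArray is the key piece: multiple typed views can alias one ArrayBuffer with no copying, which is roughly the shape a zero-copy mapping of a GPU buffer into JavaScript would need to take. A minimal sketch:

```javascript
// Two typed views over the same 16 bytes; no copies are made.
const backing = new ArrayBuffer(16);
const asFloats = new Float32Array(backing); // 4 floats over those bytes
const asBytes = new Uint8Array(backing);    // byte-level view, same memory

asFloats[0] = 1.0; // write through one view...
// ...and the bytes change under the other: 1.0f is 0x3F800000, so on
// little-endian hardware the last byte of the float is 0x3F.
console.log(asBytes[3]);
```

If the browser could hand JavaScript a TypedArray backed directly by mapped GPU memory, writes like the one above would land in the buffer with no intermediate copy; whether any implementation actually wires it up that way is the open question.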
Anyway, low-overhead access to the GPU from JavaScript seems possible in principle.
Lower-end GPUs like Intel's integrated graphics and console hardware are doing this, but high-end desktop GPUs are not; they still have separate memory spaces (though they may share some memory).