According to webglstats.com, WebGL is supported on 96% of browsers/devices. I wonder what the maximum possible coverage achievable by WebGL 2 with current hardware is (if I understand correctly, some WebGL-capable devices will never be able to support WebGL 2 due to hardware limitations).
On PC there's probably no difference. WebGL 1 is a really low baseline -- it's based on an ES spec from 2007. ES3/WebGL 2 isn't super cutting edge. In terms of mobile I don't think WebGL is that useful anyway (too much overhead), but pretty much every high-end device newer than 2013 should support it.
Part of the point of OpenGL is to offload as much computation as possible to the GPU. There's a lot of very cool stuff you can do with barely any CPU usage at all, so even if you're using JavaScript it shouldn't make much difference.
(High-end games still need a lot of CPU power too, of course)
Well, two things. First off, because it has to be safe in the browser, there's no direct memory access. That means if you need to dynamically write CPU data to buffers, WebGL is going to add a lot of overhead. The trend with newer APIs (DX12/Metal/Vulkan) over the last few years has been direct access to avoid API overhead, and it's unlikely the browser + JavaScript will allow for that kind of memory usage (since it's unsafe). So that's one limitation that's unlikely to be addressed.
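As a rough illustration (the canvas element and MAX_PARTICLES are placeholders), a dynamic per-frame update in WebGL 1 always goes through a copy that the browser validates, rather than a direct write into GPU-visible memory:

    // Sketch of a per-frame dynamic buffer update in WebGL 1. Each call below
    // copies (and bounds-checks) the typed array's contents; there is no way
    // to map the buffer and write into it directly, as newer native APIs allow.
    const gl = canvas.getContext('webgl');                  // placeholder canvas
    const positions = new Float32Array(3 * MAX_PARTICLES);  // CPU-side staging data
    const vbo = gl.createBuffer();

    gl.bindBuffer(gl.ARRAY_BUFFER, vbo);
    gl.bufferData(gl.ARRAY_BUFFER, positions.byteLength, gl.DYNAMIC_DRAW);

    function uploadFrame(simulatedPositions) {
      // simulatedPositions is produced in JS each frame, then copied by the driver.
      positions.set(simulatedPositions);
      gl.bindBuffer(gl.ARRAY_BUFFER, vbo);
      gl.bufferSubData(gl.ARRAY_BUFFER, 0, positions);
    }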
There are other limitations. For example, Windows has no native OpenGL ES support, so all your API calls get translated to DirectX 9 (via ANGLE in Chrome and Firefox), meaning you're actually going through two APIs.
Another consideration is that, excluding pure viewers, you're likely to be doing a lot of other expensive calculations in a 3D scene, and JavaScript is poorly suited to that kind of work.
> The trend with newer APIs (DX12/Metal/Vulkan) over the last few years has been direct access to avoid API overhead
Hmm, is that really true? I haven't used those latest APIs yet! I thought good practice was still to keep the data on the GPU as much as possible, and avoid fine-grained sharing with the CPU. There's plenty you can do with static data.
I agree that JS/WebGL is not quite there yet for big-name games, but it's surely pretty close. And asm.js and eventually WebAssembly can help claw back CPU performance.
Rather than technical concerns, as a game developer I'm more concerned by the fact that nobody is willing to pay for anything on the web. If and when WebGL goes mainstream for games, it'll be even more of an ad-encrusted "freemium" race to the bottom than the various mobile app stores already are.
Well, with regard to non-technical concerns, the only reason I can think of to compile your game for a web browser (as opposed to native) would be convenience/no-install. The main advantage of the browser is that you have access just by navigating there. If you're putting up a paywall you're kind of killing the convenience/easy-access thing anyway. If you want the traditional model of pay-for-game -> get-game, the browser doesn't do a lot for you.
You could argue it's easier to target one platform (the browser) rather than multiple platforms, but I would argue that: A) engines like Unity can already handle that for you, and B) having worked for a company that was using asm.js + WebGL for a product, I can tell you that the browser is a really annoying platform to target when you're using C++ and OpenGL. It works, but it's incredibly hard to debug, and browser updates tend to break things randomly and a lot more frequently than you might think. I'd only use that stack if it gave you a competitive business edge.
Maybe go back to 'demos' as in the days when they were distributed on (collection) CDs. I remember playing the Diablo II demo over and over...
But this time the demo (of a single level, for instance) is offered in the browser, and when it's over one can "go to next level" (popping up a payment form) or "play demo again".
Get people hooked before even asking them to buy.
"Dynamic buffer data is typically written by the CPU and read by the GPU. An access conflict occurs if these operations happen at the same time; [...] For the processors to work in parallel, the CPU should be working at least one frame ahead of the GPU. This solution requires multiple instances of dynamic buffer data"
It's still zero copy, but they're not literally peeking and poking the same buffer at the same time.
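A minimal sketch of that multi-buffering idea in WebGL terms, assuming an existing context gl and a placeholder size MAX_BYTES: the CPU writes into one buffer of a small ring while the GPU is still reading from another.

    // Rotate through a small ring of buffers so the CPU can stay a frame or
    // two ahead of the GPU without touching a buffer the GPU is reading.
    const RING_SIZE = 3;
    const ring = [];
    for (let i = 0; i < RING_SIZE; i++) {
      const buf = gl.createBuffer();
      gl.bindBuffer(gl.ARRAY_BUFFER, buf);
      gl.bufferData(gl.ARRAY_BUFFER, MAX_BYTES, gl.DYNAMIC_DRAW);
      ring.push(buf);
    }

    let frame = 0;
    function drawFrame(frameData) {            // frameData: a Float32Array
      const buf = ring[frame % RING_SIZE];     // pick the next buffer in the ring
      gl.bindBuffer(gl.ARRAY_BUFFER, buf);
      gl.bufferSubData(gl.ARRAY_BUFFER, 0, frameData);
      // ...set up vertex attributes and issue the draw call from buf here...
      frame++;
    }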
So WebGL "just" needs to offer zero-copy access to buffers from JavaScript. That seems at least possible...? TypedArray would be the key component, and that already exists. Whether all those pieces have been put together in the right order yet, I dunno.
Anyway, low-overhead access to the GPU from JavaScript seems possible in principle.
Lower-end GPUs like Intel's integrated graphics, and console hardware, are doing this, but high-end desktop GPUs do not; they still have separate memory spaces (though they may share some memory).
They tend to blacklist GPUs due to graphics issues, and we all know how often those drivers get updated.
With native code you get to work around driver bugs.
With WebGL you need to guide your users through manually whitelisting their GPU, and you still don't have control; everything is handed to the driver, meaning it can still fail.
I would love to start using it, but most WebGL content I try to view fails (with some error message that it is not supported). This is true on the last three (different) computers I have used (all recent Windows/Chrome combinations). Sometimes it can be enabled via some flag, but that is useless for real-world solutions.
I'm still waiting for Chromium to add a per-site switch for WebGL (just like for JS, cookies, Flash, etc). Enabling WebGL globally seems like a disaster waiting to happen, even with sanitisation and sandboxing.
I'm still not fully clear on #1. If your device only supports WebGL 1 but the site uses getContext('webgl2'), you'd get an error, right?
So if your application can work with WebGL 1, you should use that if you want to reach the maximum audience?
Also, is it all or nothing? Could certain parts of WebGL2 work and others not be supported? Could you get away with using webgl2 if you don't use certain advanced features?
(Yes. To everything. Except "Could certain parts of WebGL2 work and others not be supported?" -- The point of having a spec is that if your graphics driver advertises support for WebGL XX, then you can be sure the host is capable of running all the features of XX, short of bugs in the driver.)
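For what it's worth, the usual detection pattern looks roughly like this; getContext returns null (rather than throwing) when the context type isn't supported, so you can try WebGL 2 first and fall back:

    // Try WebGL 2, fall back to WebGL 1; the names below are just an example.
    const canvas = document.querySelector('canvas');
    let gl = canvas.getContext('webgl2');
    let isWebGL2 = true;

    if (!gl) {
      isWebGL2 = false;
      gl = canvas.getContext('webgl') || canvas.getContext('experimental-webgl');
    }
    if (!gl) {
      throw new Error('WebGL is not supported on this device');
    }
    // Branch on isWebGL2 before using WebGL 2-only features
    // (3D textures, transform feedback, multiple render targets, etc.).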
I liked the After the Flood demo (https://playcanv.as/e/p/44MRmJRU/) linked to in the post, but was disappointed by the lack of shadows cast by the leaves and that blowing leaves intersected and passed through each other. It's still very distinctly computer generated.
That's true of literally every computer-generated video. Side-by-side camcorder footage shows no contest in terms of realism.
I devoted several years to this, and there are a bunch of reasons why that ends up being the case. The main issue is, think of how many atoms there are. Now realize the actual number of atoms is for all intents and purposes infinitely larger than whatever you thought of.
OK, so we try to approximate. Right away you run into problems. What we call "light" is just a set of photons entering our eyes. Each photon has a wavelength. Our eyes run a compression algorithm in the retina -- our brains do not receive a "count" of the number of photons entering the eye. Instead, our retina cells themselves are responsible for coming up with an average of all the photons hitting them, and sending a signal to our brain: that's color.
Anyone who has any graphics experience whatsoever will know that it's an extremely common operation to calculate lighting by multiplying a light's color with an RGB texture. (The leaf's texture, say.) Now, tell me: What does it mean to "multiply" two colors together?
Nothing! It's fake! It has no basis in reality whatsoever. It literally is an astonishing approximation that happens to work reasonably well.
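To make the operation concrete, here's roughly what that multiply looks like in a typical WebGL 1 fragment shader (the uniform and varying names are invented for illustration):

    // Classic diffuse shading: the light's RGB "color" is multiplied
    // componentwise by the texture's RGB "color" and scaled by N.L.
    const fragmentShaderSource = `
      precision mediump float;
      uniform vec3 uLightColor;      // RGB light color
      uniform vec3 uLightDir;        // normalized direction toward the light
      uniform sampler2D uDiffuseTex; // RGB "diffuse" texture (the leaf, say)
      varying vec3 vNormal;
      varying vec2 vUv;

      void main() {
        vec3 albedo = texture2D(uDiffuseTex, vUv).rgb;
        float nDotL = max(dot(normalize(vNormal), uLightDir), 0.0);
        gl_FragColor = vec4(uLightColor * albedo * nDotL, 1.0);
      }
    `;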
But our brains know the difference, inchoately, when you look at a real video vs a computer-generated video. You can tell that there's something almost indescribably "off" about the CG video. And I suspect it's because the final product consists of a series of approximations that work well in isolation but are not quite perfect when it comes to simulating reality.
Combine that with a lack of detail in CG video due to the fact that there are a gazillion atoms in real life and only a few million triangles in CG life, and you will always end up with experiences that are very distinctly computer generated.
Defeatist! Call me names! Say we can do better! Say that the latest advances in graphics and raw computing horsepower will solve these concerns. Maybe they will. But most importantly, I dare you to get so frustrated with our lack of ability to generate truly realistic computer video that you take on the problem yourself, and dive into the field with your whole heart. Surpass me. Surpass what everyone thinks is possible. Question the fundamental assumptions that the entire field is based on: that `N dot L x diffuse` is even slightly reasonable. Create your own "diffuse" textures that store counts of photons bucketed by wavelength, measured from a real-life data source, instead of RGB colors. Try fifty things that everyone dismisses as unpromising, because whatever the final solution looks like, it's going to be unlike anything we're doing right now.
Cracking the problem of realistic video isn't going to happen with incremental improvements of our current techniques. Remember this rule of thumb: A video that's indistinguishable from real life will appear equally real to both humans and animals. A cat, for example, would think your video is real, if it's viewing your video on a monitor calibrated to cat eyes instead of human eyes.
It's more productive not to take the field too seriously, and to fall in love with the endless puzzles and challenges that graphics programming affords. Once you accept that the game is to generate something that looks cool, not that looks real, everything falls into place.
Most of the progress in the field came from the pursuit of photorealism and trying to simulate better what actually happens in reality.
The path tracing approaches in modern raytracers are, as computations, very similar to Feynman's path integral formulation of quantum electrodynamics (QED). The big difference is that in the "classical" approach one integrates real values (radiance), while in the QED approach one integrates complex values (amplitudes) whose magnitudes describe the probability of a given wavelength hitting the camera sensor.
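For reference, the real-valued integral being estimated on the rendering side is the standard rendering equation (written out here for context, not quoted from the comment):

    % Outgoing radiance = emitted radiance + incoming radiance weighted by the
    % BRDF f_r and the cosine term, integrated over the hemisphere Omega.
    L_o(x, \omega_o) = L_e(x, \omega_o)
      + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (\omega_i \cdot n)\, \mathrm{d}\omega_i

Path tracers estimate this integral by Monte Carlo sampling of light paths, which is where the resemblance to summing over paths in the path integral formulation comes from.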
It's seductive to believe that if we just use a clever model, we can create something that looks real.
It's not that simple. By definition, the closer that your engine looks to real life, the less flexibility artists have. If a scene looks perfectly real, artists would not be allowed to change anything at all, because any change would make the scene look less real.
Therefore, no matter what kind of clever mathematics you use, or any algorithm you come up with, your flexible art pipeline will torpedo your ambitions of creating a CG video indistinguishable from real life.
This is a fundamental limitation that I don't think has been fully appreciated, or at least isn't emphasized in literature. I think most people can't really believe or accept that no one, anywhere, has ever successfully created a CG video of a complex scene that is capable of fooling all human observers 100% of the time. (If a set of human observers are asked "Is this video real, or computer-generated?" then their responses should be no better than random chance.)
Your research looks promising! It would be interesting to pair an analytic solution with some hypothetical art assets that were somehow generated from reality, or otherwise fully capture all of the variables in a real-life object (i.e. the textures are more than simply RGB values).
The original film had a renowned and very detailed model, 15 m long and weighing 10 tons. It was quipped that it would have been cheaper to lower the Atlantic.
Now imagine in your remake you choose to raise a virtual photo-realistic Titanic. Your artists and critics are not going to complain that it is too realistic.
Realistic rendering is strived for even when the scene being rendered is conjured up by artists.
Certainly. But the point is, if a scene is conjured up by artists, the material they work with -- the RGB textures they create, the specular maps, etc -- do not in any way match real life, or real objects. They make those textures with the goal of getting an engine to display results that look sufficiently good.
That means if you're interested in creating a truly realistic CG video, you have no hope of succeeding if you use an art pipeline. If it's true that a truly realistic CG video will only be created by data that matches real life, then artists must not be allowed to conjure up the data that you use. The data has to come from real-life sources.
There is a contradiction here, and I think it's worth accepting and embracing it. Once you accept that true realism isn't the goal, then you can focus on making CG look cool.
Physically Based Rendering[0] is big these days. Materials can be acquired using special scanners[1]. The result is that you can have a library of materials that behave in predictable ways in a wide variety of situations. Artists still have a lot of freedom: they can choose materials and change lighting, just like in a real photo shoot.
Color is not created by the "count" of photons, but by the wavelength of each photon.
Multiplication is a simplification, but it does have a basis in reality (the amount of light that bounces versus the amount absorbed or converted to an invisible wavelength). What is fake is the perception of colors other than the three wavelength intervals we see.
Another approximation is assuming all surfaces, at the microscopic level, are flat or uniformly rough. Microscopic features that change how colors are reflected are not simulated.
Currently, the main roadblock with believable simulation is physical interaction between objects. There is just too much to simulate, even with the (incorrect) assumption that most objects are 100% rigid. Also the movement of organic things, especially humans.
> Color is not created by the "count" of photons, but by the wavelength of each photon.
The count of photons (of the same wavelength), though, helps the eye and sensors to better match colors. In other words, low light scenes have less color information.
> In other words, low light scenes have less color information.
The amount of color — the relative fractions of light of different wavelengths — is the same. Low light scenes look less colorful to us humans because our more sensitive photo receptors, rods, do not detect color. Only cones do, and those don't work as well in low light.
If you ever do any low light photography, you'll be surprised to discover how richly saturated the night is. Our eyes just can't detect it.
The problem with radically different approaches is that GPU hardware is optimized for a specific set of techniques. Unless a new technique can work well with current GPUs, it has small chances of being adopted.
Solving the problem at all is so hard that the first person to do it will become famous, and then the industry will follow. It's analogous to the Wright brothers inventing the airplane: back then, people could fly, but the industry was based around using balloons and gliders.
It's fair to say that a solution that can't be parallelized might not be deployed to real-time simulations like video games for quite a long time, but Hollywood will definitely use it.
> The last ten years have seen a dramatic shift in this balance, and path tracing techniques are now widely used. This shift was partially fueled by steadily increasing computational power and memory, but also by significant improvements in sampling, rendering, and denoising techniques. In this survey, we provide an overview of path tracing and highlight important milestones in its development that have led to it becoming the preferred movie rendering technique today.
So when you get wowed by the next new film's stunning level of realism, you can reflect on how soon path tracing might find its way into high-end gaming rigs too ;)
Right; what I'm pointing out is that your comment isn't saying "if only we could raytrace in WebGL, we would have better graphics". It's saying "both raytracing and forward rendering (and so on) have problems, and we need new techniques entirely if we want to model reality accurately".
It's very important to avoid photos altogether when dealing with the question of realism. Our brains process videos differently than still-frames. It's why video compression is very different from image compression.
In other words, it's unfair to say that a given graphics technique looks real because it generates photorealistic still-frames. Our target is videorealism, not photorealism. I fell into this trap myself: it's so tempting to start with still frames and think that results are encouraging just because they look good. But string those still-frames together into a video and it'll be obvious why it's artificial, assuming it's a rendering of a fairly complex scene, which is equally important. But then you get into questions of whether the art assets were carefully prepared to match the properties of a real-life object, etc, which is why everybody starts with cubes and spheres, which also happen to be objects that you never see in real life. (When's the last time you were in a room composed of perfect cubes and spheres?)
> It's why video compression is very different from image compression.
Is it? I was under the impression that I-frames in current video compression formats are encoded rather similarly to current still image formats. (JPG's comparative lack of sophistication notwithstanding, but c'mon, it's old. :) )
Usually I-frames are a small minority of the frames. Most frames encode motion information. So the decoder takes picture data from previous frames and moves it according to the motion vectors. Then, if the result was deemed too inaccurate by the encoder, there will be a little more picture data for the current frame. Having accurate motion vectors is more important than having perfect color reproduction.
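A toy sketch of that motion-compensation step, using made-up names and a flat grayscale frame layout, just to show the shape of the computation:

    // Predict a block of the current frame by copying a block from the
    // previous frame, offset by a motion vector, then adding a residual.
    // prevFrame and residual are flat arrays of pixel values; w is frame width.
    function predictBlock(prevFrame, w, blockX, blockY, blockSize, mv, residual) {
      const out = new Uint8ClampedArray(blockSize * blockSize);
      for (let y = 0; y < blockSize; y++) {
        for (let x = 0; x < blockSize; x++) {
          const srcX = blockX + x + mv.x;   // shift by the motion vector
          const srcY = blockY + y + mv.y;
          const predicted = prevFrame[srcY * w + srcX];
          out[y * blockSize + x] = predicted + residual[y * blockSize + x];
        }
      }
      return out;
    }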
>It's very important to avoid photos altogether when dealing with the question of realism. Our brains process videos differently than still-frames. It's why video compression is very different from image compression.
That's a moot point though, as we do want to make 3D realistic photos too, not just videos.
Right, it's photorealistic, but the scene content in that example has been carefully chosen to make the task feasible. It's clean and shiny and abstract.
First off - this is a great thing to hear; I'm glad that WebGL is advancing.
That said, there are a few voices here bemoaning the fact that this update still isn't up to par with the latest native API abilities or whatnot.
These kinds of comments remind me of complaints that artificial intelligence isn't useful because it can't do x/y/z (i.e., something that a human can do), and so it is worthless.
Never mind the fact that when x/y/z tasks are demonstrated, the goal posts are moved to a/b/c...
I see the same thing here with WebGL. Personally, I feel this is unfair. Instead of complaining about what it can't do, one should work with it and push it to do what it can do. I daresay that in the hands of talent, the current WebGL could easily do things that would be considered almost "impossible". By talent, I mean those who really push the capabilities of hardware and software, pulling tricks and such to maximize the apparent capability (demoscene).
Even with that taken off the table, though, WebGL (even 1.0) can still do things and represent ideas and worlds in enough detail to be fun for a game or other use. I feel that those who complain will always complain. Instead, those who can -- use it! So what if it isn't perfect; I dare say that Infocom's Zork is a better game than many of today's triple-A FPS shooters or whatnot! Graphics aren't everything; if you know what you are doing, you can make a compelling and fun game today without them -- there are plenty of recent IF titles to attest to that.
I sometimes wonder if the complaints aren't excuses or something; that developers are complaining about WebGL because they don't want to move to the platform of the web browser (for some reason)? So any excuse not to develop on the platform is a valid excuse? Perhaps. For me, what I see in WebGL is extreme promise; in fact, it holds it right now, with WebGL 1.0 -- if more people would just develop for it.
Maybe I'm just biased, having grown up with computers from a time when the idea of 3D graphics was either simple wire-frames or other slow software rendering - or you dropped many thousands to tens-of-thousands of dollars on hardware to get output that wouldn't even be worthy of the old-school Quake engine. For me, what I see with WebGL blows me away, despite it not being "realistic". It's realistic enough for me to see compelling action and stories developed with it.
...and with native API access? Well - what little I have seen (because I run *nix and you don't see much there) - it's amazing. One day it will be on the browser too. But what we have already available, in the right hands, should be more than enough. So quit complaining, and get to creating.
/personally wish I had the talent to work in this field...
> I dare say that Infocom's Zork is a better game than many of today's triple-A FPS shooters or whatnot! Graphics aren't everything;
Interesting point of view, certainly. But the majority of game purchasers do care about graphics. Great graphics suggest (often incorrectly, I'll admit!) great production quality, and in the minds of many this is a 1:1 mapping to the monetary value of a product.
So selling a game that's accessible via a URL is a bit of a barrier on the commercial front. I guess itch.io works here.
On the technical/noncommercial side, for more high-fidelity games I'd argue that guaranteeing a stable framerate (and not having the rest of the browser chug while the game's running) and being able to use multi-threading would be a great help.
Once WebAssembly gets threading support, I'd expect to start seeing more intensive games. Web workers are... okay, but hardly convenient. I genuinely believe multi-threading support is more important than a better graphics API. (Typical engines cross-compiled from C++ then basically only have to deal with targeting mobile-quality graphics with a few edges knocked off & workarounds.)
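For context, a minimal sketch of the message-passing shape that makes workers clunky for engine work (the file name and the physics step are hypothetical; SharedArrayBuffer, where available, narrows the gap):

    // --- worker.js (hypothetical physics worker) ---
    self.onmessage = (e) => {
      const positions = new Float32Array(e.data);   // transferred, not shared
      // ...run a physics step over positions...
      self.postMessage(positions.buffer, [positions.buffer]); // transfer back
    };

    // --- main thread ---
    // positions: a Float32Array owned by the main thread
    const worker = new Worker('worker.js');
    worker.postMessage(positions.buffer, [positions.buffer]); // ownership moves away
    worker.onmessage = (e) => {
      const updated = new Float32Array(e.data);
      // copy results back into the render state before the next draw
    };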
As for Zork: You don't need WebGL for Zork. (Indeed, embedding text in a WebGL scene (as opposed to overlaying it) is very hard work.)
So WebGL is a suitable technology when you need interactive 3D that can't be prerendered, but you don't need impressive 3D; you're happy with results from a decade ago with worse performance. It's not a bad prospect for me as a prospective indie developer, but the tooling is noticeably more difficult than in native solutions.
And the WebGL API is much harder to use than glBegin(), glVertex3f()..., glEnd() from the bad old days... but that's just the influence of OpenGL ES.
Suggested path: write something in C against bgfx ( https://bkaradzic.github.io/bgfx/examples.html#metaballs ), then pray to kripken. And if Emscripten doesn't work just put some binaries on itch.io. Even the subset of gamers interested in indie games expect to download binaries.
--
Edit: You say you wish you had the talent to work in this field. If you're a fan of IF, you certainly have the technical talent to do that :) And writing is fun, but extremely time-consuming. Good grief it's time-consuming.
If you can deal with some mildly heinous gameplay, consider buying & playing Sunless Sea. The writing is transcendental, and the universe is brilliant.
Every time I see something done with it, versus what native code can do on the exact same hardware, I reach the conclusion it is only useful for DX 9 class graphics or just plain prototyping.
Geometry shaders allow you to generate a variable number of vertices from an input vertex. This is necessary in some cases where you generate models procedurally (e.g. particle animations, terrain generation).
It's possible in theory to allocate a lot of vertices up front (up to the maximum that could be generated). If, for example, an input vertex can generate up to 15 vertices (as is the case with the marching cubes algorithm), you could just create those 15 vertices up front and set the values of the ones you don't need to something that won't render (e.g. a degenerate triangle). But then your actual vertices become a lot more sparse, and performance drops because of all the wasted work done on the throwaway vertices.
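A rough sketch of that pre-allocation workaround, assuming marching cubes' worst case of 5 triangles (15 vertices) per cell; all names here are illustrative:

    const MAX_VERTS_PER_CELL = 15; // marching cubes: at most 5 triangles per cell

    // out is a Float32Array sized numCells * MAX_VERTS_PER_CELL * 3.
    // verts holds the cell's real vertices as [x, y, z] triples (a multiple of 3).
    function fillCellVertices(out, cellIndex, verts) {
      const base = cellIndex * MAX_VERTS_PER_CELL * 3;
      for (let v = 0; v < MAX_VERTS_PER_CELL; v++) {
        // Unused slots repeat the last real vertex, producing degenerate
        // (zero-area) triangles that the GPU culls but must still process.
        const p = verts[Math.min(v, verts.length - 1)] || [0, 0, 0];
        out[base + v * 3 + 0] = p[0];
        out[base + v * 3 + 1] = p[1];
        out[base + v * 3 + 2] = p[2];
      }
    }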
The majority of users don't go out of their way to disable scripts, so I can totally understand it when site owners refuse to waste time (= money) catering to the needs of the small fraction of users that do.