People don’t give web browser engines enough credit sometimes. WebGL performs wonderfully. 2D Canvas performs very well. Even hardware accelerated transforms on DOM elements perform damn near flawlessly even on slow devices.
Unfortunately almost every web site out there does a poor job of showcasing those abilities.
Mainly because browsers have come up with the great idea of blacklisting GPUs, so WebGL only performs wonderfully if the user happens to have the right combination of hardware, if it starts at all.
Then they wonder why they don't have any issues playing native 3D games, and move on to another website.
I've noticed a lot of people with both integrated graphics and a decent card often have the browser running on the weaker GPU. Also a surprisingly large number of people have turned hardware acceleration off! Usually fixing this sorts out any performance issues they have with WebGL.
Yep. The list is here - https://chromium.googlesource.com/chromium/src/gpu/+/master/... It includes things like the Intel HD 3000, which is the GPU in all 2011 models of the MacBook Air. It's quite annoying, but maybe better than having browsers crash horribly when the graphics driver throws.
Yes, it is true, especially for regular consumers who have no idea what we are talking about here.
WebGL game engines could already have been the new Flash for the casual-games market, but from a business and support standpoint it's just better to ship native mobile games instead.
Isn't there any application to check for compatibility and enable WebGL hardware acceleration on popular devices? Just like people knew how to install the Flash plugin, they could install this app to set up their browser, and use it as a hub to discover supported games.
Agreed. I was noodling about with a basic boardgame project a few years back, on an ancient laptop with crappy Intel IGP, and was a bit startled to see an unoptimized full-board canvas render taking microseconds rather than milliseconds.
A world of software built on layers upon layers of abstractions has led most people to forget just how fast computers can be if you let them. Mobile phones could do this https://www.youtube.com/watch?v=kqz5ehun-o0 ten years ago! Whenever I hear about laptops struggling to move simple 2D elements around it makes me a cranky old man.
I appreciate it because I'm interested in this kind of thing (not necessarily just games, but graphical custom interfaces in general) and getting confirmation on why it was slow (on powerful hardware) is good to know.
In days when writing 500 lines of C code just to draw an untextured triangle is advertised as the future, something like this does look like an engine to me, even if a relatively simple one.
It takes a bunch of effort to keep track of which nodes need to be redrawn (and in which areas of the screen).
If you end up needing to redraw most (or all) of the screen anyway, that effort is wasted. The overhead of it can even end up slowing things down.
Just assuming everything needs to be redrawn from the start saves that effort, and is optimal in the case where everything does need to be redrawn.
For games in particular, this is usually a good assumption. If you're wrong, you end up slowing down less complex cases but they can usually afford it. When you're right, you're optimal for the most demanding cases. You're usually trying to hit a frame rate and care a lot more about hard cases falling below the target than how far above target easy cases are.
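To make the tradeoff concrete, here's a minimal sketch of that bookkeeping (hypothetical names and threshold, not from any particular engine): accumulate dirty rectangles each frame and fall back to a full redraw once they cover most of the screen.

```javascript
// Sketch of dirty-rectangle bookkeeping (hypothetical API). Each frame,
// nodes report the screen areas they changed; once the dirty area covers
// most of the screen, a full redraw is cheaper than partial repaints.
class DirtyTracker {
  constructor(screenW, screenH, fullRedrawRatio = 0.8) {
    this.screenArea = screenW * screenH;
    this.threshold = fullRedrawRatio;
    this.rects = [];
  }
  mark(x, y, w, h) {
    this.rects.push({ x, y, w, h });
  }
  // Crude upper bound: sum of rect areas, ignoring overlap. Good enough
  // to decide "just redraw everything".
  shouldFullRedraw() {
    const dirty = this.rects.reduce((a, r) => a + r.w * r.h, 0);
    return dirty >= this.threshold * this.screenArea;
  }
  clear() {
    this.rects = [];
  }
}
```

Note that a scrolling frame marks the whole viewport dirty, so the tracker immediately answers "full redraw" and all the per-rect work was wasted, which is exactly the argument for skipping the tracking entirely.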
And there are many common cases in games where you do need to redraw everything anyway, e.g. once you start scrolling, or any other kind of full-screen effect. (In the old pre-GPU days you often actually would scroll the image in a buffer and only redraw the newly exposed area through the scene graph, but there's really no advantage to that these days, and it doesn't help you with any other kind of full-screen effect.)
Games typically need to redraw the entire screen anyways. Imagine a sidescroller where even a 1px movement requires the entire background and every object on top to be redrawn, or in 3D, where even a tiny camera move shifts the position of every triangle on the screen. Even very simple games without a lot of movement often have animations, where you then have to worry about transparency, so you may have to redraw the region under the animated element anyways even if it doesn't move, etc. You could do a diff hoping for the happy occasion of a mostly-static frame, but a) that gives a performance penalty to the majority of frames that would need a total or near-total redraw anyways, and b) it would lead to either unstable FPS or pointlessly hitting the FPS cap.
There are exceptions to this, for example a visual novel or a turn-based strategy game, but those are often satisfied to just eat the performance penalty for the ability to use standard tooling, or can use engines or standard UI toolkits like React that can do a temporal diff. For example, I've written several text-based games [1] that just use the DOM because using canvas or WebGL wasn't necessary. I've written other games that use a hybrid approach of absolutely-positioning temporally diffed DOM over a frame-by-frame redrawn canvas, to get the best of both worlds [2]. And I've also written games in e.g. Unity where it's easier and performant enough to use ImGui and redraw it every frame. Ultimately it comes down to what's practical and what's performant enough.
There used to be a time when the game would render to a buffer slightly larger than the screen, then blit the right portion of that buffer to the screen. This meant that scrolling just consisted of blitting from a different offset. Composing the background image, then, would only require changing the bits behind animated sprites (so you would usually intersect the old and new position of each sprite with the background tile grid to find which tiles to draw again).
All of this changed with the advent of 3d hardware acceleration, but I remember doing it up until 2005 to get significant performance gains (on bad hardware).
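A rough sketch of that oversized-buffer trick (the tile and screen sizes here are made up for illustration): scrolling just selects which sub-rectangle of the buffer to blit, and fresh tiles only need to be drawn when the camera crosses a tile boundary.

```javascript
// Sketch of the oversized-buffer scrolling trick (hypothetical sizes).
// The back buffer is a tile larger than the screen in each direction;
// scrolling picks which sub-rectangle to blit, and a new column/row of
// tiles only gets drawn when the camera crosses a tile boundary.
const TILE = 16;                     // tile size in pixels
const SCREEN_W = 320, SCREEN_H = 240;

// Given a camera position in world pixels, where do we blit from?
function blitSource(cameraX, cameraY) {
  return {
    sx: cameraX % TILE,              // sub-tile offset inside the buffer
    sy: cameraY % TILE,
    sw: SCREEN_W,
    sh: SCREEN_H,
  };
}

// Does moving from oldX to newX expose a new column of tiles that must
// be drawn into the buffer?
function crossedTileBoundary(oldX, newX) {
  return Math.floor(oldX / TILE) !== Math.floor(newX / TILE);
}
```

So a 1px scroll within a tile is just a blit from a different offset; only one frame in sixteen has to touch the tile grid at all.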
But 2D sidescrollers have traditionally just "panned" the already drawn/rendered buffer and only redrawn the new bits. In some cases this caused artifacts to stick to the screen; specifically, this was not unheard of in games made with KnP, TGF and MMF.
I guess my understanding of an HTML canvas is that it takes less work to "translate these 1000 objects 5 pixels left" than to "draw these 1000 objects".
e.g. mozilla's tips for canvas performance state: "Avoid unnecessary canvas state changes."
And:
"Render screen differences only, not the whole new state."
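For a mostly static scene like a board game, that MDN tip can look something like this sketch (illustrative only; the actual canvas drawing calls are left as a comment so the diff logic stays self-contained):

```javascript
// Illustrative take on "render screen differences only": diff the old
// and new board state and repaint only the cells that changed, instead
// of clearing and redrawing the whole canvas every frame.
function changedCells(oldBoard, newBoard) {
  const dirty = [];
  for (let row = 0; row < newBoard.length; row++) {
    for (let col = 0; col < newBoard[row].length; col++) {
      if (oldBoard[row][col] !== newBoard[row][col]) {
        dirty.push({ row, col });
        // A real renderer would repaint just this cell, e.g.:
        // ctx.clearRect(col * CELL, row * CELL, CELL, CELL);
        // drawCell(ctx, newBoard[row][col], col, row);
      }
    }
  }
  return dirty;
}
```

This pays off when most frames change a handful of cells, and loses (as argued above) when most frames change everything.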
That's a typical quality of game engines: they just blast the whole screen away. If their design is similar to CreateJS's, they likely walk the tree and reduce it to a flat array of canvas instructions. I imagine reconciling with a cache structure is expensive for whatever reason.
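That tree-to-flat-list walk might look roughly like this (the node shape here is invented for illustration, not CreateJS's actual internals):

```javascript
// Sketch of walking a display tree into a flat list of draw
// instructions, in the spirit of CreateJS-style engines. Node shape
// (x, y, sprite, children) is made up for this example.
function flatten(node, parentX = 0, parentY = 0, out = []) {
  const x = parentX + node.x;
  const y = parentY + node.y;
  if (node.sprite) {
    out.push({ op: 'drawImage', sprite: node.sprite, x, y });
  }
  for (const child of node.children || []) {
    flatten(child, x, y, out);   // children inherit the accumulated offset
  }
  return out;
}
```

The flat array can then be replayed against the canvas top to bottom with no per-node state tracking, which is the "blast the whole screen away" approach.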
mobile and desktop devices, because although HTML is supposed to be cross-platform in theory, when it comes to games you have to make assumptions about whether a user has a mouse, a touch screen, or a keyboard, and about specific screen sizes; start testing on obscure browsers running Linux and old versions of Gecko, and things start to break.
This is great, I'll have to port my small board game from Python into this so as to be able to run it cross platform. It's a little bit frustrating working with the tkinter Python GUI and not being able to "export" the game to other platforms.
This is really good, a colleague of mine has always been interested in creating a simple game, but he hasn't got the time for the big engines and all that other stuff.
Looks like a simpler version of Pixi.js without WebGL support. I would imagine it's probably nicer to use for small games when you don't need WebGL performance.
(Also, if the demo is single-threaded and not frame-rate capped, it probably eats as much of a single core as it can, and since that's a dual-core CPU, 50% checks out.)
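A common fix is to cap the render loop; the timing decision is pure and easy to sketch (the requestAnimationFrame wiring is left as a comment since it only exists in a browser):

```javascript
// Sketch of a frame-rate cap for a render loop. Only the timing check is
// shown as runnable code; the requestAnimationFrame wiring is browser-only.
const TARGET_FPS = 60;
const FRAME_MS = 1000 / TARGET_FPS;

function shouldRender(nowMs, lastRenderMs) {
  return nowMs - lastRenderMs >= FRAME_MS;
}

// In a browser it would be wired up roughly like:
// let last = 0;
// function loop(now) {
//   if (shouldRender(now, last)) { last = now; render(); }
//   requestAnimationFrame(loop);
// }
// requestAnimationFrame(loop);
```

With the cap in place the loop skips rendering on frames that arrive early, instead of burning a whole core redrawing identical frames.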