
Is this approach novel? For instance, does Apple's approach to native UI rendering do the rendering on the CPU, or use a 3D renderer?



Apple has an incredibly fast software 2D renderer (Quartz 2D), and a more limited GPU 2D renderer and compositor (Quartz Compositor). Doing PostScript-style rendering on the GPU is still an active research area. And Raph is doing some of that research!


That was in the past.

Apple has one of the best systems for drawing 2D, and it is GPU-accelerated. It is used by Apple Maps, and it made Apple's offline maps much better than Google's.

But Apple treats this as a trade secret; they aren't publishing it, so others can't copy it.


Do you have a source for this? I would like to know more.


The author mentions Pathfinder, the GPU font renderer from Servo, a lot, so there do seem to be existing systems that work this way.

I'm not 100% sure about Apple's and others' approach, though - definitely when compositing desktop environments were new, the way it was done was to software-render UI elements into textures, and then use the GPU to composite it all together. I assume more is being done on the GPU now, but it may not actually be all that performance-critical for regular UIs (he talks about things like CAD, which are more performance-sensitive).
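To make the "software raster, GPU composite" split concrete, here's a minimal sketch in Python (the names and structure are illustrative, not any real compositor's API): each UI element is rasterized on the CPU into a premultiplied-alpha RGBA buffer, and a separate pass - the part a GPU compositor would do in hardware - blends the layers back to front with the Porter-Duff "over" operator.

```python
# Sketch of desktop-style compositing (illustrative, not a real API):
# the CPU produces per-element RGBA textures; compositing is just
# "over" applied per pixel, which is what the GPU pass does.

def over(src, dst):
    """Porter-Duff 'over' for premultiplied (r, g, b, a) pixels."""
    r, g, b, a = src
    dr, dg, db, da = dst
    inv = 1.0 - a
    return (r + dr * inv, g + dg * inv, b + db * inv, a + da * inv)

# Two one-pixel "textures": an opaque red background layer and a
# half-transparent white element on top (premultiplied alpha).
background = (1.0, 0.0, 0.0, 1.0)
element = (0.5, 0.5, 0.5, 0.5)

print(over(element, background))  # (1.0, 0.5, 0.5, 1.0)
```

The key property is that compositing needs no knowledge of how the element's pixels were produced - which is exactly why the CPU could keep doing the hard 2D rasterization while the GPU only blended.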


He's describing roughly the feature set of Flash, which is a system for efficiently putting 2D objects on top of other 2D objects.


No, I mean: is the approach of doing this on the GPU actually novel?


Games often use the GPU for their 2D elements. It's inefficient to do a window system that way, because you have to update on every frame, but if you're updating the whole window on every frame anyway, it doesn't add cost. As the original poster points out, it does run down the battery vs. a "window damage" approach.
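For anyone unfamiliar with the "window damage" approach mentioned above, here's a minimal sketch (not any particular window system's API): widgets mark dirty rectangles when they change, and the system repaints only the union of those rectangles, so idle frames cost nothing - unlike a game loop that redraws everything every frame.

```python
# Sketch of damage-based repainting (illustrative, not a real API).

def union(a, b):
    """Bounding box of two rects given as (x0, y0, x1, y1)."""
    return (min(a[0], b[0]), min(a[1], b[1]),
            max(a[2], b[2]), max(a[3], b[3]))

class DamageTracker:
    def __init__(self):
        self.dirty = None  # None means nothing needs repainting

    def invalidate(self, rect):
        self.dirty = rect if self.dirty is None else union(self.dirty, rect)

    def flush(self):
        """Return the region to repaint this frame, then reset."""
        region, self.dirty = self.dirty, None
        return region

tracker = DamageTracker()
tracker.invalidate((10, 10, 20, 20))  # e.g. a cursor blink
tracker.invalidate((15, 5, 40, 18))   # e.g. a hover highlight
print(tracker.flush())  # (10, 5, 40, 20) -- repaint only this box
print(tracker.flush())  # None -- an idle frame does no work
```

That's the battery saving: most frames in a desktop UI produce an empty damage region, so the GPU (and CPU) can stay idle.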


Yes, but games typically use standard rasterization to render 2D elements. My question is whether using compute shaders to "simulate CPU rendering" here is a novel approach.


Depends on what you mean by novel. No other “mainstream” API that implements the traditional 2D imaging model popularized by Warnock et al. with PostScript is implemented this way, except for Pathfinder. Apple does all 2D drawing operations on the CPU and composites distinct layers using the GPU. This does a lot more work on the GPU.


What about Direct2D? Surely Windows counts as mainstream? The docs are from 2018, https://docs.microsoft.com/en-us/windows/win32/direct2d/comp...


> Rendering method

> In order to maintain compatibility, GDI performs a large part of its rendering to aperture memory using the CPU. In contrast, Direct2D translates its API calls into Direct3D primitives and drawing operations. The result is then rendered on the GPU. Some of GDI's rendering is performed on the GPU when the aperture memory is copied to the video memory surface representing the GDI window.

I can't say for certain, but I think the main point being communicated here is that Direct2D uses 3D driver interfaces to get its pixels on the screen - not necessarily that it renders the image using the GPU. I could be wrong.


What about Skia?


Skia renders paths on the CPU. There was a prototype of a GPU-based approach called skia-compute, but it was removed a few years ago. I believe some parts of Skia can use SDFs for font rendering, but that's only really accurate at small sizes.
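For context, the SDF idea is: instead of rasterizing the glyph outline at draw time, store the signed distance to the shape's edge and turn distance into coverage per pixel. A circle has an exact analytic SDF, which makes the idea easy to sketch (this is illustrative Python, not Skia's code - real SDF text samples distances from a precomputed texture, which is where accuracy is lost):

```python
# Sketch of SDF-based coverage (illustrative, not Skia's code).
import math

def circle_sdf(x, y, cx, cy, r):
    """Signed distance: negative inside the circle, positive outside."""
    return math.hypot(x - cx, y - cy) - r

def coverage(dist, aa_width=1.0):
    """Map distance to [0, 1] alpha over a one-pixel antialiasing band."""
    t = 0.5 - dist / aa_width
    return max(0.0, min(1.0, t))

print(coverage(circle_sdf(5, 5, 5, 5, 4)))  # center of circle -> 1.0
print(coverage(circle_sdf(5, 0, 5, 5, 4)))  # one pixel outside -> 0.0
print(coverage(0.0))                        # exactly on the edge -> 0.5
```

The appeal is that the per-pixel work is trivial and scale-independent; the catch is that a sampled distance texture blurs sharp corners and fine glyph features.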


The skia-compute project is now Spinel, and is under Fuchsia. It is very interesting, perhaps the fastest way to render vector paths on the GPU, but the code is almost completely inscrutable, and it has lots of tuning parameters for specific GPU hardware, so porting is a challenge.

Skia has a requirement that rendering of paths cannot suffer from conflation artifacts (though compositing different paths can), as they don't want to regress on any existing web (SVG, canvas) content. That's made it difficult to move away from their existing software renderer which is highly optimized and deals with this well. Needless to say, I consider that an interesting challenge.
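A conflation artifact is easy to show with numbers (illustrative Python, not Skia's renderer): if two abutting spans of the same path each cover half of an edge pixel and you composite them as if they were separate layers, the "over" operator leaks background through a pixel the path actually covers completely.

```python
# Illustration of a conflation artifact (numbers only, no GPU).

def over(a_src, a_dst):
    """Resulting alpha of Porter-Duff 'over' on two alpha values."""
    return a_src + a_dst * (1.0 - a_src)

left_half, right_half = 0.5, 0.5  # two halves of the SAME path

# Wrong: treat the halves as independent layers and composite them.
conflated = over(left_half, right_half)   # 0.75 -> a visible seam

# Right: accumulate coverage within the path, composite once at the end.
exact = min(1.0, left_half + right_half)  # 1.0 -> solid pixel

print(conflated, exact)
```

Avoiding this means a renderer can't naively split a path into independently blended pieces, which is part of why moving Skia's path rasterization to the GPU is hard.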


Wow the codebase looks quite small for such ambitions!



