
I think you're confusing the RenderThread with GPU layers. There's only one RenderThread per app, and it handles all of the rendering work done by that app. It's really no different from pre-M rendering, other than that a chunk of what used to happen on the UI thread now happens on a different thread. The general flow is the same.

The new part is that some animations (basically just the ripple animation) can run independently on that thread, but that doesn't use a GPU layer, nor a separate OS composition layer.
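For contrast, here's roughly what explicitly asking for a GPU layer looks like from the app side. This is the standard View API (setLayerType with LAYER_TYPE_HARDWARE); the function name, view, and duration are just for illustration, not anything the ripple does internally:

    import android.view.View

    // Sketch: promoting a view to a GPU (hardware) layer for the duration
    // of an animation, then dropping the layer when it finishes.
    fun animateWithHardwareLayer(view: View) {
        // Caches the view's content into its own GPU texture ("GPU layer").
        view.setLayerType(View.LAYER_TYPE_HARDWARE, null)
        view.animate()
            .alpha(0f)
            .setDuration(300)   // illustrative duration
            .withEndAction {
                // Release the texture so we stop paying the extra RAM cost.
                view.setLayerType(View.LAYER_TYPE_NONE, null)
            }
            .start()
    }

The point is that this is an explicit, opt-in cache; the M ripple animation doesn't do anything like it.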



> but it doesn't use a GPU layer for it nor a different OS composition layer.

Really? As so often happens, there was a lot of talk about doing that beforehand and none at all afterwards, so I'd assumed it had been done. Interesting that it didn't happen.

What'd be the reason for that? Animating objects on a static background seems like a prime case for GPU layers. Or was it the issue with the framebuffer sizes being too large again?


Think about what the static background actually is. It's probably either an image (which is already just a static GL texture, so there's no need to cache that bitmap in another bitmap), or something like a round rect, which can actually be rendered faster than sampling from a texture: it's a simple quad plus a simple pixel shader, with no texture fetches slowing things down. In that scenario a GPU layer just makes things slower and uses more RAM.
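If it helps, this is the kind of per-pixel math I mean: the standard signed-distance test for a rounded rect, written out as plain Kotlin rather than GLSL. It's pure arithmetic on the pixel position, with no texture fetch anywhere (the names are mine, purely illustrative):

    import kotlin.math.abs
    import kotlin.math.max
    import kotlin.math.min
    import kotlin.math.sqrt

    // Distance from pixel (px, py) to a rounded rect centered at the
    // origin with half-extents (halfW, halfH) and corner radius `radius`:
    // negative inside the shape, positive outside. A fragment shader
    // evaluates essentially this per pixel — no texture reads needed.
    fun roundRectDistance(px: Float, py: Float,
                          halfW: Float, halfH: Float, radius: Float): Float {
        val qx = abs(px) - halfW + radius
        val qy = abs(py) - halfH + radius
        val outside = sqrt(max(qx, 0f) * max(qx, 0f) + max(qy, 0f) * max(qy, 0f))
        return outside + min(max(qx, qy), 0f) - radius
    }

    fun main() {
        // Center of a 200x100 round rect (radius 16) is well inside:
        println(roundRectDistance(0f, 0f, 100f, 50f, 16f))   // negative
        // A pixel far to the right is outside:
        println(roundRectDistance(300f, 0f, 100f, 50f, 16f)) // positive
    }

Compare that handful of ALU ops against a texture sample (memory fetch, cache pressure, extra RAM for the cached layer) and it's clear why the layer can be a net loss here.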



