We probably see things differently. As I understand it, this is exactly the use case for "big/little" microarchitectures. Take a number of big fast cores that are running full bore, and a bunch of little cores that can do things for them when tasked. So far they've been symmetric, but with chiplets they needn't be.
Yes, for 'computational' loads. I've read, though, that UI/UX benefits the most from the fastest response times. I'm talking about the cores that actually draw the GUI the user sees/uses being optimized for that task at the highest possible rate, then having a pool of cores for everything else.
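For what it's worth, you can already approximate that split from userspace on Linux with CPU affinity. A minimal sketch (the core number is made up; real code would query the topology under /sys/devices/system/cpu/ instead of hard-coding which core is the "UI" one):

```c
/* Sketch: pin the GUI/render thread to a designated fast core and leave
 * the rest of the cores as a general-purpose worker pool.
 * Core 0 being the "big"/UI core is an assumption for illustration. */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>

static void *render_loop(void *arg)
{
    (void)arg;
    /* ... draw the GUI as fast as the dedicated core allows ... */
    return NULL;
}

int main(void)
{
    pthread_t ui;
    cpu_set_t fast;

    pthread_create(&ui, NULL, render_loop, NULL);

    CPU_ZERO(&fast);
    CPU_SET(0, &fast);                          /* assumed UI core */
    pthread_setaffinity_np(ui, sizeof(fast), &fast);

    /* Worker threads would get an affinity mask covering the remaining
     * cores, forming the pool described above. */
    pthread_join(ui, NULL);
    return 0;
}
```

That only reserves cores by convention, of course; it doesn't make them any faster, which is where your point about optimizing the silicon itself comes in.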
You are talking about the GPU? Okay, really random tidbit here: when I worked at Intel I was a validation engineer for the 82786, a graphics chip most people haven't heard of. It focused on building responsive, windowed user interfaces using hardware features: it displayed separate windows in hardware (so moving a window moved no actual memory, it just updated a couple of registers), drew the mouse cursor, and handled character/font processing for faster updates. Intel killed it, but if you find an old "Number Nine" video card you might find one to play with. It had an embedded RISC engine that did bitblt and other UI-type operations on chip.
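To make the "just update a couple of registers" point concrete, here's a toy software model of that scheme (names and sizes are invented for illustration, and the real chip did the composition at scanout in hardware, not in a loop like this):

```c
/* Toy model of hardware windows: each window is a descriptor holding a
 * position and a pointer to its own pixel buffer, and the display is
 * composed from the descriptors at scanout time. "Moving" a window only
 * rewrites x/y in the descriptor; no pixels are copied. */
#include <stdint.h>
#include <stdlib.h>

#define SCREEN_W 640
#define SCREEN_H 480

struct window {
    int x, y, w, h;
    uint32_t *pixels;   /* the window's own backing store, never moved */
};

/* Compose all windows into the scanout buffer, back to front. */
static void scanout(uint32_t *screen, struct window *wins, int n)
{
    for (int i = 0; i < n; i++) {
        struct window *win = &wins[i];
        for (int row = 0; row < win->h; row++) {
            int sy = win->y + row;
            if (sy < 0 || sy >= SCREEN_H) continue;
            for (int col = 0; col < win->w; col++) {
                int sx = win->x + col;
                if (sx < 0 || sx >= SCREEN_W) continue;
                screen[sy * SCREEN_W + sx] = win->pixels[row * win->w + col];
            }
        }
    }
}

int main(void)
{
    uint32_t *screen = calloc(SCREEN_W * SCREEN_H, sizeof *screen);
    struct window win = { 10, 10, 100, 80,
                          calloc(100 * 80, sizeof(uint32_t)) };

    scanout(screen, &win, 1);

    /* The "move": update two fields (the chip's couple of registers)
     * and let the next scanout pick it up. */
    win.x = 200;
    win.y = 150;
    scanout(screen, &win, 1);

    free(win.pixels);
    free(screen);
    return 0;
}
```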
EVERYTHING that chip did could in fact be done with a GPU today. It isn't, for the most part, because window systems evolved to be CPU driven, although a lot of phones these days do the UI on the GPU, not the CPU, for this same reason. There is a fun program for HW engineers called "glscopeclient" which basically renders its UI via the GPU.
So I'm wondering if I misread what you said and you're advocating for a different GPU microarchitecture, or perhaps a more general integrated architecture on the chip that could also do UI, like APUs?