Integrated GPUs exist. Wouldn't it make more sense for the "high value" content never to be exposed to any external GPU? Then the integrated GPU could be treated as part of the "TEE". That's my speculation; waiting for details.
This was exactly my question. Per the article, the design works because the GPU memory is inaccessible to the OS, so the decrypted content cannot be stolen.
With a unified memory architecture, is the shared GPU memory inaccessible to the CPU?
With the proper MMU settings, yes, the CPU can definitely be denied access to a given memory region. This is why devices like the Raspberry Pi have that odd boot process (the GPU boots first, then brings up the CPU); it's a direct consequence of the SoC's set-top-box lineage.