To add to what you said, it’s also nice to be able to keep it within one API that’s platform agnostic when possible.
Sure, we’ve had the ability to keep the pipeline on the GPU for a while, but it usually required platform-specific API bindings to convert to a platform-specific descriptor (handles on Windows, IOSurface on macOS, dmabuf on Linux), which you then had to pull into a platform-specific decoder/encoder API (DXGI, WMF, AVFoundation, VAAPI, etc.), and then do all of that again in reverse to get the surface back into your 3D API.
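Roughly, the round trip looked like this on each platform (pseudocode; the function names are illustrative, not real API signatures):

```
// Export: 3D API image -> platform descriptor
win:   handle  = export_image_win32_handle(vk_image)   // VK_KHR_external_memory_win32
mac:   surface = export_image_iosurface(texture)       // IOSurface-backed texture
linux: fd      = export_image_dmabuf(vk_image)         // VK_EXT_external_memory_dma_buf

// Feed the descriptor into the platform decoder/encoder
win:   decoder.bind(handle)    // DXGI / WMF
mac:   decoder.bind(surface)   // AVFoundation
linux: decoder.bind(fd)        // VAAPI

// ...then the same export/import steps in reverse to get the
// decoded surface back into the 3D API for rendering.
```

Three code paths, three sets of lifetime and synchronization rules, one logical operation.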
This whole thing just makes life easier for everyone.
Exactly. The cross-platform descriptor dance is one of those things that's invisible to people outside the pipeline but eats an absurd amount of dev time. You end up writing the same conversion logic three times for three platforms, each with its own failure modes and version quirks, and none of it has anything to do with the actual codec work.
Having Vulkan as the single surface for both the compute and the rendering side means one memory model, one synchronization story, one set of bugs to chase. That alone is worth the effort even before you get to the performance wins.
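For anyone who hasn't looked at it yet, the single-API version is roughly this (a sketch only; the command names are real Vulkan Video entry points, but session/profile setup and the barrier details are omitted):

```
// One API end to end: decode and render share images, queues, and sync
vkCmdBeginVideoCodingKHR(cmd, &begin_info);
vkCmdDecodeVideoKHR(cmd, &decode_info);    // decodes straight into a VkImage
vkCmdEndVideoCodingKHR(cmd, &end_info);

// An ordinary Vulkan barrier hands the decoded image to the render pass:
// no export, no import, no platform descriptors
vkCmdPipelineBarrier2(cmd, &dep_info);     // decode-dst layout -> shader-read
// ...then sample the VkImage in your shader as usual
```

The whole export/decode/import dance becomes a barrier between two commands on the same device.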