
There are two steps in the task of hardware-assisted video playback: 1) video decoding, i.e. going from the compressed stream to an uncompressed, offscreen video frame, and 2) compositing the video with all the other surfaces the playback application uses -- the decoded frames are usually put into an OpenGL (or Vulkan, whatever) texture and then composited with the rest.

When you do 1) on the GPU (well, not exactly the GPU, but the video decode block on the video card, but that's not really important now), you end up with the decoded frame in VRAM. Reading it back to system RAM just to push it back into VRAM somewhere else is expensive (if the GPU is not UMA, you go over PCIe back and forth) and ultimately unnecessary; it is more efficient to have some way of sharing the content from 1) directly into 2), already in VRAM.

For that, both subsystems must support some way of sharing memory buffers. VA-API (for 1) and Mesa (for 2) both support DMA-BUF, which is why it is used here.
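
To make that concrete, here is a minimal sketch of the 1) -> 2) handoff: export the decoded VA-API surface as DMA-BUF file descriptors, then wrap them in an EGLImage and a GL texture so the compositor can sample the frame straight from VRAM. This assumes a libva driver that implements vaExportSurfaceHandle() and an EGL with EGL_EXT_image_dma_buf_import; the single-plane import is simplified (a real NV12 surface needs one EGLImage per plane, or a modifier-aware import path), and error handling is omitted.

    /* Sketch: zero-copy handoff of a decoded VA-API surface into GL. */
    #include <va/va.h>
    #include <va/va_drmcommon.h>
    #include <EGL/egl.h>
    #include <EGL/eglext.h>
    #include <GLES2/gl2.h>
    #include <GLES2/gl2ext.h>
    #include <unistd.h>

    GLuint import_va_surface(VADisplay va_dpy, VASurfaceID surface,
                             EGLDisplay egl_dpy)
    {
        /* Export the decoded surface as DRM PRIME (DMA-BUF) fds. */
        VADRMPRIMESurfaceDescriptor desc;
        vaExportSurfaceHandle(va_dpy, surface,
                              VA_SURFACE_ATTRIB_MEM_TYPE_DRM_PRIME_2,
                              VA_EXPORT_SURFACE_READ_ONLY |
                              VA_EXPORT_SURFACE_SEPARATE_LAYERS,
                              &desc);
        vaSyncSurface(va_dpy, surface);  /* wait until decoding finished */

        /* Wrap the first plane's fd in an EGLImage -- no pixel copy. */
        EGLint attribs[] = {
            EGL_WIDTH,  (EGLint)desc.width,
            EGL_HEIGHT, (EGLint)desc.height,
            EGL_LINUX_DRM_FOURCC_EXT,      (EGLint)desc.layers[0].drm_format,
            EGL_DMA_BUF_PLANE0_FD_EXT,     (EGLint)desc.objects[desc.layers[0].object_index[0]].fd,
            EGL_DMA_BUF_PLANE0_OFFSET_EXT, (EGLint)desc.layers[0].offset[0],
            EGL_DMA_BUF_PLANE0_PITCH_EXT,  (EGLint)desc.layers[0].pitch[0],
            EGL_NONE
        };
        PFNEGLCREATEIMAGEKHRPROC eglCreateImageKHR =
            (PFNEGLCREATEIMAGEKHRPROC)eglGetProcAddress("eglCreateImageKHR");
        PFNGLEGLIMAGETARGETTEXTURE2DOESPROC glEGLImageTargetTexture2DOES =
            (PFNGLEGLIMAGETARGETTEXTURE2DOESPROC)eglGetProcAddress("glEGLImageTargetTexture2DOES");

        EGLImageKHR image = eglCreateImageKHR(egl_dpy, EGL_NO_CONTEXT,
                                              EGL_LINUX_DMA_BUF_EXT, NULL,
                                              attribs);

        /* Bind the EGLImage to a texture; the frame stays in VRAM and
           never takes a round trip through system RAM. */
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_EXTERNAL_OES, tex);
        glEGLImageTargetTexture2DOES(GL_TEXTURE_EXTERNAL_OES, image);

        /* The exported fds can be closed once the EGLImage holds its
           own reference to the underlying buffers. */
        for (unsigned i = 0; i < desc.num_objects; i++)
            close(desc.objects[i].fd);

        return tex;
    }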




Though Vulkan has video decoding built in, so eventually everything will hopefully unify to use that and we can avoid needing multiple separate subsystems depending on vendor...
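
For anyone curious what that looks like from the application side, here's a tiny sketch that only probes whether a device exposes a video-decode queue family, which is the starting point for the VK_KHR_video_* extensions. The extensions were still provisional (behind the beta headers) when this was written, so names and flags may shift, and this is nowhere near working decode code:

    /* Sketch: check for a Vulkan queue family with video-decode support. */
    #define VK_ENABLE_BETA_EXTENSIONS
    #include <vulkan/vulkan.h>
    #include <stdio.h>
    #include <stdlib.h>

    int has_video_decode_queue(VkPhysicalDevice gpu)
    {
        uint32_t count = 0;
        vkGetPhysicalDeviceQueueFamilyProperties(gpu, &count, NULL);

        VkQueueFamilyProperties *props =
            malloc(count * sizeof(VkQueueFamilyProperties));
        vkGetPhysicalDeviceQueueFamilyProperties(gpu, &count, props);

        int found = 0;
        for (uint32_t i = 0; i < count; i++) {
            /* Decode work goes to a dedicated queue family, separate
               from graphics/compute, typically backed by the
               fixed-function decode block on the card. */
            if (props[i].queueFlags & VK_QUEUE_VIDEO_DECODE_BIT_KHR) {
                printf("queue family %u supports video decode\n", i);
                found = 1;
            }
        }
        free(props);
        return found;
    }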


What codecs are part of the spec? Or is it just the interface, and nothing is actually guaranteed to work?


It looks like [0] there currently exist extensions for H.264 and H.265. I don't know anything about Khronos politics, but I expect there's nothing that would stop e.g. Google from proposing similar extensions for VP9, AV1, etc., besides the need to actually get it implemented. (The H.264 and H.265 extensions have only AMD, Intel, and NVIDIA employees listed as Contributors.)

[0]: https://www.khronos.org/registry/vulkan/specs/1.2-extensions...


VP9 decode and AV1 decode/encode are planned: https://github.com/KhronosGroup/Vulkan-Docs/issues/1497#issu...



