
I can't comment on Reaper specifically, but for those who haven't used audio applications: they often have more in common with a game than with a regular desktop application.

The software is often generating live displays in response to interactive or live input (i.e. the audio being synthesised, processed, or played): everything from fancy spectrograms to meters showing levels.

GPU features like texturing work great if you can upload everything you need in advance from the CPU and then re-use it again and again on the GPU side. However, in these sorts of applications the input is one or many audio streams that are constantly changing.

I'd bet you /could/ implement more of the processing on the GPU itself. But audio apps like this are often multi-platform and built on existing codebases or plugins. Pixel-pushing to the screen seems rather boring, but it typically throws up fewer bugs; I'd say it makes more commercial sense to spend time on audio features than on debugging GPU issues across a wide range of platforms.




I'm not saying it's easy to write shader code for all of these things to offload the bulk of the drawing work to the GPU, but it would provide a huge boost to performance.

OpenGL makes for fairly portable code, especially if you're doing something as narrow as rendering to a 2D texture canvas. It's only when you start using exotic OpenGL features that you quickly run into trouble.



