Just to be perfectly clear here, because I'm not sure whether you're just using my post as a soapbox or have misunderstood my argument: I agree that it's clear what real-time means in this context. I disagree that "usually fast enough" guarantees failure for a VST, because in the case of a VST, "usually fast enough" is the only guarantee the host operating system will offer your software.

It's not "theorycrafting" to say that real-time music software running in a preemptive multitasking operating system without deterministic process time allocation will have to suffer the possibility of occasional drops. It happens in practice and audio drivers have to be implemented to account for the bulk of it, and the VST API is designed in such a way that failure to fill a buffer on time needn't be fatal.




It usually doesn't happen in practice unless you're doing a lot of other things at the same time. Which you shouldn't be.

Of course audio is block buffered over (mostly) USB, and as long as the buffers are being filled more quickly than they're being played out, the odd ms glitch here and there is irrelevant.
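A toy simulation of that claim (the numbers and the stall pattern are made up; no real USB driver involved):

    #include <cstdio>

    // The "device" drains one block per tick; the producer normally
    // refills faster than real time, but the scheduler occasionally
    // stalls it for a few ticks. A few blocks of headroom in the queue
    // absorb the stalls, so playback never runs dry.
    int main() {
        const int capacity = 4;  // blocks of headroom in the playback queue
        int filled = capacity;   // start with the queue topped up
        int underruns = 0;

        for (int tick = 0; tick < 100000; ++tick) {
            bool stalled = (tick % 997) < 3;  // rare multi-tick hiccup
            if (!stalled)
                filled = capacity;  // producer outruns playback, tops up
            if (filled > 0)
                --filled;           // device plays one block
            else
                ++underruns;        // queue ran dry: audible glitch
        }
        std::printf("underruns: %d\n", underruns);  // 0 with this headroom
    }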

As real-time systems, Windows, macOS and Linux are terrible from a theoretical POV, and they're useless for the kinds of process control applications where even a ms of lag can destroy your control model.

But with adequate buffering and conservative loading they work well enough to handle decent amounts of audio synthesis processing without glitching - live, on stage.


> It usually doesn't happen in practice unless you're doing a lot of other things at the same time. Which you shouldn't be.

> Of course audio is block buffered over (mostly) USB, and as long as the buffers are being filled more quickly than they're being played out, the odd ms glitch here and there is irrelevant.

As I've noted earlier in the thread. In fact, my entire point is that the only thing you can offer under such circumstances is "it usually doesn't happen" because "it's usually fast enough".

> As real-time systems, Windows, macOS and Linux are terrible from a theoretical POV, and they're useless for the kinds of process control applications where even a ms of lag can destroy your control model.

You could employ the same strategies for process control problems where the issue is not so much latency as jitter. You don't, because unlike in a music performance, an occasional once-a-week buffer underflow, caused by a system that already runs tens to hundreds of processes at boot, can do lasting damage there.



