It is completely irrelevant, given the context. The only, only, only thing real-time means here is “can be run on a live signal passing through it” rather than “is a slow, offline effect for a DAW”. No hard real-time, no soft real-time, no QNX, no pulling out the college compsci textbook. There IS real-time in that sense in DSP, it just isn’t in a VST plugin.

I’ll repeat again that any compsci theorycrafting is not the concern here, and real-time has a very specific meaning in DSP. Computer science does not own the concept of real-time, and the only people tripping over the terminology are those with more compsci experience than DSP. I appreciate everyone trying to explain this to me, but (a) I understand both, and (b) this is like saying “no, Captain, a vector could mean anything, like a mathematical collection; air traffic control should learn a thing or two from mathematics.”




Just to be perfectly clear here, because I'm not sure whether you're just using my post as a soapbox or have misunderstood my argument: I agree that it's clear what real-time means in this context. I disagree that "usually fast enough" guarantees failure for a VST, because in the case of VST, "usually fast enough" is the only guarantee the host operating system will offer your software.

It's not "theorycrafting" to say that real-time music software running in a preemptive multitasking operating system without deterministic process time allocation will have to suffer the possibility of occasional drops. It happens in practice and audio drivers have to be implemented to account for the bulk of it, and the VST API is designed in such a way that failure to fill a buffer on time needn't be fatal.


It usually doesn't happen in practice unless you're doing a lot of other things at the same time. Which you shouldn't be.

Of course audio is block buffered over (mostly) USB, and as long as the buffers are being filled more quickly than they're being played out, the odd ms glitch here and there is irrelevant.

As real-time systems, Windows, macOS and Linux are terrible from a theoretical POV, and they're useless for the kinds of process control applications where even a ms of lag can destroy your control model.

But with adequate buffering and conservative loading they work well enough to handle decent amounts of audio synthesis processing without glitching - live, on stage.
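
To put rough numbers on the buffering headroom (block size, sample rate and queue depth are hypothetical; this is just the arithmetic, not any particular driver):

    // A minimal sketch of the headroom argument: with a few blocks queued
    // ahead of the DAC, a scheduling hiccup shorter than the queued audio is
    // absorbed silently. All figures below are hypothetical.
    #include <cstdio>

    int main() {
        const int    blockSize    = 256;       // samples per block
        const double sampleRate   = 48000.0;   // Hz
        const int    blocksQueued = 4;         // blocks already buffered ahead

        const double blockMs    = 1000.0 * blockSize / sampleRate;  // ~5.33 ms
        const double headroomMs = blocksQueued * blockMs;           // ~21.3 ms

        std::printf("one block = %.2f ms, headroom = %.2f ms\n",
                    blockMs, headroomMs);

        // A 10 ms stall pauses the producer, but the DAC keeps draining the
        // queue; there's no underrun as long as the stall stays below the
        // headroom.
        const double stallMs = 10.0;
        std::printf("%.1f ms stall -> %s\n", stallMs,
                    stallMs < headroomMs ? "absorbed, no glitch"
                                         : "underrun: audible dropout");
    }

Bigger buffers buy more tolerance for scheduling jitter at the cost of more latency, which is the trade-off every live rig ends up tuning.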


> It usually doesn't happen in practice unless you're doing a lot of other things at the same time. Which you shouldn't be.

> Of course audio is block buffered over (mostly) USB, and as long as the buffers are being filled more quickly than they're being played out, the odd ms glitch here and there is irrelevant.

As I've noted earlier in the thread: that the only thing you can offer under such circumstances is "it usually doesn't happen", because "it's usually fast enough", is my entire point.

> As real-time systems Windows, MacOS and Linux are terrible from a theoretical POV, and they're useless for the kinds of process control applications where even a ms of lag can destroy your control model.

You could apply the same strategies to process control problems where latency is not a problem so much as jitter. You don't, because unlike in a music performance, an occasional once-a-week buffer underflow caused by a system that runs tens to hundreds of processes already at boot can do lasting damage there.



