
I'm not sure I understand this. Why can't you just increase buffer sizes and write more data to them to avoid frequent wake-ups?

Edit: does this help? https://gitlab.freedesktop.org/pipewire/pipewire/-/wikis/FAQ...




Because sometimes latency matters:

- You want a "ding" sound within Xms of the time a user performs some action.

- You want a volume change to happen within Yms of the user pressing the volume up/down keys

Without buffer rewinding, your buffer size when playing music cannot be longer than the smallest of such requirements.

With buffer rewinding, your buffer size can be very long when playing media, and if a latency-sensitive event happens, you throw away the buffer. This reduces wakeups and increases the batch size for mixing, which is good for battery life.
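
To illustrate, here is a minimal sketch of the rewind idea using ALSA's API directly (this is not PipeWire's or PulseAudio's actual code; mix_and_write is a hypothetical helper standing in for the mixer):

    #include <alsa/asoundlib.h>

    /* Hypothetical helper: remix and queue `frames` frames starting from
       "now", including the urgent ding / new volume level. */
    extern void mix_and_write(snd_pcm_t *pcm, snd_pcm_uframes_t frames);

    static void handle_urgent_event(snd_pcm_t *pcm)
    {
        /* How much queued-but-not-yet-played audio can be taken back? */
        snd_pcm_sframes_t rewindable = snd_pcm_rewindable(pcm);
        if (rewindable <= 0)
            return;

        /* Pull the application pointer back over those queued frames... */
        snd_pcm_sframes_t rewound = snd_pcm_rewind(pcm, (snd_pcm_uframes_t)rewindable);
        if (rewound > 0)
            /* ...and refill that span with audio that reflects the event,
               so it is heard quickly despite the long buffer. */
            mix_and_write(pcm, (snd_pcm_uframes_t)rewound);
    }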

The PipeWire people seem fairly smart, so they are probably aware of this, but I'd like to see power numbers comparing PW to PA on, say, a big.LITTLE ARM system.


Low latency is a goal. From video conferencing to music recording, it's very important.


Except that professional music recording is a domain in which low latency always trumps power consumption. So if a system based on such a tradeoff fails even once due to the complexity of buffer rewinding or whatever, the professional musician loses.

Hell, Jack could be re-implemented as power-hungry, ridiculous blockchain tech, and if it halved the round-trip latency, professional musicians would still use it.

Edit: added "complexity of" for clarification


In practice it can't, though. You cannot do complex computation and still meet a low-latency deadline, because the computation itself takes time, and that time adds to the latency. Also, pros intend to actually use their computer, so something that heavy leaves less CPU free for the other things they are trying to do.

In practice, the only way this comes into play is that pros are willing to pin their CPU frequency, while non-pros are willing to accept slightly longer latency in exchange for letting the CPU scale its speed with load. It is "easy" to detect whether frequency scaling is enabled and, if so, increase buffer sizes to work around it, as in the sketch below.
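
A rough sketch of that heuristic (the sysfs path is real on Linux, but the buffer sizes and the policy are illustrative, not taken from any real sound server):

    #include <stdio.h>
    #include <string.h>

    /* Pick a period/buffer size based on whether the CPU clock is pinned. */
    static unsigned pick_buffer_frames(void)
    {
        char governor[64] = "";
        FILE *f = fopen("/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor", "r");
        if (f) {
            if (fgets(governor, sizeof governor, f))
                governor[strcspn(governor, "\n")] = '\0';
            fclose(f);
        }
        /* "performance" pins the clock, so a small buffer is less risky;
           anything else gets extra headroom against frequency ramp-up delays. */
        if (strcmp(governor, "performance") == 0)
            return 128;   /* ~2.7 ms at 48 kHz */
        return 1024;      /* ~21 ms at 48 kHz */
    }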

Even for non-pros, low latency audio is important. You can detect delays in sound pretty quickly.


I think supporting the low-latency use case is a goal, but not the only one. As far as I understand it, PipeWire provides configurable latency.
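
For example, the quantum (buffer size in frames) can be bounded in pipewire.conf, and clients can request their own latency; the numbers here are just illustrative, and some_app is a placeholder:

    # /etc/pipewire/pipewire.conf (excerpt)
    context.properties = {
        default.clock.rate        = 48000
        default.clock.quantum     = 1024   # default batch size
        default.clock.min-quantum = 32     # smallest allowed quantum
        default.clock.max-quantum = 2048   # largest allowed quantum
    }

    # Per-client request, e.g. a JACK app run through pw-jack:
    PIPEWIRE_LATENCY=256/48000 pw-jack some_app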


Eh, just process all audio in a once-a-day batch job at 2am, it'll be great!


> Why can't you just increase buffer sizes and write more data to them to avoid frequent wake-ups?

Because then software volume will take too long to apply, and a new stream will have to wait (again, too long) until everything currently in the queue has been played. PulseAudio tried to solve this with rewinds, but failed to do it correctly. ALSA plugins other than hw and the simplest ones (those that just shuffle data around without any processing) also contain a lot of code that attempts to process rewinds, but the end result is wrong.
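
A back-of-the-envelope sketch of the first point: with a long queue kept full and no rewinds, a volume change applied only to newly mixed data isn't heard until everything already queued has drained (the buffer length here is an arbitrary example):

    #include <stdio.h>

    int main(void)
    {
        const unsigned rate = 48000;           /* frames per second */
        const unsigned queued_frames = 96000;  /* a 2-second buffer kept full */
        /* Worst-case delay before the change becomes audible. */
        printf("audible after up to %.1f s\n", (double)queued_frames / rate);
        return 0;
    }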


Here's a decent (and corny) explanation video from Google on the matter:

https://www.youtube.com/watch?v=PnDK17zP9BI



