Linux audio can be a pain to get set up, but once you've got it working you can do some pretty cool things. I set up something similar the other day with PulseAudio and a Bluetooth headset. I used it to route the headset channel output in Mixxx to the Bluetooth headset and the master output through the speaker output. Sadly, the latency was too much to actually mix and beatmatch properly, but it was a fun exercise, and just the fact I could do it was pretty cool.
Yes, but you need a different repo [1] for that. The default one supports only SBC; this one adds support for AAC, AptX, AptX-HD, and LDAC. There are third-party packages for a myriad of OSes.
Pulse has latency adjustment so devices can compensate for latency.
When using the combine module to output to multiple devices, do drop the sync latency from the default 10s, especially if it's a BT device. That turned it from a nightmarish hell of constantly drifting, terrible audio into just-working multiple-device output for me.
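For reference, the knob in question is module-combine-sink's `adjust_time` parameter (in seconds, default 10). A sketch of what that looks like; the sink names in `slaves=` are placeholders, list your real ones with `pactl list short sinks`:

```
# in /etc/pulse/default.pa, or at runtime via `pactl load-module ...`
load-module module-combine-sink sink_name=combined adjust_time=1 slaves=alsa_output.pci-0000_00_1b.0.analog-stereo,bluez_sink.00_11_22_33_44_55.a2dp_sink
set-default-sink combined
```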
Latency compensation, not magic latency elimination. The latter is not possible :-)
You can get audio to line up across different outputs when it is being played from a source with a large buffering capability, which is what PulseAudio does well. But when you're trying to get real-time audio to respond to user input (e.g. games, or DJing or playing music), latency compensation just gives you the lowest common denominator, and PulseAudio isn't really any good at guaranteeing low latencies for these use cases anyway. That's why for music production on Linux, you should be using JACK (or directly using ALSA from whatever source app you have, e.g. mixxx or Ardour).
In either case, the Bluetooth issue is inherent to the protocol and codec and technology stack there, and there's no way to fix it. It's really, really hard to do wireless audio with low enough latency for live music, especially on 2.4GHz where you need error correction and retransmits because the band is always so congested. I have a wireless microphone set on 2.4 using proprietary dedicated technology for this, and it still has noticeable latency (though just about usable for karaoke and such, but not ideal), because it has to introduce a fixed latency to allow for a retransmit budget in case of error.
Not the same person, but I get my audiobooks from Amazon Audible or my local public library.
In the case of Audible, the first listen usually occurs in the Audible app, but eventually I remove the DRM (converting it to mp3) and dump it in my Books folder, which automatically syncs to my phone with SyncThing. I often re-listen to audiobooks a year or two later, and subsequent listens happen in the Voice app. I like having my books stored locally so I can listen to them whenever, and it's also a safeguard if they ever disappear from Audible.
When I check out audiobooks from my library, I do it through the OverDrive program on my Windows computer, which lets you download in mp3 format. It's apparently on the honour system to delete the files once you finish it (which is shocking), but I have to admit that I usually don't. If I ever do re-listen to them, I just check it out from the library again, skipping the download step because I already have the mp3s.
I really wish there was a standard way of streaming audio over LANs that worked between devices (like phones, computers, and smart speakers). Bluetooth is such a pain, IMHO.
If you are willing to accept something obscure, PulseAudio actually has the best thing. Look into the PulseAudio native module, which just lets you put a speaker on the network as a normal boring sound output that works as if it were connected to your computer.
- Install the zeroconf module on both server and clients.
- Enable the native module on the server and open the port and tell PA to publish an mDNS record.
- See the speaker just appear in your sound options. Play!
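A minimal sketch of those steps, assuming stock PulseAudio modules (the subnet in the ACL is an example, adjust it to your LAN):

```
# server: /etc/pulse/default.pa
load-module module-native-protocol-tcp auth-ip-acl=127.0.0.1;192.168.1.0/24
load-module module-zeroconf-publish

# client: /etc/pulse/default.pa
load-module module-zeroconf-discover
```

After restarting PulseAudio on both ends, the server's sinks should show up in the client's sound settings like local devices.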
I've done this for years. It works especially well when you have more than one computer, each considering the other its default output.
Playing something in your tmux session on the workstation, but want to go use the laptop in the kitchen? Just move the headphones.
I also use it since I brought the office workstation home, along with barrier (synergy fork), to use one set of peripherals for both of my workstations.
I'd actually like to stream from my Mac or a phone to an Alexa Echo Dot. Bluetooth usually works, but not always, and not needing the devices to be in close proximity would be a bonus.
I tried that and whilst it works for individual outputs, having it playing synchronised with other speakers was not ideal given the timing discrepancies that appeared.
If it's of any assistance, our (my employer, Linn) Songcast protocol is BSD licensed, and available for high precision uni/multicast networked audio. We use it in the core of our products for multi-room audio.
It is much more network-focused than Bluetooth, though.
I believe this is an issue with the Bluetooth codecs. The higher quality codecs use bandwidth dedicated to the mic to also transmit audio signals, and cannot be used for two-way communication. If you actually look at the active codec while just listening to audio, compared to the codec used when also transmitting mic audio back, you will see that they are completely different.
Pulse already automatically switches the profiles. The issue is that the good codecs are available in the A2DP profile which is sound output only. Pulse will automatically switch back and forth from the HSP profile when an application starts/stops using input from the headset mics, but in HFP you don't have access to the high quality codecs.
Exact same thing happens on Windows and Android and on Apple devices.
The only case where your linked command would be necessary is when depending on your Bluetooth devices & adapters the headset connects and then for some reason registers as HSP instead of A2DP - so you manually switch it back to A2DP to get HQ audio.
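For anyone who hits that case, the manual switch looks something like this (the card name is a placeholder; take the real one from the first command's output):

```
$ pactl list short cards
$ pactl set-card-profile bluez_card.00_11_22_33_44_55 a2dp_sink
```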
There's been a massive amount of effort poured into profile support for, hopefully, the next major PulseAudio release. It should change everything for the better.
To answer the question, though: this work on BT codecs in Pulse won't directly help PipeWire, but the leading-the-way effect is there: figuring out how to switch codecs and otherwise manage the device (how much interaction with BlueZ, Linux's Bluetooth daemon, is required, I wonder?).
I'm on a Mac, but if your hardware/setup suits it this may work. In Chrome/Zoom I set the mic to be the laptop's on-board one and the sound output to be the Bluetooth device. The Bluetooth audio profile then changes to high-quality instead of the headset profile.
I still don't really understand Linux audio layers and how all this stuff works together, how to configure it, and so on. Is there some good manual/blogpost/etc. that would explain that?
> I still don't really understand Linux audio layers and how all this stuff works together
My simple understanding is that in the Linux audio stack you have the following layers:
- kernel-driver for local sound devices (provides exclusive access, no mixing/multiplexing).
- application-level audio frameworks/APIs which are device-agnostic (PulseAudio, ALSA, JACK) and can support, mix, and multiplex multiple concurrent streams/applications.
- these different application-level frameworks can have several different back ends (for real hardware, BT, virtual devices, network targets, other Linux application-level audio frameworks, etc.).
(Someone correct me if I’m wrong)
Applications target a framework, but not all target the same one, so the frameworks need to be able to “plug together”, so to speak.
This leaves the possibility of the following scenario:
App written for ALSA -> ALSA -> ALSA’s Pulseaudio backend -> PulseAudio -> Real hardware
PulseAudio in particular is very feature-rich, modular and pluggable, which is why most desktop Linux distros use it by default these days.
It supports things like transparent network streaming between various kinds of targets and lets you compose “audio graphs” if you need to.
And that’s probably why it was chosen for the experiment in this blog post.
ALSA is (confusingly) two things: a userspace library and API, complete with plug-ins and configurability and even mixing, and also the kernel API that underlies it to talk to the physical hardware.
So a typical chain ends up being legacy app -> ALSA -> PulseAudio -> ALSA -> kernel, because ALSA both frontends PA for legacy ALSA-only apps to use it, and also backends PA to talk to the real hardware.
But ALSA isn't the only backend for PA. For example, Bluetooth audio output will go via BlueZ and into kernel socket APIs instead, since Bluetooth audio is a network protocol, not something handled in the kernel. And then there's FFADO, a userspace driver for FireWire audio devices. And you can put JACK on top of that. And then you can put PulseAudio on top of JACK. Or you can put JACK on top of PulseAudio, but that'd be silly. And PA can go over the network, both from app to a remote PA daemon, and also from PA daemon to PA daemon. And there's also netJACK if you want to put that on a network.
And now PipeWire is replacing JACK and PulseAudio and impersonates their APIs and also ALSA and lets everything talk to everything else.
Basically it's complicated, but also very flexible. In the end though, there is usually one "default" setup that you get if you don't do anything on a typical distro, and that's, these days, apps -> [ALSA ->] PulseAudio -> {ALSA -> kernel, BlueZ -> kernel} for most typical use cases (native PA apps and ALSA apps, local audio and bluetooth backends).
> Or you can put JACK on top of PulseAudio, but that'd be silly
It doesn't give you low latency, but if you don't care about that this is a nice way of using JACK applications without interrupting other applications that might be using PulseAudio; for development it's pretty handy.
A friend wants to do real-time speech-to-text of phone calls for a call-center client. They want to capture the customer side of the audio, which the customer-care agent listens to. How would you build the part that extracts the audio stream from the call conversation? Once extracted, it would be passed to a streaming speech-to-text API for inference.
It should certainly be possible to use a little device driver for macOS that streams all output in some IP stream format. Then use any web radio client of choice on the Android device. Latency will be terrible though.
Another one of the Linux projects that the cranky, cantankerous, pissy segment of the FOSS world just loves to spit on & hate: PulseAudio.
The FUD keeps coming from within the FOSS world itself. But I keep seeing stuff like this, & thinking, this is such a flexible, powerful toolkit, PulseAudio - why are so few haters willing or capable of acknowledging that Pulse does have serious upsides & vast intermixable capabilities? Maybe it's just me but I feel like PulseAudio is another one of those epic works that is ceaselessly, pointlessly shat upon while no one has anything remotely comparable.
Also though still eager as heck to replace it all & this use case (a frigging awesome ubicomp style use case) with PipeWire. Zeeeerrrooop ccoopppyyyy (to the tune of leroy jenkins).
PulseAudio sucked when it first came out, then gradually grew enough features and stability to make it worthwhile. The people still hating on it are just too boneheaded to appreciate things like your speakers muting themselves when you plug in headphones.
That said, it's still glitchy at times and can't do low latency properly, and JACK1 is inefficient, and JACK2 has a messy threading model, and none of them can do any-to-any latency compensation, so let's hope it all gets replaced by PipeWire and we solve this mess once and for all.
I think you are right. I started using Linux about 6 years ago and I have never really had an issue with PulseAudio. I have had loads of headaches trying to get JACK working, which really shows me how smooth PA is to use.
I've had huge problems with PulseAudio over the last 14 years, but now there are only two left: it randomly hard-hangs a few times a day and it sometimes stops sending audio to a bluetooth speaker (or occasionally to headphones) while acting as if everything is fine. The latter shows up with speakers from two different manufacturers. Everything else I used to hate about Pulse has gotten better: weird audio distortion, clicking & crackling, skipping, messing up the soundcard enough to require reloading modules, getting stuck on mute, jumping to max volume when things were connected - thankfully all gone.
I've had the hang & audio interruption on Mint, openSUSE, and Ubuntu, so I'm inclined to think it's something inside Pulse and I've got some hope that those last bits will get fixed. (I am aware that actually playing sound is the first responsibility of a sound system, but I'm being optimistic here.)
Nobody is going to be able to fix that hang unless they know the steps to reproduce. Can you record pulseaudio with rr and capture it happening? Then debugging it should be pretty easy.
I am still wondering if we'll lose all the network capabilities of PA in PipeWire, whereas I don't need its low-latency capabilities. Right now, it looks like one will have to install both PA and PW with a bridge between them, which seems a rather bug-prone setup if you ask me.
Does JACK even have enough information to do any-to-any latency compensation? The only "Linux audio" application that I know of that really does this now is Ardour, and it was only in the last few years that it was implemented.
That's the problem, the JACK design does not allow for it. It's why Ardour is still gimped and can't actually do full latency compensation if you connect a track to more than one sink, because Ardour is built on JACK, and JACK can't handle that, and the non-JACK backends for Ardour are kept at feature parity. (You can work around this with sends instead of connecting track outputs, since Ardour handles those outside of JACK)
JACK in general provides latency information throughout the chain, but it does not add delay lines to compensate latency itself, so the problem is that when you connect one thing to more than one other thing, the latency number becomes ambiguous (a range) and you can't compensate for it any more.
I am annoyed by PulseAudio because I don't know how it relates to ALSA.
Every time I made an audio-related project on a Raspberry Pi, be it to stream the incoming signal of a USB mic via ffmpeg/avconv to a server, to try to get a ReSpeaker 2-Mics Pi HAT working, or to stream an MP3 radio stream to a speaker - anything related to sound - I was always confronted with ALSA as if it were the default audio system on Linux.
But then something doesn't work, and I get referred to PulseAudio, and then I have these two audio systems somehow working together, and I don't know why, but somehow it works, and I'm left clueless about what is going on down there in the OS with all this sound thing.
The .asoundrc file, which is from ALSA, but apparently is used to set up some things in PulseAudio, or not, all this stuff has left me confused as hell.
If PulseAudio is so much better, then why is ALSA the default? Is it even the same, like a replacement?
At least this rant has pushed me now to seek explanatory YouTube videos on this matter, but I think that I'm not the only one who is confronted with this confusion.
> I'm always confronted with ALSA as if it were the default audio system on Linux.
Because it is as far as the kernel drivers for audio devices are concerned. You have to distinguish between that and the userspace part called libalsa.
> If PulseAudio is so much better, then why is ALSA the default? Is it even the same, like a replacement?
Pulseaudio can be seen as a replacement for libalsa while it still talks to the ALSA kernel drivers in usual setups. The fundamental conceptual difference is that pulseaudio is a sound server and libalsa is a library which is directly included into a program.
> The .asoundrc file, which is from ALSA, but apparently is used to set up some things in PulseAudio, or not, all this stuff has left me confused as hell.
This is actually the configuration file for libalsa. The pulseaudio stuff in there is configuration for a compatibility plugin shipped with pulseaudio, which presents an audio device to applications using libalsa and redirects everything to pulseaudio instead of talking directly to the kernel. That way applications without native pulseaudio support can still be used without issues while pulseaudio is running.
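For the curious, the redirection amounts to only a few lines of libalsa configuration (distros usually ship the equivalent for you, so you rarely write it by hand):

```
# ~/.asoundrc (or /etc/asound.conf)
# make the default ALSA PCM and control devices go through the pulseaudio plugin
pcm.!default {
    type pulse
}
ctl.!default {
    type pulse
}
```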
PA at least was full of bugs on arrival ten+ years ago. That's what started the fight. Poor response to criticism fueled it.
In short, don't replace working code with new until it is at least as good. While new stuff needs testing, folks react strongly when it lands on production systems.
That's definitely true as well - some responses by poettering in systemd issues in particular are unnecessarily condescending and very unprofessional.
Also, while systemd and pulseaudio were heavily pushed by many distros, alsa was and still is an option and systemd alternatives exist for those who care that much.
Compared to systemd I feel like most of the issues with pulseaudio circa 2009 were distro or application specific, like the current situation with Wayland. You could run Ubuntu and have PA eat all your RAM and not even work, but switch to Fedora and not have so many problems (performance was still unacceptable on the toaster I used back then, though). Whereas if you dislike how systemd works and Poettering’s often narrow minded approach, using a different systemd based distro probably won’t change your mind.
> using a different systemd based distro probably won’t change your mind
There's plenty of non-systemd based distros. Yes, they're way less popular, but that's because systemd works and does wonders if you know how to use it, compared to that mess of bash scripts inconsistent in quality and across distro implementations that we had before, so users and developers find value in it.
systemd has certainly made my job as a user, sysadmin and developer a whole lot more pleasant.
If you prefer OpenRC or s6, there's distros for you out there. I won't bother supporting them, but you're free to use them.
Granted, you get outcomes such as [1], but if you're fine with that, the choice is there.
Sorry if I wasn’t clear - I was saying that people who are unsatisfied with systemd vs earlier or current alternatives are usually better off using a distro based on openrc or runit or s6, as you suggest. Not that there is any lack of good alternatives. The point was that the most common complaints about systemd are indeed issues people have with systemd and not with poor packaging or something. This contrasts with pulseaudio where most of the early criticism was caused more by distros doing a poor job with it than the users having fundamental issues with the design of pulseaudio.
What does that pinephone issue have to do with systemd? Sounds more like a bootloader issue (or if not, the kernel lacking a feature) to me.
> The point was that the most common complaints about systemd are indeed issues people have with systemd and not with poor packaging or something. This contrasts with pulseaudio where most of the early criticism was caused more by distros doing a poor job with it than the users having fundamental issues with the design of pulseaudio.
Fair. I think many would contest that and say they indeed had issues with PulseAudio itself, not because you're wrong, but because the understanding as to why PA sucked for many is just not there and never was. I feel systemd is in a similar position. It's not poor distro packaging, but rather a fundamental misunderstanding of what systemd even is.
From most of the complaints I've heard, they fundamentally misunderstand what systemd is even trying to do, there are certainly better ways of doing them, but I feel the debate is not even at the level where there's understanding of systemd and critiquing it.
In a sense, OpenRC is not even a direct competitor.
> What does that pinephone issue have to do with systemd? Sounds more like a bootloader issue (or if not, the kernel lacking a feature) to me.
It's not bootloader related or kernel related. There are other distros for the PinePhone, like Mobian, where suspend works properly. The issue is Alpine's init system can't do the job. systemd can.
The biggest problem with all the discussions is that 90% of the people don’t know what they are talking about. A lot of people (and I don’t mean you specifically) treat ALSA and PulseAudio as interchangeable, but they’re at different levels of the stack. Most PulseAudio installs run on top of ALSA, and it’s almost always possible to skip the PA layer and output to ALSA directly so long as the applications were built against libasound as well (and you configure PA not to automatically open your ALSA device).
> PA at least was full of bugs on arrival ten+ years ago. That's what started the fight.
Right, and the criticisms haven't been updated since.
> While new stuff needs testing, folks react strongly when it lands on production systems
How exactly do you test things if you never roll them out to production? Because "at least as good as the old" is a hard metric, especially given that PA solved a bunch of problems the old code never even attempted.
I think Ubuntu had a particularly bad first PA implementation and that did a lot of the damage. But being on repeat for a decade about how it sucks while it no longer does for the vast majority does not make these people look smart, that's for sure.
I haven't heard any complaints about PA in a long time. Sure, if you dig hard enough you could find some, but nothing like the ground swell of ten+ years ago.
I understand it is very hard to roll out something new and bug-free to production. It's quite easy to not bother, however.
Not only do they work, but when you need to do advanced stuff it's generally easy & works on the first try. Making loopback devices, combining audio streams and/or redirecting streams, all sorts of wacky stuff. The hardest part is finding a good guide on Google that tells you the right commands/configuration options. Same with systemd.
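A flavour of that wackiness, as a sketch (the sink-input id 42 is a placeholder; get the real one from `pactl list short sink-inputs`):

```
# create a virtual sink; anything played into it can be recorded from its monitor
pactl load-module module-null-sink sink_name=virt
# loop the virtual sink's monitor into the real default output
pactl load-module module-loopback source=virt.monitor sink=@DEFAULT_SINK@
# move an already-playing stream onto the virtual sink
pactl move-sink-input 42 virt
```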
To be fair, the root problem is that the Linux kernel sucks at audio, and everything downstream from that suffers. This sits squarely on the heads of Linus and the maintainers.
For goodness sake, they STILL haven't mainlined PREEMPT_RT support after almost what? ... 20 YEARS. (Although they say they're finally going to ... I'll believe it when I see it.).
You need low-latency for audio to not suck. That requires a kernel re-architecture to enable that.
OS X went through a major kernel revamp back in the 10.3/10.4 timeframe in order to stabilize audio latency.
> Another one of the Linux projects that the cranky cantankerous pissy segment of the FOSS world just loves to spit on & hate, PulseAudio.
There isn't one single anti-PulseAudio comment in the thread, and yet you start a top comment complaining about them, which you fill with insults (and get approved and upvoted for that). I wonder in which camp the always 'cranky', 'pissy', 'spitting', 'pointlessly shitting' 'haters' truly sit.
Some problems are hard and there is no good solution but only solutions that suck less than others. Imo pulseaudio and systemd are in this category. So it's easy for haters to find faults in them.
Here is a recent update from Christian Schaller on PipeWire [1]. It looks quite promising; it's especially good to see that PipeWire comes with shims implementing the ALSA, PulseAudio, and JACK APIs, so it should be a drop-in replacement.
There's a significant segment of the community that seems to be afraid of anything that fundamentally rethinks an old subsystem, I believe in large part because it forces them to learn new skills, and they've been able to get paid rather handsomely thus far with their teenage-learned Unix skills.
It's a form of gatekeeping.
In my house, PulseAudio powers an entire wireless speaker setup (except the speakers are actual high-quality bookshelf speakers, rather than, say, an Echo, and I don't have to wiretap my house).
systemd is another project they love to hate despite what came before being an unreadable, inconsistent mess of variable quality shell scripts that came nowhere close to what systemd provides with its declarative services.
To be fair, while systemd really does provide something that SysV init never could, it is hardly without flaws. In the past, it has had several severe security problems (up to and including privilege escalation vulnerabilities) that the developers have not always handled well. It's a "move fast and break things" kind of software, which in the realm of essential operating system components is not a good quality.
I agree on pulseaudio though, my only real gripe with that is documentation.
On top of that, PulseAudio appeared to be quite buggy at the beginning for quite some time, even after distributions started defaulting to it. I have read that most of those issues were actually driver bugs, but that did not stop it from getting a bad reputation.
I think this bad reputation is also part of the reason why the hate on systemd escalated that much.
I mean the module-virtual-surround-sink written 8 years ago [1] doesn't even work properly[2]. Even worse the convolution algorithm is a naive implementation with O(n^2) instead of O(nlog(n))[3]. 8 years later it appears the issues are finally being worked on.[4]
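For context on that complexity claim: direct convolution of an n-sample block with an n-tap impulse response costs O(n^2) multiply-adds, while going through the FFT (via the convolution theorem) costs O(n log n). A minimal numpy sketch, not PulseAudio's actual code, showing the two approaches agree:

```python
import numpy as np

def convolve_naive(x, h):
    """Direct convolution: O(len(x) * len(h)) multiply-adds."""
    y = np.zeros(len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        y[i:i + len(h)] += xi * h
    return y

def convolve_fft(x, h):
    """FFT-based convolution: O(n log n) via the convolution theorem."""
    n = len(x) + len(h) - 1
    X = np.fft.rfft(x, n)
    H = np.fft.rfft(h, n)
    return np.fft.irfft(X * H, n)

rng = np.random.default_rng(0)
x = rng.standard_normal(1024)   # an audio block
h = rng.standard_normal(128)    # an impulse response (e.g. an HRIR)
assert np.allclose(convolve_naive(x, h), convolve_fft(x, h))
```

In practice real-time implementations use a blockwise variant (overlap-add or overlap-save), but the asymptotic win is the same.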
Bluetooth audio generally uses one of two "profiles"; there's A2DP, which generally sounds fine. I've not noticed the compression. A2DP is one-way, audio output only.
There's also HSP, which is two-way (mic+headphones, essentially), and it sounds like a potato.¹
(I have no idea why this is, either. My brother once "solved" the quality issues of HSP by getting a second headset, putting both headsets on his head, putting one in HSP and using the mic from that one, and putting the other in A2DP. It worked fine, aside from it wasn't comfortable and it looked ridiculous. He was on Windows, but I have the same issues in Linux and OS X.)
Short answer is no. I'm not really sure why so many try to hate on Bluetooth audio. But don't take my word for it, hear for yourself. You can first read up on it [0], then get the needed tools to encode audio into the exact same format that your headphones would see [1].
Hopefully, you have an uncompressed wav file to test with. Unfortunately, the sbcenc command wants the audio in the uncommon big-endian format. No worries, just do "ffmpeg -i test.wav test.au" and it will convert it for you. Now feed that .au file into sbcenc:
"sbcenc -B 16 -s 8 -j -b 51 test.au >test.sbc"
That is the default scheme for "bluetooth high quality". Personally I have not found a difference in the sound. Of course note that the audio encoding is adaptive, so if you put on your bluetooth headphones and walk 50 ft away, yea it might sound bad...
My real problem with BT is how unreliable it is. I'd like to find someone who never had an issue pairing some earphones or headphones and had it working flawlessly every time.
There are many variants of BT audio and they don't all work the way you would expect. It's still not rare to end up with lag if your combination of hardware, software and receivers don't have compatible technologies like aptX.
For me the unreliable and lengthy pairing process is the major pain. Take out the earbuds, wait at least 10-20s to get them connected, then wait another 10s for the phone to realize it needs to output to the earbuds. More often than I'd like, I have to put the buds back in their case and take them out again to force a reconnection, because the phone or one of the buds said it was connected but isn't getting any sound...
And I'm not talking about a cheap Chinese Android phone with a pair of crap $20 earbuds. I'm talking about iPhone 11 Pro with Sony 1000XM3.
It's not the equipment, had similar issues with an iPhone 8 and Bose headsets, and with various dongles and computers.
A friend recently thought his new Sony WH-1000XM4 headphones were not working and asked me to have a look. Took me 15 minutes to hook them to Windows properly. Turns out there are multiple devices that appear (but not at the same time) during the pairing process and if you connect to the BLE stack, you don't get the headphone functionality... (what functionality you get is a mystery) and of course, no mention of that in the documentation.
Wireless audio is important enough that we shouldn't have to settle for these subpar experiences. Although BT has made progress on the audio front by providing protocols with lighter overhead and latency, it's still not particularly suited for audio (no high quality transmission, mono/stereo only, no multicast, lengthy connection, ...)
>My real problem with BT is how unreliable it is. I'd like to find someone who never had an issue pairing some earphones or headphones and had it working flawlessly every time.
The first project I worked on was for elderly care[0], with a fitness tracker we would connect with over Bluetooth Low Energy (BLE). The first things I had done was to request the manufacturer's communication protocol for that device and then abstract that away in a nice library so we could control the tracker with Python functions `tracker.start_ecg()`, etc.
The second part was dealing with all the weird connectivity problems, troubleshoot, and add helpful exception messages for others. Here's an example:
message = """
We were unable to connect to the device {}{}{}...
Bluetooth devices are tricky and even when you do everything
right, you can still get this exception. Here are a few things
to consider:
- Simply retry to connect.
- Check `rfkill list`: it will tell you whether your device
is blocked or not. If it is indeed blocked, you can run:
`\x1b[33mrfkill unblock bluetooth\x1b[0m` to unblock it.
- Check `hciconfig`: it will tell you whether your device is
down or up. If it is down, you can raise your device:
`\x1b[33msudo hciconfig [hci0|hci1] up\x1b[0m`.
- You can also restart the bluetooth service:
`\x1b[33msudo service bluetooth restart\x1b[0m`.
- Calling Bluetooth.connect with the wrong hci_device param
can result in this exception, too. Check which device is
active and call the method with that one.
- Take a deep breath, you'll probably get this often.
"""
I don't understand your comment. Are you referring to the error message? If yes, that is targeted to the developers using the library that controls the device. i.e: us. The library helps us talk to a device by abstracting away low level details.
It’s gotten a lot better recently. But you do have to remember to unpair the last device before trying to use a new device because you can only connect one thing at a time.
In case that wasn't a rhetorical question, I use a Sennheiser 4.50BTNC every day and it works flawlessly, both on my phone and on my Linux computer (through a generic USB adapter).
1. Audio quality is not as good as my older same price headphones (3.30 I think), even with noise cancelling turned off. Apparently the 4.40 is better in this regard at the cost of losing the noise cancelling feature.
2. It can connect to two devices at once in a mostly intelligent manner, e.g. your phone and laptop where the laptop is playing music but the phone takes priority for calls or noisy alerts or alarms. But if one of these devices disconnects, its going to interrupt your audio every 2 minutes to say "Disconnected" until you reboot the headphones and connect just your new device(s). This has caused trouble with my Surface Go waking up from sleep, connecting automatically, then going back to sleep and disconnecting, disrupting my music listening.
If you want to skip the intermediate file and do it all in one go, you can pipe directly to stdout with ffmpeg if you give it sufficient information to make up for the loss of file name hints. I’ve never used sbcenc, but .au defaults to signed 16-bit big endian which you can specify and pipe (so as to skip the intermediate file) as follows:
ffmpeg -i input.foo -f s16be - | sbcenc ...
EDIT:
gstreamer supports encoding to SBC directly, from whatever input format you have (its sbcenc plugin takes signed 16-bit LE PCM, but it can losslessly convert between audio/video formats on-the-fly):
This. I've noticed Bluetooth quality issues, and they are always either
- The source is using low bitrates. SBC is a poor codec, but at the high bitrates (bitpool values) it should be used at it sounds perfectly fine.
- Just shitty hardware/DSP. If your Bluetooth speakers sound different over bluetooth than over line in, then the DSP/DAC processing going on in them is what is screwing up the audio, not the codec.
"High quality" Bluetooth audio codecs are largely a marketing scam to get patent royalties, because it seems that when a standard actually manages to mandate a royalty free codec (a rarity these days!) companies just can't help but try to jam patented nonsense down people's throats anyway. I'm sure some products even deliberately gimp SBC just to make proprietary alternatives sound better.
LC3 is the new audio standard for Bluetooth 5.2 and up, but when I read the Bluetooth SIG claims it provides better quality at comparable bitrates to a long list of codecs that ended with "and Opus," I had to laugh and close the tab.
For those that don't know, Opus is pretty much the current gold standard for freely available audio formats that are at least partially designed to operate at the very lowest end of the bitrate spectrum while delivering comprehensible audio; unless LC3 is also using SILK but just under a different name, I'd be extremely shocked to find it actually beats Opus.
On that topic, when I get fully set up after my move, I'll have more time to devote to my A2DP Opus mode.
I think it's actually not crazy to decode Opus on Bluetooth chipset DSPs, even all the way up to the maximum bitrates.
One nice thing about this as well is that adding surround sound modes shouldn't be that hard, I think it would be the first surround sound A2DP configuration.
That test is indeed laughable. I read their materials, and it looks like they crippled the Opus encoder, which was already an old release before they used it. Opus 1.3 at an ordinary complexity is doubtless better than LC3, and there are actual encoders and decoders that you can just download and port yourself.
There is a fork of the PulseAudio Bluetooth modules that supports LDAC, AAC, AptX, and AptX-HD. LDAC should be supportable upstream, I think it's coming.
On a side note, LDAC is so laughable. I have to imagine the engineers working on it had a lot of laughs. It has all the "high resolution audio" mumbo jumbo that Sony has marketed to audiophools for decades, and it is hilariously inefficient.
This article is not about the "streaming" you are thinking about. It is about using your PC as wireless speakers, i.e. streaming your phone audio to your PC.
What does this have to do with streaming? Obvious use case would be: I am sitting on the couch and want some music played on my speakers.
Option 1: Get up from the couch, go to PC, put the music on, get back to couch. Listen a bit, change my mind, get up again, change track on pc, get back to couch.
Option 2: Have some kind of remote controls for the PC media player on my phone (e.g. mpd). Requires some setup and only for one specific app, but very similar.
Option 3: Use phone as normal and as output select the speakers. People commonly do this with bluetooth speakers (which have their own issues) or you instead use wifi as described in the fine article.
Your whole question is basically "why would anyone want wireless speakers?".