Multi-channel Audio Part 2 (computer.rip)
85 points by kogir 8 months ago | 32 comments



Somewhat tangential, but here's a cool somewhat open-source project related to Dolby Atmos:

https://cavern.sbence.hu/cavern/

https://github.com/VoidXH/Cavern

The visualizer, which is what I was _most_ interested in (along with software decoding), is written in C#, and the rendering is done in Unity -- both things I valued & thought were cool. In theory, you could build a DIY multi-channel "receiver" with this kind of software, given enough audio outputs (and/or by putting something like Dante to use).
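
To give a flavor of the routing half of that idea, here's a minimal sketch using the PortAudio-based sounddevice package rather than Cavern's actual API (the device name and channel order are assumptions):

    import numpy as np
    import sounddevice as sd

    fs = 48_000
    # Stand-in for a block of decoded 7.1 audio: one second of
    # silence, one column per channel.
    block = np.zeros((fs, 8), dtype="float32")

    # 'mapping' routes each column to a physical output (1-based), so
    # a multi-output interface (or Dante Virtual Soundcard) acts as
    # the back end of the DIY "receiver".
    sd.play(block, samplerate=fs, device="Dante Virtual Soundcard",
            mapping=list(range(1, 9)))
    sd.wait()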

I explored it a bit further, but it's relatively cost-prohibitive. Especially if you want to do something like accept HDMI input, it gets messy: AFAICT, at least when I went down this research path a few months back, even finding and getting dev kits/boards with HDMI input (of a semi-recent generation) was non-trivial & pretty pricey.


I'm having trouble figuring out what Cavern is exactly. Does it somehow relate to Dolby Access? https://www.microsoft.com/p/dolby-access/9N0866FS04W8

That is, is it an audio renderer that does player-side mixing into N channels? Dolby Access does that for headphones and for up to 7.1 surround systems.

I'm new to this whole audio format thing and I'm just trying to figure out how things work, as all of the Dolby stuff is very "magic behind a licence fee"


I wonder if this could be adapted to spatially-tracked headphones, or even WASD+mouse.


Atmos has put me in an awkward position. I have a proper home cinema setup (AVR, wired separate speakers including the ones that bounce sound off the ceiling) and listening to Atmos music on it is amazing. It's every bit as revolutionary as claimed.

However, I like to own music and that is simply impossible at the moment for most Atmos recordings. I would love to build a library of such recordings, preferably in physical form, and would happily spend quite a lot of money doing so. But Apple Music is basically the only way I can listen to anything.

I can't help but suspect this is entirely deliberate, an attempt to use this innovation to hasten the passing of the concept of owning music into the past.

Sadly, I also worry the move to streaming means an awful lot of music is eventually going to be lost forever.


I'm a grumpy old man, and no one can ever make me care about any audio transport fancier than analog stereo. To my dismay, it's getting hard to find TVs that can even provide usable stereo output without some kind of extra decoder box or something. Luckily, last time I bought a TV, I was still (barely) able to find one that had a headphone jack, which I use as a stereo line out.


Some TVs have an S/PDIF output that you can connect to a D/A converter, but that's also an external box, which you don't want. The thing here is that modern TVs typically have an integrated Class D amp for their speakers that takes a direct I2S feed from the DSP. The TV manufacturer doesn't bother adding a separate D/A chip to the board, as it doesn't need one.

The good thing, though, is that those cheap $10 HDMI audio extractors work well for this use case if you have a playback device that outputs PCM over HDMI. As a side note, those extractors are also a great way of getting 5.1 surround sound from an HTPC running the dcaenc DTS encoder [1] into an old pre-HDMI AVR.

1. https://gitlab.com/patrakov/dcaenc


This might be off-topic, but I'd like to use this opportunity to complain that basically every modern home theater setup has like hundreds of milliseconds of audio latency.


In many cases, that's intentional and required.

Decoding, distributing, processing, and rendering across all the involved components can take on the order of hundreds of milliseconds. HDMI 1.3 and up has had a mechanism for equipment to communicate internal delays so audio remains time-aligned with the rendered image.

Some devices will also have manual overrides for this. If you are experiencing significant drift, something is likely borked in the setup.
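
To make that concrete, the negotiation boils down to bookkeeping like this (a toy sketch with made-up latencies, not values any real device reports):

    # Hypothetical pipeline delays a sink might report over HDMI/EDID.
    video_pipeline_ms = 120   # TV: decode + motion interpolation + panel
    audio_pipeline_ms = 30    # AVR: decode + room-correction DSP

    # The source holds audio back by the difference so picture and
    # sound leave their respective pipelines at the same moment.
    extra_audio_delay_ms = max(0, video_pipeline_ms - audio_pipeline_ms)
    print(f"delay audio by {extra_audio_delay_ms} ms")   # -> 90 ms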


Is that the same 'every' that has wireless speakers or 'soundbars'?

Because I'm pretty sure mine doesn't. I hate latency; I'm the first to point it out, or the only one bothered by it. I haven't done anything (other than run Audyssey) to prevent it; it just hasn't been a problem as far as I can tell.

And I know you're exaggerating, anyone would notice 'hundreds of milliseconds', but still.


I get that audio latency matters for say phone calls (and I can rant about that getting only worse over time), but does it matter for home theater as long as it's known and adjusted for? Maybe it's a bit annoying if you pause and the audio doesn't stop right away, but otherwise, give me a 2 second audio latency and I don't care as long as you've got a/v sync so the sound and the picture line up where I'm sitting.


It makes a fancy HT setup unusable for gaming.


What does it matter if the DAC is inside the TV or outside, beyond that the one in the TV is probably a cheap piece of crap? The important thing is that the TV supports plain-Jane 2-channel PCM digital output. From there you can go anywhere and do anything without loss or codec-compatibility nightmares.


For me it's because there is no other DAC. TV to mixer to monitor speakers via 1/4" TS.


It's still TV to mixer to speakers in that case; just the TV<->mixer portion is S/PDIF digital instead of TRS analogue. I.e., the mixer is the DAC instead of the TV, and the analog portion is limited to the actual connection to the speakers.

TV selection options go through the roof, while analog loss and interference upstream of the mixer are eliminated.


Next time I'm in the market, I'll consider this. I'll probably have to get a mixer then too, instead of using this half broken 30 year old one I have kicking around.


FWIW, you can get a higher-quality output (if your TV has it) via optical and/or HDMI by looking for the "PCM" output setting. On my recent TV (an LG) it was buried in the settings and greyed out until I turned off all the AI processing BS. Only then could my external stereo DAC work.


Is there a DAC that takes HDMI input? That's not big/expensive?


I gave up on getting audio passthrough to work reliably and just send PCM over HDMI. I don't think there are downsides to this, unless my computer is somehow worse at decoding DTS than my AV receiver?


Should be just fine, at the minor cost of losing some configuration via your receiver remote, as long as your PC can deliver multichannel PCM to your receiver. Until relatively recently there wasn't a common way to do this, but recent versions of HDMI can.

The main reason that passthrough is the norm is history - the connection to the receiver used to generally be S/PDIF or HDMI 1.x versions that had the same capability as S/PDIF, so you had to use Dolby or DTS to get the audio to the receiver. Otherwise you could only do two channels.
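
The arithmetic behind that two-channel limit is worth spelling out (nominal figures; real payload rates vary slightly):

    # S/PDIF at 48 kHz carries two 24-bit subframes per sample period,
    # roughly 3 Mbit/s of audio payload. Uncompressed multichannel PCM
    # blows past that; a compressed DD/DTS bitstream fits easily.

    def lpcm_mbps(channels, rate_hz=48_000, bits=24):
        return channels * rate_hz * bits / 1e6

    print(f"stereo LPCM: {lpcm_mbps(2):.2f} Mbit/s")   # ~2.30, fits
    print(f"5.1 LPCM:    {lpcm_mbps(6):.2f} Mbit/s")   # ~6.91, does not
    print("DD 5.1 bitstream: 0.640 Mbit/s max")        # fits with room to spare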

Actually, a shocking number of PC motherboards and soundcards of that era had 7.1 worth of analog outputs, but I can't say anyone ever used them. I believe 7.1 analog output was required for Intel HD Audio compliance.


The only two things I can think of are Atmos (as mentioned in the article), and metadata for dynamic range compression (which you can do on the computer too but may be more convenient to control on the receiver).

I think the main reason for audio passthrough preference in the home theater crowd is seeing the DD/DTS logos light up on the receiver.


I'm sad that PC surround sound is (mostly) either multiple analog wires to plain old speakers, or HDMI to a receiver. HDMI mostly works, but it's not ideal, since running it through the video card and drivers introduces points of failure, and it needs a monitor output to piggyback off of. (That's fine for a TV, but PC audio and video are separate concerns.) Why can't they use USB instead? Is the market too small? Receivers have had USB ports for years, but those are for playing MP3s off a flash drive. A PC isn't a flash drive.


_those are for playing MP3s off a flash drive._ I am reminded of the workaround people used to use for inputting streamed audio to their old car stereos through a cassette tape adapter. Could the computer emulate a flash drive somehow?


All the Atmos PR and descriptions talk about "objects," but they never say how those objects' sounds are separated from the others' in the datastream. How can, for example, 56 waveforms be carried independently in one stream?
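
My best guess at the mechanics: each object is a mono stem plus time-varying position metadata, and the decoder computes per-speaker gains for your particular layout at playback, along these lines (layout and panning law invented for illustration; the real renderer is proprietary):

    import math

    # A made-up 4-speaker layout on the unit square (x, y).
    speakers = {"L": (-1.0, 1.0), "R": (1.0, 1.0),
                "Ls": (-1.0, -1.0), "Rs": (1.0, -1.0)}

    def gains(obj_pos):
        # Naive inverse-distance panning, normalized to constant power.
        g = {name: 1.0 / (math.dist(obj_pos, pos) + 1e-6)
             for name, pos in speakers.items()}
        norm = math.sqrt(sum(v * v for v in g.values()))
        return {name: v / norm for name, v in g.items()}

    # An object placed front-left mostly feeds L; a renderer sums each
    # object's gain-weighted samples into every speaker feed, so 56
    # objects are just 56 mono stems mixed down at playback.
    print(gains((-0.8, 0.9)))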

The use of Atmos in music is just plain bad. How many pop recordings are actually mixed for Atmos? I can't imagine it's as many as Apple presents "in Atmos" on Apple Music. So is there some post-processing BS going on, a la "Q-Sound" and other fake-surround tricks of the past few decades?

Here's an example of Atmos messing up music. It's too bad it happens, too, because the Atmos versions of songs seem to be less dynamically compressed: https://www.youtube.com/watch?v=xUgfp6mFG2E


Re: the part about cables:

If you have long cable runs, use an optical signal or a balanced line-level signal (this is why professional audio gear has balanced outputs and inputs with 6.3mm TRS or XLR-3 connectors).

There are simple adapters that let you send 4 balanced audio signals over existing Ethernet cabling. With CAT6 you can easily push balanced signals over a kilometer (well beyond the 100m threshold of actual CAT6 Ethernet) without any noticeable degradation.

If you have unbalanced signals from weak sources (a vinyl needle?), you should keep the cable runs short, but even if the driver is good it can help to add a balun (passive or active) and run the thing balanced when the cable run is longer than 10 meters or is in a harsh environment (e.g. power cords with bursty loads emitting EMI).
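
A minimal demonstration of why the balanced run shrugs off that interference (invented numbers):

    import math

    # The signal goes out in antiphase on the two conductors;
    # interference couples onto both equally ("common mode").
    N = 8
    signal = [math.sin(2 * math.pi * k / N) for k in range(N)]
    hum = [0.5] * N   # common-mode noise picked up along the run

    hot  = [s + n for s, n in zip(signal, hum)]
    cold = [-s + n for s, n in zip(signal, hum)]

    # The differential receiver subtracts, so the hum cancels exactly.
    recovered = [(h - c) / 2 for h, c in zip(hot, cold)]
    assert all(abs(r - s) < 1e-12 for r, s in zip(recovered, signal))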


VLC and other PC-based software has always left me with just as many problems - if not more - with picture quality as with audio. The gold standard for me - and this goes for three TVs going back 12 years - has always turned out to be the TV's own media player app, in conjunction with a solid DLNA server.

Otherwise it's gripes over finding the ideal combination of TV picture settings AND OS display settings. The TV is an OS of its own, of course. How does one go about tweaking two sets of settings that overlap?


I’ve tried various methods for playing media files on my TV over the years. I’ve settled on an AppleTV with the Infuse app as my gold standard.

I used to use Kodi, but got tired of endless minor issues and UI skins that haven’t evolved since 2005.

PCs have also been left behind when it comes to HDR, Dolby Vision, and streaming options due to DRM.


Thinking of going the Nvidia Shield route, as the Apple TV isn't able to play back overhead Atmos content. Infuse mixes the Atmos objects into a PCM stream, but that has no overhead channels.


Correct. If you have an Atmos setup, the AppleTV will convert TrueHD Atmos and DTS:X to PCM. It will play back Dolby Digital Plus Atmos, however, from web rips etc.

It’s a shame because everything else about the AppleTV is such a nicer experience.


> AppleTV with the Infuse app

This. It even remembers played status across devices. No need to guess which episode to watch when on the phone in bed.


Similarly, I wrestled for years with getting good results out of Kodi/XBMC. It was mostly good but never entirely reliable and took a lot of fiddling to get to that stage. I recently switched to Jellyfin with the Android TV client app on my TV, and so far it’s been better results with almost zero fiddling.


I try to buy whichever is considered the best at the time, and I consider a TV to be a 2000-Euro-and-upwards purchase. But every TV needs its settings tweaked and a lot of its processing turned off. The quickest way to get there is to start with the tweaks others have posted on video forums. Get that to a reference point.

Then buy a good source: an Apple TV for streaming, a Blu-ray player if you like discs, or an OSMC Vero to run Kodi. They should require very little in the way of changes or setup.

I think the audio is more challenging.


> But what about object-based surround sound? I'm using that somewhat lengthy term to try to avoid singling out one commercial product, but, well, there's basically one commercial product: Dolby Atmos.

In theory, the recent(ish)ly standardized SMPTE ST 2098-2 bitstream will allow for third-party encoders/decoders of object-based "immersive audio." In practice, 2098-2 is the bastard child of Atmos and DTS:X, and I kind of doubt we'll ever see a FOSS decoder.

But anything's possible.



