Suppose you have a 5 Mbps data budget, a 1080p display, and 4k source material. You will get better quality by first downsampling the 4k to 1080p and then compressing and distributing the result. If you instead compress and distribute the 4k and then downsample it for display at 1080p, you cannot recover the color and motion information that was thrown away to fit all of those pixels into 5 Mbps.
However, if you have a 20 Mbps budget for the 4k to account for having 4 times as much original data, then there shouldn't be much of a difference in the downsampled 1080p video (ignoring peculiarities of the codec).
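To put rough numbers on that, here is a quick bits-per-pixel sketch of the scenarios above (the 24 fps frame rate is an assumption; the ratios are the same at 30 or 60 fps):

```python
# Rough bits-per-pixel comparison for the scenarios above.
# The 24 fps frame rate is assumed; the ratios don't change at 30 or 60 fps.

def bits_per_pixel(bitrate_bps, width, height, fps=24):
    """Average encoded bits available per pixel per frame."""
    return bitrate_bps / (width * height * fps)

print(bits_per_pixel(5_000_000, 1920, 1080))   # 1080p at 5 Mbps  -> ~0.10 bits/pixel
print(bits_per_pixel(5_000_000, 3840, 2160))   # 4k at 5 Mbps     -> ~0.025 bits/pixel
print(bits_per_pixel(20_000_000, 3840, 2160))  # 4k at 20 Mbps    -> ~0.10 bits/pixel
```

Equal bits per pixel doesn't guarantee equal perceived quality, but it shows why a 20 Mbps 4k stream shouldn't lose much to a native 1080p encode once it's downsampled.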
All this is not very relevant to the audio issue being discussed. It would be relevant if it were physically impossible to perceive the difference between 1080p and 4k video, and if watching 4k video potentially caused optical illusions. In that case, the only reason to prefer the 20 Mbps 4k stream would be if you planned to edit, mix, or zoom around in the video instead of simply watching it.
When it comes to audio, since size isn't as much of a concern as it is with video, in most cases I would say "maybe I'll want to edit it someday" is a strong enough reason to get the 24/192 material at a correspondingly high bitrate if it's available.
Of course your theory is quite sound, but I will point out that in practice most 4k streaming content uses HEVC while most 1080p streaming content uses AVC, so you'll likely get much better results for your data budget with the 4k stream even after it's downsampled to your display.
But that’s the exact issue that this entire article is missing!
It’s all about peculiarities of the codec!
The issue at hand is Apple selling 24bit/192kHz versions of lossy AAC-compressed files, compared to 16bit/44.1kHz versions of the same AAC files.
And the issue I was comparing with video is the same – with video, codecs enforce chroma subsampling, where the color channels are stored at half the resolution of the actual imagery in each dimension (a quarter of the samples).
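For illustration, here is a minimal numpy sketch of what 4:2:0 subsampling does to the chroma planes (the frame size and the crude 2x2 averaging are assumptions; real encoders use proper resampling filters):

```python
import numpy as np

# Toy illustration of 4:2:0 chroma subsampling (not a real encoder):
# luma keeps full resolution, each chroma plane is stored at half
# resolution in both dimensions, i.e. a quarter of the samples.

h, w = 1080, 1920                 # assumed frame size
y  = np.random.rand(h, w)         # luma plane, full resolution
cb = np.random.rand(h, w)         # chroma planes before subsampling
cr = np.random.rand(h, w)

def subsample_420(plane):
    """Average each 2x2 block down to a single sample (crude box filter)."""
    return plane.reshape(plane.shape[0] // 2, 2,
                         plane.shape[1] // 2, 2).mean(axis=(1, 3))

cb_420, cr_420 = subsample_420(cb), subsample_420(cr)

print(y.shape, cb_420.shape, cr_420.shape)  # (1080, 1920) (540, 960) (540, 960)
print(cb_420.size / cb.size)                # 0.25 -- chroma keeps 1/4 of its samples
```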
In the same way, AAC and MP3 spend far less of their bit budget on the upper half of the frequency spectrum, putting roughly 90% of the available bits into the lower half (at 44.1 kHz, they prioritize the 2-5 kHz range where the ear is most sensitive and where speech intelligibility lives).
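One rough way to see how lopsided that allocation is: perceptual codecs shape their quantization noise per critical band, and those bands are far denser at low frequencies. Here is a sketch using the standard Zwicker & Terhardt Bark approximation (treating Bark units as a proxy for where the bit budget goes is my simplification, not something taken from the AAC or MP3 specs):

```python
import math

def bark(f_hz):
    """Zwicker & Terhardt (1980) approximation of the Bark critical-band scale."""
    return 13.0 * math.atan(0.00076 * f_hz) + 3.5 * math.atan((f_hz / 7500.0) ** 2)

nyquist = 44100 / 2        # 22.05 kHz for 44.1 kHz audio
lower_half = nyquist / 2   # 0 - 11.025 kHz

below, total = bark(lower_half), bark(nyquist)
print(f"{below:.1f} of {total:.1f} Bark units ({100 * below / total:.0f}%) "
      f"lie below {lower_half / 1000:.3f} kHz")
# ~22.9 of ~24.7 Bark units (~92%) sit in the lower half of the spectrum,
# which is why perceptual codecs concentrate their resolution there.
```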
The entire topic is whether, when using a codec that deliberately throws away parts of the frequency spectrum (above all the top end), increasing the sampling rate can improve quality. And yes, it can. Apple is selling AAC, not WAV. Which makes the entire article useless.
Yes, we should all focus on replacing 16bit/44.1kHz AAC with 16bit/44.1kHz FLAC instead of with 24bit/192kHz AAC, and on replacing 4:2:0 1080p mp4 with 4:4:4 1080p mp4 instead of with 4:2:0 4K mp4 (the chroma subsampling issue I mentioned). But that's not the reality we live in, and given the choice between 16bit/44.1kHz AAC and 24bit/192kHz AAC, I'll take the latter.