
> Your display is likely only 720p, or 1080p, but a 4K video on youtube will still look a lot better, although technically it should have no visible difference.

Not at all, bitrate differences are visible.

> And just like you need 4K 4:2:0 mp4 video to get even close to the quality of uncompressed 1080p 4:4:4 video, you also need far higher sampling rate and depth of highly compressed audio to get a quality similar to 16bit/44.1kHz PCM.

…how so?




> Not at all, bitrate differences are visible.

Even with the same bitrate, you’ll need 4K video to get quality comparable to uncompressed 1080p.

For example, the codecs typically used in mp4 (like H.264) store the luma channel Y at full resolution, but the chroma channels Cb and Cr at half resolution in each dimension (4:2:0). So in a 4K mp4 video, even at a near-lossless bitrate, the actual color information is only stored at 1080p. This technique is called chroma subsampling.
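
Roughly, in NumPy terms (a toy sketch of 4:2:0 subsampling, not any codec's actual code; `subsample_420` and `down2` are made-up names for illustration):

    import numpy as np

    def subsample_420(y, cb, cr):
        # 4:2:0: keep Y at full resolution, average each 2x2 block of
        # chroma down to a single sample (1/4 of the chroma data).
        def down2(c):
            h, w = c.shape
            c = c[:h - h % 2, :w - w % 2]  # crop to even dimensions
            return c.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        return y, down2(cb), down2(cr)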

Lossy audio codecs do something very similar, which causes the same kind of issues.


It doesn't matter what resolution the color is stored at, as long as its high frequencies match those of the Y channel. You won't have lost anything.

Of course, cross-channel intra prediction would work better[1], but 4:2:0 gives pretty good quality considering you throw out 3/4 of the chroma samples.

[1] https://people.xiph.org/~unlord/spie_cfl.pdf


That’s the same issue again. Nice theory, and it's true in theory, but real implementations also subsample the high frequencies and scale them back up with nearest-neighbor interpolation.
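
The nearest-neighbor upscale on the decode side looks roughly like this (again a toy sketch, assuming half-resolution chroma as input):

    import numpy as np

    def upsample_nearest(c):
        # Duplicate every chroma sample into a 2x2 block: the original
        # dimensions come back, the averaged-away detail does not.
        return np.repeat(np.repeat(c, 2, axis=0), 2, axis=1)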

The same thing happens with audio: nice theory, badly broken real-world implementations.


It's true that nearest-neighbor is bad, but the important part is the YUV->RGB conversion: it distributes the high frequencies from the full-resolution Y channel into all three RGB channels.
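
Concretely, with the BT.601 full-range matrix (one common variant; other standards use slightly different coefficients), Y contributes to every output channel:

    import numpy as np

    def ycbcr_to_rgb(y, cb, cr):
        # Full-resolution Y appears in all three outputs, so its high
        # frequencies survive in R, G and B even when Cb/Cr were
        # upsampled from half resolution.
        r = y + 1.402 * (cr - 128.0)
        g = y - 0.344136 * (cb - 128.0) - 0.714136 * (cr - 128.0)
        b = y + 1.772 * (cb - 128.0)
        return np.clip(np.stack([r, g, b], axis=-1), 0, 255)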



