
> High quality pixels at lower resolution can stand up to more processing for upscaling than garbage ones.

What does "high quality pixel" mean?



It may be easier to think about high-quality swathes of pixels, because obviously any one pixel can't really be wrong in isolation, and even to the extent it can be, who cares about a single pixel.

However, if you look at a swathe of pixels, say a 16x16 chunk, you can reasonably talk about how faithfully that set of pixels represents the original signal, whatever that original signal was. That gets you into the whole perceptual-coding thing, which tells us that the human-perceived accuracy of a chunk of pixels is more than just adding up the differences, but you can take that as a suitable first approximation for thought.

High-quality pixels have minimal deviation from the source material. Low-quality pixels have significant deviation.
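To make the "adding up the differences" approximation concrete, here's a minimal sketch that scores each 16x16 chunk of a decoded frame by its mean squared error against the source. This is just the naive first approximation described above, not a real perceptual metric (SSIM, VMAF, etc. weight errors very differently); the quantization step is a crude stand-in for aggressive compression, not any particular codec.

    import numpy as np

    def block_fidelity(original: np.ndarray, decoded: np.ndarray, block: int = 16) -> np.ndarray:
        """Mean squared error for each block-by-block chunk of a grayscale frame.

        Lower MSE = the decoded chunk deviates less from the source,
        i.e. "higher quality pixels" in the sense discussed above.
        """
        h, w = original.shape
        h -= h % block
        w -= w % block
        diff = original[:h, :w].astype(np.float64) - decoded[:h, :w].astype(np.float64)
        # Group pixels into (block rows, block, block cols, block) and average per chunk.
        per_block = diff.reshape(h // block, block, w // block, block)
        return (per_block ** 2).mean(axis=(1, 3))

    # Heavy quantization raises per-block MSE even though the resolution is unchanged.
    rng = np.random.default_rng(0)
    frame = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)
    quantized = (frame // 32) * 32           # crude stand-in for aggressive compression
    print(block_fidelity(frame, quantized))  # 4x4 grid of per-block MSE values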

Tom Scott has a good video where he turns the quality up and down live so you can see it in a single video: https://www.youtube.com/watch?v=r6Rp-uo6HmI The video's resolution never changes, and you could say the bit rate never changes (it probably does, because adaptive coding saves YouTube a fortune, but conceptually you could say it doesn't), but the quality of the pixels sure does.


It means an actual pixel stored in the data (such as in a key frame) rather than a generated pixel (such as in an interpolated frame). The amount of data stored for interpolated pixels is usually the bare minimum required; for some devices it's literally just the change in value from the previous frame.
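A toy sketch of that last idea: a key frame carries full pixel data, and later frames carry only the per-pixel change from the previous frame. This is purely illustrative, not how any specific codec stores data; real inter-frame coding adds motion compensation and entropy coding on top.

    import numpy as np

    def encode_delta(prev_frame: np.ndarray, cur_frame: np.ndarray) -> np.ndarray:
        """Store only the per-pixel change from the previous frame (toy inter-frame coding)."""
        return cur_frame.astype(np.int16) - prev_frame.astype(np.int16)

    def decode_delta(prev_frame: np.ndarray, delta: np.ndarray) -> np.ndarray:
        """Reconstruct the current frame from the previous frame plus the stored delta."""
        return (prev_frame.astype(np.int16) + delta).clip(0, 255).astype(np.uint8)

    # Key frame: full pixel data. Later frames: only the (mostly zero) deltas.
    key_frame = np.zeros((4, 4), dtype=np.uint8)
    next_frame = key_frame.copy()
    next_frame[1, 2] = 200                       # one pixel changed
    delta = encode_delta(key_frame, next_frame)  # sparse: almost all zeros, cheap to compress
    assert (decode_delta(key_frame, delta) == next_frame).all()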


Bit rate vs. compression (think of JPEG artifacts).



