
I never understood the "eye resolution = screen resolution -> good enough" argument.

First of all, the cone spacing in the fovea is around 31 arcsec, or about half the 1 arcmin you assume. IMO that is more relevant than the 20/20 vision number, because that number is not based on any intrinsic quantisation of the visual system but is mostly limited by blur, which tends to be very much non-Gaussian and therefore not an ideal low-pass filter for most eyes.
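To put numbers on that, here is a rough sketch (the ~31 arcsec cone spacing is the figure above; the 60 cm viewing distance is just an assumed example):

    import math

    # Assumed figures: ~31 arcsec foveal cone spacing (from above) and a
    # hypothetical 60 cm viewing distance, chosen just for illustration.
    cone_spacing_deg = 31 / 3600                 # degrees of visual angle
    view_dist_mm = 600

    # Physical pixel pitch that subtends one cone spacing at that distance
    pitch_mm = 2 * view_dist_mm * math.tan(math.radians(cone_spacing_deg) / 2)
    print(f"{1 / cone_spacing_deg:.0f} cones/deg "
          f"-> ~{25.4 / pitch_mm:.0f} ppi to match at 60 cm")

That works out to roughly 280 ppi at arm's length just to match the cone mosaic, before any sampling-theory considerations.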

Now consider the Nyquist-Shannon sampling theorem, which tells us that any signal we want to capture fully must be sampled at at least twice its highest frequency of interest. So if we want to be able to fully represent any state of our visual system on a display, we need at least twice the resolution of our visual system (ignoring for a minute that this assumes an ideal low-pass filter, which, as stated above, your eye is not). That is already 4x your 1 arcmin resolution number.
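A minimal sketch of that factor-of-four arithmetic, again assuming the ~31 arcsec cone spacing:

    # Nyquist: sample at >= 2x the cone sampling rate. With cones every
    # ~0.52 arcmin, that means roughly 4 display pixels per arcmin, i.e.
    # ~4x the linear resolution of the 1-px-per-arcmin "20/20" figure.
    cone_spacing_arcmin = 31 / 60          # ~0.52 arcmin between cones
    cone_rate = 1 / cone_spacing_arcmin    # ~1.9 cone samples per arcmin
    display_rate = 2 * cone_rate           # Nyquist: ~3.9 px per arcmin
    print(f"~{display_rate:.1f} px/arcmin vs 1 px/arcmin at 20/20 "
          f"-> ~{display_rate:.0f}x linear resolution")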

But all of that quickly becomes rather theoretical when you look at the jagged elephant in the room: aliasing and scaling. A lot of what we look at is either rendered with pixel precision, making it very prone to aliasing at scales much larger than your pixel pitch (see this worst-case demonstration: https://www.testufo.com/aliasing-visibility), or it is image or video content displayed at a size that isn't an integer multiple or fraction of its native resolution. Scaling, just like aliasing, causes artifacts that go way beyond the scale of your pixel pitch, and one way to mitigate the issue is simply to have a very high target resolution to scale to. So no, I don't think "can't distinguish individual pixels" is a meaningful threshold, and even way beyond it there is still benefit to be had, even for those with less than perfect eyesight.
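The scaling artifact is easy to reproduce. Here is a small numpy sketch (the pattern and the 0.9 scale factor are arbitrary choices) of the kind of worst case the testufo page shows: a 1-px alternating pattern resampled by a non-integer factor with nearest-neighbour sampling produces beats far larger than one pixel.

    import numpy as np

    # Worst case akin to the testufo demo: alternating 1-px black/white
    # columns, downscaled by a non-integer factor (0.9, chosen arbitrarily)
    # with nearest-neighbour sampling.
    src = np.tile([0, 255], 500)                       # 1000-px pattern
    factor = 0.9
    idx = (np.arange(int(len(src) * factor)) / factor).astype(int)
    dst = src[idx]

    # Run lengths of the output: the uniform 1-px runs turn into a mix of
    # 1-px and 2-px runs repeating every ~10 px -- an artifact an order of
    # magnitude larger than the pixel pitch.
    runs = np.diff(np.flatnonzero(np.diff(dst)))
    print(dict(zip(*np.unique(runs, return_counts=True))))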


