
There's no need to bicker about this. A 13-inch display at 4K (3840×2160) has about 340 pixels per inch along the diagonal; at 1080p it would have about 170 pixels per inch.

Perfect vision is usually understood to be the ability to discern details of about 1/60 of a degree, or 0.29 milliradians. By the arc length formula (s = rθ), the distance at which one pixel of a 170 ppi screen subtends exactly that angle is about 20 inches.
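
For anyone who wants to check the arithmetic, here's a quick Python sketch of those numbers (the function names are my own; the 1 arcmin acuity figure is the assumption above):

    import math

    def diagonal_ppi(width_px, height_px, diagonal_in):
        # Pixels per inch along the diagonal.
        return math.hypot(width_px, height_px) / diagonal_in

    def min_distance_in(ppi, acuity_arcmin=1.0):
        # Distance (inches) at which one pixel subtends acuity_arcmin,
        # from s = r * theta  =>  r = s / theta.
        theta_rad = math.radians(acuity_arcmin / 60.0)  # 1 arcmin ~ 0.29 mrad
        return (1.0 / ppi) / theta_rad                  # s = one pixel pitch

    print(diagonal_ppi(3840, 2160, 13))                   # ~339 ppi (4K at 13")
    print(diagonal_ppi(1920, 1080, 13))                   # ~169 ppi (1080p at 13")
    print(min_distance_in(diagonal_ppi(1920, 1080, 13)))  # ~20.3 inches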

In other words, if you sit closer than 20 inches to your screen (perhaps not that unlikely for a laptop), you might be able to discern details beyond the 1080p level. That would only be the case for small, high-contrast details, like text in a small font size at 1:1 (no DPI scaling).

So... it's a bit complicated, but I suspect 1080p would be good enough for nearly everyone at 13 inches; move up to 15 inches and I could see many people preferring 1440p.



This isn't a complete analysis.

High-DPI screens are able to render text without aliasing, and that makes a significant difference, especially with less-than-perfect eyesight.

What you're saying is true insofar as I had to get right up close to a 1080p screen at 13" and squint to actually see the pixels. But everything was slightly blurry, and switching to a "Retina" (1600p iirc) screen fixed that.

What fixed it was the lack of subpixel anti-aliasing: you can make those screens look blurry as well by turning it back on. But a 1080p screen without it just looks like it has janky text rendering.

An actual 4K screen is unnecessary at 13", imho, but the extra pixels are harmless except for battery life. I do detect a real difference in text rendering between 170 ppi and 227 ppi, because it's the difference between leaning on subpixel anti-aliasing and turning it off while still getting good results.
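
To make that concrete, here's a toy sketch of the basic idea (my own illustration in Python, not how any real text rasterizer works): a vertical edge gets one coverage sample per pixel with grayscale anti-aliasing, and three samples per pixel (one per R/G/B stripe) with subpixel anti-aliasing, which effectively triples horizontal resolution at the cost of colour fringing.

    edge = 2.4  # hypothetical edge position, in pixel units

    def coverage(x0, x1):
        # Fraction of the interval [x0, x1) that lies left of the edge ("ink").
        return max(0.0, min(1.0, (edge - x0) / (x1 - x0)))

    pixels = range(6)

    # Grayscale AA: one coverage sample per pixel.
    gray = [coverage(p, p + 1) for p in pixels]

    # Subpixel AA: three samples per pixel, one per R/G/B stripe.
    rgb = [[coverage(p + i / 3, p + (i + 1) / 3) for i in range(3)] for p in pixels]

    print([round(v, 2) for v in gray])                # [1.0, 1.0, 0.4, 0.0, 0.0, 0.0]
    print([[round(v, 2) for v in px] for px in rgb])  # pixel 2 -> [1.0, 0.2, 0.0]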


I never understood the "eye resolution = screen resolution -> good enough" argument.

First of all, the cone spacing in the fovea is around 31 arcsec, or about half the arcminute you assume. IMO that is more relevant than the 20/20 vision number, because that number is not based on any intrinsic quantisation of the visual system but is mostly limited by blur, which tends to be very much non-Gaussian -> not an ideal low-pass filter for most eyes.

Now consider the Nyquist–Shannon sampling theorem, which tells us that any signal we want to fully capture needs to be sampled at at least twice the highest frequency of interest. So if we want to be able to fully represent any state of our visual system on a display, we need at least twice the resolution of the visual system (ignoring for a minute that this assumes an ideal low-pass filter, which, as stated above, your eye is not). So that's already 4x your 1 arcmin resolution number.
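
Putting rough numbers on that (my arithmetic in Python, taking the ~31 arcsec cone spacing above and a 20 inch viewing distance as the assumptions):

    import math

    cone_spacing_arcmin = 31 / 60                    # ~0.52 arcmin in the fovea
    nyquist_pitch_arcmin = cone_spacing_arcmin / 2   # sample at >= 2x that

    def required_ppi(pitch_arcmin, distance_in):
        # PPI at which one pixel subtends pitch_arcmin at distance_in inches.
        pitch_in = distance_in * math.radians(pitch_arcmin / 60)  # s = r * theta
        return 1 / pitch_in

    print(required_ppi(1.0, 20))                   # ~172 ppi: the 1 arcmin figure
    print(required_ppi(nyquist_pitch_arcmin, 20))  # ~665 ppi: roughly 4x that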

But that all quickly becomes rather theoretical when you look at the jagged elephant in the room: aliasing and scaling! A lot of what we look at is either rendered with pixel precision, which is very prone to aliasing at scales much, much larger than your pixel pitch (see this worst-case demonstration: https://www.testufo.com/aliasing-visibility), or it is an image or video file displayed at a size that isn't an integer multiple or fraction of its native resolution. Scaling, just like aliasing, causes artifacts that go way beyond the scale of your pixel pitch, and one way to mitigate the issue is simply to have a very high target resolution to scale to.

So no, I don't think "can't distinguish individual pixels" is a meaningful threshold, and even way beyond it there is still benefit to be had, even for those with less-than-perfect eyesight.
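
As a tiny toy example of the scaling point (my own sketch, using naive point sampling and a made-up 0.48 scale factor): a 1-pixel black/white stripe pattern, downscaled by a non-integer factor, collapses into solid 12-pixel bands, an alias far coarser than the pixel pitch.

    import numpy as np

    src = np.tile([0, 1], 500)    # 1-pixel black/white stripes, 1000 samples
    scale = 0.48                  # hypothetical non-integer downscale factor

    dst_len = int(len(src) * scale)
    idx = (np.arange(dst_len) / scale).astype(int)  # naive point sampling
    dst = src[idx]

    # Run lengths in the output: the period-2 stripe pattern has aliased
    # into solid bands 12 output pixels wide.
    runs = np.diff(np.flatnonzero(np.diff(dst)))
    print(runs[:10])  # [12 12 12 12 12 12 12 12 12 12]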



