Pixels are point samples: they do not have corners, and they are not little squares. [1] (A short sketch below illustrates the difference.)

[1] http://alvyray.com/Memos/CG/Microsoft/6_pixel.pdf
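To make the point-sample claim concrete, here is a toy 1-D sketch (mine, not from the linked memo; the signal and the numbers are made up for illustration). It reconstructs the same point samples two ways: once as flat "little squares" (nearest neighbor) and once with the ideal periodic-sinc interpolation the sampling theorem prescribes. Only the latter recovers the underlying signal.

    import numpy as np

    # A band-limited signal, point-sampled on an integer grid.
    N, up = 32, 8
    n = np.arange(N)
    samples = np.sin(2 * np.pi * 3 * n / N)   # 3 cycles across the grid

    # "Little squares" view: paint each sample as a flat square
    # (nearest-neighbor reconstruction).
    x = np.arange(N * up) / up
    squares = samples[np.floor(x + 0.5).astype(int) % N]

    # Point-sample view: ideal (periodic sinc) interpolation,
    # implemented by zero-padding the spectrum.
    spec = np.fft.rfft(samples)
    padded = np.zeros(N * up // 2 + 1, dtype=complex)
    padded[:spec.size] = spec
    ideal = np.fft.irfft(padded * up, n=N * up)

    truth = np.sin(2 * np.pi * 3 * x / N)
    print(np.max(np.abs(squares - truth)))  # ~0.3: squares distort the signal
    print(np.max(np.abs(ideal - truth)))    # ~1e-15: exact reconstruction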



Except when they are. No camera samples points; almost all are closer to the little-squares model.


I am quite deep into dangerous half-knowledge territory here, but I think this is wrong. The optics of a camera transform each point source according to the point spread function [1], and a sensor element, which often is a little square, then integrates the contributions of all the point spread functions overlapping it. So taking the optics into account, what you have actually sampled is not a square; you have merely integrated across a square. Each spot within the sensor element was illuminated with different light that got integrated into a single pixel value, so you cannot turn around and say that every spot on the element received the same color, the one you got from integrating across the entire element. The area integration is just one more filter applied before a single value is recorded. If the imaged scene had no frequency content above the Nyquist frequency [2], you should be able to exactly reconstruct the illumination of the sensor, including at scales smaller than a single sensor element (a 1-D sketch of this follows the footnotes).

[1] https://en.wikipedia.org/wiki/Point_spread_function

[2] https://en.wikipedia.org/wiki/Nyquist_frequency
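Here is a toy 1-D version of that argument (my sketch, with made-up frequencies; the unit-width box plays the role of the square sensor element). A band-limited "scene" is integrated over each element, which is what the camera records; integrating over a unit box multiplies the spectrum by sinc(f), which has no zeros below the Nyquist frequency, so dividing it back out recovers the true illumination at the element centers, i.e. at sub-element precision.

    import numpy as np

    N = 64                      # sensor elements, width 1 each
    f1, f2 = 7 / N, 24 / N      # scene frequencies, both below Nyquist (0.5)

    def scene(x):
        return (1.0 + 0.7 * np.sin(2 * np.pi * f1 * x)
                    + 0.3 * np.cos(2 * np.pi * f2 * x))

    def antideriv(x):
        return (x - 0.7 * np.cos(2 * np.pi * f1 * x) / (2 * np.pi * f1)
                  + 0.3 * np.sin(2 * np.pi * f2 * x) / (2 * np.pi * f2))

    # What the camera records: each element integrates the light
    # falling on it ("little squares" model).
    n = np.arange(N)
    recorded = antideriv(n + 1) - antideriv(n)

    # The box integration multiplied the spectrum by sinc(f), which is
    # nonzero everywhere in the band, so it can be divided out exactly.
    freqs = np.fft.fftfreq(N)
    point_samples = np.fft.ifft(np.fft.fft(recorded) / np.sinc(freqs)).real

    # Recovered values match the true illumination at element centers.
    print(np.max(np.abs(point_samples - scene(n + 0.5))))  # ~1e-14

A real sensor adds a point spread function and fill factor on top of the box, but those are also just filters; as long as their in-band frequency response has no zeros, the same inversion argument should carry through.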



