
They do? It just costs engineering hours to implement, so only the fancier potatoes get them.

The area of research that covers everything from HDR to smartphone array cameras is called Computational Photography, I think.



Well, the demo I saw showed it was just a matter of taking a picture of a printed template, and the software would figure out the distortion.

It shouldn't take too much effort to apply the correction data in the camera software.
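
For the curious, a minimal sketch of that workflow using OpenCV's checkerboard calibration. The 9x6 inner-corner pattern and the filenames are placeholder assumptions, and a real pipeline would use several views of the target:

    # Sketch: estimate distortion from one shot of a printed checkerboard.
    import cv2
    import numpy as np

    pattern = (9, 6)  # inner corners of the printed template (assumed)
    # 3D corner positions on the flat target (z = 0, arbitrary units).
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

    img = cv2.imread("template_shot.jpg")
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern)

    if found:
        # One view gives a rough profile; more views improve accuracy.
        _, K, dist, _, _ = cv2.calibrateCamera(
            [objp], [corners], gray.shape[::-1], None, None)
        undistorted = cv2.undistort(img, K, dist)
        cv2.imwrite("corrected.jpg", undistorted)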


Does the issue become individual corrections?

On the one hand, Lightroom corrects my lens distortion and vignette from a saved profile of that lens; but I’d think at a smartphone level, where the parts are smaller, they’re also less even?

In other words, camera lenses are small, so manufacturing defects are more noticeable. Therefore, each smartphone would require a different correction.

I don’t know, I’m just guessing.
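
For what it’s worth, the geometric half of such a profile often boils down to a few radial coefficients, plus a falloff term for the vignette. A rough sketch of applying one of each; the coefficient values here are invented, and real profiles (e.g. Adobe’s LCP files) carry more terms:

    # Sketch: apply a saved lens profile with one radial distortion term
    # and a simple radial vignette gain. Coefficient values are invented;
    # real profiles carry more terms, often per focal length and aperture.
    import cv2
    import numpy as np

    profile = {"k1": -0.08, "vignette": 0.35}  # hypothetical per-lens numbers

    img = cv2.imread("raw_frame.jpg").astype(np.float32)
    h, w = img.shape[:2]
    cx, cy = w / 2.0, h / 2.0

    # Normalized squared radius from the optical center for every pixel.
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    r2 = ((xs - cx) ** 2 + (ys - cy) ** 2) / (cx ** 2 + cy ** 2)

    # Geometric correction: sample each output pixel from its distorted
    # source position.
    scale = 1.0 + profile["k1"] * r2
    out = cv2.remap(img, cx + (xs - cx) * scale, cy + (ys - cy) * scale,
                    cv2.INTER_LINEAR)

    # Vignette correction: boost pixels in proportion to radial falloff.
    out *= (1.0 + profile["vignette"] * r2)[..., None]

    cv2.imwrite("corrected_frame.jpg", np.clip(out, 0, 255).astype(np.uint8))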


> Does the issue become individual corrections?

> In other words, camera lenses are small so manufacturing defects are more noticeable. Therefore, each smartphone requires a different correction.

Well, yes, and no, and yes. Certain defects are probably going to be more evident in small camera lenses. On the other hand, smaller camera lenses typically have fewer elements, which is nice: the fewer of those there are, the lower the chance of other defects.

Taking a picture like the one mentioned above could absolutely create a 'per-camera/phone' correction profile. It may not be able to correct every type of defect, but I could see it being useful for some lenses. I know that for Sony cameras, there have been a couple of models where a large percentage of the copies produced have -one- soft corner.
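
On the shading side at least, a per-unit profile can be as simple as one flat-field shot of an evenly lit white surface stored with the camera. A sketch with placeholder filenames; a real pipeline would average many raw frames:

    # Sketch: per-unit shading (vignette) profile from one flat-field shot
    # of an evenly lit white surface.
    import cv2
    import numpy as np

    flat = cv2.imread("flat_field.jpg", cv2.IMREAD_GRAYSCALE).astype(np.float32)
    flat = cv2.GaussianBlur(flat, (0, 0), 25)  # smooth away noise and dust
    gain = flat.max() / flat                   # per-pixel correction map
    np.save("unit_profile.npy", gain)          # ships with this one camera

    # Later, applying the stored profile to a photo from the same unit:
    img = cv2.imread("photo.jpg").astype(np.float32)
    out = img * np.load("unit_profile.npy")[..., None]
    cv2.imwrite("photo_corrected.jpg", np.clip(out, 0, 255).astype(np.uint8))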


Some of the smallest plastic lenses have geometries and optical properties that are impossible to make in glass or in larger plastic pieces, so smaller lenses don't have only disadvantages. (Even though I personally would rather take a pound of glass any day over these things.)


Not 100% equivalent, but from what I’ve heard, this can be an issue when using full-frame lenses on a crop-sensor DSLR. The sensor captures only a smaller part of the lens’s image circle, at greater enlargement, so defects that don’t show on a 35mm-equivalent sensor can show at APS-C.


When you use a full-frame lens on a crop sensor:

* Vignetting always improves.

* First-order distortion improves, second-order distortion sometimes gets worse.

* Corner sharpness usually improves, unless the lens's weakest performance on full frame was midway out in the frame.

* Center sharpness (measured at the final image scale) drops due to the additional enlargement.

* Lateral chromatic aberration generally doesn't get significantly better or worse.

* Longitudinal chromatic aberration becomes more visible because of the additional enlargement (quantified in the sketch below).
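
Back-of-the-envelope for that additional enlargement, using standard sensor diagonals:

    # Back-of-the-envelope: extra enlargement of APS-C vs. full frame for
    # the same output size, and its effect on an on-sensor blur circle.
    import math

    ff_diag = math.hypot(36.0, 24.0)    # full-frame diagonal, ~43.3 mm
    apsc_diag = math.hypot(23.6, 15.7)  # typical APS-C diagonal, ~28.3 mm

    crop = ff_diag / apsc_diag
    print(f"crop factor: {crop:.2f}")   # ~1.53x extra enlargement

    # A 0.02 mm blur circle on the sensor grows by the same factor in the
    # final print, which is why center softness and longitudinal CA show
    # more on the crop body.
    print(f"0.02 mm blur looks like {0.02 * crop:.3f} mm full-frame equivalent")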


I think Google Pixels are individually calibrated.



