
Well, I thought they used ML to calibrate it (the correction matrix), but I'm not sure about this.


It's not too complicated, at least in theory. Typically you decide on a lens model ahead of time that has some unknown calibration parameters (e.g. focal length, skew, radial/tangential distortion). For a given lens/sensor system, those parameters can then be determined offline by taking pictures of a calibration pattern (checkerboard, AprilGrid, etc.). The calibration pattern provides ground-truth geometry: the relative positions of its corners are known, so detecting them in the image yields correspondences between known points and observed pixels. An off-the-shelf non-linear solver is then used to solve for the unknown calibration parameters.
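As a toy sketch of that last step (not any particular library's API), here's what "solve for the calibration parameters with a non-linear solver" can look like. The lens model, parameter values, and point layout are all invented for the example; a real pipeline would also estimate the board's pose and use detected corners rather than synthetic ones:

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical minimal lens model: focal length f, principal point
# (cx, cy), and a single radial distortion coefficient k1.
def project(params, pts3d):
    f, cx, cy, k1 = params
    x = pts3d[:, 0] / pts3d[:, 2]          # perspective divide
    y = pts3d[:, 1] / pts3d[:, 2]
    r2 = x * x + y * y
    d = 1.0 + k1 * r2                      # radial distortion factor
    return np.column_stack([f * d * x + cx, f * d * y + cy])

# Synthetic "checkerboard": 8x6 grid of corners at known 3D positions.
gx, gy = np.meshgrid(np.arange(8), np.arange(6))
board = np.column_stack([gx.ravel() * 0.03 - 0.10,
                         gy.ravel() * 0.03 - 0.08,
                         np.full(48, 0.5)])

true_params = np.array([800.0, 320.0, 240.0, -0.2])
observed = project(true_params, board)     # what the camera "saw"

# Residuals: predicted corner pixels minus observed corner pixels.
def residuals(params):
    return (project(params, board) - observed).ravel()

guess = np.array([700.0, 300.0, 200.0, 0.0])
fit = least_squares(residuals, guess)
print(fit.x)    # recovers approximately [800, 320, 240, -0.2]
```

Real implementations (e.g. OpenCV's `calibrateCamera`) do essentially this, but jointly over many views of the board and over the board's unknown pose in each view.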

This usually provides acceptable results for applications like correcting lens distortion, even across multiple instances of the same lens. However, sometimes better accuracy is needed, either because the lens is very cheap and varies from unit to unit, or for applications like SLAM where better camera calibration translates directly into better results. In that case, there are techniques like online calibration that tweak the calibration parameters on the fly as new observations come in.
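The idea behind online calibration can be sketched with a deliberately tiny toy model: a single unknown parameter (a "focal length" in a linear measurement model) refined one observation at a time with recursive least squares. Everything here is invented for illustration; real systems jointly refine a full camera model inside the SLAM estimator:

```python
import numpy as np

rng = np.random.default_rng(0)
f_true = 805.0           # the lens has drifted from its factory value of 800
f_est, P = 800.0, 1.0    # running estimate and its variance
lam = 0.99               # forgetting factor: down-weights old observations

for _ in range(500):
    x = rng.uniform(-0.5, 0.5)             # normalized image coordinate
    u = f_true * x + rng.normal(0.0, 0.5)  # noisy observed pixel
    # Recursive least-squares update from this single observation.
    K = P * x / (lam + x * P * x)          # gain
    f_est += K * (u - f_est * x)           # correct estimate
    P = (P - K * x * P) / lam              # update variance

print(round(f_est, 1))   # converges toward 805
```

The forgetting factor is what makes this "online": it lets the estimate track slow drift (thermal expansion, mechanical shock) instead of averaging it away.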



