
How can this not be rubbish? Your eyes are on average 6.3cm apart, and the cameras on your phone appear to be about 1.5cm apart…

I don't believe software can bridge that gap, so I suspect it'll look very strange when viewed in 3D…



Once every object in the frame is mapped to 3D coordinates, exaggerating the parallax is trivial (like rendering a video game for 3D glasses / VR), but to your point, figuring out what was behind those objects (to fill in the negative space that remains after translation) is a guessing game, equivalent to various smart fill / magic eraser features. Originally they used only context from the same photo, and nowadays they also use generative AI. Not always great results.
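A minimal sketch of the warping half in NumPy (the function name and the pinhole-camera disparity model are my assumptions, not whatever Apple actually ships). Each pixel is shifted by the disparity a wider baseline would produce, and whatever stays empty is exactly the negative space the smart-fill step has to invent:

    import numpy as np

    def widen_baseline(img, depth, focal_px, extra_baseline_m):
        # Forward-warp img (H, W, 3) to a virtual camera shifted sideways
        # by extra_baseline_m, using a per-pixel depth map (H, W, metres).
        # Returns the warped image plus a mask of disoccluded pixels.
        h, w = depth.shape
        # Pinhole stereo: horizontal disparity d = f * B / Z (pixels)
        disparity = focal_px * extra_baseline_m / np.maximum(depth, 1e-6)
        out = np.zeros_like(img)
        zbuf = np.full((h, w), np.inf)
        filled = np.zeros((h, w), dtype=bool)
        ys, xs = np.mgrid[0:h, 0:w]
        xs_new = np.round(xs + disparity).astype(int)
        ok = (xs_new >= 0) & (xs_new < w)
        for y, x0, x1 in zip(ys[ok], xs[ok], xs_new[ok]):
            if depth[y, x0] < zbuf[y, x1]:   # z-buffer: nearer surfaces win
                zbuf[y, x1] = depth[y, x0]
                out[y, x1] = img[y, x0]
                filled[y, x1] = True
        return out, ~filled   # ~filled marks the holes left for inpainting

The per-pixel loop is only for clarity; a real implementation would vectorize it and splat at sub-pixel precision.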


Only time will tell how well it works. Are there other cameras filming 3D video with such a small distance between lenses?


Keep in mind it also has a LiDAR sensor building a point cloud at the same time.
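For context on what "building a point cloud" means here: given depth per pixel plus camera intrinsics, back-projection is a few lines (a generic pinhole sketch; the intrinsic parameters are placeholders, not the iPhone's):

    import numpy as np

    def depth_to_point_cloud(depth, fx, fy, cx, cy):
        # Back-project a depth image (H, W, metres) into an (N, 3) point
        # cloud using pinhole intrinsics fx, fy, cx, cy (all assumed).
        h, w = depth.shape
        v, u = np.mgrid[0:h, 0:w]
        x = (u - cx) * depth / fx
        y = (v - cy) * depth / fy
        pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
        return pts[pts[:, 2] > 0]   # drop pixels with no depth return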


Phones are longer than 6.3cm, so put the other camera on the other end, with the added benefit of encouraging landscape-oriented images as well.


There are some small predators with excellent binocular vision who would like a word with you.


It's not about whether predators have good eyes; it's that your brain expects to process depth information based on the difference between the two images (will having your eyes effectively 4cm+ closer together feel weird?). The images from each camera will clearly differ from what your eyes would see, but maybe your brain adapts easily.

Others have said there are algorithms that can distort/rerender every object in the scene in a way that corrects for this, but I am extremely skeptical of that without evidence.
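For what it's worth, the correction those algorithms would have to make is easy to quantify. In the standard stereo model disparity is d = f·B/Z, linear in the baseline B, so going from the 1.5cm camera spacing mentioned upthread to a 6.3cm eye separation means multiplying every disparity by roughly 4.2 (back-of-the-envelope numbers; the focal length below is illustrative):

    focal_px = 2000.0                  # illustrative focal length, pixels
    b_cam, b_eyes = 0.015, 0.063       # camera baseline vs. eye separation, metres
    for z in (0.5, 2.0, 10.0):         # object distance, metres
        d_cam = focal_px * b_cam / z   # disparity the phone captures
        d_eye = focal_px * b_eyes / z  # disparity your eyes expect
        print(f"Z={z:>4}m  captured={d_cam:6.1f}px  "
              f"needed={d_eye:6.1f}px  scale={d_eye / d_cam:.1f}x")

The scale factor is constant across the scene, but applying it uncovers background the narrow camera pair never saw, which is why the inpainting mentioned upthread is unavoidable.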



