I would agree with you - but the 'infobox' (for lack of a better term) stays anchored to the same point of the building for the entire video. I can't imagine that computations based solely on the phone's orientation would be accurate enough to keep it centered like that, or much faster than basic block/rectangle image processing.

After all, Google Street View does some simple image processing like that in-browser with JavaScript, and I'd expect native iPhone code to at least match that level of performance.




With the accelerometer you can detect gravity (true down) and your angle relative to it. Then model the earth as a plane and keep the label a constant distance above the ground.

Seems like they did a good job of it though. It's really slick.
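A minimal sketch of how that could work with Core Motion (the class name and camera parameters here are assumptions for illustration, not the app's actual code): read the gravity vector, derive the camera's pitch from it, and project a point a fixed height above a flat ground plane through a simple pinhole camera model:

    import CoreMotion
    import Foundation

    // Sketch: place a label over a point `distance` metres away and
    // `labelHeight` metres above a flat ground plane, using only the
    // gravity vector for orientation. All parameters are assumed values.
    final class GravityLabelPlacer {
        private let motion = CMMotionManager()
        let focalLengthPixels = 800.0   // assumed pinhole focal length
        let screenHeightPixels = 480.0  // assumed screen height

        func start() {
            motion.deviceMotionUpdateInterval = 1.0 / 30.0
            motion.startDeviceMotionUpdates(to: .main) { [weak self] data, _ in
                guard let self = self, let g = data?.gravity else { return }
                // Pitch of the back camera relative to the horizon, for a
                // phone held in portrait: 0 at the horizon, positive tilted up.
                let pitch = atan2(g.z, -g.y)
                let y = self.screenY(pitch: pitch, labelHeight: 30, distance: 100)
                print("label y: \(y)")  // in a real app: move the overlay view here
            }
        }

        // Pinhole projection of a point above the ground onto the screen.
        func screenY(pitch: Double, labelHeight: Double, distance: Double) -> Double {
            let angleToLabel = atan2(labelHeight, distance)  // elevation of label
            let offset = focalLengthPixels * tan(angleToLabel - pitch)
            return screenHeightPixels / 2 - offset           // screen y grows downward
        }
    }

Horizontal placement would additionally need a compass heading, which is presumably where most of the noise comes in.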


If they were doing feature tracking (following recognizable points in the video), this would be far smoother. They don't even appear to be filtering, as their info boxes are shaking all over. Anyone demoing an AR app who moves the camera slowly is compensating for a poor implementation.
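Even a trivial low-pass filter on the raw position would kill most of that shake. Something like this (a hypothetical sketch, not their code):

    import CoreGraphics

    // Exponential low-pass filter for an overlay position.
    // Lower alpha = smoother but laggier.
    struct LowPassFilter {
        let alpha: CGFloat        // smoothing factor in (0, 1]
        private var state: CGPoint?

        mutating func filter(_ raw: CGPoint) -> CGPoint {
            guard let prev = state else {
                state = raw       // first sample: no history yet
                return raw
            }
            let smoothed = CGPoint(x: prev.x + alpha * (raw.x - prev.x),
                                   y: prev.y + alpha * (raw.y - prev.y))
            state = smoothed
            return smoothed
        }
    }

    // Usage: run each raw label position through the filter before drawing.
    // var filter = LowPassFilter(alpha: 0.2)
    // labelView.center = filter.filter(rawLabelPosition)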



